I thought this would be an easy operation...

I have a byte value that I want to make a little larger by multiplying it with a factor slightly higher than 1.

This is in a tight loop, so I'd like to avoid floating point math.

Let's say I want to multiply by 1.5. I could store 256*1.5 as an integer value in a word. Multiplying by that instead gives a value that is 256 times too large, so I'll just shift the result right 8 bits. This is what's normally called "fixed point".

byte b = whatever;

word w = 0x180; // 1.5 in fixed point notation

result = (b * w) >> 8;

The above fails. Apparently, the 24-bit result of b*w is truncated to a 16-bit int *before* shifting.

I tried to multiply b with the highbyte and lowbyte of w separately and then adding the results up, keeping only the upper 16 bits:

result = b * highByte(w) + highByte(b * lowByte(w));

That at least gives me the correct result, but I looked at the generated assembler: the expression above does two byte multiplications, each of which should map directly to the CPU's mul instruction. Yet the actual code generated contains no fewer than 6 mul instructions.

Is there a way, short of writing inline assembler, to express an 8x16-bit multiplication that generates sane code?