From the ChipKit site:
I'm currently using the ChipKit Max32.
Long story short, I'm trying to convert an unsigned integer (e.g. 1110110, or 188) to a signed float, such that it shows as a negative number with di...
So, all this discussion is pointless.
The OP has 2 unsigned bytes, the upper and lower halves of a signed short.
He's putting them together into a signed integer on a 32-bit platform, thus only half-filling the int.
The sign bit, on a 32-bit system, is bit 31. On an 8-bit system with an emulated 16-bit integer, the sign bit is bit 15.
He is ending up with the sign bit in bit 15 on a system where the sign bit is bit 31.
The content, scaling, size of the values, etc., are all completely irrelevant.
On the ChipKit, integers are 32 bits wide.