tmd3:
... load up the 10 bits into a 16-bit int and extend (copy) the sign bit, bit 9, into bits 10 through 15 ...
It's even easier than that. The device provides a 10-bit signed number, left-justified in a 16-bit word. For a 16-bit platform, it's as simple as loading the value into a signed int, and shifting it six bits to the right; the sign bit is automatically extended. But that's not what the OP asked for: he asked for help in writing code that could easily accommodate a different number of significant bits from the input device, and he expressed concern about algorithms that relied on a 16-bit integer size. Because the issue has practical implications, it's getting a bit of attention. It's certainly made me wonder how deeply I've embedded implementation-dependent parameters into some of my own favorite code.
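As a concrete illustration of that shortcut, here is a minimal sketch (the function name and types are mine, not from the thread), assuming a left-justified 10-bit reading, a 16-bit int, and a compiler that performs an arithmetic right shift on negative signed values:

    #include <stdint.h>

    /* Hypothetical sketch: 'raw' holds the 10-bit reading left-justified
     * in a 16-bit word, sign in bit 15.  On a 16-bit-int platform this is
     * the whole routine.  Note that both the unsigned-to-signed conversion
     * and the right shift of a negative value are implementation-defined
     * in C, which is exactly the kind of dependency the OP worried about. */
    int16_t adc_value(uint16_t raw)
    {
        int16_t v = (int16_t)raw;   /* sign bit already sits in bit 15 */
        return v >> 6;              /* arithmetic shift assumed: sign extends */
    }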
For a 16-bit platform, it's as simple as loading the value into a signed int ** with the sign in bit 15: maybe load it directly into that position, or shift it six bits left after working out that six is the right count for a 16-bit int, or get it there some other way, but that setup step has to be there or the next trick won't work **, and shifting it six bits to the right; the sign bit is automatically extended.
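Spelling out that setup step is what makes the routine longer than it first sounds. Here is a sketch of the spelled-out version, with hypothetical names, that computes the shift count from the int width instead of hard-coding six, and that still leans on an arithmetic right shift:

    #include <limits.h>
    #include <stdint.h>

    #define SIG_BITS 10   /* significant bits delivered by the device */

    /* Hypothetical sketch: first move the device's sign bit into the
     * top bit of whatever width 'int' happens to be (the setup step),
     * then shift back down so the sign extends on the way.  The cast of
     * an out-of-range unsigned value and the right shift of a negative
     * value are both implementation-defined. */
    int adc_value_generic(uint16_t raw)   /* raw is left-justified in 16 bits */
    {
        const int width = (int)(sizeof(int) * CHAR_BIT);
        int v = (int)((unsigned)raw << (width - 16));  /* sign now in the top bit */
        return v >> (width - SIG_BITS);                /* shift down; sign extends */
    }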
When you put in all of those steps, you see the true length of the code, and then the other simple ways don't seem so bad.
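For comparison, one of those other simple ways: a sketch (again with names of my own choosing) that extracts the field with unsigned shifts and folds the sign in with a subtraction, so it never right-shifts a negative value and doesn't care how the compiler handles that case:

    #include <stdint.h>

    #define SIG_BITS 10

    /* Hypothetical sketch: take the top SIG_BITS of the left-justified
     * 16-bit reading, then subtract the sign bit's weight when it is set.
     * The arithmetic stays within well-defined signed ranges. */
    int32_t adc_value_portable(uint16_t raw)
    {
        uint32_t mag  = (uint32_t)raw >> (16 - SIG_BITS);   /* 0 .. 2^SIG_BITS - 1 */
        uint32_t sign = (uint32_t)1 << (SIG_BITS - 1);      /* weight of the sign bit */
        return (int32_t)(mag ^ sign) - (int32_t)sign;       /* 10-bit two's-complement value */
    }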
The best routine would run entirely in CPU registers in less than a microsecond, but it wouldn't be 'generic'.