Converting 10-bit unsigned to 16-bit signed. SOLVED

I'm trying to use the FHT library to produce a frequency spectrum of an audio signal by following an example here. After reading the ADCL and ADCH registers, the contents are combined into an int, k.

byte m = ADCL;          // ADCL must be read first; reading it locks the result registers
byte j = ADCH;          // reading ADCH releases them for the next conversion
int k = (j << 8) | m;   // combine into a 10-bit value, 0..1023

I'm happy so far. This is then supposedly converted to a 16-bit signed integer, but I just cannot follow the logic.

k -= 0x0200;   // subtract 512: 0..1023 becomes -512..+511
k <<= 6;       // shift left 6 bits (multiply by 64) to fill the 16-bit range

I was expecting some bitwise inversion and then adding 1.

By hand, as far as I can tell...
0x3FF (full-scale ADC) converts to 0x7FC0 (32,704), which is close enough to 32,767 I suppose.
0x001 (smallest nonzero ADC reading) converts to 0x8040 (I think), which is -32,704.
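
Just to double-check the arithmetic off the board, here is a small standalone C program (not part of the FHT example, just a sketch) that applies the same two steps to the boundary values:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t samples[] = { 0x000, 0x001, 0x200, 0x3FF };  /* ADC boundary values */
    for (int i = 0; i < 4; i++) {
        int16_t k = (int16_t)samples[i];
        k -= 0x0200;   /* centre on zero: 0..1023 -> -512..+511 */
        k <<= 6;       /* scale by 64 */
        printf("0x%03X -> %6d (0x%04X)\n", (unsigned)samples[i], k, (unsigned)(uint16_t)k);
    }
    return 0;
}

It prints 0x000 -> -32768 (0x8000), 0x001 -> -32704 (0x8040), 0x200 -> 0 (0x0000) and 0x3FF -> 32704 (0x7FC0), which matches my figures. (Left-shifting a negative signed value isn't guaranteed by the C standard, but gcc and avr-gcc both do the expected arithmetic shift.)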

So, is this a widely accepted approximation, or have I miscalculated or missed the point? I've searched but haven't found this method described anywhere else.

The "happy face" in your code will not compile, which is why we suggest to use code tags.

There are many acceptable ways of doing that conversion.

The ADC produces numbers in the range 0 to 1023.
The FHT library requires signed numbers. The range 0 to 1023 is converted to the range -512 to +511 by subtracting 512 (0x200). Then it is shifted up 6 bits (i.e. multiplied by 64), giving a range of -32768 to +32704, which spans almost the full range of a 16-bit signed number.
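
To put that in context, here is a condensed sketch of the sampling loop, assuming the openmusiclabs ArduinoFHT library that the examples use (the free-running ADC setup is omitted for brevity):

#define FHT_N 256               // number of samples; must be defined before the include
#include <FHT.h>                // openmusiclabs ArduinoFHT

void sampleAndTransform() {
  for (int i = 0; i < FHT_N; i++) {
    while (!(ADCSRA & _BV(ADIF)))
      ;                         // wait for an ADC conversion to complete
    ADCSRA |= _BV(ADIF);        // clear the flag (write 1 to clear)
    byte m = ADCL;              // ADCL first...
    byte j = ADCH;              // ...then ADCH
    int k = (j << 8) | m;       // raw 10-bit reading, 0..1023
    k -= 0x0200;                // centre on zero: -512..+511
    k <<= 6;                    // scale by 64 into the 16-bit signed range
    fht_input[i] = k;           // the FHT input array is signed 16-bit
  }
  fht_window();                 // window the data
  fht_reorder();                // bit-reverse reorder for the transform
  fht_run();                    // run the FHT
  fht_mag_log();                // log magnitudes end up in fht_log_out[]
}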

I was expecting some bitwise inversion and then adding 1.

Why? All that does is form the two's complement (i.e. the negation) of a number, which is easily written in C with the minus sign. There is nothing to negate here anyway: the raw reading is unsigned (offset binary), so removing the 512 offset is all that's needed.
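
For what it's worth, "invert and add 1" is exactly what the unary minus does; a quick illustration:

int16_t x = 511;
int16_t a = -x;        // negation written the obvious way
int16_t b = ~x + 1;    // manual two's complement: invert all bits, then add 1
// a and b are both -511 (0xFE01 as a 16-bit pattern)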

Pete

Thank you. That was succinct and understandable.