
### Topic: Signed number grief.

#### krupski

##### Nov 11, 2012, 05:21 am
Hi all,

I have a device that returns info as a 10-bit number with the MSB being the sign bit. Therefore, the range is +511 to -512.

I'm trying to convert this to a float (i.e. actually negative or positive) and came up with this code:

Code: [Select]
`        x = (x & 0b0000001000000000) ? ((x & 0b0000000111111111) - 0b0000001000000000) : (x & 0b0000000111111111);`

Something is telling me that I'm doing this in a WAY too complicated manner and I wonder if there is a simpler method.

Another thing I don't like about my method is that it's not "generic". If the range of numbers had more or less bits, the above code would fail.

Ideas will be appreciated.

Thanks!

-- Roger
Gentlemen may prefer Blondes, but Real Men prefer Redheads!

#### nickgammon

#1
##### Nov 11, 2012, 06:17 am
How about (assuming x is an int):

Code: [Select]
`if (x & 0b0000001000000000)   x |= 0b1111110000000000;   // sign extend`

Then just convert it into a float.
Please post technical questions on the forum, not by personal message. Thanks!

#### krupski

#2
##### Nov 11, 2012, 06:41 am

Quote
How about (assuming x is an int):

Code: [Select]
`if (x & 0b0000001000000000)   x |= 0b1111110000000000;   // sign extend`

Then just convert it into a float.

That works... but it depends on an int being 16 bits... which seems "unclean" to me.

What I'm doing - if you're interested - is writing a custom driver for the Dallas / Maxim DS-3234 real time clock chip.

One of the things it returns is the temperature of the device. It returns 2 bytes... the MSB contains 7 bits of data and the high bit is the sign bit. The LSB returns the last 2 bits as bits 7 and 6 (screwy).

So I take MSB << 8 to put it into the "top half" of an int, then add in the LSB, then finally take the whole thing and >> 6 to bring it down to bits 0 through 9. Lastly multiply by 0.25 since the temperature is 0.25 degrees C per bit.

This is the code I'm describing:

Code: [Select]
```
float DS3234_read_temp(void)
{
        int x;
        digitalWrite(CS, LOW);
        SPI.transfer(0x11 + 0x00);
        x = SPI.transfer(0) << 8; // read hi byte
        digitalWrite(CS, HIGH);
        digitalWrite(CS, LOW);
        SPI.transfer(0x12 + 0x00); // read lo byte
        x += SPI.transfer(0); // add it to hi byte
        digitalWrite(CS, HIGH);
        x = (x >> 6); // slide bits down
        x = (x & 0b0000001000000000) ? ((x & 0b0000000111111111) - 0b0000001000000000) : (x & 0b0000000111111111); // convert sign :(
        return (x * 0.25); // return temperature
}
```

I'm wondering... since the high byte has the sign bit as bit 7, could I do the signed conversion THERE before I slide everything down?

I've been racking my brain on this and I'm just not seeing it... I KNOW it must be simple but I just don't see it.

#### tmd3

#3
##### Nov 11, 2012, 07:29 am
Quote
Another thing I don't like about my method is that it's not "generic". If the range of numbers had more or less bits, the above code would fail.

I don't see a way to make the code completely "generic," as you'd say, or "general," as I'd prefer to say.  There's no way for the program to divine the number of bits that the input device provides - you have to tell it that number somewhere.

Here's my favorite way to do it:
Code: [Select]
```
const long deviceBits = 10;
const long deviceMask = -(1 << (deviceBits - 1));
int x;
...
    if (x & deviceMask) {
      x |= deviceMask;
    }
    someFloatVariable = x;
```

Here's why I like it:

• There's only one place to describe the sensor input, and it's easy to find, and easy to determine what number to use

• It works whenever the input device provides something that's not bigger than a long

• It works for an int sized at something other than two bytes - it just has to be no bigger than a long

• The compiler remembers and uses - but doesn't need to allocate memory for - the constants

#### tmd3

#4
##### Nov 11, 2012, 07:56 am (Last Edit: Nov 11, 2012, 04:21 pm by tmd3)

Quote
I'm wondering... since the high byte has the sign bit as bit 7, could I do the signed conversion THERE before I slide everything down?
I've been racking my brain on this and I'm just not seeing it... I KNOW it must be simple but I just don't see it.

It may be even simpler than you think.

From the Arduino Reference page describing the bitshift operators at http://arduino.cc/en/Reference/Bitshift:

Quote
When you shift x right by y bits (x >> y), and the highest bit in x is a 1, the behavior depends on the exact data type of x. If x is of type int, the highest bit is the sign bit, determining whether x is negative or not, as we have discussed above. In that case, the sign bit is copied into lower bits, for esoteric historical reasons...

For your program, x is indeed of type int.  You're starting with the sign bit in the MSB.  Based on this snippet of the reference, all you have to do is shift the value to the right until it's scaled right, and the program will automatically extend the sign bit.  Looks like it's done for more than historical reasons - with this behavior, we can divide a negative integer by 2^N by shifting right by N bits, just like we do for a positive integer.  If x were declared as unsigned int, the sign bit wouldn't be extended, and zeroes would shift in at the MSB.

Also - and this might be the best reason - when I tested your code, I found that this line
Code: [Select]
`x = (x & 0b0000001000000000) ? ((x & 0b0000000111111111) - 0b0000001000000000) : (x & 0b0000000111111111); // convert sign :(`
didn't have any effect.  I got the same correct answers with it and without it.

You might be overthinking this.
Edit:  Or maybe not.  After reflection, I recall that you want a general solution that doesn't rely on a 16-bit length for an integer.  The method shown in this post relies on that, since it relies on the fact that the sign bit is in the MSB.  Back to the drawing board for me.

#### PeterH

#5
##### Nov 11, 2012, 01:29 pm

Quote
Here's my favorite way to do it:

That's a good solution.

#### stimmer

#6
##### Nov 11, 2012, 02:11 pm
Another approach is to add 512 to the number, then AND with 1023. This converts the number to an unsigned value in the range 0 - 1023. Then convert that to a float, and subtract 512.0 to get back to a signed value.

float f = float((x+512)&1023) - 512.0 ;
Due VGA library - http://arduino.cc/forum/index.php/topic,150517.0.html

#### el_supremo

#7
##### Nov 11, 2012, 04:08 pm
Quote
the sign bit is copied into lower bits, for esoteric historical reasons

The Arduino reference needs to be changed. What is esoteric or historical about an arithmetic right shift?

Pete
Don't send me technical questions via Private Message.

#### fungus

#8
##### Nov 11, 2012, 04:35 pm

Quote
Another approach is to add 512 to the number, then AND with 1023. This converts the number to an unsigned value in the range 0 - 1023. Then convert that to a float, and subtract 512.0 to get back to a signed value.

float f = float((x+512)&1023) - 512.0 ;

Well...that's an extra floating point subtract when the exact same thing will work with integers:

float f = int((x+512)&1023) - int(512);
No, I don't answer questions sent in private messages (but I do accept thank-you notes...)

#### tmd3

#9
##### Nov 11, 2012, 05:56 pm
Rethinking my earlier post about dealing with a left-justified, less-than-16-bit signed integer:  you want a general method of managing that value.  Specifically, you want something that doesn't rely on the size of an integer for any particular platform.  Not a bad idea, if you want to be able to port an application to the Arduino Due, with its 32-bit integers.

If you want that particular generality - if that's not an oxymoron - I don't see a way to avoid examining the sign bit, and then doing something to the data.  You can do that as described previously in this post:

I'll note that you can avoid the 6-bit shift by folding the shift operation into the floating-point multiplication, like this:
Code: [Select]
```
const long deviceBits = 10;
const long deviceWordLength = 16;
const long deviceMask = -(1 << (deviceWordLength - 1));
const float deviceScaleFactor = 0.25;
int x;
...
    if (x & deviceMask) {
      x |= deviceMask;
    }
    someFloatVariable = x * (deviceScaleFactor * (1.0/(1 << (deviceWordLength-deviceBits))));
```
It has the same advantages as the version in the post referenced above.  I think - but don't know - that it will eliminate the 6-bit shift from the compiled code, because the compiler will recognize the factor in the last statement as being composed entirely of constants, and will do that calculation at compile time.  It adds a characteristic of the sensor to the code - 0.25 degrees C per tick - but that was already embedded in the program, and it might as well be at the top where it's easy to find and modify.  It also adds the characteristic that the device output is a 16-bit word.

I've tested that code with the characteristics of the device described, and I think it works.  I haven't tested it with other characteristics to verify its general-ness.

Using the stimmer-fungus technique described above, the general code might look like this:
Code: [Select]
```
const long deviceBits = 10;
const long deviceWordLength = 16;
const long deviceMask = (1 << deviceBits) - 1;
const float deviceScaleFactor = 0.25;
int x;
...
    x >>= (deviceWordLength-deviceBits);
    x += 1 << (deviceBits - 1);
    x &= deviceMask;
    x -= 1 << (deviceBits - 1);
    someFloatVariable = x * deviceScaleFactor;
```
That code works in the specific case, too.  It's not checked for generality.

For more complete generality, you could add a const someType deviceOffset, to manage gizmos whose output values aren't centered on zero.  That might be the input code that corresponds to a zero reading, in which case the constant would be int and added to x, or it might be the reading that a zero input describes, in which case the constant would be float and added to someFloatVariable.  Then, with just a couple of value changes to constants, you could get your readings in, say, Kelvin, or - oh joy of joys - Fahrenheit or Rankine; or, you could get your readings from an analog conversion on something like a 4-20mA temperature transducer, or a single-supply LM35 circuit.

#### majenko

#10
##### Nov 11, 2012, 10:36 pm (Last Edit: Nov 11, 2012, 10:39 pm by majenko)
You want to generalise it even more?

Code: [Select]
```
typedef long device_t;
const device_t deviceBits = 10;
const device_t deviceWordLength = sizeof(device_t)<<2;
const device_t deviceMask = (1 << deviceBits) - 1;
const float deviceScaleFactor = 0.25;
```

#### tmd3

#11
##### Nov 12, 2012, 04:17 am
I don't see how this helps:
Code: [Select]
```
typedef long device_t;
...
const device_t deviceWordLength = sizeof(device_t)<<2;
```
deviceWordLength is a characteristic of the input device.  You can call it the length of the device's reply, or you can call it the position of the reply's sign bit.  In this code, it depends only on the characteristics of the long data type.

What am I missing?

#### GoForSmoke

#12
##### Nov 12, 2012, 01:30 pm
Oh great. A slow imprecise floating point divide to replace a fast precise bit shift.

You're just trying to turn a 10-bit value into 16 bits to convert to float.

How about, insane as this might seem, load up the 10 bits into a 16-bit int, copy the sign bit from bit 9 into bits 10 through 15, then convert to float.
2) http://gammon.com.au/serial
3) http://gammon.com.au/interrupts

#### dhenry

#13
##### Nov 12, 2012, 01:57 pm
Here is my proposal:

`(signed short) ((x & 0x0200)?(x|0xfc00):x)`

Essentially, left extending the sign.

#### pYro_65

#14
##### Nov 12, 2012, 02:07 pm
Here is a generic solution; it is standard-conforming as long as T is some variant of int (signed, short, long...).

Code: [Select]
```
template <typename T, unsigned B>
inline T sign_extend(const T x)
{
  struct {T x:B;} s;
  return s.x = x;
}
```

So for 10-bits extended to 16-bits:
Code: [Select]
`int result = sign_extend< signed int ,10 >(x);`
Forum Mod anyone?
https://arduino.land/Moduino/
