
Topic: Signed number grief.

billroy

Noting that -1<<10 will generate the correct bit mask to sign-extend the ten-bit quantity to whatever word size… here you can see Bitlash doing the calculation on OS X with a 64-bit word size:
Code: [Select]

$ bitlash
bitlash here! v2.0 (c) 2012 Bill Roy -type HELP- 1000 bytes free
> print -1<<10:x
FFFFFFFFFFFFFC00


So, dhenry's proposal can be generalized to:

Code: [Select]

(signed short) ((x & 0x0200)?(x|(-1<<10)):x)
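
As a plain C function, the same idea can be parameterized on the device's bit count. A minimal sketch (the function name and its shape are mine, not from the thread), built around the same -1<<bits mask:

Code: [Select]

#include <stdint.h>

/* Sign-extend the low `bits` bits of x to the full width of long.
   (-1L << bits) is ones everywhere above bit (bits-1) -- exactly
   the mask the Bitlash session above prints for bits = 10. */
long signExtend(long x, unsigned bits) {
    long signBit = 1L << (bits - 1);
    return (x & signBit) ? (x | (-1L << bits)) : x;
}

So signExtend(0x200, 10) gives ...FFFFFE00 and signExtend(0x1FF, 10) gives 0x1FF, matching the test transcript further down.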


-br

majenko


Quote
I don't see how this helps:
Code: [Select]

typedef long device_t;
...
const device_t deviceWordLength = sizeof(device_t)<<2;

deviceWordLength is a characteristic of the input device.  You can call it the length of the device's reply, or you can call it the position of the reply's sign bit.  In this code, it depends only on the characteristics of the long data type.

What am I missing?

Sorry, I missed what that bit was doing.

Ignore me, I know nothing :P

billroy

A little test code.
Code: [Select]

> ls
function extend10 {if arg(1)&0x200 return arg(1)|(-1<<10); else return arg(1); };
function px10 {print extend10(arg(1)):x;};
> px10(0x200)
FFFFFFFFFFFFFE00
> px10(0x1ff)
1FF
> px10(0x10)
10


-br

tmd3

Quote
A slow imprecise floating point divide to replace a fast precise bit shift.

I think the poster refers to this line:
Code: [Select]
    someFloatVariable = x * (deviceScaleFactor * (1.0/(1 << (deviceWordLength-deviceBits))));

Looking at these concerns one by one, in reverse order:

  • floating point divide:  Indeed, there's a floating point division explicitly coded in that line.  But the arguments are all constants, along with everything else inside the parentheses.  Based on explorations of the compiler's output for other complicated floating point expressions, I believe the compiler will detect that the expression evaluates to a constant, optimize the internal calculations out of existence, and wind up with a single floating point constant.  I don't see that there will be a division in the executed code.

  • imprecise:  I don't think so.  The operation being replaced is a bit shift right, equivalent to division by 2^N for an integer N.  The floating point representation of 2^-N is a sign bit, an exponent, an implied one, and 23 zeroes.  Division by it changes only the exponent, and won't result in any loss of precision.  It's true that, in general, precision is lost in a floating point division, but not in this case, and that's not a consequence of the particular parameters of this calculation, but rather of the very operation that's being performed.

  • slow:  Yes, a floating point division is slow.  But, because everything involved is a constant, it happens at compile time, rather than at runtime.  The Arduino won't execute a floating point division based on this code.



If you're skeptical about the compiler optimizing away the division, you can force it like this:
Code: [Select]
...
const float deviceScaleFactorComplete = deviceScaleFactor * (1.0/(1 << (deviceWordLength-deviceBits)));
...
    someFloatVariable = x * deviceScaleFactorComplete;
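
If your toolchain accepts C++11 (a hedge: the stock Arduino IDE of this era may not), constexpr makes compile-time evaluation mandatory rather than an optimization you hope for. A sketch, not code from the thread:

Code: [Select]

constexpr long  deviceBits        = 10;
constexpr long  deviceWordLength  = 16;
constexpr float deviceScaleFactor = 0.25;

// The division must be done by the compiler, or compilation fails.
constexpr float deviceScaleFactorComplete =
    deviceScaleFactor * (1.0 / (1L << (deviceWordLength - deviceBits)));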


Quote
... just trying to turn a 10-bit value into 16 ...

Yes.  But, the OP expressed concern about the fact that some of the proposed techniques rely on the fact that an integer has a 16-bit representation - an implementation-dependent size - and wanted to know how to make his code more general.  That concern is of more than merely academic interest, because "on the Arduino Due, an int stores a 32-bit (4-byte) value." See it here:  http://arduino.cc/en/Reference/Int.  It's not at all unlikely that some of us will be porting code we write today to that platform, so there's certainly value in coding for the general case.
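
The <stdint.h> fixed-width types are one way to keep that implementation dependence out of the arithmetic altogether. A sketch of the idea, not code from the thread:

Code: [Select]

#include <stdint.h>

/* int16_t is exactly 16 bits on the AVR and on the Due alike,
   so code written against it ports without surprises.        */
int16_t reading = (int16_t)0xFE00;  /* a sign-extended 10-bit value */
int32_t wide    = reading;          /* widening preserves the sign  */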

Quote
... load up the 10 bits into a 16-bit int and extend (copy) the sign in bit 9 from bit 10 to bit 15 ...

It's even easier than that.  The device provides a 10-bit signed number, left-justified in a 16-bit word.  For a 16-bit platform, it's as simple as loading the value into a signed int, and shifting it six bits to the right; the sign bit is automatically extended.  But that's not what the OP asked for:  he asked for help in writing code that could easily accommodate a different number of significant bits from the input device, and he expressed concern about algorithms that relied on a 16-bit integer size.  Because the issue has practical implications, it's getting a bit of attention.  It's certainly made me wonder how deeply I've embedded implementation-dependent parameters into some of my own favorite code.
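
A sketch of that 16-bit case (my illustration; `raw` is a made-up name for the device's left-justified reply). One caveat worth hedging: right-shifting a negative signed value is implementation-defined in C, though avr-gcc does the arithmetic shift this relies on:

Code: [Select]

#include <stdint.h>

int16_t raw = (int16_t)0x8040;  /* 10 signed bits, left-justified: sign bit set */
int16_t val = raw >> 6;         /* arithmetic shift copies bit 15 down through  */
                                /* bits 14..9, giving 0xFE01 here               */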

dhenry

I put this through a little exercise:

Code: [Select]
#define OUT_PIN 1   // PD1 on an ATmega328

// Pulse the pin high, then low.  The parentheses around (1<<(pin))
// matter: ~1<<1 would also clear bit 0, by operator precedence.
#define FLP(pin)  {PORTD |= (1<<(pin)); PORTD &= ~(1<<(pin));}

void setup(void) {
  PORTD &= ~(1 << OUT_PIN);  // start low
  DDRD  |=  (1 << OUT_PIN);  // drive the pin as an output
}

const long deviceBits = 10;
const long deviceWordLength = 16;
const long deviceMask = -(1l << (deviceWordLength - 1));  // ones from bit 15 up
const float deviceScaleFactor = 0.25;
int x = 511;
float someFloatVariable;

void loop(void) {

  FLP(OUT_PIN); FLP(OUT_PIN); FLP(OUT_PIN);  // three pulses mark the start
  if (x & deviceMask) {
    x |= deviceMask;  // sign-extend in place
  }
  someFloatVariable = x * (deviceScaleFactor * (1.0/(1 << (deviceWordLength-deviceBits))));
  FLP(OUT_PIN); FLP(OUT_PIN);  // two pulses between the two methods
  someFloatVariable = (signed short) ((x & 0x0200)?(x|(-1<<10)):x);
  FLP(OUT_PIN);  // one pulse marks the end
  delay(10);
}

Question: how much time does the floating point approach take compared to the integer shift approach?
1: less time;
2: the same;
3: 8x;
4: all the above.
