Long story short, I'm trying to convert an unsigned integer (e.g. 10111100 in binary, or 188) to a signed float, such that it shows as a negative number with digits after the decimal. The problem is that I'm still getting positive values, just as floats. How can I convert them to signed types? I've tried using float(), and I've tried converting to a signed integer with int() and then to a float with float(), but to no avail. Here's a snippet of what I'm doing:
EDIT: I accidentally mis-defined the struct in the beginning. It has been corrected.
struct IMU{
  float data;
} phi, theta, psi;

#define scale 0.0109863

void setup(){
  //stuff
}

void loop(){
  unsigned int data[6];
  int phi_data = 0, theta_data = 0, psi_data = 0;

  //Later on, we read data serially from an angular position sensor. This data is assigned to the array cells of data[].
  for (byte i = 0; i < 6; i++){
    data[i] = Serial1.read();
  }

  phi_data = ((data[0] << 8) | data[1]);//Concatenate two bytes, comprising one signed integer value
  theta_data = ((data[2] << 8) | data[3]);
  psi_data = ((data[4] << 8) | data[5]);

  phi.data = float(phi_data)*scale;
  theta.data = float(theta_data)*scale;
  psi.data = float(psi_data)*scale;
}//end void loop
The bits are properly shifted; I just can't make them negative! Any help would be appreciated!
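As an aside, here is a minimal standalone sketch (plain C++ rather than Arduino code, with made-up byte values) of what that concatenation actually computes when phi_data is a 32-bit int, as on a Due:

#include <cstdint>
#include <cstdio>

int main() {
    // Two bytes as they might arrive over serial, high byte first.
    // 0xFF38 is -200 as a 16-bit two's-complement value.
    uint8_t data[2] = {0xFF, 0x38};

    // Concatenation as in loop(): the result lands in the low 16 bits
    // of a 32-bit int, so bit 31 (the int's sign bit) stays clear.
    int phi_data = (data[0] << 8) | data[1];

    printf("%d\n", phi_data);  // prints 65336, not -200
    return 0;
}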
PeterH:
What is the value of phi_data? What float value are you expecting to derive from it?
I am trying to estimate angular position from an external sensor. If I have, for example, the 16-bit value 0000000010100000, then I want to interpret it as the signed integer 160 before applying the 'scale' multiplier. phi_data is supposed to contain the signed integer value, ready for float conversion. The idea was to read two unsigned bytes serially, perform bit-shift operations to concatenate them into one value (phi_data), then convert that unsigned value to a signed value. Last, that signed value would be converted to a signed floating-point value (phi.data).
The reason I have the struct is that phi.data, theta.data, and psi.data all need to be global, as they will be used in multiple functions.
PeterH:
There is quite a bit going on before you do anything involving floats. Am I right in assuming that this is the code you're having trouble with?
phi.data = float(phi_data)*scale;
That's the only piece of code I can imagine causing the trouble. I initially tried the following:
If you have the number 188, how can you hope to get anything other than the floating point representation of 188, which is just 188.0? It is always going to be positive, as you are starting with a positive number and not doing anything to make it negative.
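To make that concrete, a standalone sketch (the value 0xFF38 is just an assumed example reading): float() preserves the value it is given, so the negative result has to be recovered by reinterpreting the low 16 bits as a signed type before converting:

#include <cstdint>
#include <cstdio>

int main() {
    unsigned int raw = 0xFF38;          // 65336 when read as unsigned

    float wrong = float(raw);           // 65336.0: float() keeps the value
    float right = float(int16_t(raw));  // -200.0: reinterpret as signed 16-bit first

    printf("%.1f %.1f\n", wrong, right);
    return 0;
}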
Slappy:
I am trying to estimate angular position from an external sensor. If I have, for example, the 16-bit value 0000000010100000, then I want to interpret it as the signed integer 160 before applying the 'scale' multiplier. phi_data is supposed to contain the signed integer value, ready for float conversion. The idea was to read two unsigned bytes serially, perform bit-shift operations to concatenate them into one value (phi_data), then convert that unsigned value to a signed value. Last, that signed value would be converted to a signed floating-point value (phi.data).
The reason I have the struct is that phi.data, theta.data, and psi.data all need to be global, as they will be used in multiple functions.
I'm sorry, I don't understand any of that. You want to do what?
Slappy:
The reason I have the struct is that phi.data, theta.data, and psi.data all need to be global, as they will be used in multiple functions.
Long story short, I'm trying to convert an unsigned integer (e.g. 10111100 in binary, or 188) to a signed float, such that it shows as a negative number with di...
So, all this discussion is pointless.
The OP has 2 unsigned bytes, the upper and lower halves of a signed short.
He's putting them together into a signed integer on a 32-bit platform, thus filling only the lower half of the int.
The sign bit on a 32-bit system is bit 31. On an 8-bit system with an emulated 16-bit integer, the sign bit is bit 15.
He is ending up with the sign bit in bit 15 on a system where the sign bit is bit 31.
The content, scaling, size of the values, etc., is all completely irrelevant.
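A quick standalone sketch of that bit-position point (0xFF38 is again just an example value):

#include <cstdint>
#include <cstdio>

int main() {
    // The same 16 bits, read two ways:
    int32_t wide = 0x0000FF38;         // bit 31 clear, so positive: 65336
    int16_t narrow = (int16_t)0xFF38;  // bit 15 set, so negative: -200

    printf("%d\n", (int)wide);            // 65336
    printf("%d\n", (int)narrow);          // -200
    printf("%d\n", (int)(int32_t)narrow); // -200: widening sign-extends
    return 0;
}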
Did you read the code? That's not what he's doing at all.
The code he has is perfectly fine apart from one small point, which I have mentioned extensively above:
short phi_data = 0; // <<<<<< USE SHORT, NOT INT, ON A 32 BIT SYSTEM TO FORCE 16 BITS
phi_data = ((data[0] << 8) | data[1]);//Concatenate two bytes, comprising one signed integer value
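Putting the fix together end to end, a standalone version might look like this (plain C++, with a simulated reading of -200; the byte values are made up):

#include <cstdint>
#include <cstdio>

#define scale 0.0109863  // same scale factor as the original sketch

int main() {
    // Simulated sensor bytes: 0xFF38 is -200 as a signed 16-bit value.
    uint8_t data[2] = {0xFF, 0x38};

    // short is 16 bits here, so the sensor's sign bit (bit 15) is
    // also the variable's sign bit and no extra conversion is needed.
    short phi_data = (data[0] << 8) | data[1];

    float phi = float(phi_data) * scale;
    printf("%f\n", phi);  // -200 * 0.0109863 = -2.197260
    return 0;
}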