
Topic: Converting Binary from a Sensor into decimal

Richard_N

Dec 13, 2012, 12:57 am
I have a temperature sensor which will be presenting data in the serial monitor.

To convert from the raw data, the equation is as follows:

T = C x 2^n

C = the sensor output in decimal (the output is 9-bit, and I have it in the form of a 16-bit integer)
n is determined by the resolution you are getting from the sensor (9, 10, 11, or 12 bits); in my case n = -1
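(For instance, a raw reading of C = 50 with n = -1 would give T = 50 x 2^-1 = 25.0.)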

My question is this: how do I convert my integer "temp" into a decimal to then use in this formula?

Thanks

Coding Badly


Richard_N

That will multiply the binary value by 0.5, though. Do I not need to convert it to decimal first?
Or does the later serial print do that for me?

Thanks

PeterH

Dec 13, 2012, 01:14 am
Decimal is a way of representing a number textually; it is not an attribute of the integer value. If you have the temperature value as a 16-bit integer, as you say, then you simply need to multiply it by your constant (2^-1) to get the result. And that calculation is not going to be too complicated.
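Something like this minimal sketch, assuming the 9-bit reading has already been sign-extended into a 16-bit signed int (the 50 here is just a stand-in for a real reading):

void setup() {
  Serial.begin(9600);
  int16_t raw = 50;                // stand-in for the 9-bit sensor reading
  float temperature = raw * 0.5;   // T = C x 2^-1, i.e. multiply by 0.5
  Serial.println(temperature, 1);  // prints 25.0
}

void loop() {
}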
I only provide help via the forum - please do not contact me for private consultancy.

Delta_G

A number is a number whether you write it in binary or decimal or whatever.  To humans it is different.  To mathematical operations it is not. 

In decimal, 2 * 2 = 4

In binary, 10 * 10 = 100

In Roman numerals, II * II = IV

There's no difference in any of it as far as the mathematical operation is concerned. A computer can only use binary representations no matter what.  Save converting to decimal until you have something you want to show to a human. 
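A small sketch to illustrate, assuming the reading is sitting in an int called raw; the same stored value just gets rendered as different text:

void setup() {
  Serial.begin(9600);
  int raw = 50;                  // one value, stored in binary either way
  Serial.println(raw, BIN);      // prints 110010 - binary text, for humans
  Serial.println(raw, DEC);      // prints 50 - decimal text, for humans
  Serial.println(raw * 0.5, 1);  // prints 25.0 - the math never needed a "decimal" version
}

void loop() {
}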
