# Function return value changing sign

I am playing with a SN74LV8154 counter with an Arduino Mega2560 board. After some wrangling I seem to have it happily counting. The counter chip is a 32-bit counter that outputs the 32 bits in 4 x 8-bit parts. To get the full 32-bit binary number you have to switch the 8 output bits between the 4 blocks of 8 bits. I believe I have this working.

To convert the 8-bit binary to decimal I have written a function that reads the output bits, calculates the decimal value, and returns it to the calling routine. Within the function the value is being calculated correctly.

All works well until I get to 00000001 (128).

The value that gets returned suddenly changes sign and comes back as a negative number, despite being calculated correctly within the function.

11111110 (127) is returned correctly as +ve.

Any value that is greater than 127 is returned as a negative value.

00000001 (128) is returned as -128

I have attached the output of the results as shown by a bunch of print statements.

The value calculated within the function is correct and +ve, but the number returned to the calling routine is interpreted as -ve if it's greater than 127.

## The output is in the following order.

The first three lines of each group are printed within the function:

the binary number
the value of each of the digits 0,1,2,3,4,5,6,7
# the sum of the digits calculated within the function

The last line is printed in the calling routine after the value is returned:

## the sum of the digits returned by the function

11111110

1.00 2.00 4.00 8.00 16.00 32.00 64.00 0.00

# 127.00

## 127

00000001

0.00 0.00 0.00 0.00 0.00 0.00 0.00 128.00

# 128.00

-128.00

10000001

1.00 0.00 0.00 0.00 0.00 0.00 0.00 128.00

# 129.00 # This is calculated within the function as the sum of all the digits. It is correct.

-127.00 # This return value results from the sum of 1 and -128 from 10000001 ??????

Is there something that I don't understand here?

Well, only you can see the code. If I had to gamble, I would bet that you bungled one of the data types.

aarg. You are just too smart for this world. :) :)

I had changed what I was doing with the result of the function and forgot to change the function's return type. The function was of type char, as I was originally returning a char, but now I was doing calculations with the result. I just changed the function to float and life is wonderful.

Many thanks

Peter

You have the bit order backward: 127 is 01111111, 128 is 10000000. If you are using a signed type (char), 10000000 will print as -128. Use "byte" (unsigned char).

JCA34F: Thanks for pointing that out. At the moment I think that was just a display issue: I had my read loop going the wrong way. I've fixed it up now. I believe the basic calc is working correctly, and I'm pretty sure I had the significance of the digits correct. I'll keep an eye on it.

Your comment " if you are using a signed type (char) 10000000 will print as -128. Use "byte" (unsigned char)." was on the money.

This is the debug output now.

00000000-00000000-00000000-01111010---122--
00000000-00000000-00000000-01111011---123--
00000000-00000000-00000000-01111100---124--
00000000-00000000-00000000-01111101---125--
00000000-00000000-00000000-01111110---126--
00000000-00000000-00000000-01111111---127--
00000000-00000000-00000000-10000000---128--
00000000-00000000-00000000-10000001---129--
00000000-00000000-00000000-10000010---130--
00000000-00000000-00000000-10000011---131--

Many thanks

Peter