Bit shifting two bytes into one signed int

I am reading audio wav file data from SD.

Two bytes represent one signed 16-bit int (including negative numbers).
My code works for positive numbers but not for negatives.

byte a = 16; byte b = 0;
int ret = (b << 8) + a;
ret becomes 16. This is CORRECT. Hooray!

byte a = 253; byte b = 255;
int ret = (b << 8) + a;
ret becomes 65533. This is NOT correct... ret should be -3. Arrrrrg.

I read about the "Two's complement" method of signed binary representation of negatives but can't seem to wrap my head around it.

Show how you print the int variable.

int intSigned() {
  byte a = ...;   // value read from SD (elided in the post)
  byte b = ...;

  Serial.print(a); Serial.print(","); Serial.print(b);
  int r = (b << 8) + a;

  Serial.print("="); Serial.print(r); Serial.print("...");
  return r;
}


I also have a program in C# on my PC that is giving me the correct numbers for the same data using .NET's BitConverter.ToInt16() function. So I know the indexes I am reading are correct.

You mean char ± 128/127 ?

Which platform are you using? Is the int data type 32 or 16 bit ?

Standard 16 bit, signed with the industry-standard "two's complement" method. I believe it's universal across platforms.

Problem is: an int can have different sizes on different platforms. If you want truly universal datatypes, use something like int16_t: that is guaranteed to have exactly 16 bits, everywhere.

Take a look at this (tested on a PC):

#include <stdio.h>
#include <stdint.h>

uint8_t a = 253, b = 255;
int ret;
int16_t ret_16;

int main (void)
{
    ret_16 = ret = (b << 8) + a;
    printf ("An int is %zu bytes long.\n", sizeof (int));
    printf ("An int16_t is %zu bytes long.\n", sizeof (int16_t));
    printf ("The value as int is: %d\n", ret);
    printf ("The value as int16_t is: %d\n", ret_16);
    ret = (int16_t) ((b << 8) + a);
    printf ("With typecast, the int value is: %d\n", ret);
    return 0;
}


An int is 4 bytes long.
An int16_t is 2 bytes long.
The value as int is: 65533
The value as int16_t is: -3
With typecast, the int value is: -3

So, either save the value into an int16_t or use typecast.

Arduinos are not “standard” either. Which Arduino did you run your test program on?

What do you think the compiler will do with this?

Will it shift the byte b 8 bits to the left, effectively making b zero before adding a and converting the result to an int, or will it convert b to an int first before carrying out the shift?

65533 and -3 are the same bit pattern if you only have 16 bits (say, on an AVR).
Since "byte" is unsigned, the expression "(b << 8) + a" will also be treated as unsigned, and won't be sign-extended when assigned to a 32-bit "signed int".


 x = (int16_t)(b << 8) + a ;

works on a desktop:

#include <stdio.h>
#include <stdint.h>

uint8_t a=253, b=255;

int main() {
  int x = (b << 8) + a ;
  printf("result of int x = (b << 8) + a = %d\n", x);
  x = (int16_t)(b << 8) + a ;
  printf("result of     x = (int16_t)(b << 8) + a = %d\n", x);
  return 0;
}

result of int x = (b << 8) + a = 65533
result of     x = (int16_t)(b << 8) + a = -3

Look into using a union for this.

Bullet Proof:

void setup() {
  Serial.begin(9600);

  uint16_t temp;
  int16_t ret;
  uint8_t a = 253;
  uint8_t b = 255;

  temp = ((uint16_t) b) << 8 | a;
  memcpy (&ret, &temp, sizeof(int16_t));

  Serial.println(ret);
}

void loop() {
}

Serial Output:


Thank you so much. Your answer seems to be working for me!
Actually, it seems I only needed the one cast. Here is the final code I am using:

  byte a = ...;   // read from SD
  byte b = ...;
  int r = (int16_t)(b << 8) + a ;

And thanks to everyone else for responding as well.