Bitwise shift behaviour

Hi all,
I am getting hopelessly confused. Here is some test code:

void setup() {
  // initialize the serial communication:
  Serial.begin(9600);
  uint8_t  x=B11110000;
  uint8_t a,b,c;
  a=x<<2;
  b=a>>2;
  c=(x<<2)>>2;
  Serial.println(x,BIN);
  Serial.println(a,BIN);
  Serial.println(b,BIN);
  Serial.println(c,BIN);

}

void loop() {

}

and here is the output:

11110000
11000000
110000
11110000

This is not what I expected. Surely the last two lines of output should be the same as each other.
Please help, thanks

Possible explanation: it seems that internally a 16-bit register is used for shifting.

The first 3 lines of output make 100% sense.

Try this variation to confirm:

void setup() {
  // initialize the serial communication:
  Serial.begin(9600);
  uint8_t  x=B11110000;
  uint8_t a,b,c,d;
  a=x<<2;
  b=a>>2;
  c=(x<<2)>>2;
  d = (uint8_t(x<<2)) >> 2;   // new: the cast truncates the intermediate to 8 bits before the right shift
  Serial.println(x,BIN);
  Serial.println(a,BIN);
  Serial.println(b,BIN);
  Serial.println(c,BIN);
  Serial.println(d,BIN);  // new
}

void loop() {}
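
Another way to confirm the same idea (just a sketch; the 0xFF mask mimics what assigning to a uint8_t does to the intermediate result):

void setup() {
  Serial.begin(9600);
  uint8_t x = B11110000;

  uint8_t masked   = ((x << 2) & 0xFF) >> 2;  // intermediate forced down to 8 bits, should print 110000
  uint8_t unmasked = (x << 2) >> 2;           // intermediate stays wider than 8 bits, should print 11110000

  Serial.println(masked, BIN);
  Serial.println(unmasked, BIN);
}

void loop() {}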

In the 'real' world, with unlimited bits, this:

c=(x<<2)>>2;

is the same as this:

c=x;

Since the two shift operations cancel each other out, it might be that the compiler 'optimized' things for you, resulting in shorter code and unexpected behavior.
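
A quick way to see that 'unlimited bits' picture is to use a wider variable so that nothing falls off the top (just a sketch, not code from the original post):

void setup() {
  Serial.begin(9600);
  uint16_t x = B11110000;      // 16 bits of headroom above bit 7
  uint16_t c = (x << 2) >> 2;  // the two shifts cancel out, should print 11110000
  Serial.println(c, BIN);
}

void loop() {}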

Thanks for your replies; that sort of confirms what I had suspected: either compiler or CPU actions that are not well documented.

It's a bit unnerving though, like when you push the brake and the thing goes faster!
So now I have to double check every line of code in case something like this crops up again.

Bryan

blt:

While perhaps not intuitively obvious, it is the result of a well-documented behavior in C.
In C, operands are promoted to int when performing arithmetic operations, and ints are 16 bits on the AVR.
The difference is that when you did the calculations individually, each result was assigned back to
an 8-bit variable. That assignment discarded any bits beyond the 8 bits the variable can hold.
But when you did it all at once, the calculation remained in 16 bits until the full expression
had been evaluated, and only then was it stored in the 8-bit variable.
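
A quick sketch to see the promotion itself (assuming an AVR where int is 16 bits): printed before any assignment, the shifted expression still holds all 10 bits, while a cast back to uint8_t throws the top 2 away.

void setup() {
  Serial.begin(9600);
  uint8_t x = B11110000;

  Serial.println(x << 2, BIN);             // evaluated as a 16-bit int, should print 1111000000
  Serial.println((uint8_t)(x << 2), BIN);  // truncated back to 8 bits, should print 11000000
}

void loop() {}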

One way pushed the interim results into 8-bit variables along the way, and the other
did the full calculation with more than 8 bits and then stuffed the final result into an 8-bit value.

To look closer at the individual calculations: you take an 8-bit value, promote it to 16 bits for
the calculation, shift it twice to the left, then take the lower 8 bits of the result and
stuff them into the 8-bit variable. (The upper 2 bits of the original value are now lost.) Now you take that
8-bit value and shift it back to the right twice. It again gets promoted to 16 bits for the operation,
but since the upper 2 bits were already lost, the resulting 8-bit value after the right shift
will not contain the original upper 2 bits.
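
Spelled out in code, the step-by-step path looks roughly like this (the int16_t intermediates are only there to make the implicit promotion visible):

void setup() {
  Serial.begin(9600);
  uint8_t x = B11110000;

  int16_t wide1 = (int16_t)x << 2;  // 1111000000 held in 16 bits
  uint8_t a = (uint8_t)wide1;       // 11000000, the original top 2 bits are gone
  int16_t wide2 = (int16_t)a >> 2;  // 110000
  uint8_t b = (uint8_t)wide2;       // 110000, not the original value

  Serial.println(a, BIN);
  Serial.println(b, BIN);
}

void loop() {}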

Now if you look at the operation that did it all in a single calculation: the 8-bit variable is promoted to
16 bits, then the left shift occurs, then the right shift occurs. Because 16 bits were available to hold the
left-shift result, when you take the lower 8 bits after the right shift,
you are exactly where you started.
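
And the all-at-once path, again with the promotion written out explicitly (a sketch):

void setup() {
  Serial.begin(9600);
  uint8_t x = B11110000;

  int16_t wide = ((int16_t)x << 2) >> 2;  // both shifts happen in 16 bits, nothing is lost
  uint8_t c = (uint8_t)wide;              // 11110000, back where you started

  Serial.println(c, BIN);
}

void loop() {}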

--- bill