union int16_byte {   // overlays a 16-bit integer on its two component bytes
    int16_t i;
    byte b[2];       // b[0] is the low byte of i on little-endian AVRs
} rtcTemp;
He can then access the two bytes read in as a single int16_t.
As you can see from the datasheet segment you posted, the "raw" temperature will be 64x too big because the lowest data bit sits in bit 6 of the LSB register. He divides by 64 so that the bottom bit of the 10-bit code ends up in bit 0 of the int16_t.
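Putting that together, here's a minimal sketch of what such a read might look like. This is not the original poster's code; it assumes the RTC in question is a DS3231 at I2C address 0x68, and a little-endian AVR so that b[0] is the low byte of i:

#include <Wire.h>

union int16_byte {
    int16_t i;
    byte b[2];
} rtcTemp;

int16_t readRawTemp() {
    Wire.beginTransmission(0x68);   // DS3231 I2C address (assumed)
    Wire.write(0x11);               // point at the temperature MSB register
    Wire.endTransmission();
    Wire.requestFrom(0x68, 2);
    rtcTemp.b[1] = Wire.read();     // 0x11: integer part -> high byte
    rtcTemp.b[0] = Wire.read();     // 0x12: fraction in bits 7:6 -> low byte
    return rtcTemp.i / 64;          // drop the 6 unused bits; still 4x degC
}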
At this point he returns the result, which, as the comment says, is still 4x too big. The reason is explained in the datasheet's description of the Temperature Registers:
Temperature is represented as a 10-bit code with a resolution of 0.25°C and is accessible at location 11h and 12h. The temperature is encoded in two’s complement format. The upper 8 bits, the integer portion, are at location 11h and the lower 2 bits, the fractional portion, are in the upper nibble at location 12h. For example, 00011001 01b = +25.25°C.
The LSB of the returned value actually represents 0.25°C, so to get the "real" temperature you still need a further divide by 4.
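Applying that to the datasheet's example: 00011001 01b is 101 decimal, and 101 x 0.25 = +25.25°C. If you want degrees as a float in a single step, note that 64 x 4 = 256, so (my shortcut, not the original code):

float degC = rtcTemp.i / 256.0;   // 64 (bit alignment) x 4 (0.25 degC per LSB)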
Note that you can just read the signed byte at address 0x11 to get the temperature in whole degrees. Whether reading the fractional part is of any value is somewhat debatable, as the datasheet gives the temperature accuracy as ±3°C.
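For that simpler approach, something like this should do (again assuming a DS3231 at 0x68):

int8_t readWholeDegrees() {
    Wire.beginTransmission(0x68);
    Wire.write(0x11);               // temperature MSB register
    Wire.endTransmission();
    Wire.requestFrom(0x68, 1);
    return (int8_t)Wire.read();     // two's complement whole degrees C
}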