int is a type that is defined with different lengths on different systems. On the AVR Arduinos like the UNO and Mega it is 16 bits. On a Due it's 32 bits. On your computer it might be 32 or even 64 bits.
Just using int causes a huge number of issues when you move from one architecture to another, as the size of int is usually defined as the 'natural' word size for the architecture (subject to the standard's 16-bit minimum). That makes it 16 bits on the 8-bit AVRs, 32 bits on 32-bit ARM parts like the Due, and so on.
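A quick way to see this for yourself is to print sizeof(int) on each target. This is just a minimal sketch in plain C (on an Arduino you'd print via Serial rather than printf):

    #include <stdio.h>
    #include <limits.h>   /* CHAR_BIT: bits per byte, 8 on all the targets above */

    int main(void)
    {
        /* Expect 16 on an AVR (UNO/Mega), 32 on a Due or a typical PC. */
        printf("int is %zu bits\n", sizeof(int) * CHAR_BIT);
        return 0;
    }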
A signed datatype means the variable can also store negative numbers. A uint8_t is the same size as an int8_t; the difference is that the signed variable stores values from -128 to 127, while the unsigned one stores 0 to 255.
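To illustrate the difference, here is a small example (standard C, assuming a two's-complement target, which covers all the Arduinos mentioned here) showing how the two 8-bit types wrap at their limits:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t  s = INT8_MAX;    /* 127, the signed 8-bit maximum   */
        uint8_t u = UINT8_MAX;   /* 255, the unsigned 8-bit maximum */

        s = s + 1;  /* wraps to -128 on two's-complement targets       */
        u = u + 1;  /* wraps to 0; unsigned wraparound is well defined */

        printf("s = %d, u = %u\n", s, (unsigned)u);  /* prints: s = -128, u = 0 */
        return 0;
    }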
The C standard has required the minimum size of an int to be 16 bits for many decades, so there is no conforming C implementation where an int is 8 bits. (I saw one for the 6502 back in the very early '80s, but those days are long past...)
However, the reverse can also happen: if you define something as uint8_t it may be more efficient on the AVR but less efficient on other processors, which may have to do lots of masking on the register contents to ensure the upper bits are never set. stdint has a solution to help with that as well: the "least" and "fast" types, i.e. uint_least8_t, uint_fast8_t, etc. When code is used across diverse platforms, these can be a better choice for certain things like loop counters, as they guarantee at least the stated number of bits but allow the compiler to use a wider type if that is more efficient for the target processor. See the sketch below.
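As an illustration (a sketch, not a recommendation for any particular platform), a loop counter declared with one of these types lets each compiler pick its own preferred width:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* At least 8 bits, but the compiler may use a full register:
         * typically 8 bits on AVR, often 32 bits on 32-bit ARM. */
        uint_fast8_t  i;
        uint_fast16_t sum = 0;

        for (i = 0; i < 100; i++)
            sum += i;

        printf("sum = %u\n", (unsigned)sum);  /* 0 + 1 + ... + 99 = 4950 */
        return 0;
    }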
An 8-bit signed variable (assume that the result is stored in 2's complement form) therefore holds values from -128 to 127.
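For instance (a minimal sketch, again assuming the usual two's-complement representation), you can inspect the bit pattern of a negative 8-bit value by reinterpreting it as unsigned:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t v = -1;

        /* In two's complement, -1 is all ones: 0xFF, i.e. 255 unsigned. */
        printf("-1   as uint8_t: %u (0x%02X)\n",
               (unsigned)(uint8_t)v, (unsigned)(uint8_t)v);

        /* The most negative value, -128, is 0x80: only the sign bit set. */
        v = -128;
        printf("-128 as uint8_t: %u (0x%02X)\n",
               (unsigned)(uint8_t)v, (unsigned)(uint8_t)v);
        return 0;
    }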