uint16_t vs unsigned short

So what's the difference between uint16_t and unsigned short? Is there a difference? I've been Googling it and searching for the answer, but I cannot find it anywhere. :s

fuzzball27: So what's the difference between uint16_t and unsigned short? Is there a difference? I've been Googling it and searching for the answer, but I cannot find it anywhere. :s

On the Arduino, none, and on most systems you're likely to program, none. However, uint16_t says you must be given an unsigned integer that is exactly 16 bits, while unsigned short says you will be given an unsigned value that is at least 16 bits but could be wider. This matters more for int32_t, uint32_t, int64_t, and uint64_t, since the corresponding plain types vary more from system to system. For example, on AVR-based Arduinos int is 16 bits, but on other systems int might be 32 or even 64 bits, and long can be 32 or 64 bits. So C99 added stdint.h to give you a way to specify exact sizes of integer types, and C++ has adopted it as well (as &lt;cstdint&gt;).
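
Here's a quick sketch of the difference, in plain C99 (meant to run on a desktop compiler rather than the Arduino itself, since AVR's printf has limited format support; the variable names are just for illustration):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t exact = 123;          /* guaranteed exactly 16 bits */
    unsigned short at_least = 123; /* at least 16 bits, maybe more */

    /* Prints whatever widths this particular compiler chose. */
    printf("sizeof(uint16_t)       = %zu\n", sizeof exact);
    printf("sizeof(unsigned short) = %zu\n", sizeof at_least);
    printf("sizeof(int)            = %zu\n", sizeof(int));
    printf("sizeof(long)           = %zu\n", sizeof(long));
    return 0;
}
```

On an AVR Arduino sizeof(int) is 2, while on a typical 64-bit PC it's 4; sizeof(uint16_t) is 2 anywhere the type exists at all.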

stdint.h also has types that mean "at least N bits, but possibly more" (uint_least16_t), and "at least N bits and as fast as possible" (uint_fast16_t), which matters on machines that do arithmetic faster in larger sizes.
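
For instance, on a typical 64-bit desktop with glibc, uint_fast16_t is actually a 64-bit type because full-register arithmetic is what the CPU does fastest, while uint_least16_t is usually exactly 16 bits. A small sketch like this shows what your own platform picked (results will vary):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint_least16_t least = 0; /* smallest type with at least 16 bits */
    uint_fast16_t  fast  = 0; /* fastest type with at least 16 bits  */

    printf("uint_least16_t: %zu bytes\n", sizeof least);
    printf("uint_fast16_t : %zu bytes\n", sizeof fast);
    return 0;
}
```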

Awesome! Thanks for the reply.

The original definition of C and C++ said nothing about the exact size of the integer and float types (back then there were machines whose word size wasn't a power of two, for instance 36-bit words holding six 6-bit characters). This made any C code that depended on type sizes inherently non-portable.

The fixed-width typedef convention was developed by people writing operating systems to overcome this frailty of the language definition.