What's the difference between using "byte" and using "uint8_t" in functions?
I've discovered that this function works with both, but I've seen a lot of people using uint8_t instead of byte.
What's the difference between using "byte" and using "uint8_t" in functions?
The size of a byte is up to the compiler writer. The size of a uint8_t is fixed: every compiler in the world has to allocate exactly 8 bits and treat the value as unsigned.
On the Arduino, they are both implemented the same way.
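For example (a minimal sketch; the function name and pin number are invented for illustration), a function declared with uint8_t happily accepts a byte argument and vice versa, because on the Arduino they name the same type:

```cpp
// Hypothetical example: byte and uint8_t are the same type on Arduino,
// so either spelling works in the parameter list.
void setLevel(uint8_t level) {   // could equally be declared as (byte level)
  analogWrite(9, level);         // pin 9 chosen arbitrarily for the example
}

void setup() {
  byte b = 128;
  uint8_t u = 128;
  setLevel(b);   // compiles and behaves identically
  setLevel(u);
}

void loop() {}
```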
So does that mean that uint8_t and uint16_t are universal to all (or most) C/C++/C# compilers?
So with the Arduino compiler, uint8_t and byte are the same, and uint16_t and int are the same.
I think I'll stick with the uintXX_t variants, since we're going to program in C# next semester at school. Better to keep it universal.
hansibull:
So does that mean that uint8_t and uint16_t are universal to all (or most) C/C++/C# compilers?

No. However, writers of embedded software often define these types so that they have types they know are, for example, exactly (or at least) 16 bits long, regardless of whether the compiler implements 'int' as 16 or 32 bits. The issue doesn't arise in C# or Java, because the size of all the basic types is defined by the language.
Yes, with the Arduino compiler uint8_t and byte are the same; but uint16_t and int are not: uint16_t is the same as 'unsigned int' on the Arduino.
The uintXX_t and intXX_t types are defined in the standard header "stdint.h" (which "inttypes.h" also pulls in, along with the printf/scanf format macros). They were added in C99 because people got tired of not being able to count on the size and signedness of a particular type.
"int" is typically a signed integer of whatever that platform's native data type is. On a 32-bit PC, it's usually a 32-bit signed integer. But the compiler is free to decide exactly how big to make it, so you can't ever guarantee it. The C standard says short <= int <= long. They could all be 8-bit, all be 64-bit, or anything in between, provided each size fits in that "less then or equal to" relationship.
The other thing that trips people up is "char". It can be equivalent to uint8_t or int8_t, or it might not be 8 bits at all, though that's fairly rare. On Arduino, char is int8_t and byte is uint8_t. There are only a few built-in data types; everything else is typedef'd, which defines a new name that is just an alias for a "real" type. In the AVR "stdint.h" you'll find a section like this:
typedef unsigned char uint8_t;
typedef signed char int8_t;
typedef unsigned int uint16_t;
typedef signed int int16_t;
typedef unsigned long uint32_t;
typedef signed long int32_t;
From this you can see, e.g., that uint8_t and unsigned char can be used interchangeably because they are literally the same type. It's just an alias.
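To illustrate (a small sketch; the function names are invented), a function written in terms of unsigned char accepts a uint8_t with no conversion at all, and you cannot define a separate overload for uint8_t alongside it, because the compiler sees one and the same type:

```cpp
#include <stdint.h>

// On AVR, uint8_t is a typedef for unsigned char, so the two names refer to
// exactly the same type.
void show(unsigned char v) {
  // ... do something with v ...
}

// void show(uint8_t v) { }   // would be a redefinition of show() above, not an overload

void demo() {
  uint8_t x = 42;
  show(x);   // no conversion needed: x already has type unsigned char
}
```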
hansibull:
So with the Arduino compiler * * * uint16_t and int are the same.
Depends on what you mean by "Arduino compiler".
On AVR-based processors the compiler uses 16 bits for an "int".
On ARM- or PIC32-based processors the compiler uses 32 bits for an "int".
(The C standard requires that int be at least 16 bits.)
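An easy way to see what your particular board's compiler chose is to print a few sizeof values (a throwaway sketch; the baud rate is arbitrary):

```cpp
void setup() {
  Serial.begin(9600);
  Serial.print("int: ");
  Serial.print((unsigned) sizeof(int));        // 2 on AVR boards, 4 on ARM/PIC32 boards
  Serial.print("  long: ");
  Serial.print((unsigned) sizeof(long));       // 4 on AVR and on most 32-bit boards
  Serial.print("  uint16_t: ");
  Serial.println((unsigned) sizeof(uint16_t)); // always 2
}

void loop() {}
```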
There are also other types that can be useful.
The "least" and "fast" types.
Integer types having at least the specified width
uint_leastXX_t
int_leastXX_t
Integer types that are usually fastest while having at least the specified width
uint_fastXX_t
int_fastXX_t
These give the compiler some flexibility to make things more efficient.
In most situations, plain "int" is the best choice for simple small integers.
But this isn't true for the 8 bit AVR. A uint8_t or int8_t is often much better and faster on AVR.
But those are often worse on larger processors.
The least and fast types let the code writer give the compiler a hint so it can choose the optimal size based on what is actually needed.
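A small sketch of how that looks in practice (the widths in the comments are typical, not guaranteed):

```cpp
#include <stdint.h>

uint_least16_t total = 0;   // at least 16 bits, as small as the platform offers
uint_fast8_t   i;           // at least 8 bits, whatever width is fastest here

void accumulate() {
  for (i = 0; i < 100; i++) {
    total += i;             // on AVR, uint_fast8_t is typically 8 bits;
  }                         // on a 32-bit ARM it may well be 32 bits
}
```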