Arduino Due: 16-bit int and other data types

Hi!

How can I get a 16-bit signed int on the Arduino Due?

I'm working with 16-bit I2C devices, and 32-bit integers make my life very difficult.

word is 16 bits, but it is unsigned.

Please, help me.

I forgot!

It would also be great to have a data type table for the Arduino Due.

I would encourage you to use stdint.h. It has types such as uint16_t, int16_t, etc., if you need to be explicit about bit widths.

Thank you!

As a side note, use the sizeof operator (it's an operator, not a function) if you're not sure how many bytes a data type occupies.

#include <stdint.h>

// Print "sizeof(type)=" followed by the size in bytes, then a blank line.
#define PRINT_SIZE(t) { Serial.print("sizeof(" #t ")="); Serial.println(sizeof(t)); Serial.println(); }

void setup() {
  Serial.begin(9600);
}

void loop() {
  PRINT_SIZE(byte);
  PRINT_SIZE(char);
  PRINT_SIZE(short);
  PRINT_SIZE(int);
  PRINT_SIZE(long);
  PRINT_SIZE(long long);
  PRINT_SIZE(bool);
  PRINT_SIZE(boolean);
  PRINT_SIZE(float);
  PRINT_SIZE(double);
  PRINT_SIZE(int8_t);
  PRINT_SIZE(int16_t);
  PRINT_SIZE(int32_t);
  PRINT_SIZE(int64_t);
  PRINT_SIZE(uint8_t);
  PRINT_SIZE(uint16_t);
  PRINT_SIZE(uint32_t);
  PRINT_SIZE(uint64_t);
  PRINT_SIZE(char*);
  PRINT_SIZE(int*);
  PRINT_SIZE(long*);
  PRINT_SIZE(float*);
  PRINT_SIZE(double*);
  PRINT_SIZE(void*);
  delay(10000);
}

I was having problems too with data type lengths in my program; it is not very well documented for the Arduino Due. The weird thing is that int is 4 bytes long and so is long, so on the Due there is no difference between int and long? Weird...
Anyway, here is the output of the above code posted by iyahdub for anyone that needs it. Again, this is the output for the Arduino Due.

sizeof(byte)=1

sizeof(char)=1

sizeof(short)=2

sizeof(int)=4

sizeof(long)=4

sizeof(long long)=8

sizeof(bool)=1

sizeof(boolean)=1

sizeof(float)=4

sizeof(double)=8

sizeof(int8_t)=1

sizeof(int16_t)=2

sizeof(int32_t)=4

sizeof(int64_t)=8

sizeof(uint8_t)=1

sizeof(uint16_t)=2

sizeof(uint32_t)=4

sizeof(uint64_t)=8

sizeof(char*)=4

sizeof(int*)=4

sizeof(long*)=4

sizeof(float*)=4

sizeof(double*)=4

sizeof(void*)=4

I must admit that I see nothing strange about the data sizes in this list.
The C language has been ambiguous about the sizes of "int", "long", "float" and the like right from the start. The whole idea was that C would be portable across multiple platforms, with each implementation interpreting the sizes to be the most appropriate for the hardware.
All the C language has ever guaranteed is minimum ranges (char is at least 8 bits, short and int at least 16, long at least 32) and that the size of a char is less than or equal to the size of a short, which is less than or equal to the size of an int, which is less than or equal to the size of a long, and similarly for the floating-point data types. It is up to the compiler to select the most appropriate size for each type.
As has already been mentioned, the "stdint.h" header was an attempt to create typedefs that let you specify an exact number of bits when this is required. However, "stdint.h" must match the compiler and the target hardware to be correct.
Having said all of that, the only time I've really needed to play with specific data type sizes is when interacting with the hardware registers of a particular device, and even then there are almost always header files that specify exactly what sizes those registers are, and the compiler will warn you if you try to (say) write too many bits to a hardware register or mix signed and unsigned values.
Susan

Thanks Susan for clarifying that. I realize now that there is really no difference between an int and a long on the Due. I was just getting confused because I'm running a program on a computer that sends and receives data to and from the Arduino, and so I had some bugs caused by the different data type lengths on the two devices.

In C (and C++) in general, "int" is typically the natural register width of the platform, hence never being "portable" between different processors; the standard only guarantees a minimum of 16 bits.
"long" is likewise a bit ambiguous, though it is at least 32 bits.

Because this was not very clearly defined in the earlier specifications, C99 added the already mentioned stdint.h specifically to provide a clear size definition for all the variations of signed and unsigned integer data types.
http://pubs.opengroup.org/onlinepubs/009695399/basedefs/stdint.h.html

Ralf

Also remember that when C was first designed it was relatively common to run on machines where bytes were 6 bits and the word length was 36... Nowadays everything is in octets (octet = "8-bit byte", more precise than just saying byte), so we never see "int6_t" - which is just as well!

Interesting, I didn't know that in the beginning a byte had 6 bits. Too bad humans don't have 8 fingers; that would make binary much easier to understand, and conversion would be super easy, as we'd probably be using base 8 in everyday life.

On some computers, a byte was the minimum number of bits required to represent meaningful data. There were even computers with a byte of only four bits. Word length is typically a multiple of the byte length, but I've come across one computer whose word length was not a multiple of its byte length. I guess bits were expensive in the old days, and the risk of data becoming misaligned did not outweigh the cost of the extra bits.