Is this really the best way to read and write large numbers with Serial?

Serial.write(x / 256); Serial.write(x - (x / 256) * 256);  // Write two byte integer to serial.

Serial.read() * 16777216 + Serial.read() * 65536 + Serial.read() * 256 + Serial.read();  // Read 4 byte long from Serial

It looks sooo ugly...

Well, that's headed in the right direction. What you want to do is break your integer into a series of bytes. That's what you appear to be doing. This is also where bit masks can be your friend. BTW, these are frequently referred to as multi-byte values.

Using bit-masks, the following will break a 16-bit integer into two 8-bit values.

int x = 0x1234 ;
Serial.write( (uint8_t)((x & 0xFF00) >> 8) ) ;
Serial.write( (uint8_t)(x & 0x00FF) ) ;

Conceptually the above works, but it still has potential issues. This is where CPU endianness comes into play. It matters because while you think you're communicating the value 0x1234, you may actually be communicating 0x3412, or some other value entirely, depending on the MSB/LSB ordering of whatever it is you're communicating with.
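One way around that is to just pick a byte order for the wire and stick to it at both ends. A minimal sketch of the idea, assuming MSB-first ("network order") on the wire; the helper names are only illustrative, not anything standard:

// Send a 16-bit value MSB first, regardless of the CPU's native byte order.
void writeUint16BE(uint16_t v) {
  Serial.write((uint8_t)(v >> 8));    // high byte first
  Serial.write((uint8_t)(v & 0xFF));  // low byte second
}

// Reassemble on the receiving end, again MSB first.
// (Assumes both bytes have already arrived; a real protocol should
// check Serial.available() or use a timeout.)
uint16_t readUint16BE() {
  uint16_t hi = Serial.read();
  uint16_t lo = Serial.read();
  return (hi << 8) | lo;
}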

Generally speaking, people create functions which perform these conversions based on the type of the argument (e.g. short, int, long, long long). Also, while I've not looked at the networking interfaces for IP, such needs are common there, which is to say the byte ordering has long been standardized for efficient and interoperable communication. So you might want to take a peek at the socket utilities and see if such support functions already exist.
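For what it's worth, on the PC side of the link the sockets API already has these: htons()/ntohs() and htonl()/ntohl() in <arpa/inet.h> convert between host and network (big-endian) order. As far as I know the Arduino core doesn't ship them, so on the AVR end you'd roll your own with shifts as above. A quick PC-side sketch:

// PC-side example using the standard socket byte-order helpers.
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t x = 0x1234;
    uint16_t wire = htons(x);    // host order -> network (big-endian) order
    uint16_t back = ntohs(wire); // network order -> host order
    printf("host 0x%04X  wire 0x%04X  back 0x%04X\n",
           (unsigned)x, (unsigned)wire, (unsigned)back);
    return 0;
}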

Edit: It just occurred to me I made an error. Came back to check. Sure did. Added bit shifts.

I prefer shifts, because the numbers are smaller and less likely to be mistyped.
A union can also work, though watch out for endian issues.
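For the 4-byte long in the original post, the two approaches look roughly like this (a sketch only; the union's byte order follows whatever the CPU does, which is exactly the endian issue mentioned above):

// Shift version: the wire order is whatever you decide (MSB first here),
// independent of the CPU's endianness.
void writeLongBE(uint32_t v) {
  Serial.write((uint8_t)(v >> 24));
  Serial.write((uint8_t)(v >> 16));
  Serial.write((uint8_t)(v >> 8));
  Serial.write((uint8_t)v);
}

// Union version: less typing, but the order of b[0]..b[3] is the CPU's
// native order, so both ends must match (or you swap bytes yourself).
union LongBytes {
  uint32_t value;
  uint8_t  b[4];
};

void writeLongNative(uint32_t v) {
  LongBytes u;
  u.value = v;
  Serial.write(u.b, 4);  // sends the bytes in the CPU's native order
}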

Greg, in the case of Atmel 8 bit processors, does GCC use big or little endian?

Shamefully, off the top of my head, I don't know. A quick check of Wikipedia doesn't really shed light - shame on them. I did find a reference to little endian, which is what I expected.

Also, unless a CPU has bi-endian support, the CPU always dictates the byte ordering used by the compiler, rather than the other way around.

Unverified, but I believe it's little endian.
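If you don't want to take my word for it, a quick test sketch along these lines should settle it at runtime (untested, just the idea):

// Prints which byte of a 16-bit value sits at the lower address.
void setup() {
  Serial.begin(9600);
  uint16_t probe = 0x0001;
  uint8_t firstByte = *(uint8_t *)&probe;  // byte at the lowest address
  if (firstByte == 0x01) {
    Serial.println("little endian");
  } else {
    Serial.println("big endian");
  }
}

void loop() {}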

gerg:
Shamefully, off the top of my head, I don't know. A quick check of Wikipedia doesn't really shed light - shame on them. I did find a reference to little endian, which is what I expected.

Also, unless a CPU has bi-endian support, the CPU always dictates the byte ordering used by the compiler, rather than the other way around.

Unverified, but I believe it's little endian.

Well don't be ashamed, I did check a bit before I asked. In fact a friend of mine and I are working up a binary communications protocol to make it talk to a PC, so we're trying to decide how to handle it. But, I didn't realize an 8-bit processor had native endian-ness. I thought it was imposed by the compiler. That's not true I guess?

But, I didn't realize an 8-bit processor had native endian-ness

Way back, the Intel 8080 was little endian, and the Motorola 6800 was big endian.
Maybe even earlier.

skyjumper:

Well don't be ashamed, I did check a bit before I asked. In fact a friend of mine and I are working up a binary communications protocol to make it talk to a PC, so we're trying to decide how to handle it. But, I didn't realize an 8-bit processor had native endian-ness. I thought it was imposed by the compiler. That's not true I guess?

Keep in mind the bit-width only describes the width of the CPU's data bus. It does not necessarily describe the width of data on which its instructions can operate. This is why AVR has instructions which typically take one cycle, that is, when it's operating on 8-bit values. But it also recognizes some 16-bit instructions and operations, and these typically take, surprise, two cycles or more, depending on what it's trying to do. So while it's technically an 8-bit CPU, it does support some 16-bit instructions. Speaking in generalities, everything else is built up from multiples of these.

Now please keep in mind I've barely looked at AVR's asm, and I've slept since I last looked, but based on what I recall, everything above is correct. I'm sure if I misspoke, I'll be quickly corrected. :wink:

Thanks very much!