It's a typecast. It tells the compiler to convert the value to a different datatype. In this case, it converts the value from an eight-bit unsigned value to a 16- or 32-bit signed value. The author would do that because the device expects the lower eight bits of the address to be in the second transmitted byte, followed by one or three bytes of zero.
There are two mistakes. The first is that the size of int is processor-dependent: on processors like AVR it will be 16 bits; on processors like SAM it will be 32 bits. The device very likely expects a specific number of bytes, though it may tolerate the extra zeros.
The second mistake is that the byte order also depends on the processor.
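One way to sidestep both problems is to split the address into bytes yourself, so the result depends neither on the size of int nor on the processor's byte order. Below is a minimal sketch of that approach, assuming a device that wants a two-byte address sent high byte first; the 0x50 device address and the writeAddress() helper are placeholders for illustration, not something from this thread:

```cpp
#include <Wire.h>

const uint8_t DEVICE_ADDR = 0x50;      // hypothetical 7-bit I2C address

// Send a 16-bit register/word address as two explicit bytes.
// This puts exactly two bytes in the transmit buffer on any board,
// regardless of how big an int is or how the CPU stores it.
void writeAddress(uint16_t addr)
{
  Wire.beginTransmission(DEVICE_ADDR);
  Wire.write((uint8_t)(addr >> 8));    // high eight bits, sent first
  Wire.write((uint8_t)(addr & 0xFF));  // low eight bits, sent second
  Wire.endTransmission();
}

void setup() {
  Wire.begin();
  writeAddress(0x0123);                // always sends 0x01 then 0x23
}

void loop() {}
```

Whether the high or low byte goes first is up to the device, so check the datasheet for the order the part actually expects.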
That is the age-old question that many have asked themselves.
It is not well documented, so everyone does whatever they feel like, which causes incompatible Wire libraries.
Thanks for the replies. What's confusing to me is that the chip is expecting one byte, not two. And I think Wire is also expecting a byte, not an integer. So I don't see what's accomplished with the int().
Assume you have a 2-bit counter which can count the rising edges of an incoming signal. How many rising edges will it register?
The count sequence is complete once the counter starts from state 00 and comes back to that initial state (state 00). To do so it must go through the following states:
00, 01, 10, 11, 00
The counter will register four rising edges of the incoming signal, so the total count is 4 (2^2).
The 11-bit counter will register 2^11 = 2048 counts, which can be numbered 1 to 2048 from a human point of view, or 0 to 2047 from a machine/computer point of view.
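To make the wrap-around concrete, here is a small C++ snippet (plain console code, not tied to any board) that simulates the 2-bit counter by masking the count to two bits; an 11-bit counter behaves the same way with a 0x7FF mask:

```cpp
#include <cstdio>
#include <cstdint>

int main() {
    // Simulate a 2-bit counter: masking with 0x03 keeps only two bits,
    // so the state sequence is 00 -> 01 -> 10 -> 11 -> back to 00.
    std::uint8_t count = 0;
    for (int edge = 1; edge <= 4; ++edge) {
        count = (count + 1) & 0x03;
        std::printf("edge %d -> state %d%d\n", edge, (count >> 1) & 1, count & 1);
    }
    // An 11-bit counter wraps the same way after 2^11 = 2048 edges
    // (mask 0x7FF), so its count values run 0..2047.
    return 0;
}
```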
@ShermanP I ask you this: you have an 11-bit address to set in order to read any byte from the memory. If you can only pass an 8-bit address, how do the other three address bits get set?
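For reference, the common arrangement on small I2C EEPROMs is that the three high address bits ride inside the control (device-address) byte, and only the low eight bits go out as the word-address byte. A sketch of that idea, assuming a 2-Kbyte part such as a 24LC16; the 0x50 base address and the readEepromByte() helper are illustrative, not something from this thread:

```cpp
#include <Wire.h>

const uint8_t EEPROM_BASE = 0x50;         // 1010 B2 B1 B0 -> block-select bits (assumed part)

byte readEepromByte(uint16_t addr)        // addr is 0..2047 (11 bits)
{
  uint8_t block   = (addr >> 8) & 0x07;   // high three address bits
  uint8_t devAddr = EEPROM_BASE | block;  // fold them into the I2C device address
  Wire.beginTransmission(devAddr);
  Wire.write((uint8_t)(addr & 0xFF));     // low eight address bits, one byte
  Wire.endTransmission(false);            // repeated start before the read
  Wire.requestFrom(devAddr, (uint8_t)1);
  return Wire.available() ? Wire.read() : 0xFF;
}

void setup() {
  Wire.begin();
  Serial.begin(9600);
  Serial.println(readEepromByte(0x123), HEX);  // block 1, offset 0x23, as a demo
}

void loop() {}
```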