How would I do it?
long l = 2147483L;
int i = (int) l; //precision loss from long (8 bytes) to int (4 bytes)
The process that AlphaBeta showed is the way to do it. The comment is a bit misleading, though: if the value in the long variable is too large to fit in an int, it isn't precision that is lost, it is meaningful data.
I'd consider rounding 3.14159 to 3.14 to be a loss of precision. Changing 3,000,000,000 to some negative value isn't a less precise representation of the number; it's an incorrect representation of the number.
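To illustrate, here's a small self-contained sketch of what the narrowing cast does when the long value doesn't fit (3,000,000,000 is just an arbitrary example above Integer.MAX_VALUE); the Math.toIntExact call is an optional Java 8+ alternative that fails loudly instead of silently truncating:

public class NarrowingDemo {
    public static void main(String[] args) {
        long big = 3_000_000_000L;       // larger than Integer.MAX_VALUE (2,147,483,647)
        int truncated = (int) big;       // narrowing cast keeps only the low 32 bits
        System.out.println(truncated);   // prints -1294967296, not an approximation of 3 billion

        // If silently wrong data is unacceptable, Math.toIntExact throws instead of truncating:
        int checked = Math.toIntExact(2147483L);   // fine, this value fits in an int
        System.out.println(checked);               // prints 2147483
        // Math.toIntExact(big) would throw ArithmeticException: integer overflow
    }
}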