But why do I declare it as long (or unsigned long; neither worked) if the compiler still treats it like an int?
I get that it does, but is there any higher philosophy behind it?
Yes. Without at least one suffix (U, L, or LL, or the lowercase variants u, l, and ll -- though you really don't want to use lowercase 'l', since it is easily confused with the digit '1'), the compiler gives the constant a type based on its value (demonstrated in the sketch after this list). For an octal or hexadecimal constant, the type is the first of the following that can represent the value; a decimal constant (since C99) skips the unsigned types, going from signed int to signed long to signed long long:
- If the value is between INT_MIN and INT_MAX inclusive, the type is signed int
- If the value is between INT_MAX+1 and UINT_MAX inclusive, the type is unsigned int
- If the value is between LONG_MIN and INT_MIN-1, or between UINT_MAX+1 and LONG_MAX, the type is signed long
- If the value is between LONG_MAX+1 and ULONG_MAX, the type is unsigned long
- If the value is between LLONG_MIN and LONG_MIN-1, or between ULONG_MAX+1 and LLONG_MAX, the type is signed long long
- If the value is between LLONG_MAX+1 and ULLONG_MAX, the type is unsigned long long
- If the value cannot be represented as a signed long long or unsigned long long value, it is an error
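Since C11 you can ask the compiler which type it picked, via _Generic. Here is a minimal sketch, assuming a C11 compiler on a typical LP64 platform (32-bit int, 64-bit long); the TYPE_NAME macro is just an illustrative helper, not a standard facility:

```c
#include <stdio.h>

/* Maps the type of its argument to a printable name (C11 _Generic). */
#define TYPE_NAME(x) _Generic((x),                 \
    int:                "signed int",              \
    unsigned int:       "unsigned int",            \
    long:               "signed long",             \
    unsigned long:      "unsigned long",           \
    long long:          "signed long long",        \
    unsigned long long: "unsigned long long",      \
    default:            "something else")

int main(void)
{
    printf("40         -> %s\n", TYPE_NAME(40));         /* signed int */
    printf("1000       -> %s\n", TYPE_NAME(1000));       /* signed int */
    printf("3000000000 -> %s\n", TYPE_NAME(3000000000)); /* signed long: a decimal
                                                            constant skips unsigned int */
    printf("0xB2D05E00 -> %s\n", TYPE_NAME(0xB2D05E00)); /* unsigned int: the same value
                                                            written in hex considers it */
    printf("40000U     -> %s\n", TYPE_NAME(40000U));     /* unsigned int, forced by suffix */
    printf("40000L     -> %s\n", TYPE_NAME(40000L));     /* signed long, forced by suffix */
    return 0;
}
```

On a 16-bit-int implementation like the one assumed below, the same experiment would report signed int for 40 and 1000, and signed long for the decimal constant 40000.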
Since both 40 and 1000 are greater than or equal to -32768 (INT_MIN on a 16-bit-int implementation) and less than or equal to 32767 (INT_MAX), the types are signed int. If you had written 40000 as a decimal constant, the type would have been signed long (decimal constants skip the unsigned types); written in hex as 0x9C40, it would have been unsigned int.
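This is also the answer to the original question: declaring the variable as long does not retype the constant. The int-typed constant is merely converted to long when it initializes the variable; if you want the constant itself to have type long, that is what the suffixes are for. A short sketch (the variable names are made up for illustration):

```c
#include <stdio.h>

int main(void)
{
    long n = 40;    /* the constant 40 has type int; it is converted to long
                       when it initializes n -- the declared type of n does
                       not change the type of the constant itself */
    long m = 40L;   /* 40L has type long from the start; same stored value */

    unsigned long long bit = 1ULL << 40;  /* without ULL, 1 << 40 would shift a
                                             32-bit int past its width: undefined
                                             behavior, not a big number */
    printf("%ld %ld %llu\n", n, m, bit);
    return 0;
}
```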
In the ISO C world, this is called value preserving, and it is one of the changes the original C standards committee made relative to the original K&R C language, which was somewhat looser and would go from signed int straight to signed long without considering unsigned int (called sign preserving).