There appears to be a defect, or at least a trap, in the long map() function.

From what I read, version 11 introduced a long map() function that was based on a float function. The problem I see is the scaling. In the float world, [0.0,7.0]-->[0.0,3.0] is a 3/7 scaling, because we are mapping 7 unit intervals onto 3 unit intervals. From my perspective, in the digital world, [0,7]-->[0,3] is a 4/8 scaling, since it is mapping 8 discrete items onto 4 discrete items. The conversion from float to int doesn't seem to adjust for this difference in what is being counted.

Using map(i,0,7,0,3), I'd like to see 0,1,2,3,4,5,6,7,... map to a nicely, evenly partitioned 0,0,1,1,2,2,3,3,... but due to the scaling (not truncation), it maps to 0,0,0,1,1,2,2,3,... To emphasize that this is a scaling issue, 63 with the same arguments maps to 27 instead of the expected 31.

I don't know what I'm talking about, but it seems to me that map() should be corrected as follows:

long map(long x, long in_min, long in_max, long out_min, long out_max)
{
    return (x - in_min) * (out_max - out_min **+ 1**) / (in_max - in_min **+ 1**) + out_min;
}

Alternatively, the documentation could warn people who think like me that they really need to supply the first *out-of-range* integer for the max values. That is, if one wants to map [0,7] to [0,3] as discrete values, then one should specify map(i,0,7**+1**,0,3**+1**), but that seems like a workaround.

Am I thinking right?

Afterthought: the mappings appear to be "open interval" mappings. That is, [0,7)[7,14)...-->[0,3)[3,6).... Digitally, we are dealing with closed sets, e.g. {0,1,2,3,4,5,6,7}, so if a user wants to use an open-interval-related function on that closed set, they'd have to simulate it with an open interval [0,8) to be successful. I suspect we don't want users to have to convert their closed sets to open intervals, but should provide a closed-set mapping instead.