Average over time (I think this should always get you better data: random noise shrinks roughly as the error divided by the square root of the number of measurements),

Drop the least significant digit because it's noise (not sure why you'd bother to drop it other than to do a little less math... if you know it's noise, just round at the end), and finally,

Add 0.07V, because he said the ADCs are unresponsive to the first 0.07V.

What I wanted to ask is: why did he bother to literally add 0.07V (by "literally" I mean that he used power and circuitry to do so)? Couldn't that just be done in software, essentially saying:

Vin = Vin + 0.07;

I'm guessing that no two boards are exactly the same, and he does this for the same reason he makes a map: to linearize for that particular board. But I just wanted to see if anyone else had other thoughts.

If I did a little testing and found that it was close to 0.07V on multiple boards, and the maps were close, I think I'd just tend to do the whole thing in software and say "Good enough" since I don't need anything like the 0.2% accuracy he obtained.

Typically, yes, but it depends on how it's behaving. If the ADC can't read anything below 0.07V, then a hardware offset is the ONLY way to actually fix it: software can't recover readings the converter never produced.

A straight-line calibration (usually in software) has an offset correction (added or subtracted) and a slope correction (a multiplication factor). Of course this "assumes" that the ADC is (mostly) linear.

So typically you measure at zero (or near zero) and add or subtract to correct to zero. If there is a positive reading with true-zero input, you simply subtract that from all readings.

If the offset is effectively negative, you'll read zero both with a true-zero input and with a slightly positive input, so you'll have to calibrate at a near-zero point where you get a real reading to know how much to add.

If the offset error is negative, those small readings won't really be correct because the ADC can't actually give you a negative reading... You can't find the offset until you get a non-zero reading.

Then, after adding the offset to every measurement, you find the correction needed at maximum, near maximum, or at the "most important" measurement, etc., and calculate that correction as a factor. The slope correction has no effect at zero, because you're multiplying by zero, so it doesn't mess up your offset correction.

This is usually done at the raw-data level before converting to voltage or anything else.