The ADC uses a Sample and Hold (S/H) capacitor (14pF) sourced from the input pin via an internal resistor. The S/H capacitor must be fully charged to the voltage you measure in order for the measurement to be accurate. This means not only charging the capacitor, but also discharging it. This happens through the internal resistor AND the impedance of the signal source. So for example (assuming the internal resistor is small and the capacitor's other plate is grounded):

a) if you measure on ADC0 a voltage of 3V with a source impedance of 1kohm, and you switch to ADC1 with 0.2V and a source impedance of 30kohm, the S/H capacitor must discharge itself from 3V to 0.2V via the 30kohm (plus internal) resistance;

b) in the opposite direction (when you switch from ADC1 to ADC0), it must charge itself from 0.2V to 3V via the 1kohm (plus internal) impedance.

So you can see how the internal charging/discharging processes differ in timing (in this example it takes 30x longer to settle when switching from ADC0->ADC1 than the other way round, not considering the internal resistor). The charging/discharging time follows the k*R*C rule, where k is the number of time constants needed for the required accuracy (k = ln(voltage step / allowed error)).
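To put the k*R*C rule into numbers, here is a minimal sketch of the settling times for the a)/b) example above. The 10-bit resolution, 5V reference, and half-LSB settling target are assumptions for illustration, not figures from the original post:

```python
import math

def settling_time(r_ohm, c_farad, delta_v, error_v):
    """Time for an RC node to settle from an initial error of delta_v
    down to error_v: t = k*R*C with k = ln(delta_v / error_v)."""
    k = math.log(delta_v / error_v)
    return k * r_ohm * c_farad

C_SH = 14e-12           # S/H capacitor from the post
HALF_LSB = 5.0 / 2048   # half an LSB of a 10-bit ADC at Vref = 5V (assumed)
DELTA_V = 3.0 - 0.2     # voltage step between the two channels

# ADC0 -> ADC1: discharge 2.8V through the 30kohm source
t_slow = settling_time(30e3, C_SH, DELTA_V, HALF_LSB)
# ADC1 -> ADC0: charge 2.8V through the 1kohm source
t_fast = settling_time(1e3, C_SH, DELTA_V, HALF_LSB)

print(f"30kohm source: {t_slow * 1e6:.2f} us")  # ~2.96 us
print(f" 1kohm source: {t_fast * 1e9:.0f} ns")  # ~99 ns
```

The 30x ratio between the two times is exactly the ratio of the source impedances, since k and C are the same in both directions.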

Therefore you have basically 2 options:

1. wait long enough for the S/H capacitor to charge or discharge (sized for the worst-case combination of source impedance and voltage difference), and only then start the AD conversion

2. use input buffers (i.e. rail-to-rail opamps) with low output impedance (typically ~10ohm), so the charging/discharging runs much faster.

Even then, the internal resistor still limits how fast the S/H capacitor charges, though.
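To see why the internal resistor still matters once a buffer is in place, here is a hypothetical comparison. The 2kohm internal resistance is an assumed figure for illustration only (check your ADC's datasheet), as are the 10-bit/5V/half-LSB settling assumptions:

```python
import math

def settling_time(r_total_ohm, c_farad, delta_v, error_v):
    # t = k*R*C with k = ln(delta_v / error_v);
    # R is the total of source impedance plus internal resistance
    return math.log(delta_v / error_v) * r_total_ohm * c_farad

C_SH = 14e-12
R_INT = 2e3             # ASSUMED internal resistance, illustration only
HALF_LSB = 5.0 / 2048   # half an LSB, 10-bit ADC, Vref = 5V (assumed)

t_direct = settling_time(30e3 + R_INT, C_SH, 2.8, HALF_LSB)  # 30kohm source
t_buffered = settling_time(10 + R_INT, C_SH, 2.8, HALF_LSB)  # ~10ohm opamp

print(f"direct:   {t_direct * 1e6:.2f} us")   # ~3.16 us
print(f"buffered: {t_buffered * 1e9:.0f} ns") # ~198 ns
```

With the buffer, the assumed internal resistance dominates the total R, so the settling time is now set by the ADC itself; an even faster opamp would gain little more.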

p.