I was struggling with the Due analog inputs when I noticed that after a short time the reading was dropping, depending on the temperature of the CPU (rising above approx. 33°C). In the enclosed example program I was feeding 1000mV to A0..A5 (set via a 5k potentiometer and a 33nF capacitor). If the pause before a new analog read exceeds approx. 500ms, the reading drops after a short time from ~1233 to ~850 for the first input being read; the following inputs show a smaller deviation. Interchanging the inputs moves the fault to whichever input is read first. Above approx. 37°C the deviation does not continue to rise. Executing a block of input readings shortly before the final reading improves the accuracy (block commented out in the sample).
The enclosed output.txt shows the bad readings which occur each time the Due is restarted after it has warmed up to approx. 36°C with the enclosed test program (stop and restart the serial monitor).
Found the same behaviour on my second Due.
Can anyone else confirm this strange behaviour?
Feb. 05, 2013: Addition to my above posting:
The problem described above arose while I was working on a project to log the behaviour of a 2V lead battery cell over a longer period (some days). The readings are shown on an LCD, the data is logged to SD card every 10 minutes, and the circuit for discharge and measurement of the discharge current was already working. To test the electric circuit and program, I applied a constant voltage to analog inputs A0 and A1. I then noticed the drop in the readings some time after the circuit was powered, although the voltage was kept constant.
To verify the cause and eliminate any outside influence, I removed all additional hardware and software not necessary for the measurement (LCD and SD card circuitry etc.) and showed the readings only via USB serial output on the desktop. The behaviour did not change. By chance I found that the drop depends on the processor's temperature: applying a piece of cold metal to its surface made the readings return to the correct values. The suspicion that one input was damaged was disproved by swapping the inputs, which showed the same drop in readings for the first input being read after a pause, revealing the second strange influence: the timing.
To achieve a controllable environment, I mounted a heat sink on the processor, with a temperature sensor placed directly on the processor's surface in a groove I machined into the bottom of the heat sink (see the enclosed picture of the test arrangement). Two resistors are used to warm up the heat sink to set different temperatures on the processor. A fixed voltage of ~1000mV is fed to all 6 analog inputs. The minimized test program (as attached to the first post) makes it possible to vary the timing and sequence of the analog readings, which are sent via USB to the desktop.
I found that the drop in readings is not caused by the flow of the program, but occurs immediately in the first second after program start: a delay(2000) before the loop shows the dropped value right at the first reading.
The drop occurs with all settings of analogReadResolution():
analogReadResolution(12): 1235 -> 865
analogReadResolution(11): 619 -> 434
analogReadResolution(10): 309 -> 216
Repeatedly changing the resolution does not alter the behaviour.
As the ambient temperature here is currently approx. 21°C, I wonder what will happen in summer, when the ambient temperature rises above the critical value of approx. 33°C at which I found the ADC readings start to drop.
I need a reliable, high-accuracy readout of the analog inputs. My tests show that a heat sink, plus executing a series of dummy readings immediately before the reading that is finally used, improves the result, but there remains the uncertainty, and the impression that this is only a simple workaround which does not eliminate the underlying cause of such misreadings.
Any solutions, proposals or confirmations are highly appreciated!
DUE_ADCtest.ino (1.33 KB)
output.txt (951 Bytes)
Could you be seeing something to do with this issue (quoted below, spelling errors included)?
SAM3X datasheet, page 1454:
ADC: Wrong First Conversions
The first conversions done by the ADC may be erroneous if the maximum gain (x4 in single ended or x2 in differential mode) is not used. The issue appears after the power-up or if a conversion has not been occured for 1 minute.
Three workarounds are possible:
- Perform 16 dummy conversions on one channel (whatever conditions used in term of setup of gain, single/differential, offset, and channel selected). The next conversions will be correct for any channels and any settings. Note that these dummy conversions need to be performed if no conversion has occured for 1 minute or for a new chip start-up.
- Perform a dummy conversion on a single ended channel on which an external voltage of ADVREF/2 (+/-10%) is applied. And use the following conditions for this conversion: gain at 4, offset set at 1. The next conversions will be correct for any channels and any settings. Note that this dummy conversion needs to be performed if no conversion has occured for 1 minute or for a new chip start-up.
- Perform a dummy conversion on a differential channel on which the two inputs are connected together and connected to any voltage (from 0 to ADVREF). And use the following conditions for this conversion: gain at 4, offset set at 1. The next conversions will be correct for any channels and any settings. Note that this dummy conversion needs to be performed if no conversion has occured for 1 minute or for a new chip start-up.
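The first of the quoted workarounds can be folded into a small Due sketch along these lines (a sketch of my own, not from the attached test program; the count of 16 comes from the errata text, pin A0 and 12-bit resolution match the test setup described earlier):

```cpp
// Sketch of the errata workaround on an Arduino Due: perform 16 dummy
// conversions immediately before the conversion that is actually used.

const int DUMMY_READS = 16;

void setup() {
  Serial.begin(9600);
  analogReadResolution(12);   // 12-bit readings, as in the tests above
}

// Discard DUMMY_READS conversions, then return one "real" reading.
int analogReadStabilized(int pin) {
  for (int i = 0; i < DUMMY_READS; i++) {
    (void)analogRead(pin);    // result intentionally ignored
  }
  return analogRead(pin);
}

void loop() {
  Serial.println(analogReadStabilized(A0));
  delay(1000);                // pause long enough to provoke the drop
}
```

Note this only wraps the dummy-conversion workaround; it does not address the temperature dependence reported in this thread.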
Thank you very much for your hint.
I had checked the SAM datasheet carefully, but saw only the ADC section, not these remarks.
Obviously the tendency is similar, but what I found seems to be a different misbehaviour. It depends on temperature and occurs much faster (only approx. 1000ms after a pause in reading). After a restart the first readings seem to be OK and then drop to a constant wrong value, which is the opposite of the SAM description above.
I had also found "dummy conversions" as a workaround by experimenting, but they do not seem to solve this problem cleanly and completely.
Maybe my two boards are defective, or is it a bad implementation of the analogRead command?
Today I received a brand new Due and tested it very carefully with the minimum necessary connections (to avoid any damage). I used the latest software, v1.5.2.
Unfortunately it still shows the same buggy behaviour which confirms my earlier findings:
The voltage reading on an analog pin drops significantly with rising die temperature of the processor if the reading is executed after a pause of more than approx. 1 second. As soon as a cool heat sink is placed directly on the surface of the processor, the readings return to the correct values. It is possible to minimize this weird behaviour by executing a series of dummy readings immediately before a final, undelayed measurement.
All users who depend on reliable readings from the analog pins should be warned and take countermeasures!
I see that you are using a 2V battery. I'm not sure this voltage is within the working range of the ADC.
If powered from the normal power plug, that is another matter:
From what I see in the schematics, AVREF is either 3.3V or an external source. Did you connect an external source?
Furthermore, the wiring_analog API provides raw access to the ADC inputs. Did you implement a calibration in your sketch to ensure the values are processed correctly?
Anyway, I don't think that your issue is covered by the Errata.
Thank you for your reply.
The 2V battery was the initial object to be measured at the analog pin; the power was supplied by USB 5V!
After the first indications of incorrect measurement results, I simplified the circuit so that the analog measurement is taken only from a resistor divider (as described below), powered from USB 5V, over which the measurement results are also shown via the serial monitor on the PC (Serial.print).
The unexpected thing is that the readings drop as described in my previous postings, and after a Due reset or a restart of the serial monitor (which also restarts the Due) the first readings are correct again, before the measurements drop once more to significantly smaller, wrong values. For me this is an indication that there is something wrong with the analogRead command, not a hardware problem.
It would be helpful if someone could confirm this. The only hardware needed for a test is a resistor of approx. 2 kOhm connected from 3.3V to A0 and a second resistor of half that value from GND to A0. This divider applies approx. 1.1V to A0 (1/3 of 3.3V). Running my test program should demonstrate the problem on the serial monitor.