# Lipo battery monitor with Arduino Uno

Hi all,

I am using a single-cell 3.7-4.2V LiPo and would like to monitor the battery voltage and indicate if it goes below, say, 3.5V. I am trying to put together the most minimal circuit.

Would a voltage divider (R1-R2) network hooked up to an analog pin do the job? Since I don't want my battery to discharge too much through the divider, can I use large values for the divider resistors? I know there is some issue that prevents that (the input impedance of the ADC, I suppose?), but why it wouldn't work is not clear to me. Can someone please explain exactly what the problem would be? (I have read there could be timing issues with the ADC's sampling.)

I have also read that putting a capacitor between the analog pin and ground would help solve this problem. If it does, can someone explain why?

How is the Arduino powered? With the same LiPo cell, or with 5V?
Is the Arduino Uno sometimes switched off (without power)?

A voltage divider with two transistors is used when a voltage larger than 5V is measured, for example a 12V battery.
The rule of thumb is: reading an analog voltage at an analog pin is accurate when the impedance of the circuit is 10k or less.
However, for a voltage divider you can use resistors of megaohms. With those values it is advised to use a capacitor of maybe 10nF to get rid of noise and to keep the voltage stable during the ADC conversion.

However, all that talk about voltage dividers is not needed when measuring a voltage below 5V.

A voltage divider with two transistors...

I have a feeling you meant two resistors.

A good way to measure battery voltage is using the internal bandgap reference (an external one connected to an ADC input would be more accurate); that way it works reasonably well even if the ATmega is powered directly from the battery. I believe there are some code examples around if you search for something like “Battery Voltage + Bandgap + Arduino”.

Thank you guys for your replies.

@Peter_n

The rule of thumb is: reading an analog voltage at an analog pin is accurate when the impedance of the circuit is 10k or less.
However, for a voltage divider you can use resistors of megaohms. With those values it is advised to use a capacitor of maybe 10nF to get rid of noise and to keep the voltage stable during the ADC conversion.

This is the process I am trying to understand. Why is the reading accurate when the impedance of the network seen from the ADC pin is 10k or less? What happens when it's more? So is it the case that, if I have an R1-R2 network (R2 connected between the ADC pin and GND), R2 can never be above 10k? (Or effectively R1 || R2?)

Now suppose I cannot get (R1 || R2) near 10k, as in the battery-sensing case where I want the maximum resistor values to reduce the current flowing in the divider network: what would happen?
Suppose (R1 || R2) is around 100k (and I cannot modify the circuit to reduce the impedance); what would the problem be when I read from the ADC pin?

As @Peter_n mentioned, how would a 10nF capacitor help in this case?

Sorry for a lot of questions, but I am trying to understand this process clearly.

Also, I am asking these questions for a custom board I am building based on the ATmega128RFA1. The architecture is, I guess, similar, hence I thought of extrapolating from the Uno's ATmega328. (Although in the ATmega128RFA1's case the input impedance limitation for the ADC is around 3k.) Also its maximum voltage is 3.3V, hence I cannot feed the battery directly to an ADC pin to measure the voltage.

@Tom Carpenter

A good way to measure battery voltage is using the internal band gap reference

Thanks for that. I will check that out too.

The bandgap voltage is not complex to use; nevertheless I made a little library called VoltageReference, available here. I highly recommend calibrating your MCU first; then you'll be able to run analog measurements in a much more reliable and precise way (this is due to the ATmega bandgap tolerances).

Regarding the impedance question, please consider my answer a non-expert one, which means it might be imprecise or totally incorrect; I'll wait for confirmation from someone with more expertise.

The ADC uses a little capacitor (7pF, if my memory doesn't fail me) as a voltage buffer to compare against the reference voltage: if you have a high-impedance source, that capacitor will not reach voltage equilibrium before the ADC comparison starts. This is valid in both directions, charging the capacitor and discharging it. The input impedance limit is thus determined by the ADC conversion time and the RC time constant of the input. I'm not sure what the ADC conversion time is, but the 10k maximum impedance is an indication of the required RC time constant of the circuit with respect to the ADC conversion time...

You can trick the ADC by performing multiple reads from that input, throwing them all away and taking only the last one as valid: this helps the capacitor reach equilibrium with your input, at the cost of the time spent performing multiple ADC conversions.
How many readings you have to throw away depends on the input impedance: the higher the impedance, the more time the capacitor needs to charge or discharge to the input level.

Please anybody fix anything wrong I said as this will be very appreciated!

When an analog value is being converted to a digital value, the accuracy is okay when the impedance is 10k or lower. I think it has to do with the input circuit and the feedback to the pin inside the ATmega chip while the conversion is busy. The impedance is R1 || R2. But the 10k is just a rule of thumb; you won't notice a difference with 20k.

Suppose you want to measure a battery of 9...12V with a 5V Arduino. R1 can be 22k and R2 can be 15k; the impedance R1 || R2 is below 10k.
To measure 3.0V to 4.2V, it depends which voltage reference you are going to select.
Suppose the 1.6V reference: you could use R1 = 470k and R2 = 220k. The impedance is 150k. That means a low leakage current, but a less accurate result.

Around 100k is fine with me. The 10-bit ADC output will be less accurate, suppose only 8 bits accurate. But when you average a number of samples you can get 9 bits of accuracy.
Measuring with 470k and 220k at 9 bits gives about 10mV resolution. Is that okay with you?

In a very noisy environment (like a car) you better keep the impedance of every point in the circuit low. Maybe no more than about 5k. But I assume you don't need that.

A capacitor of 10nF (say 1nF...47nF) in parallel with R2 (from the ATmega analog input to GND) lowers the noise. It also keeps the value stable during the ADC conversion. Maybe averaging 100 samples in software gives the same result; but when you have that capacitor and still calculate the average of 100 samples, you end up with a good result.

Can you explain the input impedance limitation of around 3k for the ADC? Is that in the datasheet of the ATmega128RFA1, or in an application note?

(While I was typing this, rlogiacco also wrote a long reply. I'm going to read that now... Okay, read it. The internal reference of 1.6V is created using a bandgap reference; all the ATmega chips use a bandgap junction for reference voltages. And of course, adjusting your code for the actual reference voltage is always needed, since it is never exactly 1.600V.)

Thank you rlogiacco and Peter_n for your replies. I am starting to get a clearer picture. I have read the ATmega128RFA1 ADC section now, and it's starting to make sense.

So let me just type out what I was able to understand.

The ADC's input circuitry has a sample-and-hold circuit (a series RC network) with a switch. So the impedance the analog pin sees (i.e. the output impedance of the voltage source) has a direct impact on the charging of that sample-and-hold RC network. The smaller the impedance, the faster the capacitor charges. With a larger impedance, charging takes longer, and hence when we read the value it will not be stable (since it needs a longer settling time). Is this correct?

Now, if we have a larger impedance at the input and then read some other analog pin, might the value read out have some error because the capacitor has not discharged fast enough from the previous read? Can someone please confirm this?

So if I have a large impedance at the analog input, it should be OK if I have some time delay between readings? The big question then is: what is the optimum time I must wait before reading from any other pin to get an accurate enough result? Is there some formula or rule of thumb for this? (Assuming the system can't spend too much idle time on a delay of some arbitrary length.)

Can someone please take a look at this section?

Can you explain the input impedance limitation of around 3k for the ADC? Is that in the datasheet of the ATmega128RFA1, or in an application note?

It's mentioned on page 424 of the datasheet linked above.

A capacitor of 10nF (say 1nF...47nF) in parallel with R2 (from the ATmega analog input to GND) lowers the noise. It also keeps the value stable during the ADC conversion.

I still don't understand this section, though. Is it lowering the input noise? But the impedance is still high, right? So wouldn't the problem still persist?

ATmega328 reference for the same: the ATmega328 datasheet on the Microchip site, page 257, "Analog Input Circuitry".

So suppose my input impedance is around 100k (which cannot be reduced) and I want the best possible accuracy in a single read. Reading at 10-bit resolution, I am OK with about 8-bit accuracy; the last 2 LSBs can toggle, which corresponds to around a 12mV change if the reference voltage is around 3.3V (3.3/1023 ≈ 3mV per step). What's the best way to go about it: the capacitor method, waiting some time before reading, or something else?

Hi

I think he doesn't need a precise reading, nor will the voltage change rapidly. I would take a high-impedance voltage divider and put a small capacitor, 1µF or so, in parallel with R2. The capacitor will provide a low-impedance signal to the ADC. Then use an adjustable power supply, set the output to 3.5V, and note the ADC reading. That's the cutoff value.
Linearity and speed are of no importance, since only the one point is needed and the voltage changes very slowly.

Uli

UliH, I prefer 10nF over 1µF. When the ATmega chip is turned off, the charge of the capacitor might go into the analog pin and flow via the internal protection clamping diodes. I also prefer a ceramic capacitor, to reduce high-frequency components. But I agree, a value of 220pF to 10µF might be okay, so let's choose 10nF.

000, the capacitor in parallel with R2 acts as an RC filter for electrical noise; R1 || R2 is the 'R' value of the RC filter. The capacitor also lowers the impedance for high-frequency noise, which benefits the ADC conversion: the charge of the capacitor is used to charge the internal sample-and-hold capacitor, so the voltage at the analog pin stays the same.

Thanks for the reference. I read on page 424: “The ADC is optimized for analog signals having output impedance ZOUT of approximately 3 kΩ or less.”
When I read that section, it is about reading analog data at a certain clock speed. The Arduino functions slow things down, so it is not the same. You also are not sampling a high-frequency analog signal, just the very slow voltage of a battery! I think you can easily get 8 bits with 100k impedance + a capacitor. Add averaging in software to get about 9 bits, and Bob's your uncle.

Thanks again for the responses.

When I read that section, it is about reading analog data at a certain clock speed. The Arduino functions slow things down, so it is not the same. You also are not sampling a high-frequency analog signal, just the very slow voltage of a battery! I think you can easily get 8 bits with 100k impedance + a capacitor. Add averaging in software to get about 9 bits, and Bob's your uncle.

I am just trying to extrapolate this to other cases. Suppose I have a not-so-low-frequency signal (ignoring the battery case for now): does the same apply if I read two analog pins, each with around 100k impedance + a 10nF capacitor, one right after the other? Will there be a significant accuracy difference?
Or should I have a physical delay before calling it again? I am trying to put a number to the delay (if there is one).

And also, can we say for certain that only the 2 LSBs toggle with a high impedance of 100k? Is there some way to prove that from the datasheet?

No proof at all. I am just using my experience with the 10-bit ADC of Atmel chips. Once my signal was so noisy that I used a few thousand samples to average, and I still got a good result. When you want maximum accuracy, there is also temperature and Vcc influence. And a high-impedance circuit is not reliable when there is moisture. It is therefore hard to say how accurate it will be. You have to test it.

When you read two analog channels, the mux has to switch; that is something extra. Using a delay won't help (I think), but sometimes the first ADC value after changing the mux is less accurate. Then again, the Arduino functions slow things down a bit. The datasheet is written from an assembly point of view, but the Arduino libraries try to provide the same functions across a number of chips, so the Arduino functions make things a little slower.

Are you going to use the Arduino IDE to program the code? Or just C and C++?

@Peter_n

I am planning to use Arduino IDE for coding.

Hmmm. Do you know of any way we can test the theory out? (I want to iron it out once and for all.) Has any testing of this ever been documented? (Can any of the senior members remember something similar happening here in the forums?)

Here is a link which I found interesting on the topic.

No, a delay won't help. What will help is the suggestion to take a reading from a pin several times and discard the first few readings. The sample-and-hold isn't charging the internal capacitance if it isn't taking a reading.

What will help is the suggestion to take a reading from a pin several times and discard the first few readings.

How does this play out electrically? I mean, what really goes on when I read a pin several times? Because if I read it again and again, isn't the charging problem coming into play again?

The general idea is that it only samples for a short time, but it is now closer to the actual voltage. Do that a few more times, and it should be charged or discharged to the actual value.

Because it isn't discharging that bit of capacitance it uses to hold the value for conversion to digital.

The general idea is that it only samples for a short time, but it is now closer to the actual voltage. Do that a few more times, and it should be charged or discharged to the actual value.

Hmm. Kind of makes sense now. Thanks.