I have a voltage divider dropping a voltage range of 0-15v down to a level an ESP32 can read.
The actual voltage at 15v is about 2.63v
So, I sample the analogue voltage and do a conversion. I decided to add a calibration value (float) so that I could dial in the displayed voltage to match the actual voltage being read.
But... if I calibrate the voltage to be displayed at, say, 10v, then the value drifts when I move either side of that calibration point.
currentvoltage[channel] = analogRead(analoguePINS[channel]); // Sample the voltage
currentvoltage[channel] = currentvoltage[channel] * calibration; // Float voltage calibration value (Below 1 lowers the voltage, above 1 increases it)
adjvoltage[channel] = adjvoltage[channel] + mapfloat(currentvoltage[channel], 0, 4095, 0, 15); // Convert the value to 0-15v and accumulate it for averaging
So... clearly I am doing this all wrong!
Am I applying the calibration value at the wrong point?
I am probably doing it all incorrectly to be honest.
The adjvoltage value is actually accumulated over 250 samples of the above code and then averaged to give a steadier result.
We don't know which variables are 'int' or 'unsigned long' or 'float' or 'double'.
There is a website for that: https://www.snippets-r-us.com/
Can you show a small sketch that shows the problem?
The Arduino Uno with its 10-bit ADC is very good, and averaging/oversampling will increase the resolution (not the overall accuracy). The ESP8266 and ESP32 are not that accurate. You don't have to try that hard to improve the measured voltage, because the ADC is not very good.
Some ESP32 boards already have a potential divider on the PCB.
If you connect an external potential divider to the analogue input, then there is an interaction between the two potential dividers.
Have a look at the schematic of your ESP32 board to see whether that is the case.
(or post a link to it here.)
It is easier to just use a single resistor in series with the upper resistor of the board's potential divider than to do the mathematics required for the two potential dividers.
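For a rough feel of that interaction (all the resistor values below are made-up examples, not taken from any particular board's schematic), the board divider loads the external one like this:

// Illustration only - all resistor values are assumptions, not from a real schematic.
float R1  = 47000.0, R2  = 10000.0;   // external divider feeding the ADC pin
float Rb1 = 10000.0, Rb2 = 10000.0;   // potential divider already on the board
// The board divider (Rb1 + Rb2) appears in parallel with R2, so it drags the node voltage down:
float R2loaded = (R2 * (Rb1 + Rb2)) / (R2 + Rb1 + Rb2);
float Vnode = 15.0 * R2loaded / (R1 + R2loaded);   // voltage where the two dividers join
float Vadc  = Vnode * Rb2 / (Rb1 + Rb2);           // what the ESP32 ADC pin actually sees

With a single series resistor instead, there is only one divider (the series resistor plus the board's own pair) and the sums stay simple.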
Take your (average) raw reading at 15V, then use that to convert to voltage: Current Voltage = Current Raw Reading x (15 / Raw Reading at 15V)
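As a minimal sketch of that (rawAt15V is whatever averaged raw count you measure with 15v applied; averageRaw() is a hypothetical helper standing in for your 250-sample averaging):

const float rawAt15V = 3440.0;              // assumed averaged raw reading with 15v on the input
float rawNow = averageRaw(channel);         // hypothetical helper: your averaged analogRead result
float volts  = rawNow * (15.0 / rawAt15V);  // Current Voltage = Current Raw Reading x (15 / Raw Reading at 15V)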
...A standard straight-line calibration also includes an offset correction which is added or subtracted.
Generally the offset is measured at (or near) zero. Feed in 0V (i.e. ground the input) and then add or subtract (if necessary) to correct the actual reading to zero. Now re-measure at 15V (or at or near the maximum, or the "most expected" reading). Add or subtract the offset from the reading at 15V to get a "corrected raw reading" and re-calculate a new slope so it's still accurate at 15V (as well as at zero).
The offset is done first because it affects all readings equally. The slope is a multiplication factor, so it doesn't change the (already corrected) reading at zero.
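Put as a sketch (the two raw readings are placeholders for whatever you actually measure at 0v and at 15v):

const float rawAt0V  = 12.0;    // assumed averaged raw reading with the input grounded (the offset)
const float rawAt15V = 3440.0;  // assumed averaged raw reading with 15v applied
float rawNow = averageRaw(channel);             // hypothetical averaged raw reading, as before
float slope  = 15.0 / (rawAt15V - rawAt0V);     // slope recalculated from the offset-corrected span
float volts  = (rawNow - rawAt0V) * slope;      // offset first, then slope - correct at 0v and at 15v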
And it will continue to do so, even/especially as you get closer:
The ESP32 ADC is pretty non-linear. It's good enough for very rough approximations; not much more unless you're going to calibrate it throughout its range with something like a lookup table.
Try float temp_adjvoltage = ((3440.0/100.0) *(1500.0/3440.0));
Keep in mind that C++ will assume integers unless specified otherwise. If you want floating point math - specify otherwise!
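A quick illustration of what that means in practice:

int   a = 3440 / 100;          // integer division: 34 (the fraction is discarded before anything else happens)
float b = 3440 / 100;          // still integer division on the right-hand side, only then converted: 34.0
float c = 3440.0 / 100.0;      // floating-point division: 34.4
float d = (float)3440 / 100;   // casting one operand also forces floating-point: 34.4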
The 'float temp_adjvoltage = ((3440.0/100.0) * (1500.0/3440.0));' line fixed that issue.
Stupid of me to even remotely think that stating it was a float would mean it would treat it as a float
Anyway... after all this, it's no more accurate than the original way I was doing it.
Now it's accurate at 15v, and drifts off all over the place below that.
Yes, I could attempt the 'analogReadMilliVolts(uint8_t pin)' approach, but I am kinda over these ESP32s now.
$2 LED voltmeter off AliExpress it is
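For what it's worth, the analogReadMilliVolts() route is only a couple of lines. A minimal sketch, assuming the function is available in your ESP32 Arduino core, and with the divider ratio derived from the "about 2.63v at 15v" figure rather than measured:

const float dividerRatio = 15.0 / 2.63;                     // assumed from "the actual voltage at 15v is about 2.63v"
uint32_t mv = analogReadMilliVolts(analoguePINS[channel]);  // pin voltage in millivolts, using the chip's factory calibration
float volts = (mv / 1000.0) * dividerRatio;                 // scale back up to the 0-15v input range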
Adafruit have another that is four channels and not much different in price.
No. Ignorant, not stupid! All that maths stuff is just something you have to know; the left-hand side being a float does not alter the manner in which the right-hand side does its calculations.
You now know one way is to have a float literal like 42.0 in there; that gets seen and floating point is used.
There are similar issues around integers with respect to their size and whether they are signed or not.
It's a good idea to check the results of calculations, especially when you see they aren't producing the same results you get with your calculator or spreadsheet or by hand.
No. Using my bench multimeter and the voltmeter on my Bench PSU (they seem to show the same reading).
I think maybe the easiest thing is to go back to my original simple AnalogueRead routine and add a follow-on routine that adjusts the value via a set of rules to get it nearer the required result.
That could work; there's a pretty long linear region that you can exploit.
I'd recommend chopping off both the toe and the shoulder of the range so you only end up using the linear midsection.
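For example (the window limits and the voltages at its edges below are guesses, not measured values), something along these lines before doing the conversion:

int raw = analogRead(analoguePINS[channel]);
// Assumed linear window and the voltages measured at its edges - all placeholder numbers.
const int   rawLow   = 200,  rawHigh   = 3900;
const float voltsLow = 0.9,  voltsHigh = 14.2;
raw = constrain(raw, rawLow, rawHigh);                              // discard the non-linear toe and shoulder
float volts = mapfloat(raw, rawLow, rawHigh, voltsLow, voltsHigh);  // calibrate across the linear midsection only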
I have a calibration routine I can activate with a certain button press.
I am just trying a routine where the calibration steps up from 1-15v, one volt at a time as you press a button, and you press a store button to store that raw analogue reading to the Preferences.h storage facility on an ESP32.
So, set up the calibration routine to request you set up the input voltage in 1v steps.
Press the button, it stores the value.
It's pretty linear over 16 samples, 1-16v.
So, that makes me think I am just simply messing up the math.
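A minimal sketch of how those stored per-volt readings could then be used, assuming they end up in an array (rawAtVolt[0] for 1v through rawAtVolt[14] for 15v) after being read back from Preferences - the numbers below are placeholders, not real measurements:

float rawAtVolt[15] = { 215, 430, 645, 860, 1075, 1290, 1505, 1720,
                        1935, 2150, 2365, 2580, 2795, 3010, 3225 };   // placeholder raw readings at 1v..15v

float rawToVolts(float raw)
{
  if (raw <= rawAtVolt[0])  return raw / rawAtVolt[0];   // below 1v: assume a straight line through zero
  if (raw >= rawAtVolt[14]) return 15.0;                 // clamp at the top of the table
  for (int i = 1; i < 15; i++) {
    if (raw <= rawAtVolt[i]) {                           // interpolate between the two surrounding calibration points
      return mapfloat(raw, rawAtVolt[i - 1], rawAtVolt[i], i, i + 1);
    }
  }
  return 15.0;   // never reached
}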
Just for reference, this is a small tester to measure voltage on some ribbon cables. The actual voltages measured are usually 0-12v, but I have left some overhead (you can actually throw 20v down the ADC pin before tears).
That also eliminates the ADC anomalies at the top end of the range.
The SD card is for the speech (it can say the voltage, etc.).
This map works off the following routine to obtain the float values:
//--------------------- Voltage mapping --------------------------
float mapfloat(float x, float in_min, float in_max, float out_min, float out_max) // Conversion to get float values for the voltage
{
return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
At 15v input, I get 15.00v on the display.
Turn that voltage down, and it drifts out by nearly 1.8v at 7.5v.
I would expect a little drift, but that is excessive. I know it's not a linear curve, but I sampled all the voltages from 1v to 16v (in 1v steps) and the raw readings were nearly all around 215 counts apart, far more linear than I expected.