How to use a polynomial fit equation for compensating a value

I have a sensor that is very sensitive to changes in temperature. To remove this drift in my baseline measurement due to temperature changes, I have taken a bunch of readings and am able to generate a voltage x temperature plot. So, given each degree of temperature that I measure, I get a different baseline voltage. My goal is to be able to remove the offset the change in temperature causes.

I have fed this data into MATLAB and have come up with the following third-order polynomial equation that fits a curve nicely along my collected data points.

Linear model Poly3:
     f(x) = p1*x^3 + p2*x^2 + p3*x + p4
       where x is normalized by mean 1.717 and std 0.00172
Coefficients (with 95% confidence bounds):
       p1 =      -1.409  (-1.49, -1.328)
       p2 =        2.49  (2.311, 2.668)
       p3 =       37.14  (37, 37.28)
       p4 =        6115  (6115, 6115)

Using this for guidance, I have generated the following Arduino code to try to use this equation to compensate my baseline for temperature drift:

float p1 = -1.409;
float p2 =  2.49; 
float p3 =  37.14;
float p4 =  6115.0;
and 
polynomial = (p1*pow(sensorVolts,3)) + (p2*pow(sensorVolts,2)) + (p3*sensorVolts) + p4;

HOWEVER, while I may have arrived here after months and months of tinkering, learning, and experimenting (I am not an electrical engineer or mathematician), I feel I don't fully understand the proper way to solve something like this.

I have a sensor that is reading in a fixed light level. This sensor changes voltage (slightly) with changes in temperature. What do I do with the polynomial expression that maps voltage to this baseline that I recorded? I am thinking that there should be some way to determine how far off the current sensor reading is from the polynomial baseline (given the current temperature), right?

Is there a standard way of doing this? I have used massive lookup tables before, but was trying a more elegant approach. Apparently, I have worked myself into too elegant of a hole.

Thanks for your insight.

pow(sensorVolts,3) is a pretty intense way to do sensorVolts * sensorVolts * sensorVolts.

I am thinking that there should be some way to determine how far off the current sensor reading is from the polynomial baseline (given the current temperature), right?

The value you labeled polynomial is the value that the sensor would produce under a specific temperature, isn’t it?

I'm having trouble understanding your explanation of the effect. Here is what I think I've got so far.

  1. You are measuring light levels.
  2. The sensor is also temperature sensitive.

Now, what do you mean by "baseline"? Are you trying to fit values that the sensor gives in the dark, or values that the sensor gives at some particular light intensity, as a function of temperature?

Will you be comparing the "baseline" to some other light levels, at some different temperatures?

Thank you both for replying so quickly.

I have a light sensor which has a light-to-voltage sensor paired with an LED. When I supply the LED with a constant current, the voltage changes with temperature (slightly, but significantly for my device). So, basically the sensor is reporting this as a change in irradiance... which it is. What I want to do is map each voltage value to its corresponding brightness value. I was thinking that I could determine some "offset" value to subtract from the overall system reading (when hooked up to the rest of the machine, where the brightness is affected by material flowing through the sensor array). I think it would be the equivalent of compensating for "darkness" based on temperature.

The sensor isn't affected by temperature given my operating ranges, only the LED is.

I was thinking that the polynomial expression might be an elegant way to compensate for the decrease in brightness of the LED as the temperature drops. Make sense? Apologies, this is why I am asking. I have a lot of observational knowledge that I am learning how to actually apply to analog signal design. I love it, but this is new territory for me.

Thanks guys!

If I understand your second post correctly, what you are really interested in is measuring the amount of light passing through some material. The problem is that the intensity of the light source (an LED) varies with temperature and you want to correct for those source intensity changes. Is that correct?

If so, your approach sounds difficult to implement reliably and I can think of a couple of alternatives.

One is to use a light source whose intensity does not change with temperature -- a fiber optic cable from a remote lamp housing might work.

Another is to compensate the LED current. You can have a separate LDR, photodiode or phototransistor monitor the LED output (from the side) and adjust the LED current to maintain constant intensity. That method is often used to maintain the output of high power laser diodes to safe levels.
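
In software terms, that feedback idea could be sketched as a simple proportional loop: read the monitor photodiode, compare against a setpoint, and nudge the LED drive. All names, gains, and the 8-bit PWM assumption here are illustrative, not from any specific circuit:

```cpp
// Hypothetical software version of the closed-loop idea: compare a monitor
// photodiode voltage to a fixed setpoint and adjust a PWM duty cycle.
int adjustDrive(int currentDuty, float monitorVolts, float setpointVolts) {
  const float kP = 40.0f;                               // proportional gain (tuning guess)
  int duty = currentDuty + (int)(kP * (setpointVolts - monitorVolts));
  if (duty < 0) duty = 0;                               // clamp to 8-bit PWM range
  if (duty > 255) duty = 255;
  return duty;                                          // caller would analogWrite() this
}
```

A real implementation would also need filtering and a sensible update rate, but the hardware-only op-amp version described above avoids all of that.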

First, polynomial = (p1*pow(sensorVolts,3)) + (p2*pow(sensorVolts,2)) + (p3*sensorVolts) + p4; can be rewritten as

float polynomial = ((p1 * sensorVolts + p2) * sensorVolts + p3) * sensorVolts + p4;

which is much faster (Horner's method: three multiplications and no pow() calls).

A way to handle temperature dependency is to have an array with 4 values for every temperature:

float p[10][4] = { ....};
int t = temperature/5; // map temperature onto an array index (steps of 5 degrees)
float polynomial = ((p[t][0] * sensorVolts + p[t][1]) * sensorVolts + p[t][2]) * sensorVolts + p[t][3];
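
Fleshing that idea out a little, with bounds clamping added so an out-of-range temperature can never index off the end of the table (the bin width, table size, and names are arbitrary choices for illustration; the table rows would come from per-temperature fits):

```cpp
// Coefficient-table sketch: one {p1, p2, p3, p4} row per 5-degree bin.
// The table below is left zero-filled as a placeholder.
const int NUM_BINS = 10;
float p[NUM_BINS][4] = {};

float compensate(float sensorVolts, float temperature) {
  int t = (int)(temperature / 5);        // map temperature onto a bin index
  if (t < 0) t = 0;                      // clamp: never index off the table
  if (t >= NUM_BINS) t = NUM_BINS - 1;
  return ((p[t][0] * sensorVolts + p[t][1]) * sensorVolts + p[t][2]) * sensorVolts + p[t][3];
}
```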

jremington: Another is to compensate the LED current. You can have a separate LDR, photodiode or phototransistor monitor the LED output (from the side) and adjust the LED current to maintain constant intensity.

I was under the impression that controlling the current was easier than controlling the voltage. Hmm.

My setup includes a TSL262R-LF IR light-to-voltage sensor that watches a clear tube which has fluid passing through it. On the other side of the tube is an opening to a 50/50 beamsplitter. One side of the beamsplitter has an IR LED. A second TSL262R is placed directly opposite the LED and the exit of the beamsplitter, giving me the ability to monitor the LED brightness without the fluid channel.

My system is working pretty well, so I would prefer not to completely start over. In fact, I am really interested in learning how to control the setup as-is because I have spent so much time working with so few components. I am monitoring the voltage of the LED using a 2VDC precision voltage reference (measured at 2.00039 on my DMM) and an ADS1115 16-bit ADC. I am getting great resolution over the range of the types of fluctuations in the fluid channel, so that is working (about 62150 counts).

What I am trying to control is the slight drift I notice as the system "warms up" ... not sure how else to describe it. I have done a fair bit of mapping of how temperature affects the Arduino itself and am facing the same problem. What's the best way to compensate for something repeatable... well, measurable. This system will be used by people other than myself, so imagine it is in someone's trunk in the winter and they bring it into their office for a test. That's what started this whole thing...

This thing is working fairly well; the voltages I am looking at drift from 1.6562 VDC to 1.6510 VDC over the course of 92k samples. BUT, I can directly link this to temperature. So, now my obsession has become trying to understand how to compensate for this. I have spent so much time looking at getting the system to work this well, that I am just honestly interested in trying to make this part as perfect as possible. It helps me learn.

While I can't have a fiber optic cable and remote light source, I can certainly improve my power supply, or at least build a more controllable one. What would be the most controllable or precise means of controlling the current I am supplying to my LED? Is there a way to do this programmatically? I have tried most every CC and CV power source I can get my hands on, and I feel as if their version of "constant" is within a spec that isn't as good as what I need. Is this a correct assumption?

So I think I have two challenges:

1) Identify the most stable (and when not stable, controllable) means of powering an IR LED.
2) When even this drifts, compensate with math in an elegant way as we monitor the drift due to temperature.

I would try to avoid doing this complex stuff on the Arduino. If it sends the data to a PC (perhaps both light level and temperature) then do the hard stuff there where there are no memory limitations, where floating point maths is almost invisible and where experimental programming is much much easier.

...R

While that is certainly a solution, it doesn't really work for this application. I am trying to build a low-cost monitoring device. As I am not an electrical engineer, the Arduino is actually a great device to try to accomplish this task with. This device is going to be small (handheld), with an embedded Arduino inside of it, so I can't really tether it to a workstation. It is meant to be a standalone device that can be hooked up to a fluid reservoir and display the sensed value in a meaningful way.

I have access to a NI CompactRIO, but that isn't going to work either.

I am actually interested in learning how to solve this complex problem. It's really helping me learn about the subtleties of analog circuit design and signal processing. I mean, I only have 4 or 5 components on there. It's close. I just am interested in learning how to get it close to perfect (given that it is based on an Arduino to start.)

I think that one of the problems of this circuit is the precision you need in order to make it work properly. I mean, the Arduino has "limited" precision when reading analog inputs, so you may get a problem here.

Another problem is the fact that the IR LED changes its behaviour with temperature changes.

If I've understood your post, you have a sensor that measures the light that passes through something. I will enumerate what I've been thinking while reading your post:

- Why not use a temperature sensor to know the temperature of the IR LED and try to compensate for it in "software"? (E.g.: at 25ºC, +0V; at 30ºC, +1V. Only example numbers, not your case.)
- Another problem: precision. How about using an op-amp to amplify that small variance in voltage and cancel it somehow?
- A "cooling" system to maintain it always at the same temperature?

Maybe you have explained it, but since I'm not a native English speaker, I may not have exactly understood what you need to do yet. I mean: when the light that passes through that "something" changes, the voltage changes, right? And the problem is that when this circuit has been working for a while, it heats up and the measurements change. Is that exactly what is happening?

Sorry, but I want to help if I can :S

If you google “laser diode feedback control circuit”, you will see lots of examples of how to keep the light intensity output constant. It is a big problem with laser diodes because as they heat up they can undergo “thermal runaway” and be destroyed. The circuits will work for ordinary LEDs, too. Here is an example, picked at random. It could be wired up on a breadboard using just about any op amp.

[Attached image: 1769Fig01.gif]

Your intensity value is something like ( f(x) - terror(t) ), where you want to find x, or maybe just f(x), but would like to figure out how to generate your "error function" terror(t) in a way that doesn't involve measuring a bunch of datapoints, moving the data to a PC and running matlab on the results to come up with a polynomial that you then add to your program. You're already measuring temperature and using your polynomial, you just want a better way to do things in the future. Despite people's suggestions, you're not currently complaining about the performance of your polynomial-based code.

Is that about right?

There are a couple of common ways to approach this.

1) Heat your LED to a constant temperature and don't worry about compensating the readings.
2) Measure the LED output with and without your sample in the way, so that the errors are removable without deriving their formula. This could involve using two LEDs and two sensors.
3) Do a "temperature calibration" run where your other variables remain constant. Save a table of the changes in value (perhaps in EEPROM) every 0.1C or so, and do simple lookups or piecewise linear interpolations from the table, without ever trying to figure out an equation that matches that data.
4) There ARE algorithms that will come up with the coefficients of a polynomial of degree N given at least N+1 datapoints (IIRC). This is what MATLAB is doing on your PC, and there's probably no reason that the Arduino can't do it internally. You still need a calibration run like in (3), but you wouldn't need as many points. You'd have to check out a book on "numeric algorithms" for curve fitting using polynomial interpolation. http://oreilly.com/catalog/masteralgoc/chapter/ch13.html

Polynomial interpolation: a method of approximating values of a function for which values are known at only a few points. Fundamental to this method is the construction of an interpolating polynomial pn(z) of degree ≤ n, where n + 1 is the number of points for which values are known.

Here is my current calibration process:

1) I place my Arduino and protoshield with LED + fluid channel + light-to-voltage (LTV) sensor into a freezer.
2) When the assembly cools to 50F, I remove it and connect it to my laptop.
3) I then place it in an oven at 100F.
4) I record the voltage supplied to the LED and the voltage coming from my LTV sensor using the 16-bit ADC in a CSV format.
5) I process the CSV data using a program called Wizard to sort the light readings according to LED voltage.
6) I then have a histogram that allows me to see the total number of samples collected for each LED voltage reading.
7) Next, I load the CSV data into MATLAB and fit a polynomial curve to the same irradiance x voltage plot.
8) I take the resulting equation from MATLAB and use these values to create my temperature compensation error function on the Arduino. My current way of doing this is to take the lowest irradiance reading and subtract the difference between that baseline and each reading across the LED voltage range. This gives me a temperature offset, of sorts.
9) I then subtract this offset from each actual reading from the LTV sensor given its LED supply voltage.
10) The result is supposed to be a temperature-compensated irradiance reading driven by the corresponding LED voltage.

I think I am also running into some problems with the actual limits of my number types. I don't really care about speed, more about accuracy, so I have been using floats. However, it says that they only have 6-7 digits of precision total. Any idea which it is, 6 or 7? What happens when a number is longer? My polynomial intercept returns a number like 6179.9106, so I am guessing this is a problem.

OVERALL, I would like to be able to deploy a sealed, calibrated device that someone can use regardless of operating temperature. If this requires a lengthy calibration process, that's fine.

I am looking into the laser diode circuits and will see if I can find a self-adjusting one. I don't want to heat the LED or try to cool it. Other than an RTD setup, the temp sensors I can find are only accurate to 0.1°C, and that doesn't seem as accurate as trying to measure the voltage or current going to the LED itself. No? I have the data in a format I could load into EEPROM, but shifted over to the polynomial solution because it seemed more straightforward.

Also, the suggestion about using an op-amp to measure the voltage going to the LED and offset the difference somehow is exactly what I want. I want the magic self-balancing robot version of this thing. I would love to be able to add or subtract a tiny amount of something going to the LED to keep it constant without having to do much of anything. I was trying to figure out if there was a way to use a 2nd LED in the circuit that would respond to temperature changes in the same way, but have the opposite effect on the main source LED to be able to offset its drop or rise. Again, not an EE... so, here I am.

Just to make life more complicated ...

It seems from what is being said that the problem is to maintain a constant light intensity from the LED.

I use LED lights in my boat and their output seriously degrades as they age. I don't know if this is particular to the high power LEDs used for lighting or if it is true of every LED. If it is generally true then calibration can't be based on current and must be based on measured light output.

I also get the impression that the purpose of the project is to measure the opacity (or clarity) of a fluid. If you can pass the same light source through a "standard" medium with constant opacity then perhaps by comparing the readings of the light passing through the sample and the light passing through the standard you can determine the opacity of the sample without any regard for the actual intensity of the light?

...R

I found your discussion when looking for information on polynomial curve fitting for a project I am preparing to work on myself. My Arduino hardware is still on order, so I can't comment on that aspect yet.

However, I think the last post is on the right path: forget trying to compensate for temperature. That is incidental. You want to compensate for changes in LED light output. If you had two identical light paths, both using the same LED source (one with your test fluid in the path and one with an identical fluid tube, but empty), and both using the same exact detection circuit and component selection to measure light intensity, temperature would act equally on both paths. Then you can use the value of the calibration path to compensate for component aging, thermal drift, and variations in LED voltage and current.

Also, I would question the accuracy of your thermal measurement technique using the refrigerator. The temperature of the LED's silicon will heat much faster than the circuit board, for example, due to self-heating. The LED has very little thermal mass, and since it generates heat, the only way to begin to measure its temperature would be to attach your thermal sensor directly to the LED and bond them together with thermal grease. Even then I would not trust the results.

You really need a temperature chamber so you can set the temperature and allow the circuit to reach equilibrium. I made one using a discarded microwave oven. Gut the unit, install a quartz bathroom heater element where the magnetron was for heat, and use liquid CO2 for cooling via a cryogenic solenoid and small nozzle. Control both using an industrial temperature controller (Google Omega temperature controllers). You can build the whole thing for around $300 and it will hold temp within a few degrees C over a wide range. WARNING: it will displace oxygen when cooling, so it cannot be used in a confined space, as it could kill you!

To do a temperature run, go in 5 or 10 degree steps and let the circuit soak for at least five minutes at each step once the chamber is stable at each new temperature.

81Pantah: I have been using floats. However, it says that they only have 6-7 digits of precision total. Any idea which it is, 6 or 7? What happens when a number is longer? My polynomial intercept returns a number like 6179.9106 so I am guessing this is a problem.

Floats use 24 bits for the mantissa (23 stored plus an implied leading bit), so your precision is about 1 part in 2^24, or roughly 7 significant decimal digits; digits beyond that are silently rounded to the nearest representable value. The full story on the IEEE 754 single-precision format (which avr-gcc uses for both float and double) is here: https://en.wikipedia.org/wiki/Single-precision_floating-point_format