Hello.
I'm having some difficulty with an algorithm intended to control the analog control circuit of a lab power supply. I'm fairly new to programming and have no experience with algorithms or control loops, so I am improvising; please tell me if I'm on the right track.
I have read about PID controllers and have a similar scheme in mind. Here is a description of the circuit:
See attached image.
A keypad is used to enter a desired output voltage (later also a maximum current, but that is another matter).
The desired voltage value is written to a 16-bit voltage-output DAC whose output forces the voltage regulator's output to change. There will also be an encoder to adjust the output, but to begin with there is just the keypad.
However, the DAC's action is inverted: when the DAC outputs its maximum voltage, the regulator outputs its lowest voltage. The regulator output ranges from 0 V (actually about 32 mV) to +32 V.
A 24-bit ADC (LTC2440 or LTC2400) then reads the output, and the value returned from the ADC is combined with the original user input to calculate an error term, which is folded into the code sent to the DAC.
One problem I'm having is figuring out how to take the inverted DAC behaviour into account. I'm not thinking of actual code yet, but rather the overall structure of the algorithm; things like offset, gain and temperature calibration come later.
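The inversion can be confined to one place: a small mapping function that scales millivolts to a DAC code and then subtracts that code from full scale. A minimal sketch in C, assuming a 0-32000 mV range and a 16-bit DAC (the function name is mine, not from any real driver):

```c
#include <stdint.h>

/* Map a millivolt setpoint (0..32000 mV) to an inverted 16-bit DAC code:
 * 0 mV -> 65535 (DAC full scale), 32000 mV -> 0.
 * 32-bit intermediate math avoids overflow; +16000 rounds the division. */
static uint16_t mv_to_dac_code(int32_t mv)
{
    if (mv < 0)     mv = 0;       /* clamp to the valid range */
    if (mv > 32000) mv = 32000;
    uint32_t code = ((uint32_t)mv * 65535UL + 16000UL) / 32000UL;
    return (uint16_t)(65535UL - code);   /* invert: higher mV -> lower code */
}
```

The rest of the loop can then work entirely in millivolts, and the inversion never leaks into the control math.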
This is my rough first version of the algorithm structure:
32bit signed int variables:
Set; //Value from keypad
FB; //Value from ADC
Error; //Error term
TempDAC; //Temporary value
InvDAC; //Value written to DAC output register
1. Read a value from the keypad and store it as a millivolt value in Set, 0-32000.
2. Copy Set into TempDAC.
3. Copy TempDAC into InvDAC.
4. Rescale and invert the value of InvDAC: map(InvDAC, 0, 32000, 65535, 0) (65535, not 65536, is the maximum 16-bit code).
5. Write InvDAC to the DAC output register.
6. Read the ADC into FB.
7. Calculate Error = Set - FB.
8. Adjust TempDAC: TempDAC += Error.
9. Copy TempDAC into InvDAC, and it all goes round and round from step 4...
Some values will have to be calibrated. I can't remember the name of the technique, but if I were to use a dual-channel ADC I could read a precision reference with one channel in order to relate the second channel's digital value to a real voltage value.
Or maybe it is done in some other way, I don't know.
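That idea (I believe it is usually called a ratiometric measurement) can be sketched as a single scaling function: the raw count from the reference channel calibrates the counts-per-millivolt of the measurement channel. The 2500 mV reference value and the function name are illustrative assumptions:

```c
#include <stdint.h>

/* Convert a raw ADC count to millivolts, scaled by the raw count that a
 * known precision reference produced on the other channel.
 * Hypothetical numbers: a 2.500 V (2500 mV) reference.
 * 64-bit intermediate math avoids overflow with 24-bit counts. */
static int32_t counts_to_mv(int32_t ch_counts, int32_t ref_counts)
{
    const int64_t ref_mv = 2500;   /* assumed reference value in mV */
    return (int32_t)(((int64_t)ch_counts * ref_mv) / ref_counts);
}
```

Because both channels share the same ADC reference and gain, their drift largely cancels out of the ratio, which is the attraction of the method.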
What do you think?
I am in the process of finishing the PCB layout for a first prototype, as well as writing an example in actual code, but I find it difficult when I don't have a clear picture of how it should be done.
Regards