Hi:
I've been reading Brett Beauregard's blog about PID, and I don't completely understand how the sampling time in the PID compute function works. Here's the code:
// Setpoint, Angle, Output, Kp, Ki, errSum and lastTime are globals declared elsewhere in my sketch
void compute(){
  int SampleTime = 1000;                      // desired sample time in milliseconds
  unsigned long now = millis();
  double dT = (double)(now - lastTime);       // time elapsed since the last update
  if(dT >= SampleTime){
    double Error = Setpoint - Angle;
    errSum += (Error * dT);                   // accumulate the integral term
    Output = Kp * Error + Ki * errSum;        // PI output (no derivative term here)
    if(Output > 200){ Output = 200; }         // clamp the output to +/-200
    else if(Output < -200){ Output = -200; }
    lastTime = now;                           // remember when this update happened
  }
}
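For context, here is roughly how I call compute() from loop() (simplified; in my real sketch Angle is read from the sensor and Output goes to the motor driver):

void loop(){
  // Angle is updated from my sensor here (left out for brevity)
  compute();    // called on every pass; it only updates Output when enough time has passed
  // Output is then sent to the motor driver (left out for brevity)
}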
I want to understand how this sets up the sampling time. From my perspective, the calculation is only made if dT is greater than or equal to one second, but how can that be possible? As far as I can see, dT will never be greater than 1000 ms! Yet I know it is working (I can see it in the behavior of my motor).
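To see what dT actually does, I was planning to run just the timing part on its own, something like this (a stripped-down test with no PID, just to watch the elapsed time):

unsigned long lastTime = 0;
int SampleTime = 1000;

void setup(){
  Serial.begin(9600);
}

void loop(){
  unsigned long now = millis();
  double dT = (double)(now - lastTime);
  if(dT >= SampleTime){
    Serial.println(dT);   // print the elapsed time whenever the check passes
    lastTime = now;
  }
}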
Can someone please explain?