- if the application is not 'always on', or runs for less than about 50 days (millis() rolls over after roughly 49.7 days), this should not be a problem
- or define a confMills(endTime,range) function that makes the comparison considering the rollover when millis() is in the 'upper range' and endTime is in the 'lower range', and it will not crash (we survived the millennium bug, after all ;-)
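For the record, a minimal sketch of what such a confMills() could look like; the name and arguments come from the post above, the body is one guess at the intent. Notably, it boils down to the same unsigned subtraction recommended in the next posts.

// Hypothetical body for the confMills() idea above (name and arguments
// from the post, implementation is an assumption). Returns true for
// 'range' ms once endTime has been reached, even across a rollover:
// when endTime has wrapped into the 'lower range' while millis() is
// still in the 'upper range', the unsigned subtraction still sorts
// out before/after correctly.
bool confMills(unsigned long endTime, unsigned long range) {
  return (millis() - endTime) < range;
}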
Or just use the normal subtraction method and don't worry about it. There is no reason to get creative with millis() and upper/lower ranges. There is already a tried and true method for doing what you said, but in a safe way.
ok, but the creative method will stay perfectly synchronized to the 'start'
That does not have to cause a rollover problem.
One way
if (now - timer > interval) {
  // do the timed thing, then
  timer = now;
}
or the other
if (now - timer > interval) {
  // do the timed thing, then
  timer += interval;
}
There are circumstances where each is appropriate; in many cases it will make no difference.
a7
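For concreteness, either snippet drops into a complete sketch like this (a minimal sketch of my own; the LED toggle just stands in for "the timed thing"):

const unsigned long interval = 1000;  // ms between firings
unsigned long timer;

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
  timer = millis();
}

void loop() {
  unsigned long now = millis();
  if (now - timer > interval) {
    // do the timed thing:
    digitalWrite(LED_BUILTIN, !digitalRead(LED_BUILTIN));
    // One way: restart the interval from when we noticed it expire;
    // any lateness in noticing shifts every later firing.
    timer = now;
    // The other (use instead of the line above): keep the original phase,
    // so one late pass through loop() does not shift the schedule.
    // timer += interval;
  }
}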
why do you think the standard method lacks the ability to synchronize 'from the start'? It's all just U32 arithmetic, after all.
So will the other method. And it does not require re-writing anything if the project becomes a long-term thing.
It is super easy to get right from the get-go.
Don't record a time you expect to stop and check if you are past that. Record the time you started and test for how long it has been.
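A minimal sketch of that contrast (the names and busy-waits are mine, just for illustration):

const unsigned long interval = 200;  // ms

// Fragile: precompute a stop time and compare against it. If millis() is
// near the top of its range, endTime wraps to a small value and the wait
// ends immediately instead of after 'interval'.
void waitFragile() {
  unsigned long endTime = millis() + interval;
  while (millis() < endTime) { }
}

// Robust: record the start and test the elapsed time. The unsigned
// subtraction millis() - startTime is correct even across a rollover.
void waitRobust() {
  unsigned long startTime = millis();
  while (millis() - startTime < interval) { }
}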
@alto777’s second method (the timer += interval version) will stay perfectly synchronized to the start.
good, my apologies
Both the methods I posted stay synchronized if… and here's where it's clear I can no longer learn anything, because it's a little more complicated than
"if you don't overrun the interval with code that takes too long"
There's a subtle thing that finds no purchase in the rocky soil which is my brain.
One method (timer = now) simply gets back on track; the other (timer += interval) will fire off "make up" executions of the timed activity.
If it was a machine handing out dollar bills once a second, you'd want any glitch to be rectified by a flurry of dollar bills spitting out at top speed.
In another case, all is forgiven. You don't take all the pills you missed, just the next one on schedule.
I'll try to find the thread where some heavies laid it out, where I promised to internalize the knowledge and which I here freely admit to not having done.
a7
I remember that thread -- it had a second layer of catching up, which advances the timestamp past the missed intervals without firing a burst of make-up executions:
if (now - timer > interval) {
  // do the timed thing, then
  while (now - timer > interval) {  // catch up
    timer += interval;
  }
}
But if one is concerned about accuracy with any interval over 200 ms, the 0.5% ceramic clock oscillator in an Uno might become an issue: 0.5% of 200 ms is already 1 ms, the full resolution of millis(), and over a 200-second run that tolerance allows up to a second of drift.
I wanted to satisfy my own curiosity, not to disagree with any other posts intentionally, or nitpick.
I decided to run 3 tests, one after the other, to test 1000 executions of the inner 200ms loop, using 3 different delay methods:
[Edited to add this was running on an Uno R3 clone]
- Using the original code from post #1
- Modified original code to do Told += 200 rather than Told = Tnew
- Using delay(200)
The results were:
Doing loopOne() - original code
took 200000 ms
Doing loopTwo() - add 200ms to Told
took 200000 ms
Doing loopThree() - using delay(200)
took 200278 ms
Results one and two are satisfyingly spot on. Result three is off by an insignificant 278 ms, since delay(200) does not account for the time the rest of the loop takes.
This is the code I used:
unsigned long Told = 0;
unsigned long Tnew = 0;
byte byter, bit;             // DL not sure where these were originally defined
unsigned long tStart, tStop; // DL added for overall timing

void setup() {
  Serial.begin(115200);
  //pinMode(anapin, INPUT);
  pinMode(8, INPUT);
  Serial.println("Doing loopOne() - original code");
  loopOne();
  Serial.println(" took " + String(tStop - tStart) + " ms");
  Serial.println("Doing loopTwo() - add 200ms to Told");
  loopTwo();
  Serial.println(" took " + String(tStop - tStart) + " ms");
  Serial.println("Doing loopThree() - using delay(200)");
  loopThree();
  Serial.println(" took " + String(tStop - tStart) + " ms");
}

void loop() {
}

// Loop one is like the original code in post #1
void loopOne() {
  tStart = millis();
  // Repeat 125 times to get a longer overall time
  for (int x = 0; x < 125; x++) {
    byter = 0;
    for (int i = 0; i < 8; i++) {
      do {
        Tnew = millis();
      } while ((Tnew - Told) < 200); // Wait one length
      Told = Tnew;
      bit = digitalRead(8);
      // Store bit in Byte (kept as in post #1; note pow() is floating
      // point, so (bit << i) would be the exact way to do this)
      byter = byter + bit * pow(2, i);
    }
  }
  tStop = millis();
}

// Loop two adds 200ms to Told rather than setting it to Tnew
void loopTwo() {
  tStart = millis();
  // Repeat 125 times to get a longer overall time
  for (int x = 0; x < 125; x++) {
    byter = 0;
    for (int i = 0; i < 8; i++) {
      do {
        Tnew = millis();
      } while ((Tnew - Told) < 200); // Wait one length
      Told += 200;
      bit = digitalRead(8);
      byter = byter + bit * pow(2, i); // Store bit in Byte
    }
  }
  tStop = millis();
}

// Loop three uses delay(200)
void loopThree() {
  tStart = millis();
  // Repeat 125 times to get a longer overall time
  for (int x = 0; x < 125; x++) {
    byter = 0;
    for (int i = 0; i < 8; i++) {
      delay(200);
      bit = digitalRead(8);
      byter = byter + bit * pow(2, i); // Store bit in Byte
    }
  }
  tStop = millis();
}
Since nothing is happening in the while loop, I don't understand what you think the difference might be.
You know, you can test and puzzle over these things, or you can look at the source code and see what will happen. This is not some stochastic mystery. It is quite predictable.
Also note that your first two tests have a measurement bias. If you want to talk about how accurate a clock is, you can't measure it against itself. If millis() is off, then it's off when you come back to get your final number. That measurement will always, by definition, be exact, because you are checking the same clock you used to measure the time.
Is there not a slight delay between the line where millis() is read and the line where it is stored in Told?
It will not be anything close to a millisecond; at 16 MHz, a handful of instructions take well under a microsecond. So on this clock, no, there will be no time difference there. Maybe with a faster clock.
Agreed. I haven't looked at the asm level to see how many instructions it takes, but I assumed it would be < 1 µs, so even doing the test 1000 times didn't show any difference. I didn't expect it to be different. Doing the 3 tests takes 10 minutes; I'm not inclined to try for 100 minutes to see if test 2 takes 1 ms longer overall. An off-by-1 could just be caused by how far the internal counter had got towards its 1 ms tick. As I said, it was just to satisfy my own curiosity. I assumed results 1 and 2 would be (almost) the same and that test 3 would take a little longer. I've confirmed that what happens is what I expected to happen.
Just think about it. The error isn't going to add up. It is a matter of using a clock with 1 ms resolution.
Or to put it another way: when the millis counter says 2000, then checking the millis counter will read 2000. That does not tell you anything about the accuracy of that clock. All it tells you is that when the clock says a certain time, checking the clock gets that same time.
I don't see how that is useful in any context.
If my watch runs slow and I wait five minutes by my watch, then my watch will say it has been five minutes even though it has actually been six. That does not tell me anything. You would have to check it against a different clock that you accept as accurate.
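For instance, one could let an independent timepiece judge millis(). A rough sketch, assuming a DS3231 RTC module (accurate to about ±2 ppm, far better than the Uno's resonator) and Adafruit's RTClib; the hour-long run is an arbitrary choice of mine:

#include <Wire.h>
#include <RTClib.h>  // Adafruit RTClib, DS3231 wired to the I2C pins

RTC_DS3231 rtc;

void setup() {
  Serial.begin(115200);
  if (!rtc.begin()) {
    Serial.println("No RTC found");
    while (true) { }
  }
  uint32_t rtcStart = rtc.now().unixtime();
  unsigned long msStart = millis();
  // Wait until millis() claims an hour has passed...
  while (millis() - msStart < 3600000UL) { }
  // ...then ask the clock we trust how long it actually was.
  uint32_t rtcElapsed = rtc.now().unixtime() - rtcStart;
  Serial.print("millis() said 3600 s; the RTC says ");
  Serial.print(rtcElapsed);
  Serial.println(" s");
}

void loop() {
}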
Even with a faster clock, the difference will be less than a millisecond.
In the few cases where millis() steps by +2 and Tnew - Told steps from 199 to 201, the Told = Tnew scheme would make the process lag a step versus millis() % 200. (On a 16 MHz AVR the underlying Timer0 tick is really 1.024 ms, so millis() mostly advances by 1 but occasionally jumps by 2 to stay correct on average.)
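A quick host-side simulation (plain C++, no hardware; the "one +2 step every 42 ticks" model is my rough approximation of the AVR behavior) shows the cumulative effect: the Told += 200 scheme stays phase-locked to multiples of 200, while the Told = Tnew scheme slips one count at each +2 crossing and fires slightly fewer times overall.

#include <cstdio>

// Simulate a millis()-style counter that occasionally steps by 2,
// and compare the two Told-update schemes against it.
int main() {
  unsigned long t = 0, toldReset = 0, toldAdd = 0;
  int firesReset = 0, firesAdd = 0;
  for (long step = 0; step < 1000000; ++step) {
    t += (step % 42 == 41) ? 2 : 1;  // the occasional +2 step
    if (t - toldReset >= 200) { toldReset = t; ++firesReset; }  // Told = Tnew
    if (t - toldAdd >= 200) { toldAdd += 200; ++firesAdd; }     // Told += 200
  }
  // The reset scheme's period becomes 201 whenever the crossing lands on a
  // +2 step, so its fire count falls behind the phase-locked += scheme.
  printf("counter: %lu  Told=Tnew fires: %d  Told+=200 fires: %d\n",
         t, firesReset, firesAdd);
  return 0;
}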