I have a Giga R1 connected to the cloud and I wanted automatic daylight saving adjustment, so I incorporated the Timezone library. I had to fiddle with it for quite some time to get it right, but it seems to work fine. However, today was the second time in 3 weeks that the RTC has somehow gone back 2 hours for no reason I can understand. It started up and set the time correctly, and without restarting itself (I keep track of any crash or reboot) it went back 2 hours. Those 2 hours correspond with my timezone and DST: +1 hour for each.
How is this possible? After the set_time() command is used, how can it go back?
This is the code I'm using to set the time:
void updateTime() {
  ntpClient.update();
  if (ntpClient.update()) {
    const unsigned long epoch = ntpClient.getEpochTime();
    time_t localTime = myTZ.toLocal(epoch, &tcr);
    set_time(localTime);
    Serial.println("RTC is set");
    setRTC = true;
  }
}
I'm using the <mbed_mktime.h>, <Timezone.h> and <NTPClient.h> libraries for the Giga, with these rules:
TimeChangeRule mySTD = {"EST", Last, Sun, Nov, 2, 60}; // Standard time = UTC +1 hour
TimeChangeRule myDST = {"EDT", Last, Sun, Mar, 2, 120}; // Daylight saving time = UTC +2 hours
Timezone myTZ(myDST, mySTD);
TimeChangeRule *tcr;
Yeah, I just read that thread. My Giga is powered with external power and there is also a brand-new external battery connected (the voltage is good). So it seems unlikely to be the cause, as far as I can see.
That thread has the clock resetting to midnight, while in this case it is just the time zone being "lost", so that the minutes are right but the hour is wrong? In any case, EST usually means Eastern Standard Time in the US, while UTC+1 is east of London, and daylight saving at that offset means somewhere in Europe, not Africa. So Central European Time? CET and CEST (Summer Time), which ends on the last Sunday in October?
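If so, the rules, adapted from the Timezone library's Central European example to your variable names, would look something like this (note October, not November, for the change back):

TimeChangeRule myDST = {"CEST", Last, Sun, Mar, 2, 120};  // Central European Summer Time = UTC + 2
TimeChangeRule mySTD = {"CET",  Last, Sun, Oct, 3, 60};   // Central European Time = UTC + 1
Timezone myTZ(myDST, mySTD);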
Do you actually call update twice? It may not be an issue, but once should be better?
How do you notice the time is wrong? Does the timezone change when setting the time, or just randomly while "nothing else" is happening? You could print the timezone and offset, tcr->abbrev and tcr->offset, to ensure the definitions have not been modified somehow.
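For example (assuming tcr has already been filled in by a toLocal() call):

Serial.print("Zone: ");
Serial.print(tcr->abbrev);          // abbreviation from the rule currently in effect
Serial.print("  offset (min): ");
Serial.println(tcr->offset);        // minutes east of UTC, e.g. 60 or 120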
You are correct, it should be CET and CEST, but since they are just labels between quotes I had assumed this did not affect the functionality.
From reading a lot of threads I came to understand that ntpClient.update(); often returns nothing. While setRTC == false, it keeps running ntpClient.update(); until it returns something, then it stops trying to update. A separate bit of code sets setRTC back to false once per month to do a periodic update. That happens on the first day of every month, so it was not causing my 2-hour shift this week, or last week.
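Roughly, that part of loop() looks like this (simplified; firstOfMonth() here is just a stand-in for the actual date check):

if (!setRTC) {
  updateTime();           // keeps calling ntpClient.update() until it returns true
}
if (firstOfMonth()) {     // stand-in for the real "first day of the month" check
  setRTC = false;         // forces a fresh NTP sync on the next pass
}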
It's C++, and it's being used in an if condition, so it can't return nothing. It will return false a lot, though, because it won't do anything between updates within its _updateInterval, which defaults to 60 seconds. As you have it now:
- you call it and it might update its _currentEpoc (sic), but you don't check the return value;
- it is immediately called again, so that second call will definitely return false;
- this repeats in a loop, so for an update to take effect, the first (ignored) call has to land just before the interval expires and the second call just after it.
How many tries does it usually take to work? If you're only doing it at the start of the month, you can call forceUpdate instead and check the return value every time. It would only fail when the response from the NTP server times out.
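A minimal sketch of that approach, reusing the names from your snippet:

void updateTime() {
  if (ntpClient.forceUpdate()) {        // always sends a request; true only if a reply arrived
    const unsigned long epoch = ntpClient.getEpochTime();
    time_t localTime = myTZ.toLocal(epoch, &tcr);
    set_time(localTime);
    Serial.println("RTC is set");
    setRTC = true;                      // stop retrying until the next monthly trigger
  }
}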
None of this affects your original issue though. Do you call set_time anywhere else? What about the code that shows the occasionally wrong time?
I know it keeps repeating; I was counting on the default updateInterval to prevent it from stumbling over itself.
Sometimes it takes a minute, sometimes it can take up to 10 minutes.
Is there any benefit to using ntpClient.forceUpdate() over a normal update()?
I triple-checked, but I am sure set_time is used nowhere else. The code that does the displaying is not at fault, I'm 99% sure; I can tell by all the automated timers going off at the wrong time. This is the main reason the 2-hour relapse is a problem.
After 3 weeks of testing I've noticed that most of the "relapse" events happen at or around 24 hours after the RTC was initially set. I've made a function so I can manually trigger an NTP call and start monitoring from there.
I've also tried changing my code to use a fixed 2 * 3600 offset instead of the Timezone library; same result.
The NTPClient examples put the ntpClient.update() call directly in the loop(). So whether that main loop runs every 5ms or every 1000ms, the updateInterval reduces the network call out to the NTP servers to once every minute -- which is still updating the time much more than necessary. An average RTC might drift a few seconds a day. Even if yours is an order of magnitude worse, that would require updating every hour, not every minute.
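If you keep update() in loop(), the polling interval is configurable rather than fixed at the 60-second default (assuming a reasonably recent version of the NTPClient library):

ntpClient.setUpdateInterval(60UL * 60UL * 1000UL);  // contact the NTP server at most once per hour (value in ms)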
Because your (original) usage depends on taking additional action when the NTP server actually responds, you should never call the function (either update or forceUpdate) and ignore the result. forceUpdate is only necessary if you make a configuration change, like changing the timezone, and you want to see it take effect immediately, even if you have been calling update regularly.
NTPClient can also tell you the time, because it records the epoch last received from the NTP servers, and when that happened, using millis(). Then, using millis() again, it can tell how long it has been and what the time is now. millis() rolls over about every 49.7 days, so if it has been that long without an NTP refresh, the result will be wrong. It also has its own timeOffset for time zones, which is set independently of actually getting the time from NTP.
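Paraphrasing the library source, the time it reports is derived along these lines:

// Paraphrased from NTPClient::getEpochTime():
unsigned long NTPClient::getEpochTime() const {
  return _timeOffset                          // time zone offset set via setTimeOffset()
       + _currentEpoc                         // epoch from the last NTP response
       + ((millis() - _lastUpdate) / 1000);   // seconds elapsed since that response
}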
It would be interesting to see if the RTC loses only one hour during Standard Time in the winter. But maybe you don't want to wait that long.
Which one? There are three examples.
If you run the "Manual" example, and set the clock to be 15 minutes fast (there are timezones at 30- and 45-minute offsets, but none at 15), does it relapse?
Can you disable the WiFi, either on the Giga or the AP, so that it can't "secretly" dial out to find UTC time?
Still trying to find a solution, but yesterday I found this topic about the RTC being automatically synced every 24 hours if it's connected to the Arduino Cloud (which it is). In my cloud settings I do have the correct timezone and DST, so if it is indeed the cloud updating the RTC, I'm still stumped as to why it applies the wrong settings...
Maybe that discussion will clear some things up, but for the moment I've set the ntpClient to call the server every hour for an update. If for some reason it does relapse, it will hopefully correct itself soon after.
Is it possible that you are setting the RTC to local time, but the Arduino cloud is setting it to UTC? A common practice is to set the RTC to UTC, then make the adjustment to local time in the code when reading the time from the RTC.
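A minimal sketch of that convention, reusing the names from your snippet (on the Giga, time(NULL) reads the same mbed RTC that set_time writes):

// When syncing: store plain UTC in the RTC.
if (ntpClient.update()) {
  set_time(ntpClient.getEpochTime());   // RTC now holds UTC, no zone applied
}

// When reading: convert to local time on the fly.
time_t utcNow   = time(NULL);           // UTC straight from the RTC
time_t localNow = myTZ.toLocal(utcNow, &tcr);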
Certainly looks like it. In the Arduino IoT "things" metadata I've set my timezone correctly; the last sync corresponds with UTC+1 (correct) but does not account for DST. What happens is that I set my RTC to UTC+2 (timezone + DST), but when it relapses it goes to UTC+0.
I suppose I could, but it would confuse the hell out of me if I left it running for a few years and then decided to pick it back up.
Also, I just want it to work properly, and to understand why it doesn't.