seconds timekeeping and time conversion

I have been fooling around with one of the SparkFun shields based on the SM5100B module (https://www.sparkfun.com/products/9607). These have an RTC on them that I will use to sync hour-to-hour timekeeping on the Arduino. I had a look at the various timekeeping libraries around and was not thrilled with their weight or complexity, so I hacked up my own solution that others might want to look at.

First, I implemented seconds timekeeping in wiring.c.

/* seconds support
 */
unsigned long time_sec(void);
void time_set(unsigned long);

Then, a handful of lightweight utilities analogous to the UNIX timegm() and gmtime() functions. The seconds timekeeping is a standard UNIX time_t-compatible value counting seconds since Jan 1, 1970, so you can exchange raw time data with other systems.

/* Time components.  Analogous to struct tm, but different.
 */
struct time_parts {
  uint8_t yr, mo, dy, hr, mi, se;
  uint8_t wd;
};

void time_to_parts(uint32_t time, struct time_parts *tm);
uint32_t parts_to_time(struct time_parts *tm);
int parse_time_string(char *ds, struct time_parts *dt);
void format_time_string(char *ds, struct time_parts *dt);
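
To make the conversion concrete, here is a self-contained sketch of what parts_to_time() could look like for the 2000-2099 range. This is my own illustration, not the attached code; it redeclares the struct so it compiles standalone, and it skips validation entirely.

```c
#include <stdint.h>

struct time_parts {
  uint8_t yr, mo, dy, hr, mi, se;  /* yr = years since 2000 */
  uint8_t wd;
};

/* Cumulative days before each month in a non-leap year. */
static const uint16_t days_before_month[12] =
  { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 };

uint32_t parts_to_time(struct time_parts *tm) {
  /* 10957 = days from 1970-01-01 to 2000-01-01 (7 leap years in between) */
  uint32_t days = 10957UL + 365UL * tm->yr + (tm->yr + 3) / 4;
  days += days_before_month[tm->mo - 1];
  if (tm->mo > 2 && (tm->yr % 4) == 0)
    days += 1;                      /* Feb 29 of the current leap year */
  days += tm->dy - 1;
  return ((days * 24UL + tm->hr) * 60UL + tm->mi) * 60UL + tm->se;
}
```

The (yr + 3) / 4 term counts the leap days since 2000; it only works because every fourth year in 2000-2099 is a plain leap year, which is exactly the restriction stated below.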

timekeeping.zip (10.6 KB)

Has this been tested well? (Even professionals mess up calendar stuff: google "leap year bug").

By the way, I see you have lots of divisions. I wonder how hard it would be to replace them with bit shifts and addition.
1/60 = 0x0.04444444... (abuse of notation, I know, but you get the point)
1/24 = 0x0.0AAAAAAA... (AAAAAAA!! is how I feel about the slowness of division)
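
Those hex fractions are the reciprocal-multiplication trick: multiply by a rounded-up binary fraction, then shift. A sketch for dividing 16-bit values by 60 (the constant and shift are my own choices for this illustration, not from the library under discussion):

```c
#include <stdint.h>

/* Divide a 16-bit value by 60 using a multiply and a shift.
 * 34953 = ceil(2^21 / 60); the rounding error stays small enough
 * that the result is exact for all x in [0, 65535]. */
uint16_t div60(uint16_t x) {
  return (uint16_t)(((uint32_t)x * 34953UL) >> 21);
}
```

On AVR this trades a long-division loop for one 16x16 multiply and a shift, which is usually a win when the divisor is a compile-time constant.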

odometer:
Has this been tested well?

It's been working for some time now without any obvious trouble.
I tend to do pretty extensive unit tests of this kind of thing on a real computer before putting the core code on the Arduino.
It is defined to work over years 2000 to 2099, during which period all leap years are of the vanilla kind.

The division is not easy to get rid of.
It is used mainly in converting a uint32_t to a struct time_parts, which is part of printing out a time. In my application I do not do this very often. Normally what I need to do is to parse a time string given by a GSM modem and use that to do time calculations.
The parsing logic is about as fast and compact as I could make it, at the cost of fairly half-assed validation.

Doesn't the Time library do most or all of this?

Pete

The Time library uses a lot more RAM than I wanted. In general it is much more heavyweight in code, stack, and static RAM than is warranted for the simple job it does. This code in Time.cpp, in particular, gives me the willies:

time_t now() {
  while (millis() - prevMillis >= 1000){      
    sysTime++;
    prevMillis += 1000;	
#ifdef TIME_DRIFT_INFO
    sysUnsyncedTime++; // this can be compared to the synced time to measure long term drift     
#endif
  }
  if (nextSyncTime <= sysTime) {
    if (getTimePtr != 0) {
      time_t t = getTimePtr();
      if (t != 0) {
        setTime(t);
      } else {
        nextSyncTime = sysTime + syncInterval;
        Status = (Status == timeNotSet) ?  timeNotSet : timeNeedsSync;
      }
    }
  }  
  return (time_t)sysTime;
}

This is an extraordinary amount of work, just to tell me what time it is. Looking at this is what made me decide to just do timekeeping in wiring.c, as millis() already does. Reading the time should not be as (potentially) expensive as the Time library makes it.

It also doesn't have date/time string parsing -- the main issue I had was dealing with ETSI GSM 07.07 date/time format values, so that's all baked in. My application involves parsing SMS delivery reports and doing time calculations based on these.
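
For reference, those GSM 07.07 timestamps look like "yy/MM/dd,hh:mm:ss±zz", where zz is the timezone offset in quarter-hours. A minimal parser sketch in the same spirit, with my own names and the same bare-bones validation (not the attached parse_time_string()):

```c
#include <stdint.h>

struct time_parts { uint8_t yr, mo, dy, hr, mi, se, wd; };

static uint8_t two_digits(const char *p) {
  return (uint8_t)((p[0] - '0') * 10 + (p[1] - '0'));
}

/* Parse "yy/MM/dd,hh:mm:ss±zz"; returns 0 on success, -1 if the
 * separators are wrong.  Digits themselves are not range-checked. */
int parse_gsm_time(const char *s, struct time_parts *tp, int8_t *quarter_hours) {
  if (s[2] != '/' || s[5] != '/' || s[8] != ',' ||
      s[11] != ':' || s[14] != ':')
    return -1;
  tp->yr = two_digits(s);       /* years since 2000 */
  tp->mo = two_digits(s + 3);
  tp->dy = two_digits(s + 6);
  tp->hr = two_digits(s + 9);
  tp->mi = two_digits(s + 12);
  tp->se = two_digits(s + 15);
  *quarter_hours = (int8_t)two_digits(s + 18);
  if (s[17] == '-') *quarter_hours = -*quarter_hours;
  return 0;
}
```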

I'm not out to throw stones at the Time library, only showing what I did instead.

I'm not out to throw stones at the Time library

Yup, no problem. I was just wondering whether you had a better "mousetrap" :-)

This code in Time.cpp, in particular, gives me the willies

I think what the now() code is doing is waiting for the internal time to change to the next second before returning the time value. That way if it returns a string saying it is (for example) 17:15:12, the seconds value will have just "ticked" and be very close to 12.00 secs rather than being 12.85 or 12.13 or ....

It also doesn't have date/time string parsing

True, but I have always found it easy to parse the year, month, day, etc., of a string into the elements of a tmElements_t structure and then just convert that to a time_t.

Pete

el_supremo:
I think what the now() code is doing is waiting for the internal time to change to the next second before returning the time value.

Not really. It is computing:

sysTime += (millis() - prevMillis) / 1000L;

But it does the division through successive subtraction, while also working around some subtleties of unsigned subtraction that this simplification gets wrong, and keeping track of the division remainder for next time. The assumption is that (millis() - prevMillis) / 1000L is almost always 0 or 1, and in that case subtraction is way, way faster than doing the division.

If you call now() all the time in a tight loop, the overhead is low almost all of the time. But if you call it only when some external event occurs, you could get hosed by a bunch of "catching up".
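
In other words, the loop is a subtraction-only version of a divide-with-remainder. A sketch of the equivalence (the names are mine, modeled loosely on Time.cpp):

```c
#include <stdint.h>

/* Fold elapsed milliseconds into a running seconds counter, two ways.
 * Both leave the sub-second remainder in *prev for the next call, and
 * both survive millis() wraparound thanks to unsigned subtraction. */
void tick_subtract(uint32_t now_ms, uint32_t *prev, uint32_t *sec) {
  while (now_ms - *prev >= 1000UL) {  /* what Time.cpp's now() does */
    (*sec)++;
    *prev += 1000UL;
  }
}

void tick_divide(uint32_t now_ms, uint32_t *prev, uint32_t *sec) {
  uint32_t delta = now_ms - *prev;
  *sec += delta / 1000UL;             /* one division, however large */
  *prev += (delta / 1000UL) * 1000UL; /* keep the remainder behind */
}
```

The subtraction version is cheap when at most one second has elapsed; the division version pays a fixed cost but never has to "catch up" one second at a time.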

Not really. It is computing:

Yup, you're right.

Pete

gardner:
It is computing:

sysTime += (millis() - prevMillis) / 1000L;

But it does the division through successive subtraction, while also working around some subtleties of unsigned subtraction that this simplification gets wrong, and keeping track of the division remainder for next time. The assumption is that (millis() - prevMillis) / 1000L is almost always 0 or 1, and in that case subtraction is way, way faster than doing the division.

Recently, I've been working on killing a lot of division.
If I were doing this, I would use (x >> 10) as an approximation to (x / 1000), and then use the successive subtraction to clean up. (The >> symbol is the bit shift operator. On non-negative integers, (x >> 10) works out exactly the same as (x / 1024).)

odometer:
If I were doing this, I would use (x >> 10) as an approximation to (x / 1000), and then use the successive subtraction to clean up.

In my experience, division by constants is not as simple as you make out. This has a lot of detail:
http://www.hackersdelight.org/divcMore.pdf

Hacker's Delight gives this solution for unsigned division by 1000, which, on AVR, is probably as bad as the division. AVR is rotten at shifts, and all these C shifts will suck next to the hand-optimized shifts in the GCC division.

unsigned divu1000(unsigned n) {
  unsigned q, r, t;
  t = (n >> 7) + (n >> 8) + (n >> 12);
  q = (n >> 1) + t + (n >> 15) + (t >> 11) + (t >> 14);
  q = q >> 9;
  r = n - q*1000;
  return q + ((r + 24) >> 10);
}

gardner:

odometer:
If I were doing this, I would use (x >> 10) as an approximation to (x / 1000), and then use the successive subtraction to clean up.

In my experience, division by constants is not as simple as you make out. This has a lot of detail:
http://www.hackersdelight.org/divcMore.pdf

Hacker's Delight gives this solution for unsigned division by 1000, which, on AVR, is probably as bad as the division. AVR is rotten at shifts, and all these C shifts will suck next to the hand-optimized shifts in the GCC division.

unsigned divu1000(unsigned n) {
  unsigned q, r, t;
  t = (n >> 7) + (n >> 8) + (n >> 12);
  q = (n >> 1) + t + (n >> 15) + (t >> 11) + (t >> 14);
  q = q >> 9;
  r = n - q*1000;
  return q + ((r + 24) >> 10);
}

Maybe the author of that code had an "avoid branching at all costs" mentality, but I don't.
My method is a compromise: first approximate n/1000 as n/1024, and then clean up by using a loop for the remaining subtractions.
You could also iterate the n>>10 trick, as follows:

unsigned divu1000(unsigned n) {
  unsigned q = 0;
  unsigned r = n + 24;
  unsigned t;
  while (r >= 1024) {
    t = (r >> 10);  // (r >> 10) + (r >> 16) is maybe better
    q += t;
    t += (t << 1);
    r = (r & 1023) + (t << 3);
  }
  return q;
  // if you want the remainder:
  // r -= 24;
  // return r;
}

or, using multiplication instead of most of the shifts,

unsigned divu1000(unsigned n) {
  unsigned q = 0;
  unsigned r = n + 24;
  unsigned t;
  while (r >= 1024) {
    t = (r >> 10);  // (r >> 10) + (r >> 16) is maybe better
    q += t;
    r -= (1000 * t);
  }
  return q;
  // if you want the remainder:
  // r -= 24;
  // return r;
}

On Intel with x86_64 GCC, the Hacker's Delight solution is a good bit faster than either of these.
Of course, plain old division beats all of them on Intel, so that doesn't prove a lot.