Can I use millis() to increment a data type every millisecond..?

The previous sketch has a variable named mills. I thought you were referring to that! And once again, I was wrong!

1 Like

I wrote the current version of the timer0 ISR back in 2009.

I'm not sure I understand the issues that GoForSmoke is describing.
Since the "remainder" value has its own remainder whenever the millisecond count "skips", the skipping should have a pattern that repeats every LCM(3, 128), or 384 ms (that's 0x180, with bit 8 set, so perhaps that accounts for it?). I guess it's possible that there's an off-by-one error when the remainder "wraps"?

1 Like

Cool.

That's because I got stuck on a Wrong Notion I should easily have known better than! No excuse, I blew it Big Time!

Sorry, I'll be kicking myself for the rest of the month!

I have measured millis() wayyyy too long to screw it up; it's just that lately RL has taken so much attention that I got this heads-up for stupidity.

I did get almost halfway through that Bresenham article that DaveX posted... it's worthy of study for sure though!
I wonder if the report function could be flagged by the IRQ at every >= wrap, and THAT could count those flags and only make the actual report after being flagged X times, outside of the IRQ?

Do we have the never-seen low 8-bit millis() values to use for speed?

1 Like

That's where I first learned about the technique, and I've wondered if it has an actual name too. I've used it in various ways for time-keeping and stuff like resampling a discrete signal (e.g., using a fixed-rate timer interrupt to sample serial data at a different rate, say 1200 baud sampled by an 8192 Hz ISR). I've also used it to draw actual lines on a screen. :smiley:

Edit: oh, and I think Direct Digital Synthesis (DDS) could be considered another use of the technique. It's one of my favorite things at the moment since it's so powerful yet simple and elegant.

Edit2: I just realized that all of these techniques are analogous to two meshed gears with different numbers of teeth: in the case of a line-drawing algorithm, one gear is the X axis and the other is the Y axis. Each full revolution of a gear represents a movement of a pixel in the respective axis.
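To make the gears picture concrete, here's a minimal sketch of the resampling flavor of the trick; the rates and names are made-up examples of mine, and the function would be called from a fixed-rate ISR:

// Illustration only: pick which 8192 Hz ticks should take a 1200 baud
// sample, using the add-and-conditionally-subtract ("meshed gears") trick.
const unsigned int TICK_RATE   = 8192;  // the driving gear (ISR rate)
const unsigned int SAMPLE_RATE = 1200;  // the driven gear (desired rate)

unsigned int accumulator = 0;           // the running error term

bool sampleDueThisTick()                // call once per 8192 Hz tick
{
  accumulator += SAMPLE_RATE;           // add the slow rate every fast tick
  if (accumulator >= TICK_RATE) {       // a whole output period has accrued
    accumulator -= TICK_RATE;           // carry the remainder forward
    return true;                        // take a sample on this tick
  }
  return false;
}

Every stretch of 8192 ticks yields exactly 1200 trues, and the error never builds past one tick, which is the same property the millis() remainder math relies on.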

1 Like

The flag might be just a difference in the low bits of timer0_millis.

For extra speed you might do something awful and abuse a byte of timer0_millis:

extern unsigned long timer0_millis;   // the Arduino core's millisecond counter

uint8_t s, t;                         // previous and current state of the watched bit
unsigned long u, v;                   // previous and current micros() marks

void setup()
{
  Serial.begin( 115200 );
  Serial.flush();
  Serial.println( "\n\n\n\n bit 10 toggle usecs within void loop limit\n" );

  s = (timer0_millis >> 8) & 0x4;     // bit 10 of timer0_millis, toggles every 1024 ms
  v = micros();

  do
  {
    t = ((uint16_t)timer0_millis >> 8) &0x4;

    if (( s ) != ( t  ))              // bit 10 changed state
    {
      v = micros();
      Serial.println( v - u );        // microseconds since the previous toggle
      s = t;
      u = v;
    }
  }
  while ( v <= 60000000 );            // keep looping until 60 seconds have passed
}

void loop() {
}
 bit 10 toggle usecs within void loop limit

1024008
1024000
1024000
1023996
1024004
1024000
1024000
1023996
1024004
1024000
1024000
1024000
1023996
1024004
1024000
1024000
1024000
1023996
1024004
1024000
1024000
1023996
1024004
1024000
1024000
1024000
1023996
1024004
1024000
1024000
1024000
1023996
1024004
1024000
1024000
1023996
1024004
1024000
1024000
1024000
1023996
1024004
1024000
1024000
1024000
1023996
1024004
1024000
1024000
1023996
1024004
1024000
1024000
1024000
1023996
1024004
1024000
1024000
1024000

The casts seem to suggest to the compiler that only one byte of timer0_millis needs to be read:

  do
  {
    t = ((uint16_t)timer0_millis >> 8) &0x4;
 5c4:	80 91 52 01 	lds	r24, 0x0152	; 0x800152 <timer0_millis+0x1>
 5c8:	84 70       	andi	r24, 0x04	; 4
 5ca:	80 93 4a 01 	sts	0x014A, r24	; 0x80014a <t>

    if (( s ) != ( t  ))
 5ce:	90 91 4f 01 	lds	r25, 0x014F	; 0x80014f <s>
 5d2:	89 17       	cp	r24, r25
 5d4:	09 f4       	brne	.+2      	; 0x5d8 <main+0x158>
 5d6:	4a c0       	rjmp	.+148    	; 0x66c <main+0x1ec>
    {

In the thread linked in @westfw's #42, and in the source code, it mentions "division can be done moderately efficiently by repeated subtraction". I think the term I was looking for is "division by repeated subtraction" as the general name for the algorithm. It is taught to kids as a simple form of division and used by programmers for incremental ratio problems like Bresenham or millis() when they want to avoid a "divide".

Re edit 2: yep, a ratio problem is like meshed gears. The trick in Bresenham, millis() and division by repeated subtraction is that you only need to make the program act on a full revolution of the driving gear, knowing that any ratio between the two boils down to an addition and a (conditional) subtraction.
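For reference, a bare-bones C version of the plain schoolbook form (my own illustration, not anyone's library code):

// Division by repeated subtraction: compute n / d and n % d with no
// divide instruction.  Fine when the quotient is small, as in the
// millis() case where it is always 0 or 1.
unsigned int divideBySubtraction(unsigned int n, unsigned int d,
                                 unsigned int *remainder)
{
  unsigned int q = 0;
  while (n >= d) {      // each subtraction is one unit of the quotient
    n -= d;
    q++;
  }
  *remainder = n;       // what's left over carries the "error"
  return q;
}

In the incremental version (Bresenham, the millis() fraction) the numerator never gets to grow past twice the denominator, so the whole loop collapses into a single compare-and-subtract.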

1 Like

I see in the C examples what it's doing, but I haven't internalized the full taco yet. I hope this is temporary, but I do recall hitting the same pages in the SCO docs like 5 or 6 times just picking up terms!

Whether it's repeated subtractions or additions... I like that he avoided negative values, but even then it's still about remainders. The part about there being no overall error, intuiting that two smaller timings can add up to the desired timing, is where I wonder whether the "do report" step could be an IPO-type task outside of the IRQ. The regular code will read the value/flag when it gets around to it, not at the exact usec the period ends, anyway.

Well sure, that's the general "mathematical" term. I'm wondering if there's a name for the general algorithm that applies repeated subtraction incrementally, carrying over the remainder (the error term) each time so as to maintain the ratio between the numerator and denominator.

Searches on "division by repeated subtraction c" showed it used for describing some programming problems. I did see a hint of "incremental error algorithm" as a name on Wikipedia, but I didn't really see that term used anywhere else.

The book "Microcontroller Programming --The Microchip PIC" by Julio Sanchez and Maria P. Canton calls it the Black-Ammerman method and refers folks to the Roman Black pages referenced above in #35. And there, Roman Black refers to Ammerman and his suggestions about applying Bresenham to clock timing.

I think what's different about Arduino's/@westfw's millis() implementation, relative to Bresenham and Black-Ammerman implementations, is using a greater-than-1:1 ratio between the values, 1024:1000 -- that pushes out of the first-octant limit of Bresenham, or beyond the examples shown in Black-Ammerman, at the cost of double increments. Since the input period of 1024 us per tick is longer than the desired output period of 1000 us per count, the decision becomes a choice of whether to do a single or a double count of millis(). Well, that and the optimizations that scale the running summing/decision math to fit into a single byte (24/8 = 3 and 1000/8 = 125 are both less than 256).
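To make that concrete, here's the shape of the per-overflow bookkeeping, paraphrased from memory rather than pasted from wiring.c, with hypothetical variable names standing in for the core's timer0_millis and its fraction counter:

// Sketch of the per-overflow millis() bookkeeping (paraphrased, 16 MHz values).
// Each timer0 overflow is 1024 us: count 1 ms always, accumulate the extra
// 24 us (in 8 us units), and double-count when a whole extra ms has built up.
#define MILLIS_INC 1     // whole milliseconds per 1024 us overflow
#define FRACT_INC  3     // leftover 24 us, expressed in 8 us units
#define FRACT_MAX  125   // 1000 us, expressed in 8 us units

volatile unsigned long timerMillis = 0;   // hypothetical stand-ins for the
static unsigned char timerFract = 0;      // core's millis and fract counters

void onTimer0Overflow()                   // imagine this called from the ISR
{
  unsigned long m = timerMillis;          // copy volatiles to locals once
  unsigned char f = timerFract;

  m += MILLIS_INC;
  f += FRACT_INC;
  if (f >= FRACT_MAX) {                   // a whole extra millisecond accrued
    f -= FRACT_MAX;                       // keep the remainder
    m += 1;                               // ...and double-increment
  }

  timerFract = f;
  timerMillis = m;
}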

millis()'s occasional double-count makes it hard to do what the OP originally asked for: "Can I use millis() to increment a datatype every millisecond?" If you use millis(), you would need to double-increment whenever millis() double-increments.
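One way to cope, as a sketch of my own rather than anything from the core: add the elapsed difference instead of a fixed 1, so the counter absorbs the double increments automatically.

// Counts every elapsed millisecond, including the "skipped" ones:
// adding the delta means a double increment in millis() simply adds 2.
unsigned long lastMs = 0;
unsigned long counter = 0;

void setup() {}

void loop()
{
  unsigned long now = millis();
  counter += (now - lastMs);   // 0, 1 or 2 per pass, never loses a count
  lastMs = now;
}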

Note that in the case of the millis() calculations, the "division" is always going to give you either 0 or 1, so the test-and-subtract is MUCH faster than an actual division, and probably faster than a modulo-by-AND (if that were possible.)

millis()'s occasional double-count makes it hard to do what the OP originally asked for: "Can I use millis() to increment a datatype every millisecond?"

Depending on the application, you might be able to use the timer0_overflow_count variable, which IS incremented every timer tick (24 us slower than 1 ms), albeit at rather great cost: 4 memory fetches, 3 add instructions, 4 stores. My original ISR modifications got rid of the overflow count entirely, but of course that makes micros() more difficult.
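If you do go that route, here's a minimal sketch of reading it safely; the variable name is the core's, the helper function is made up, and the point is only that a 32-bit read on AVR isn't atomic:

#include <util/atomic.h>

extern volatile unsigned long timer0_overflow_count;  // maintained by the core ISR

// Hypothetical helper: snapshot the 1024 us tick count.  A 32-bit read on
// AVR takes several instructions, so the ISR must not update it mid-read.
unsigned long overflowTicks()
{
  unsigned long t;
  ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {
    t = timer0_overflow_count;
  }
  return t;
}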

What I don't understand is, why not go the other way, and get rid of millis()? I don't really see the point of millis() myself. I use micros() for short time intervals, and if I need something longer, I can use Blink-Without-Delay-style logic to increment a counter every second to count seconds.
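For the record, the sort of thing I mean, as a bare sketch:

// Blink-Without-Delay-style seconds counter built on micros() only.
unsigned long lastMark = 0;
unsigned long seconds = 0;

void setup() {}

void loop()
{
  // unsigned subtraction stays correct across the ~70 minute micros() rollover
  if (micros() - lastMark >= 1000000UL) {
    lastMark += 1000000UL;   // advance by exactly one second, no drift
    seconds++;
  }
}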

get rid of millis()?

It used to be pretty common to have system calls that operated on "ticks" rather than a conveniently human-oriented time value. (1/60th second on a lot of mainframes, something like 1/50s in DOS.) Even having an interrupt every (close to) millisecond is significant overhead.

I would think having nice even millisecond values was something done to accommodate the non-technical crowd that Arduino was designed for. And of course, once it was milliseconds, it could never go back. (and you know, the ARM versions don't have any trouble doing exactly milliseconds. It's strictly a side effect on the AVR of wanting to use the same timer for both the "tick" and for PWM.)

The original Arduino Timer code kept just a tick count, and converted to milliseconds in the millis() function with some relatively nasty math. But that turns out to be pretty complicated, because the math overflows at different places than the timer...
Edit: Huh. Or so I remember. I can't find a distribution that old :frowning:

1 Like

So, basically, it's milliseconds rather than microseconds just because the non-technical crowd prefers numbers with fewer digits?
I grew up playing pinball. Numbers in the millions don't scare me.

Well, "microseconds" would require big numbers not to overflow inconveniently often. you do really want time intervals in "several hours" to be convenient, and 8bit CPUs don't like numbers that big. (and lots of systems I've used have APIs that use milliseconds, even if the "tick" is coarser. It's a pretty good size, for a lot of things.)

One minute in milliseconds = 60,000. Will fit in uint16_t = 2 bytes.
One minute in microseconds = 60,000,000. Requires uint32_t = 4 bytes.
Microcontroller environments are often memory-starved.

One reason for millis() is that 32 bits of milliseconds give intervals far longer than micros()'s 70-odd minutes.

Rather than? My Arduino has both!

Or neither! If your code doesn't use micros() or millis(), the compiler is happy to leave it out. Use it or lose it.

On the "neither" side, Arduino AVR initializes the 8-bit Timer0 for PWM, with the ISR(TIMER0_OVF_vect) ISR maintaining a count of 1024us ticks in:

extern volatile unsigned long timer0_overflow_count;

...and 4us resolution 0-255 sub-ticks in the TCNT0 register.

micros() reads those two values and translates them into microseconds in user space.
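Roughly, that translation looks like the following; this is a from-memory sketch of the idea rather than a paste of wiring.c, and the register names assume an ATmega328-class part at 16 MHz:

extern volatile unsigned long timer0_overflow_count;

// Sketch: merge the overflow count (1024 us units) with TCNT0 (4 us units).
// Interrupts are paused so both reads belong to the same tick.
unsigned long microsSketch()
{
  uint8_t oldSREG = SREG;
  cli();
  unsigned long overflows = timer0_overflow_count;
  uint8_t ticks = TCNT0;                       // 0..255, 4 us per count at 16 MHz
  if ((TIFR0 & _BV(TOV0)) && ticks < 255)
    overflows++;                               // overflow pending, not yet serviced
  SREG = oldSREG;
  return (overflows * 256UL + ticks) * 4UL;    // back to microseconds
}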

The same ISR also maintains a (jittery) count of milliseconds in

    extern volatile unsigned long timer0_millis;

...which is directly copied out into user space with:

millis();

OK, so if I use micros() in my code but not millis(), will this ISR still bother to count timer0_millis even though I don't need it?