I'm doing a little programming that controls some stepper motors, and timing is important. I want to have an idea of how long each instruction takes to process.
For example, I know how to step the motor, then delay 500 microseconds, then step it again.
How close is this to the following?
Step motor 1, delay 250 microseconds, step motor 2, delay 250 microseconds, step motor 1 again.
The delays add up to the same thing, but what about all the computing being done?
Is each "if" statement 1 clock tick? Is each "assignment" or "addition" also 1 clock tick? How long does it take the clock to tick once?
The AVR datasheet lists all the AVR instructions and the number of cycles each takes, but your approach is flawed: ideally you don't want to count cycles at all. You want to use a timer. Either use a stepper motor library or start reading the Timers sections of the AVR datasheet.
An if statement is a high-level C/C++ language construct and will compile to perhaps a few dozen AVR machine-language instructions, each of which takes one or more clock cycles of 62.5 nanoseconds, I believe, at a 16 MHz clock rate.
Is each "assignment" or "addition" also 1 clock tick?
No. Again, the compiler's C-to-machine-language translation produces many smaller machine-language steps to perform each C statement.
Generally, instructions that operate on registers take one cycle, instructions that operate on memory take two cycles, and "special" instructions take 3 cycles. This is highly simplified -- you really want to look at the data sheet -- but this is a good first-order approximation for estimation purposes.
And, yes, with a 16 MHz clock, one cycle is 1/16,000,000 of a second (62.5 ns).
However, for driving external hardware (steppers, servos), you should either use an existing library or use the timer/counter/PWM peripherals built into the chip, rather than try to emulate the same function in software.
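To make that concrete, here is a minimal sketch of the hardware-timer route: Timer1 in CTC mode on an ATmega328, assuming a 16 MHz clock and a prescaler of 8. setupStepTimer and ocr1aFor are hypothetical helper names of mine; the register names come from the ATmega328 datasheet. The compare-value arithmetic is plain C++ so it can be checked off-board:

```cpp
#include <stdint.h>

// Compare value for Timer1 in CTC mode: the timer ticks at F_CPU/prescaler,
// and CTC counts 0..OCR1A inclusive, hence the final -1.
// Defaults assume a 16 MHz clock and prescaler 8 (check your board).
uint32_t ocr1aFor(uint32_t interval_us,
                  uint32_t f_cpu = 16000000UL,
                  uint32_t prescale = 8) {
    uint64_t ticks = (uint64_t)interval_us * f_cpu
                     / ((uint64_t)prescale * 1000000UL);
    return (uint32_t)(ticks - 1);
}
// e.g. ocr1aFor(500) == 999: 500 us at a 2 MHz timer clock is 1000 ticks.

#ifdef ARDUINO
#include <Arduino.h>

void setupStepTimer(uint32_t interval_us) {
    TCCR1A = 0;
    TCCR1B = _BV(WGM12) | _BV(CS11);   // CTC mode, prescaler 8
    OCR1A  = ocr1aFor(interval_us);
    TIMSK1 |= _BV(OCIE1A);             // interrupt on compare match
    sei();
}

ISR(TIMER1_COMPA_vect) {
    // step the motor here; the interval is exact regardless of how
    // long the rest of the sketch's code takes between interrupts
}
#endif
```

With the interval generated in hardware, the 500 µs (or 250 + 250 µs) cadence from the original question no longer depends on how many cycles the surrounding C code takes.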
I've locked in timing for gyroscope reading and musical-notation purposes using the following method. On a 16 MHz Arduino the timing resolution of micros() is limited to 4 microseconds, and on an 8 MHz board it is 8 microseconds. This has been accurate enough for my needs.
micros() gives the current time in microseconds. By dividing the current time by the period you want (for me, 100 times a second = 10,000 micros) you know how many timing ticks of 1/100 of a second have passed. Each time you pass a 10,000-microsecond boundary, mark it by incrementing a period counter (Gyro_PeriodCount). Exactly 10,000 more micros have elapsed when
micros() / 10000 == (Gyro_PeriodCount + 1)
By running this code continuously you are watching the clock and, at the right time, executing your timed code. The best part is that you can run many of these if statements in the same loop (as long as their timing periods are not wildly variable).
NoahHornberger:
I've locked in timing for gyroscope reading and musical-notation purposes using the following method. On a 16 MHz Arduino the timing resolution of micros() is limited to 4 microseconds, and on an 8 MHz board it is 8 microseconds. This has been accurate enough for my needs.
micros() gives the current time in microseconds. By dividing the current time by the period you want (for me, 100 times a second = 10,000 micros) you know how many timing ticks of 1/100 of a second have passed. Each time you pass a 10,000-microsecond boundary, mark it by incrementing a period counter (Gyro_PeriodCount). Exactly 10,000 more micros have elapsed when
micros() / 10000 == (Gyro_PeriodCount + 1)
By running this code continuously you are watching the clock and, at the right time, executing your timed code. The best part is that you can run many of these if statements in the same loop (as long as their timing periods are not wildly variable).
So what happens roughly 71.6 minutes after booting, when micros() wraps back round to zero? Compare with
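For the record, the usual wrap-proof pattern compares elapsed time by unsigned subtraction instead of dividing the absolute micros() value; uint32_t arithmetic is modulo 2^32, so the subtraction comes out right across the rollover. A generic sketch, not Noah's code:

```cpp
#include <stdint.h>

// Wrap-safe elapsed-time test: uint32_t subtraction is modulo 2^32,
// so (now - last) is correct even when micros() has rolled over
// between `last` and `now`.
bool intervalElapsed(uint32_t now, uint32_t last, uint32_t interval_us) {
    return (uint32_t)(now - last) >= interval_us;
}

// Example: `last` taken just before rollover, `now` just after.
// 0x00000100 - 0xFFFFFF00 == 0x200 == 512, so a 500 us interval has elapsed.
```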
NoahHornberger:
I've locked in timing for gyroscope reading and musical-notation purposes using the following method. On a 16 MHz Arduino the timing resolution of micros() is limited to 4 microseconds, and on an 8 MHz board it is 8 microseconds. This has been accurate enough for my needs.
Actually, the default timer setup in the Arduino library gives the micros() function about 4 microseconds of precision.
However, if you set up a timer yourself, you can make the precision almost as good as you want. On an ATmega328 (Uno, and friends), you can use Timer2 for this: have the timer count once per clock cycle and interrupt on overflow, and make the interrupt function increment a loop counter.
When reading the time, read the loop counter, then read the timer value, then read the loop counter again. If both loop-counter readings are the same, you're good, and you return loop counter * 256 + timer value; otherwise you try again. This gives machine-cycle-accurate timing. There will still be jitter in the timing because of the interrupts, though, and latency because of the instructions executed to calculate the timestamp.
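The read-twice scheme can be sketched like this. It is modeled on the desktop so the retry logic is visible; on a real ATmega328, timer_loops would be incremented in the Timer2 overflow ISR and readTimer() would return TCNT2. The names are mine:

```cpp
#include <cstdint>

// Desktop model of the read-twice timestamp. On the AVR, timer_loops is
// bumped by the Timer2 overflow ISR and timer_count stands in for TCNT2.
volatile uint32_t timer_loops = 0;
volatile uint8_t  timer_count = 0;

static uint8_t readTimer() { return timer_count; }

// Read loops, then the timer, then loops again. If an overflow
// interrupt landed between the two loop reads they differ, so retry;
// otherwise the pair (loops, timer) is a consistent snapshot.
uint32_t readTimestamp() {
    uint32_t before, after;
    uint8_t t;
    do {
        before = timer_loops;
        t      = readTimer();
        after  = timer_loops;
    } while (before != after);
    return before * 256UL + t;      // machine cycles since start
}
```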
jwatte:
However, if you set up a timer yourself, you can make the precision almost as good as you want. On an ATmega328 (Uno, and friends), you can use Timer2 for this: have the timer count once per clock cycle and interrupt on overflow.
Agreed. I did something like that recently, posted on this forum and cross-posted here:
I found it better to use Timer 1, which is 16-bit, because it overflows much less often (once every 65536 counts rather than once every 256). I used the full-speed clock resolution (62.5 ns) and got quite accurate figures up to about 90 kHz in the data I was sampling.
I'm not sure why it wasn't a bit better than that, but I guess that the time taken to service the interrupts at both ends (start and finish, i.e. the moments the pulse is sampled) is quite likely around 10 µs. The inverse of that being 100 kHz, it seems reasonable to say that is as fast as the processor can keep up.
Still, at input rates of (say) 10 kHz I was seeing around 0.1% error, which isn't too bad. In fact, the Arduino's internal crystal/resonator is probably out by that much, so this is probably as good as it is going to get without a high-precision frequency reference.
westfw:
On some of the newer AVRs (ATtinyx5) they apparently made it possible to run the timers FASTER than the CPU (up to 64 MHz). Which is ... interesting!
Not just the I/O clock: the CPU can be driven from the PLL too. The internal oscillator runs at 8 MHz, but the CPU executes instructions at 16 MHz; it just takes toggling a fuse bit to turn it on.
On-chip PLL clock generation is the rule, not the exception, with high-performance processors (sub-micron transistors are very fast; crystals only go up to 30 MHz or so in fundamental mode).