The last two lines of your post look fine to me. If your ratio (1/60) is constant, I would prefer the canned two-line solution, because pulling floating point math into your sketch adds a lot of space overhead and the computations are relatively slow.
But I'll assume your rate is an arbitrary floating point value, like a frames-per-second figure, that isn't known ahead of time.
int rate = 60;
float duration = 1000.0 / rate;                               // period in milliseconds: 16.666...
long milliseconds = (long)duration;                           // whole milliseconds: 16
long microseconds = (long)((duration - milliseconds) * 1000); // leftover microseconds: 666
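Those two values then feed straight into the two delay calls (delay() takes milliseconds, delayMicroseconds() takes microseconds):
delay(milliseconds);
delayMicroseconds(microseconds);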
If you use #define instead of variables, the preprocessor substitutes the values and the compiler can do the calculations at compile time, inserting constants into the object code without linking in all the runtime floating point support. The following should be much faster and about a third the size of the equivalent code using floating point variables.
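A sketch of that idea (the macro names are mine; this variant uses integer math so no floating point support is linked at all):
#define RATE 60
#define FRAME_US (1000000L / RATE)   // 16666 µs per period, folded at compile time
#define FRAME_MS (FRAME_US / 1000)   // the whole milliseconds: 16
#define FRAME_REM (FRAME_US % 1000)  // the leftover microseconds: 666

delay(FRAME_MS);
delayMicroseconds(FRAME_REM);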
Nice spot on my math error, mem. Yes, if you know everything ahead of time, let the compiler do the math. But I wrote it as though the rate were a runtime variable.
I'm working on a music-related project where delay timing is very important.
If the timing from one event to the next is very important, you probably shouldn't implement it using delay(). While you may be able to get the exact delay that you want, this strategy ignores the time required for other processing (which may or may not be significant compared to the delay interval). A more precise method would be to implement the timing based on elapsed time or, alternately, to configure another timer to give you interrupts at certain intervals.
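To illustrate the elapsed-time approach, a minimal sketch is below. It assumes a core recent enough to provide micros() (the microsecond counterpart of millis()), and fireEvent() is a hypothetical placeholder for whatever should happen on each tick; the timer-interrupt alternative is more hardware-specific, so it isn't sketched here.
void loop()
{
  static unsigned long previous = micros();
  const unsigned long interval = 16667;   // one 60th of a second, in microseconds

  if (micros() - previous >= interval) {
    previous += interval;   // advance by the interval, not to "now", so error never accumulates
    fireEvent();            // hypothetical: the thing that must happen 60 times a second
  }
  // other processing can run here without throwing the schedule off
}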
If the timing from one event to the next is very important, you probably shouldn't implement it using delay().
Don's quite right. delay() is only accurate to roughly the millisecond, so delay(16) may actually delay as little as 15000 microseconds, which is probably more inaccuracy than your music application can tolerate.
If you do end up having to delay something like 16666 microseconds, it seems like you could do two delays of 8333 us. This is a fairly rigid strategy, but if you are only looking for 60Hz and 50Hz it should work with much higher accuracy.
@mem: I like your clever idea of doing all the floating point at compile time.
A more precise method would be to implement the timing based on elapsed time or, alternately, to configure another timer to give you interrupts at certain intervals
I tried to do it this way, but I think I would need a function that reads the current microseconds, the way the built-in millis() does for milliseconds. I think I will be fine with a combination of delay() and delayMicroseconds(). I like the idea of breaking the ~16.7 ms down into two 8333 µs delays. Thanks again.
// this is not accurate to the 60th of a second
customDelay(16.6667);

void customDelay(float delayValue)
{
  unsigned long previousMillis = millis();  // millis() returns unsigned long
  while (millis() - previousMillis < delayValue) {
    // busy-wait until delayValue milliseconds have elapsed
  }
}
// this is accurate: 2 x 8333 = 16666 µs, within a microsecond of 1/60 second
delayMicroseconds(8333);
delayMicroseconds(8333);
Luis, I think that when Don suggested timing based on elapsed time, he didn't mean elapsed time as measured by millis(), as in your customDelay() example. millis() is just as imprecise as delay() in the microsecond realm.
I think I will be fine with a combination of delay and delayMicroseconds.
I sure would (still) steer clear of delay() and focus on the combination of multiple delayMicroseconds() calls.
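For what it's worth, that combination can be wrapped in a small helper (the name delayMicrosLong() is mine, not a library function; the Arduino reference notes that delayMicroseconds() is only dependable up to roughly 16383 µs, which is why the wait is taken in chunks):
void delayMicrosLong(unsigned long us)
{
  while (us > 10000) {        // take the bulk of the wait in 10 ms chunks
    delayMicroseconds(10000);
    us -= 10000;
  }
  delayMicroseconds(us);      // the remainder, e.g. 6667 of a 16667 µs wait
}
Calling delayMicrosLong(16667) then waits one 60th of a second without ever touching delay().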