Microseconds delay

Hello! I am writing a program in which I have to delay a signal a certain amount of microseconds (usually between 100us and 1000us).

My issue is that I don't know if this will work: I need to calculate the number of microseconds to delay, and I am afraid that these calculations might take longer than the delay itself, making the whole thing pointless.

To check if this was correct, I created a program:

int COUNTER = 0;

int V = 0;
int CALC = 0;

void setup() {
  Serial.begin(300);
  pinMode (A5,INPUT);
}

void loop() {
  V = analogRead(A5);
  if (COUNTER >= 1000 && COUNTER <= 1100){
    Serial.print ("START,");
    Serial.println (micros());
    CALC = V * 360 * 56 + 3 * 500 - 4;

    Serial.print ("STOP,");
    Serial.println (micros());
  }
  delay(10);
  COUNTER = COUNTER + 1;
}

This is simply code that simulates the real code I'm going to run. I created it because I cannot run the real code yet, as the Arduino is not connected.

The results I got were the following:

START,10121356
STOP,10121716
START,10132196
STOP,10132560
START,10143044
STOP,10143400

This tells me that executing this code takes about 360us on average.

Is this correct? If so, then this delay would be useless in cases where the calculated delay is <400us, and it would be significantly lengthened in cases where the time is above 400us, since the calculations would almost double the delay time.

Am I doing something wrong?

The simulation is probably not useful.

In real life, at the above glacial serial Baud rate, one character is printed about every 30 milliseconds. Change the 300 to 250000 for vastly improved performance.

I am afraid that these calculations might take more than the number of microseconds I need to delay.

The calculation is boiled down by the compiler to one 16-bit integer multiply and one add operation, which on an Arduino Uno R3 will take about 1.5 microseconds.


If you need to delay a certain amount of microseconds, unknown until the interval starts, you can do something along the lines of

void loop() {

  // ---------------------- start 
  unsigned long startingMicros = micros(); // freeze count

  // do something and
  // compute microseconds to delay

  unsigned long microsToDelay = whateverComplexFormula; // between 1000 & 1100

  unsigned long endingMicros = startingMicros + microsToDelay; // compute only once
  
  while (micros() < endingMicros); // just wait
  // ---------------------- end

}

Perhaps if you are more specific about your intent you will get much more help


In the real program it's not operating with constants, but instead it is doing:

FLOAT = FLOAT * (FLOAT - FLOAT) * ((1 / (360 * FLOAT) * 60000000));

And before that, there is a sensor reading operation, which could introduce more delay. Based on the times you shared, this calculation would take about 50-70us, which is still very significant, plus the sensor reading, and a condition check.

If this is correct, then maybe I should perform the calculations in a previous step, even if I have to use sensor data that is a bit outdated, otherwise it's not viable.

This is a much better solution. It will still have the problem of the delay being shorter than the time it takes to make the calculations, though.

You are capturing an entire Serial.print and Serial.println in your measured time interval. Here is my preferred way of measuring the time it takes to execute something:

uint32_t startTime = micros();
CALC = V * 360 * 56 + 3 * 500 - 4;
uint32_t stopTime = micros();
Serial.print ("START to STOP = ");
Serial.println (stopTime - startTime);

There are even more precise methods of measuring time, depending on what microcontroller you are using (e.g. systick counter).


Many constant math operations are optimized out by the compiler, which is smarter than most of us. For example, this silly statement

    CALC = V * 360 * 56 + 3 * 500 - 4;

is simplified to

CALC = V * 20160 + 1496;

Which, incidentally, will overflow 16-bit math. Declare V and CALC as long integers to avoid that.

For questions about code and code timing, post the code that actually pertains to the question, thus avoiding wasting your and forum members' time.

Float multiply and add take about 10 microseconds each, on a 16 MHz Arduino Uno R3.

Does the value of each of these floats depend on the sensor reading? Does your sensor deliver an integer number, or a floating point number?

Very true. Thanks for that.

Only two are sensor readings (floats); the others are variables and constants. Probably wouldn't take that much time.

Anyways, I didn't post the code (I know this is frowned upon) because I wasn't asking specifically about my code; I was purely asking if it was possible that those simple operations were taking 360us (which was very clearly answered).

So, thanks for the help everyone. I have decided to make the calculations before the delay is needed, and to use the method described by mancera1979 to execute the delay (instead of using delayMicroseconds(), I only check whether micros() has passed the calculated finish time).

Again, thanks everyone for the help!

This causes an issue if micros() happens to roll over, or if startingMicros + microsToDelay exceeds the maximum uint32_t value.

Use this instead:

while (micros() - startingMicros < microsToDelay); // just wait

Integer magic will ensure rollover is no problem.


Good catch

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.