Waiting 1/5 of a second in a program

I have an event that I know will take place 5 times every second.

Am I right in thinking that I should be able to use this code to read from a sensor, so that I know I will be receiving data at the correct time? I store the data in a byte once I have collected 8 bits.

unsigned long Told = 0;
unsigned long Tnew = 0;
int byter = 0;
int bit = 0;

void setup()
{
  Serial.begin(115200);
  //pinMode(anapin, INPUT);
  pinMode(8, INPUT);
}

void loop()
{
  byter = 0;
  for (int i = 0; i < 8; i++) {
    do {
      Tnew = millis();
    } while ((Tnew - Told) < 200);   // Wait one length
    Told = Tnew;
    bit = digitalRead(8);
    byter = byter + bit * pow(2, i); //  Store bit in Byte
    //Serial.println(bit);
  }

  Serial.println(byter);
}
A cleaner way to build the byte is a bit shift instead of pow():

byter |= digitalRead(8) << i;

Just write delay(200) and it does the same thing.

do {
  Tnew = millis();
} while ((Tnew - Told) < 200);    // Wait one length

This is the same as delay(200) because the micro does nothing until the time is up.
It is not the correct way to use millis().
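The usual non-blocking pattern checks elapsed time on each pass of loop() and acts only when the interval has expired, so other work can run in between. A minimal sketch (my own illustration, not code from this thread):

unsigned long previousMillis = 0;
const unsigned long interval = 200;   // one bit length

void setup() {
  Serial.begin(115200);
  pinMode(8, INPUT);
}

void loop() {
  if (millis() - previousMillis >= interval) {
    previousMillis += interval;       // advance by the interval, not to "now"
    Serial.print(digitalRead(8));     // sample the input here
  }
  // other code keeps running on every pass
}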


How long does an event last?


Your loop is just a very complicated delay. You want to read the input and build the byte in the loop.

void setup() {
  Serial.begin(115200);

  //pinMode(anapin, INPUT);
  pinMode(8, INPUT);
}

void loop()
{
  uint8_t byter = 0;
  uint8_t data_bit;
  for (uint8_t i = 0; i < 8; i++) {
    data_bit = digitalRead(8);
    byter |= data_bit << i;   //  Store bit in Byte
    delay(200);               //  One bit length between samples
  }

  Serial.println(byter);
}
void setup()
{
  Serial.begin(115200);
  pinMode(8, INPUT);
}

void loop()
{
  static uint32_t oldMillis = millis();

  uint8_t byter = 0;
  for (uint8_t i = 0; i < 8; i++) {
    while (millis() - oldMillis < 200); // Wait one length
    oldMillis += 200;
    bool Bit = digitalRead(8);
    Serial.print(Bit);
    byter |= Bit << i;
  }
  Serial.println();
  Serial.println(byter, BIN); // may lose leading zeros
}

My question is: how is this data synchronized?
There is no start/stop bit, so you are not reading an 8-bit value from your data source; you are simply reading a digital input and packing the sampled data into bytes (so you have 8 samples per byte)?


I've been trying to work out what it is that you are actually trying to read. As @davidefa said, synchronisation is a concern.
Could the 8 events you are trying to read look like this?

[attached: example waveform diagram]

Could all be logic 1? Could all be logic 0?

Yes, the bits could all be zero, all be one, or anything in between.

There is actually a zero start bit that I did not mention, for simplicity: I wait half a bit length after its edge to synchronise, then one full length to sample the first data bit.
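In code that might look something like the sketch below (my own illustration of what's described, assuming pin 8, an idle-high line, a 200 ms bit length, and mid-bit sampling):

void setup() {
  Serial.begin(115200);
  pinMode(8, INPUT);
}

void loop() {
  while (digitalRead(8) == HIGH);   // wait for the falling edge of the start bit
  delay(100);                       // half a length: middle of the start bit
  if (digitalRead(8) == LOW) {      // still low, so treat it as a valid start bit
    byte byter = 0;
    for (int i = 0; i < 8; i++) {
      delay(200);                   // one length: middle of data bit i
      byter |= digitalRead(8) << i;
    }
    Serial.println(byter, BIN);
  }
}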

I just wanted to check that do { Tnew = millis(); } while ((Tnew - Told) < 200); will indeed wait for 1/5th of a second.

It will, but it would be just the same to use delay(200). The while loop is just adding unnecessary complication. If you want a blocking delay then use delay(). If you don't want to use delay() because it is blocking, then you are doing it wrong.


Ah, you know what you are doing :-) Sorry for asking off-topic details. Regarding the 200 ms wait, it looks fine; you should be okay using delay(). I assume you may have used millis() to get a more accurate 200 ms sampling period: delay() wouldn't take into account the execution time of the other loop code, though that may be insignificant.

What other loop code? It is an empty loop. delay does exactly the same thing.

Assuming the code was like this, and we are talking about the time taken to go round the for loop:

void loop() {
  // Code to wait for sync goes here
  uint8_t byter = 0;
  for (byte i = 0; i < 8; i++) {
    byter |= digitalRead(8) << i; // <-- executes every x ms on average
    delay(200);
  }
  // Code to process result goes here
}

Then when I said "other loop code" I meant the overhead of executing the compare, increment, and jump back in the for loop, plus the line in the loop that isn't delay(200), and perhaps some overhead in calling and returning from delay(). So "every x ms on average" would be insignificantly more than 200 ms. With the originally posted technique, using millis(), I think the average time would tend towards exactly 200 ms. I may be wrong and missing your point.

But it isn't.

The delay function has a nearly identical while loop. The same check is happening either way. Whether you put that in a function or run it out in a loop is a moot point.
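For reference, the AVR core's delay() is itself a busy-wait on micros(), roughly like this (paraphrased from memory of the core's wiring.c, so treat it as approximate):

void delay(unsigned long ms)
{
  uint32_t start = micros();

  while (ms > 0) {
    yield();                                     // let background tasks run
    while (ms > 0 && (micros() - start) >= 1000) {
      ms--;                                      // count down whole milliseconds
      start += 1000;
    }
  }
}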

You are talking about the time for the return, but we are talking about the millis function, which has a resolution of 1 ms and sometimes 2 ms. The time taken by the return from delay() would be nanoseconds. If you need timing tight enough that this would matter, then you should be using a hardware timer, or at least micros().
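For completeness, a busy-wait on micros() looks much the same (my own sketch, not from this thread):

void setup() {
  pinMode(8, INPUT);
}

void loop() {
  static unsigned long tOld = micros();
  while (micros() - tOld < 200000UL);  // 200 ms, with ~4 us granularity on a 16 MHz AVR
  tOld += 200000UL;                    // advance by the period so error does not accumulate
  digitalRead(8);                      // sample here
}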

I'm sorry, but I don't know exactly which code you are talking about. Are you talking about the code in post#1? To be clear, I'm talking about the time to execute the for loop, not the time to execute the do loop.

I was trying to guess the original poster's rationale in using the technique they used. I wasn't intending to suggest that I'd do it without using delay(), nor that there would be any obvious benefit in doing so.

Oh, well I was just saying that the while loop could be replaced with delay() and the code would work the same but be easier to read.

I fear that OP thought they were getting away from the pitfalls of delay with that while loop but all they really did was reinvent it.


I think this is a valid test of the time between samples for both delay() and the DIY delay.

void setup() {
  Serial.begin(115200);
  Serial.println("Hi Mom!\n");

  delay(37);
}

unsigned long Tnew, Told, lastTime;

void loop() {
  for (int ii = 0; ii < 10; ii++) {

    delay(100);

    unsigned long now = micros();
    Serial.println(now - lastTime);
    lastTime = now;
  }

  Serial.println("\n");

  for (int ii = 0; ii < 20; ii++) {

    do {
      Tnew = millis();
    } while ((Tnew - Told) < 100); 
    Told = Tnew;

    unsigned long now = micros();
    Serial.println(now - lastTime);
    lastTime = now;
  }

  for (; ; );
}

The *delay*() version does better. I doubt it matters if we are just looking at the middle of the bit.
Hi Mom!

137100
100316
100308
100316
100312
100316
100304
100304
100304
100304


340
100520
100352
101376
101376
100352
101376
101376
100352
101376
100352
101376
101376
100352
101376
101376
100352
101376
101376
100352

Both show the additional time the rest of the for loop takes to execute. To get consistently perfect sampling you would need BWOD-style (Blink Without Delay) timing, advancing the target time by the interval each cycle (as in the oldMillis += 200 code above); that is forgiving of variations in what the rest of the process does.

I could be totally wrong. Things have been really wonky for the last few days; is it the full Moon? No! It's the new Moon, yeah that's the ticket.

a7


The simple (and blocking) way is to set a tCheck variable to millis() + 100 on the start edge, and then increase it by 200 to read every bit (so it stays synchronized to the original edge).
The less simple (and non-blocking) way is to set a timer interrupt every 100 milliseconds (on the start edge) and shift in the data only on even cycles.
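A sketch of the interrupt approach (my own illustration, assuming an AVR board with the TimerOne library; arming the timer on the start edge is left out for brevity):

#include <TimerOne.h>

volatile byte byter = 0;
volatile byte cycle = 0;

void sample() {                     // fires every 100 ms
  if ((cycle & 1) == 0) {           // even cycles fall mid-bit, given the right phase
    byter = (byter >> 1) | (digitalRead(8) ? 0x80 : 0);  // shift in, LSB first
  }
  cycle++;
}

void setup() {
  Serial.begin(115200);
  pinMode(8, INPUT);
  Timer1.initialize(100000UL);      // 100 ms period, in microseconds
  Timer1.attachInterrupt(sample);   // ideally started on the start-bit edge
}

void loop() {
  // loop() stays free for other work; byter is assembled in the background
}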

So you want it to crash at rollover?
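To spell out the rollover point (my own sketch): millis() wraps to zero after about 49.7 days, which breaks comparisons against an absolute target time, while unsigned subtraction of two timestamps stays correct.

void setup() {}

void loop() {
  // Absolute-target comparison: fails when tCheck wraps past zero
  // while millis() has not (or vice versa).
  static unsigned long tCheck = millis() + 100;
  if (millis() >= tCheck) { tCheck += 200; /* read a bit */ }

  // Elapsed-time comparison: safe across rollover, because unsigned
  // subtraction always yields the true interval.
  static unsigned long tStart = millis();
  if (millis() - tStart >= 200) { tStart += 200; /* read a bit */ }
}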