Need some clarification on how sensor sampling works

Hello All,

I'm a newbie in sensor fusion, so I need a bit of your help in trying to understand how sensor/program timing works. I'll illustrate my question with an example:

I have an Arduino Mega running at 16 MHz (that's the default clock, as I understand it). I also have a BMA180 accelerometer with a high-pass filter (1 Hz).

I want to get the position change of the accelerometer. In math this means that I need to integrate the readings from the BMA180 with respect to time twice. In formulas:

Position_t =Position_0 + 1/2*(acc_data_t*dt)*dt

How do I find dt? I'm thinking about two options:

  1. Measure the time between the readings of the data.
  2. Set a fixed dt equal to the sampling rate of the sensor (in this case 1 Hz = 1 s) and adapt the software so that the readings are separated by 1 s. For example, if the reading and calculation part takes 0.5 s to complete (pseudo code and an Arduino sketch below):

Pseudo code:

while true,
        data = read(acc);               % read the accelerometer (this plus the calculation takes ~0.5 s)
        position = calculations(data);  % integrate to get the position
        pause(0.5 seconds);             % pad the loop so each pass takes 1 s in total
end
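In Arduino terms I imagine it would look something like the sketch below. readAccel() and updatePosition() are just placeholders for whatever BMA180 reading and integration code I end up with, and the delay pads out whatever time is left in the period:

// Rough Arduino version of option 2: pad each loop pass so the total period is fixed.
// readAccel() and updatePosition() are placeholders, not real BMA180 code.

const unsigned long PERIOD_MS = 1000;    // desired 1 s between samples

float readAccel() { return 0.0; }        // stub: replace with the real sensor read
void updatePosition(float a) { }         // stub: replace with the integration step

void setup() {
  // sensor initialisation would go here
}

void loop() {
  unsigned long start = millis();

  float a = readAccel();                 // read one acceleration sample
  updatePosition(a);                     // integrate it into the position estimate

  unsigned long elapsed = millis() - start;
  if (elapsed < PERIOD_MS) {
    delay(PERIOD_MS - elapsed);          // pad out the rest of the period
  }
}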

Somehow, I believe the second is the right way, because in the example codes there is always a pause term at the end. I would appreciate it if you could clarify this.

Also, how do you evaluate how much time the code needs to complete in order to set the pause term right?

Thanks in advance!

Regards,
Sharapolas

No, you need to sample much, much more often than 1 Hz. You have a HIGH-pass filter, so there is no information at low frequencies.

Furthermore, you need to be aware of frequency aliasing if you sample at less than twice the maximum signal frequency.

I can't tell what frequencies you are interested in, but let's say 1..100 Hz, in which case you want to sample at a minimum of 200 Hz.

As for integration, you just sum the values and adjust by a factor that depends on the period between samples. Integrating twice in a row will give a very drift-prone result: using an accelerometer to measure distance is not ideal (you can determine the amplitude of vibration over the short term, which I assume is the aim, since you are high-pass filtering). If you don't sample at at least twice the highest frequency present in the signal, you will lose information.
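To make "sum the values and adjust by a factor" concrete, here is a rough sketch of that summing, assuming a fixed 200 Hz sample rate. getAccelMps2() is a placeholder, not real BMA180 code, and the delay is only there to keep the example short (see the millis() scheme below for proper timing):

// Rough sketch of double integration by summing, assuming a fixed 200 Hz sample rate.
// getAccelMps2() is a placeholder for the real BMA180 read (converted to m/s^2).

const float DT = 1.0 / 200.0 ;           // sample period in seconds

float velocity = 0.0 ;                   // first integral of acceleration (m/s)
float position = 0.0 ;                   // second integral (m) - drifts badly

float getAccelMps2 () { return 0.0 ; }   // stub: replace with the real sensor read

void setup () { }

void loop ()
{
  float a = getAccelMps2 () ;

  velocity += a * DT ;                   // acceleration -> velocity
  position += velocity * DT ;            // velocity -> position

  // Any bias or noise in 'a' accumulates in velocity and grows roughly
  // quadratically in position, hence the drift mentioned above.

  delay (5) ;                            // crude 200 Hz pacing for the example only
}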

To get regular samples you can use a timer interrupt, or you can wait for successive clock values:

const unsigned long SAMPLE_PERIOD = 5 ;   // ms between samples (200 Hz)

unsigned long sample_time = millis () ;   // unsigned long, so the wrap-round arithmetic works
sample () ;                               // sample() is your own read-and-integrate routine
while (true)
{
  if (millis () - sample_time >= SAMPLE_PERIOD)
  {
    sample_time += SAMPLE_PERIOD ;        // advance by a fixed step, not to "now"
    sample () ;
  }
}

It is important to increase sample_time by a fixed amount each time, like this, and to do the timestamp comparisons this way. If you see code like this:

  if (millis () > target_time)

Then it's broken, since target_time might be the maximum representable value at some point, and the test can then never succeed. (The point is that timestamp values can and do wrap round.)
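A small illustration of why the subtract-and-compare form survives the wrap (the numbers are just made up to sit either side of the 32-bit limit):

// Unsigned wrap-round example: sample_time taken just before millis() wraps,
// 'now' taken just after. The values are invented for illustration.
void wrap_example ()
{
  unsigned long sample_time = 4294967290UL ;   // 2^32 - 6
  unsigned long now         = 6UL ;            // millis() shortly after the wrap

  unsigned long elapsed = now - sample_time ;  // wraps to 12, so the
                                               // ">= SAMPLE_PERIOD" test still works;
                                               // "now > sample_time" is false here
                                               // and would stall the loop
}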

You can use the internal timer as an interrupt source and count how many cycles your loop was able to execute within each time slice. The only potential issue (as I understand it) is that if you use more than one interrupt, you need to keep in mind that while one interrupt is active, all the others are disabled.
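For what it's worth, here is a minimal sketch of the timer-interrupt route, using Timer1 on the Mega in CTC mode to raise a 200 Hz flag. The register values assume the 16 MHz clock mentioned above, and this is my own example rather than anything from the original poster's code:

// Minimal sketch of a 200 Hz sample tick using Timer1 in CTC mode on an ATmega2560.
// The ISR only sets a flag; the actual sensor read happens in loop().

volatile bool sample_due = false ;

void setup ()
{
  noInterrupts () ;
  TCCR1A = 0 ;                                         // normal port operation
  TCCR1B = (1 << WGM12) | (1 << CS11) | (1 << CS10) ;  // CTC mode, prescaler 64
  OCR1A  = 1249 ;                                      // 16 MHz / 64 / (1249 + 1) = 200 Hz
  TIMSK1 = (1 << OCIE1A) ;                             // enable compare-match A interrupt
  interrupts () ;
}

ISR (TIMER1_COMPA_vect)
{
  sample_due = true ;                                  // keep the ISR short: just flag the main loop
}

void loop ()
{
  if (sample_due)
  {
    sample_due = false ;
    // read the accelerometer and integrate here
  }
}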

I took a slightly different approach for my latest project, using an external interrupt driven by a DS3231 instead. With this little beastie I get a nice, consistent 1 Hz square wave on the SQW pin (though faster rates are possible), and its accuracy is not compromised by the use of interrupts.
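A sketch of the idea, assuming the Adafruit RTClib library and the SQW output wired to external-interrupt pin 2 (the library choice and pin number are assumptions for the example):

// Sketch of the DS3231 square-wave approach, assuming the Adafruit RTClib library
// and the DS3231 SQW pin wired to external-interrupt pin 2.

#include <Wire.h>
#include <RTClib.h>

RTC_DS3231 rtc ;
volatile bool tick = false ;

void onTick ()
{
  tick = true ;                                  // 1 Hz edge from the DS3231
}

void setup ()
{
  Wire.begin () ;
  rtc.begin () ;
  rtc.writeSqwPinMode (DS3231_SquareWave1Hz) ;   // output 1 Hz on the SQW pin
  pinMode (2, INPUT_PULLUP) ;                    // SQW is open-drain, so pull it up
  attachInterrupt (digitalPinToInterrupt (2), onTick, FALLING) ;
}

void loop ()
{
  if (tick)
  {
    tick = false ;
    // take a sample here, once per second
  }
}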

An interrupt doesn't disable lower-priority interrupts; it defers them.