Simple SD data logging solution - working

Hello all,

Storing data on SD takes more than 10 ms, so it is impossible to store data at a 100 Hz sample rate in between subsequent samples. The suggested solutions I found on the internet got complicated very fast. I wanted something that was both beginner friendly and worked at higher sample rates.

The code below is what I came up with. I used a Pro Micro, but I think it will run fine on an Uno or Nano as well. It currently stores data from a counter at 1000 Hz without missing any 'measurements'.

I hereby challenge all experienced programmers (which I am not) to improve on this code, either by making it more beginner friendly or by making it perform better without getting more complicated.

/**************************************************************************

  Problem fixed by this sketch:

    Writing data to SD takes more than 10 ms.
    At 100 Hz sample rate it is impossible to write data to SD between subsequent samples.

  Solution:

    A timer interrupt samples data into a buffer.
    When the first or second half of the buffer has just been filled, it flags the main loop to write that half to SD.

 **************************************************************************/

#include <TimerOne.h>
#include <SPI.h>
#include <SdFat.h>
SdFat SD;

#define SD_CS       A1

File dataFile;

int buf[200];
volatile boolean Aready = false;  // set by the ISR, cleared in loop(), so it must be volatile
volatile boolean Bready = false;

int aap = 0;      // only touched inside the ISR
int counter = 0;  // only touched inside the ISR

void setup(void) {
  if (!SD.begin(SD_CS)) {
    while (1);  // no card found: halt here forever
  }
  dataFile = SD.open("test.csv", O_WRITE | O_CREAT );
  for (int i = 0; i<200; i++)
    buf[i] = 0;
  Timer1.initialize(1000); // 1000 Hz sampling rate
  Timer1.attachInterrupt( ISRtimer1 );
}

void loop() {
  WriteSD();
}

void WriteSD() {
  if (Aready) {
    Aready = false;
    for (int i = 0; i < 100; i++) {
      dataFile.print(buf[i]);  // print the value directly; String concatenation fragments the heap
      dataFile.print(',');
    }
    dataFile.flush();
  }
  if (Bready) {
    Bready = false;
    for (int i = 100; i < 200; i++) {
      dataFile.print(buf[i]);
      dataFile.print(',');
    }
    dataFile.flush();
  }
}

void ISRtimer1()
{
  aap++;
  buf[counter]=aap;  // aap could be an AnalogRead
  counter++;
  if(counter==100) Aready=true;
  if(counter==200) {
    counter=0;
    Bready=true;
  }
}

Does this code actually work on a Nano or Uno? It didn't for me. It created the file, but that's all.

I just tried a Nano and it worked as intended. The Uno uses the same microcontroller as the Nano, so it should behave the same. It takes between 1 and 2 seconds to start, and then it pushes out about 5 KB/s in comma-separated format: "1,2,3,4,5,6,7,8,9,10,11,12,13,14,".

The fact that the file is created suggests the SPI wiring is correct. I don't know why yours isn't working. Does your setup work when you upload a standard ReadWrite example from the SD library?

Ok, sorry, I must have had a bad connection or something. It's working now on my Nano.

Well I'm not an expert programmer, but would offer a few suggestions:

As I understand it, all SD libraries store the data that's to be written to a card in their own buffer until they've accumulated 512 bytes, then write all of that to the card. That's because you can only write a full sector (512 bytes) at a time, not less.

But I think it is critical for fast logging to have at least one additional buffer to accumulate samples while extra time is needed to write a sector. And your interrupt-driven data sampling is perfect for that.

The way I see it would be to set up a circular buffer to accumulate the data samples. The ISR would fill that buffer, and loop() would extract the entries and print them to the card as they become available. The library takes care of the second level of buffering, so most of the time when you print a value to the card it just goes into the library's buffer.

It's not critical, but it might save some time if all the resulting CSV values are the same length, padded with leading zeros or spaces, and that length, including the comma, divides evenly into 512.

Flushing is very time consuming, and unnecessary if you can simply close the file at the end of the logging session. My understanding is that flushing is essentially the same as closing the file, which requires not only writing out any remaining data and needed fill, but also updating the directory entry to reflect the new file size, the FAT to reflect which clusters have been used, and the second copy of the FAT in the same way. That's a lot of read/modify/write of sectors, which takes a while. Also, if you flush when you're not on a sector boundary, resuming writing afterwards can require reading the last sector back in and figuring out where in that sector the actual data left off. So if you can, it's really better to just keep writing sectors and let the library adjust all the file-system entries when you're done.