I'm trying to add highly efficient SD logging to a larger project using fat16lib's SdFat.h (beta version) on my Teensy 4.1.
So far I've been able to get basic CSV datalogging working by opening and closing the log file every time I want to add a new row, but it's too slow. With this technique, I get an average 100Hz loop rate. However, if I remove the datalogging portion entirely and let the rest of my sketch run, my loop rate increases to 500Hz!!
I believe the data is written in pages of 512 bytes (regardless of the number of bytes that need to be written) during a file close. I read somewhere that the normal SD library automatically writes out a complete page once it overflows, even if you don't call "flush()" or "close()". I tried this with SdFat.h by opening the log file once and never closing it - assuming pages would be written automatically without needing to close the file. However, this didn't work - the code only created an empty file.
In a last-ditch effort, I tried to look at the example LowLatencyLogger.ino, but it's honestly so complex I can't make heads or tails of it enough to apply its concepts to my code.
Can someone (hopefully fat16lib) help explain the writing portion of the example or give a quick-and-dirty "efficient datalog" example?
And yes, I do need >100Hz loop rate
You will always end up with an empty file or short file if you don't close the file. The directory is only updated then or when you "flush" the file.
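If you want to avoid the full close/reopen cycle, SdFat also provides sync() (which is what flush() calls), which forces the buffered data out and updates the directory entry while leaving the file open. Here is a minimal sketch of the "sync every N records" pattern - I've used a mock file class in plain C++ so it runs anywhere, but on the Teensy you would call the same write()/sync() methods on your SdFat file object:

```cpp
#include <cstring>
#include <cassert>

// Mock stand-in for an SdFat File: real code would call myFile.write()
// and myFile.sync(). sync() flushes the data AND updates the directory
// entry without closing the file, so a power loss costs at most the
// records logged since the last sync.
struct MockFile {
  int bytesWritten = 0;
  int syncCount = 0;
  void write(const char* data, int len) { bytesWritten += len; }
  void sync() { syncCount++; }
};

// Write `rows` CSV rows, syncing every N rows; returns the sync count.
int logRows(MockFile& f, int rows, int N) {
  const char* row = "1.0,2.0,3.0\n";  // example CSV record
  int sinceSync = 0;
  for (int i = 0; i < rows; i++) {
    f.write(row, (int)strlen(row));
    if (++sinceSync >= N) {
      f.sync();       // file stays open; directory now matches the data
      sinceSync = 0;
    }
  }
  return f.syncCount;
}
```

N lets you trade write overhead against how much data you can afford to lose on power failure.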
Paul
Power_Broker:
So far I've been able to get basic CSV datalogging working by opening and closing the log file every time I want to add a new row, but it's too slow.
There has been recent discussion on this. You pay a huge price in both time and reliability by opening and closing for each log, so I reckon you should cease and desist immediately.
I don't know if that will fix the problem - you don't say what the problem is. I have never heard of anybody actually having a problem clearly attributable to the normal SD library, so unless you have clear proof that you should not use it, I submit you should.
Just open it once, close it once. Or close it every n minutes & reopen if you're worried about data loss on a crash.
Nick_Pyner:
I have never heard of anybody actually having a problem clearly attributable to the normal SD library so, unless you have clear proof that you should not use it, I submit you should.
I'm currently using SdFat, not the normal SD library
wildbill:
Just open it once, close it once. Or close it every n minutes & reopen if you're worried about data loss on a crash.
That's what I'm trying to do, but I can't afford to lose much more than 30 seconds of data. I'm not worried about a crash, but about power loss. This is going on an RC plane, so I want to be able to land, turn off power, and immediately load my SD card into my computer to look at the log without loss of data.
I think with the long original post my main question got lost and forgotten, so here's what I'm looking for:
Power_Broker:
Can someone (hopefully fat16lib) help explain the writing portion of the example or give a quick-and-dirty "efficient datalog" example?
I'm probably going to try something on my own here in a few days, but I'd really like to have a dumbed-down explanation of how the linked example works!
Power_Broker:
I'm currently using SdFat, not the normal SD library
I know that, hence my comment.
Your "efficient datalog" is likely to be one that doesn't waste time opening and closing the file all the time. I don't know whether that will give sufficient boost to fix the problem as you don't actually say what the problem is.
Is there anything in the logged data (elapsed time, altitude, speed) that could give you a hint that the plane is on the ground again? At that point you could close and reopen without losing anything interesting.
Some folks have added caps to keep power long enough to write the last details and close the file once you detect loss of power.
Nick_Pyner:
Your "efficient datalog" is likely to be one that doesn't waste time opening and closing the file all the time. I don't know whether that will give sufficient boost to fix the problem as you don't actually say what the problem is.
Correct - I'm trying to figure out a way to datalog without constantly closing the file. The problem is I'm not sure of the best approach to get around constant file closes or flushes. The one example that does this isn't easily comprehensible to me. It's less of a problem and more me trying to learn improvements.
wildbill:
Is there anything in the logged data (elapsed time, altitude, speed) that could give you a hint that the plane is on the ground again? At that point you could close and reopen without losing anything interesting.
That's a good idea. I have an IMU and could say if no movement is detected for a whole second, I must be stationary and on the ground. I'd prefer another method, but might use this as a backup.
wildbill:
Some folks have added caps to keep power long enough to write the last details and close the file once you detect loss of power.
Another good idea - however I'd prefer not to add more hardware. I'll definitely keep it in mind, though.
Ok, so I got a little better speed doing something like this:
- Create a buffer large enough to store an entire CSV entry (can be larger than 512 bytes)
- Fill the buffer with data formatted by sprintf or dtostrf (for floats)
void addToBuff(char buff[], int& buffIndex, char data[], bool newLine = false)
{
  size_t len = strlen(data);           // measure once instead of twice
  memcpy(buff + buffIndex, data, len); // append the field text
  buffIndex += len;
  buff[buffIndex++] = newLine ? '\n' : ','; // row or field terminator
}
- Write out the buffer a single byte at a time while keeping track of the total number of bytes written to the SD card via a global integer variable. If that global variable ever gets to 512 (or larger), close and reopen the file.
for(int i = 0; i < buffIndex; i++)
{
  // Once an entire 512-byte page is filled, save the data and
  // reopen the file for more logging
  if(dataIndex >= 512)
  {
    myFile.close();
    if(!sd.exists(filename))
      setupLog();
    myFile = sd.open(filename, O_WRITE | O_APPEND);
    dataIndex = 0;
  }
  myFile.write(buff[i]);
  dataIndex++;
}
It's still not as fast as I would like, but it seems to be at least an improvement. If anyone is interested in the whole code, I can post that, too.
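One thing I still want to try is replacing the byte-at-a-time loop with a single buffered write - SdFat's write() also accepts a buffer and a length. Here's a toy comparison in plain C++ of why that should help (the mock file just counts calls; each call to write() has fixed overhead, so per-byte writes multiply that overhead 512x per page):

```cpp
#include <cassert>

// Mock of an SdFat-style file that counts write() calls.
struct CallCountingFile {
  int calls = 0, bytes = 0;
  void write(char) { calls++; bytes++; }
  void write(const char* buf, int len) { calls++; bytes += len; }
};

// One call per byte: 512 calls' worth of overhead per page.
int callsByteAtATime(const char* buf, int len) {
  CallCountingFile f;
  for (int i = 0; i < len; i++) f.write(buf[i]);
  return f.calls;
}

// One call for the whole buffer - SdFat's write(buf, len) works the same way.
int callsBulk(const char* buf, int len) {
  CallCountingFile f;
  f.write(buf, len);
  return f.calls;
}
```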
Another approach would be to format the entire file, as big as you want, with a character that your programs will never use. Then change your program to seek, read, update, and write each record. If power fails, you would lose at most part of the last 512 bytes that were being updated. The logical end of file will be where you read the first formatting character in the file.
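To make the "logical end of file" idea concrete, here is a plain C++ sketch of finding it by scanning for the first formatting character (I'm using '~' as the filler purely as an example - any character the logger never writes will do):

```cpp
#include <cstring>
#include <cassert>

// Find the logical end of a preformatted log: the file was pre-filled
// with a sentinel character the logger never writes, so the first
// sentinel marks where the real data stops.
int logicalLength(const char* data, int size, char sentinel) {
  for (int i = 0; i < size; i++)
    if (data[i] == sentinel) return i;   // first filler char = logical EOF
  return size;                           // file completely full of data
}
```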
Paul
Paul_KD7HB:
Another approach would be to format the entire file, as big as you want, with a character that your programs will never use. Then change your program to seek, read, update, and write each record. If power fails, you would lose at most part of the last 512 bytes that were being updated. The logical end of file will be where you read the first formatting character in the file.
Paul
I'd like to second that suggestion. If your code creates a file and writes data to it, then in the background SdFat is having to deal with the file system - updating the directory entry, updating the FAT, and updating the second copy of the FAT. But if you completely define the file in advance, making sure that the data sectors of the entire file are consecutive, then you could simply begin writing to the SD card beginning at the first data sector of the file, and the file system wouldn't be involved at all. You're just overwriting the data portion of the pre-existing file, but not changing its size or FAT entries.
You would have two buffers of 512 bytes each, and when one is filled up, you would switch to the other buffer and begin writing the first buffer to the SD card. You wouldn't miss any data unless the required write time exceeds the time it takes to accumulate 512 bytes of new log data. Among the SD card commands is multi-sector write, which just writes everything coming in to the card, incrementing the sector number as needed, until the card is filled. So in theory the code would be pretty simple.
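The two-buffer bookkeeping itself is simple. Here's a sketch of the ping-pong logic in plain C++, with the actual SD write replaced by a stub - in real code that stub would kick off the (ideally non-blocking) sector write while logging continues into the other buffer:

```cpp
#include <cassert>

// Ping-pong (double) buffer sketch: log data accumulates in one 512-byte
// buffer; when it fills, the buffers swap and the full one is handed off
// to be written, so logging never waits on the card - as long as each
// write finishes before the other buffer fills.
struct DoubleBuffer {
  char buf[2][512];
  int active = 0;          // buffer currently being filled
  int used = 0;            // bytes in the active buffer
  int sectorsFlushed = 0;

  // Stand-in for the real SD sector write.
  void flush(const char* /*fullBuffer*/) { sectorsFlushed++; }

  void log(char c) {
    buf[active][used++] = c;
    if (used == 512) {     // active buffer full:
      flush(buf[active]);  //   hand it off for writing...
      active ^= 1;         //   ...and keep logging into the other one
      used = 0;
    }
  }
};
```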
Then if you wanted to get really fancy, you would create the file, then go back and erase the data portion. That means you could write to the card without the card controller needing to erase a block before writing to it. So the writes would be as fast as the card is capable of doing them, with nothing to slow down that process.
I've never seen a library that does it that way. Well, maybe one of the SdFat examples does that. I haven't looked at them in detail.
I also really like the suggestion of preformatting the file, but I need a little clarification.
Let's say I create a file at the start of the program with 5gb worth of newline chars and update the file 512 bytes at a time. However, how do I simply overwrite an arbitrary block of data without having to read the entire file and write it back out again?
Power_Broker:
I also really like the suggestion of preformatting the file, but I need a little clarification.
Let's say I create a file at the start of the program with 5gb worth of newline chars and update the file 512 bytes at a time. However, how do I simply overwrite an arbitrary block of data without having to read the entire file and write it back out again?
In fact you do need to read and update and rewrite each record. BUT, you do not need to close the file to get the directory entries updated with the new file size, etc. So, in case of power failure, you will have all the file written out and saved, except that part of the last block that was not fully filled by your good data.
And there is no need to save up to 512 bytes before reading/rewriting. Just do the normal record size, but read it before updating it and writing it back. I am sure that is all included in the documentation.
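The seek/read/update/write pattern looks like this in plain C stdio on a desktop machine - SdFat's calls have different names (seekSet, read, write) but it's the same idea. A RECSIZE of 32 is just an example value:

```cpp
#include <cstdio>
#include <cstring>
#include <cassert>

const int RECSIZE = 32;   // fixed record size - an arbitrary example value

// Overwrite record `n` of a preformatted file in place: seek straight to
// its offset and write exactly RECSIZE bytes. No close, no reading the
// rest of the file.
bool writeRecord(FILE* f, int n, const char* rec) {
  if (fseek(f, (long)n * RECSIZE, SEEK_SET) != 0) return false;
  return fwrite(rec, 1, RECSIZE, f) == (size_t)RECSIZE;
}

// Read record `n` back - e.g. to inspect it before updating it.
bool readRecord(FILE* f, int n, char* out) {
  if (fseek(f, (long)n * RECSIZE, SEEK_SET) != 0) return false;
  return fread(out, 1, RECSIZE, f) == (size_t)RECSIZE;
}
```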
Paul
I assumed that for data logging, you would just write data to the SD card sequentially beginning at a certain sector number. You've already defined the file location and size, and the clusters it occupies, so you can just begin writing SD card sectors. At that point, it has nothing to do with the file. Why would you need to arbitrarily write data into the middle of the file?
Are you saying I could create the file, stuff it, close it, reopen it, and then read the entire file, replace what I want, write it all out, repeat for each CSV row, and never have to close the file again? I have a feeling that reading and writing 5-15gb at a time will take a while...
ShermanP:
Why would you need to arbitrarily write data into the middle of the file?
Because of multiple, realtime samples. That is, if I prepopulate the file with a repeating char as suggested.
Maybe I asked the question wrong. Would you ever need to write data to the file other than in sequence beginning at the beginning of the file? In other words, if you are logging data and writing it to the file in 512-byte chunks, would you ever need to write data out of sequence - either go back and write data to an earlier part of the file, or jump ahead and write data to a later part of the file?
If I have a 5gb file of ~ chars and already have 512 bytes of data written to the start of the file, the next time I write out 512 bytes, I can't start at the beginning or the end of the file - it needs to go somewhere in the middle. More specifically, at the 513th byte.
Power_Broker:
If I have a 5gb file of ~ chars and already have 512 bytes of data written to the start of the file, the next time I write out 512 bytes, I can't start at the beginning or the end of the file - it needs to go somewhere in the middle. More specifically, at the 513th byte.
But if you know the disk sector number where the file's data begins, you can simply write data beginning at that sector, and then continue to write to successive sectors as more data is collected. The SD card's built-in controller will let you do that. It's called a multi-sector write command. And the directory entry for the file contains the location of the first sector of the file.
Remember that the card's controller knows nothing about your file system or your files. It only knows about reading from or writing to specific sectors. It's the FAT file system that associates what we think of as files to the actual sectors involved. But if you know where a file begins, and you know that its data is stored in sequential sectors, then you can duck under all the FAT stuff, and just write directly to the sectors one after another. The card's controller will automatically write your data to successive sectors.
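The arithmetic for ducking under the FAT is trivial once you know that first sector. As a toy model (512-byte sectors; the firstSector values in the test are made up):

```cpp
#include <cassert>

// If a file's data is known to occupy consecutive sectors starting at
// firstSector, then byte offset N of the file lives in sector
// firstSector + N/512. A logger can compute target sectors directly and
// stream them with a multi-sector write, never touching the directory
// or FAT while logging.
unsigned long sectorForOffset(unsigned long firstSector,
                              unsigned long byteOffset) {
  return firstSector + byteOffset / 512;
}
```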
Finding a library that would support that is the big issue. I'd suggest you look at the SdFat library, and its examples, to see if something like direct sector writing is supported. To do it all properly, you would also need a way to reduce the file length based on what you've actually saved, and adjust the FAT entries accordingly, when you've finished logging data.
I'm starting to think you didn't read my first couple of posts...