SD write cycles

I am hoping to trawl for experiences using SD cards for data logging. I am aware that they are reckoned to cope with about 10,000 write cycles. I don't see the blocks holding the logged data as an issue, but am concerned about hitting the directory blocks with lots of amendments.

I suppose the first question is if I do wear out a directory block, will re-formatting the card detect the bad block & work around it? I am working with FAT32.

I have seen people recommending closing a file frequently to secure the new data. My context is writing probably 10,000 line records to a file. I have the time to close the file after writing each line record, but I am concerned that each close would cause the same physical directory block to be written on every file close operation. Do I therefore face a compromise: minimise file close operations to avoid wearing out one directory block, or maximise file close operations to keep my data safe?

Yes, very bad idea. Every time the file is closed and then opened, one or two entire 512-byte blocks have to be written to and reread from the card. That is not only very slow, reducing the data logging rate; it also vastly increases the error rate and the power consumption, and reduces the SD card's lifetime.

For logging, open the file once in setup(), write data, and close it only when finished logging.
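A minimal sketch of that pattern, using the stock SD library; the chip-select pin, file name, and record contents here are placeholders to adapt to your setup:

```cpp
#include <SPI.h>
#include <SD.h>

const int chipSelect = 10;     // assumption: adjust to your SD module wiring
File logFile;

void setup() {
  Serial.begin(9600);
  if (!SD.begin(chipSelect)) {
    Serial.println("SD init failed");
    while (true);              // nothing to log to, so halt
  }
  // Open once; FILE_WRITE positions at the end of the file, so records append
  logFile = SD.open("datalog.txt", FILE_WRITE);
}

void loop() {
  if (logFile) {
    logFile.println(millis()); // one line record per pass; real payload goes here
  }
  delay(1000);
  // Call logFile.close() only when logging is finished,
  // e.g. in response to a stop command or shutdown event.
}
```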

I have seen people recommending closing a file frequently to secure the new data.

Not a good idea. Use the library .flush() operation from time to time, to update the file pointers. If logging is stopped by an error or power loss, you will lose only data written to the buffer since the last .flush() operation.
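Building on the sketch above, one hedged way to do that: count records and flush every Nth one. The interval of 100 is arbitrary; tune it against how many records you can afford to lose.

```cpp
const unsigned int FLUSH_INTERVAL = 100;  // assumption: flush every 100 records
unsigned int recordCount = 0;

void logRecord(const char *line) {
  logFile.println(line);                  // logFile is the handle opened in setup()
  if (++recordCount >= FLUSH_INTERVAL) {
    logFile.flush();                      // push buffered data and file metadata to the card
    recordCount = 0;
  }
}
```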


I would presume the file pointers have the same limitations, so a flush after each write is contraindicated. A flush after every 100th write might suffice, but could one do something in a brownout or power-loss handler, similar to recent threads regarding EEPROM writes before power-down?

No, because unlike close and (re)open, the flush operation does not entail writing out and reading back in an entire block, along with the additional operations in the directory blocks.

Every application is different, so the flush operation should be used judiciously, taking into account how much data loss can be tolerated in case of unexpected shutdowns.

The last two posts of this recent discussion on overall communications reliability give a particularly relevant and excellent overview.

Basically, assume everything will fail, and do the research required to determine which strategy will minimize the damage and time taken to recover from the failure.

Really good feedback from people who clearly know much more than I do about what's going on under the hood of an SD card. Before I composed my questions I was leaning towards the compromise that has been suggested: perform a flush once every 100 records.

I would like to bottom out what @jremington is saying. I do understand that the need to secure the data must be weighed against the wear on the medium, but I would appreciate a bit of help to grasp the key message you are trying to convey when you say:-

because unlike close and (re)open, the flush operation does not entail writing out and reading back in an entire block, along with the additional operations in the directory blocks.

I suppose my understanding would be advanced if you could go into more detail of what the flush would involve. Is the key to the difference the word "entire"? Also, what's the "additional operations in the directory blocks" issue?

I believe the key issue in my scenario is any "write" operations from the Arduino to the SD card. Whilst I accept there may be performance differences between close/open and flush, and there may even be additional "read" operations from the SD card to the Arduino, neither of these is, I believe, of concern to me. Nor am I worried by "write" operations to the body of the file storage, because at worst there would only be a handful of writes to any given 512-byte data block before we move on to operate on the next block.

So, do correct me if you think I am mistaken, but I believe the scope of my concern comes down to "write" operations on any particular directory block. Ignoring the re-writing of the data blocks that constitute the body of the data file, what write operations to directory blocks are incurred by a flush? What additional write operations to directory blocks are incurred by a close/open? I believe I am correct in saying that closing the file also carries out a flush as part of that operation.

Prior to writing my question, I believed the data on an SD card is structured as 512-byte blocks - "sectors", if you like. Those blocks each have their own CRC check bytes to validate the meaningful content, so the minimum data written to the SD card is 512 bytes of payload plus whatever validation overhead that incurs. Is that agreed?

If a new block is added to the file, a new or updated entry in the file allocation table is required. The end-of-file information (the file size held in the directory entry) also has to be updated.
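For reference, the 32-byte short directory entry defined by the published FAT specification looks like this as a C struct (this is the on-card layout, not anything exposed by the Arduino library). The fileSize field at the end is what a flush or close must rewrite, and since sixteen of these entries pack into each 512-byte sector, every update to one file's entry lands in the same physical sector:

```cpp
#include <stdint.h>

// FAT32 short directory entry, per Microsoft's FAT specification (32 bytes)
struct __attribute__((packed)) FatDirEntry {
  uint8_t  name[11];      // 8.3 file name
  uint8_t  attr;          // attribute flags
  uint8_t  ntRes;         // reserved
  uint8_t  crtTimeTenth;  // creation time, tenths of a second
  uint16_t crtTime;       // creation time
  uint16_t crtDate;       // creation date
  uint16_t lstAccDate;    // last access date
  uint16_t fstClusHI;     // high 16 bits of the first cluster number
  uint16_t wrtTime;       // last write time
  uint16_t wrtDate;       // last write date
  uint16_t fstClusLO;     // low 16 bits of the first cluster number
  uint32_t fileSize;      // file size in bytes; rewritten on every flush/close
};
```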

I am not an expert on this topic, but there are plenty of resources if you want to go into the nuts and bolts.

OK @jremington, from what you had written I thought you were, and could provide me with a fast track to the answer. Before writing my original question I had spent some time trying to get down to the nuts & bolts without success. I thought someone who knows their way around FAT32 might be listening.

I understand about the payload data; that doesn't worry me in terms of thrashing a particular SD data block. I think I understand the allocation table, and that doesn't worry me too much either - 4 bytes define each new data block, so, apart from any odds & ends for linking etc., it is not a big concern.

But is it true that flushing the file brings the SD card up to date, so that after a power outage the file will be valid up to the flush point? If so, some information recording how many bytes have been written to the file must be stored somewhere. It would be logical to put that information into the same 512-byte allocation table block as the last allocated block, rather than in the same location for the life of that file. The alternative would mean hitting the same physical block for every flush executed on that file - a recipe for wearing out that block.

Then there's the issue of bad "sector" management. Is there any duplication inherent in the filing system which would require multiple block writes for any update, but could deliver the benefit of a recovery path should a sector fail?

Yes, I do want to understand the nuts & bolts in order to design my logging strategy, which brings us back to the "there are plenty of resources" issue. I'm sure there are, but I approached this forum because I had already failed to connect to the answers.

That is the point of the flush command. I have been using it for many years, and it works.

Nevertheless, power failure can lead to any number of unintended consequences, including the complete trashing of an SD card, which I have also witnessed. It takes many milliseconds to write a block, and if that action is interrupted, there is little chance of recovery.


@jremington - just having this conversation has been a great help to me. My application is battery powered and I am monitoring the battery voltage, so I get warning of an impending power outage. When I detect the battery volts crossing from safe to unsafe, I do a controlled shutdown and close the file, which includes a flush.
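A sketch of that shutdown path, assuming the battery voltage reaches A0 through a 2:1 divider on a 5 V board and that logFile is the handle opened earlier; the threshold and divider maths are placeholders for your hardware:

```cpp
const int   BATTERY_PIN     = A0;                    // assumption: divided battery voltage
const float UNSAFE_VOLTS    = 3.4;                   // assumption: pick for your battery
const float VOLTS_PER_COUNT = (5.0 / 1023.0) * 2.0;  // 10-bit ADC, 5 V ref, 2:1 divider

void checkBattery() {
  float volts = analogRead(BATTERY_PIN) * VOLTS_PER_COUNT;
  if (volts < UNSAFE_VOLTS && logFile) {
    logFile.close();   // close() includes a flush, so logged data is secured
    // then stop logging and drop the system into its low-power state
  }
}
```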

Whilst my original thinking was that I should be doing regular flushes to make my logged data safe, I am moving to the position that I don't need to do that. I can't justify periodic flushes on data-integrity grounds, and not doing them brings the real benefit of minimised write operations. It does highlight the fact that a hardware reset could compromise that strategy, but I can design my way around that.

I haven't got down to the nuts & bolts of the conversation between the Arduino and the SD card, but this has clarified my thinking on both software and hardware design. To be forewarned is to be forearmed against the possible consequences of a hardware reset, but my conclusion is that there is a positive benefit in not doing periodic flushes. @jremington, you might well argue against this choice, but it's your reasoning that has led me to it, and I am happy with the decision. I will be alert to possible symptoms of uncontrolled program termination, comfortable in my justification for not executing periodic flushes in order to minimise redundant writes.

