Writing data to SD card for maximum life

I have an application that collects data from a boiler. Loggable readings occur at the approximate rate of 5 to 10/minute.
Currently I'm logging the data each time it is received, so I'm opening, writing to, and closing the file on the SD card at that rate.

My question is:
would it be worth the effort to collect data in an array for maybe a few minutes (could be more), then write it all at one time?

I understand the actual data written to a location would be the same whether written one by one or collected and then written. What I don't know is whether the action of opening, writing to the FAT, and then closing will cause wear on those cells.

I have the same question. And remember that FAT32 keeps two copies of the FAT, and also updates the directory entry to reflect the new file size. I don't know whether SD cards do the same kind of wear leveling that SSDs do, but I suspect they don't. So I think anything you can do to batch up the data and reduce the number of updates would extend the life of the card. Of course an alternative would be to replace the card with a new one from time to time.

There is another approach, which is demonstrated in the LowLatencyLogger example in the SdFat library. It creates the files in advance at maximum size on a new card so that the data segments of the files are all in consecutive sectors of the card. Then, beginning at the first disk sector of a file's data (obtained from the directory entry), it just writes data directly to the card sectors, consecutively, completely ignoring the file system, FAT, and directory entries. Only at the end are the FAT and directory entries adjusted to reflect what was actually written. The author is a much better coder than I am, so I don't understand how he does this, but if you can figure it out, it seems this method would eliminate most of the file system wear and tear.
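
The part that is easy to borrow is the pre-allocation step. Here is a rough sketch of just that piece, assuming SdFat v2 and its preAllocate() call; the file name and size are my own placeholders, and this does not do the raw sector writes that the example uses:

    #include <SdFat.h>

    const uint8_t SD_CS_PIN = 10;                    // chip-select pin, board dependent
    const uint32_t LOG_FILE_SIZE = 1024UL * 1024UL;  // reserve 1 MiB up front

    SdFat sd;
    FsFile file;

    void setup() {
      Serial.begin(9600);
      if (!sd.begin(SD_CS_PIN)) {
        Serial.println(F("SD init failed"));
        while (true) {}
      }
      if (!file.open("boiler.log", O_RDWR | O_CREAT)) {
        Serial.println(F("open failed"));
        while (true) {}
      }
      // Reserve contiguous clusters so later appends don't have to grow the
      // file and rewrite the FAT on every write.
      if (!file.preAllocate(LOG_FILE_SIZE)) {
        Serial.println(F("preAllocate failed"));
      }
    }

    void loop() {
      // write records with file.write(), calling file.sync() occasionally
    }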

I was thinking the same. Thanks for your insight :+1:

I found that with SD cards, maximum life comes when I don't use more than 50% of the space available on the card and the software uses some sort of wear leveling.

I would consider using FRAM and writing the data as soon as you get it. It will tolerate over a billion write cycles. Then transfer it to the SD card(s) on demand. 32K x 8 devices are generally less than $5 US. The nice part is that if you have a power failure, the worst loss would be the last reading, which had not yet been written.
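
A minimal sketch of the write path, assuming an MB85RC-series I2C FRAM at the default 0x50 address (the helper names are made up; libraries such as Adafruit_FRAM_I2C wrap the same transactions):

    #include <Wire.h>

    const uint8_t FRAM_I2C_ADDR = 0x50;  // default address of an MB85RC-series FRAM

    // Write one byte at a 16-bit FRAM address. Unlike EEPROM there is no
    // write delay, and wear is not a practical concern.
    void framWrite8(uint16_t memAddr, uint8_t value) {
      Wire.beginTransmission(FRAM_I2C_ADDR);
      Wire.write((uint8_t)(memAddr >> 8));    // address high byte
      Wire.write((uint8_t)(memAddr & 0xFF));  // address low byte
      Wire.write(value);
      Wire.endTransmission();
    }

    uint8_t framRead8(uint16_t memAddr) {
      Wire.beginTransmission(FRAM_I2C_ADDR);
      Wire.write((uint8_t)(memAddr >> 8));
      Wire.write((uint8_t)(memAddr & 0xFF));
      Wire.endTransmission();
      Wire.requestFrom(FRAM_I2C_ADDR, (uint8_t)1);
      return Wire.read();
    }

    void setup() {
      Wire.begin();
      // log each reading with framWrite8() as it arrives, then copy the
      // bytes out to the SD card on demand with framRead8()
    }

    void loop() {}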

I have some FRAM; however, this board isn't laid out for the FRAM chip (SD, RTC, battery & OLED display).
Besides, I would rather be able to switch out SD cards than have to read the FRAM using a PC.

It would be great if you would post what you end up doing about this.

Right now my code writes to the SD for each reading. I'm in the test-the-hardware stage (see photo).
Once I get that accomplished I'll add a buffer to hold some number of data points, then write to the card. I will upload that code when completed.

I think it will be simple though:

  • define buffer (array) length, maybe 64 readings.
  • in an if structure, count the array index, i.e. 0 to 63
  • when i = 63, write the data and reset i to 0

I also have a Run/pause feature that will have to write out however many data points are in the array when it is set to pause. I can then remove the SD card safely (rough sketch below).
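
Roughly what I have in mind, with placeholder field names and writeBufferToSD() standing in for the open/write/close routine I already use:

    const uint8_t BUFFER_LEN = 64;   // readings to collect before each SD write

    struct Reading {                 // placeholder record layout
      uint32_t timestamp;
      float temperature;
    };

    Reading buffer[BUFFER_LEN];
    uint8_t bufIndex = 0;

    // existing SD open/write/close routine, not shown here
    void writeBufferToSD(const Reading* data, uint8_t count);

    void addReading(const Reading& r) {
      buffer[bufIndex++] = r;
      if (bufIndex == BUFFER_LEN) {        // buffer full: write the batch in one go
        writeBufferToSD(buffer, bufIndex);
        bufIndex = 0;
      }
    }

    void onPause() {                       // Run/pause switch: flush whatever is buffered
      if (bufIndex > 0) {
        writeBufferToSD(buffer, bufIndex);
        bufIndex = 0;
      }
      // safe to remove the SD card once this returns
    }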

Ok. Well remember that writes to the card are always in 512-byte chunks. That's one sector. So if you can fit multiple readings into a block of that size, that might make sense. But really, it's going to be 512 bytes no matter what you do.

I still have on my list of things to do to try to figure out the LowLatencyLogger code. But there's a lot of high-level C stuff that I've never used.

Thanks, I hadn't thought of the sector size. I'll try to make it just short of a multiple of 512 bytes.

I would make it exactly 512 bytes. Or better said, write on 512-byte boundaries. So if your data set is 499 bytes, write it and start the next one at byte 512, etc.
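
For example, if each reading is a fixed-size record whose size divides 512 evenly, a whole number of records fills one sector with nothing wasted and no record split across a sector boundary (the 16-byte layout here is just an illustration):

    const uint16_t SECTOR_SIZE = 512;   // SD cards always transfer whole 512-byte sectors

    struct Reading {                    // illustrative 16-byte record
      uint32_t timestamp;
      float temperature;
      float pressure;
      uint32_t flags;
    };

    static_assert(SECTOR_SIZE % sizeof(Reading) == 0,
                  "record size must divide 512 evenly");

    // 512 / 16 = 32 records per sector
    const uint16_t RECORDS_PER_SECTOR = SECTOR_SIZE / sizeof(Reading);
    Reading sectorBuf[RECORDS_PER_SECTOR];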


Access to SD cards from Arduino causes a huge amount of flash wear. The 512 byte sector size is not the size of flash pages, it is just a minimum transfer size.

The flash is written in terms of Record Units (RUs). RUs are a multiple of 16 KiB, with high-end SDHC cards having 512 KiB RUs.

If you use SD.h or SdFat in shared SPI mode, you are causing some multiple of 16 KiB to be written every time a 512-byte virtual sector is written.

This means data is being copied many times inside the card, and flash is being erased and rewritten. The card cycles through all of its flash using a virtual-to-physical map for areas of flash.

In SPI mode, an RU is written every time chip select goes high: the card's internal RAM buffer is flushed to flash.

This is why a card rated at a 100 MB/sec write rate runs at only a few hundred KB/sec on Arduino.

I offer a dedicated SPI mode in SdFat and this reduces wear by up to a factor of 100.
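
Selecting it is just a config change in SdFat v2; the chip-select pin and clock speed below are examples:

    #include <SdFat.h>

    const uint8_t SD_CS_PIN = 10;  // chip-select pin, board dependent

    // DEDICATED_SPI tells SdFat the card is the only device on the bus, so it
    // can keep multi-sector writes open instead of ending the write (and
    // forcing an RU program) every time chip select goes high.
    #define SD_CONFIG SdSpiConfig(SD_CS_PIN, DEDICATED_SPI, SD_SCK_MHZ(50))

    SdFat sd;

    void setup() {
      Serial.begin(9600);
      if (!sd.begin(SD_CONFIG)) {
        sd.initErrorHalt(&Serial);
      }
      // open and write files as usual; SdFat manages chip select itself
    }

    void loop() {}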

On Teensy 4.x I use this dedicated mode with 4-bit SDIO to achieve 22 MB/sec writes. This is at a 50 MHz bus speed, so it is close to optimal.

Modern TLC flash can be rewritten about 3,000 times and QLC 1,000 times. Cards have excellent wear leveling so don't try to guess what they are doing.

Edit:

Here is an example of shared vs dedicated SPI on a Due. Almost a factor of ten difference.

Due Shared SPI

write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
477.55,19684,950,1070
472.28,22019,951,1082

Due Dedicated SPI

write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
4533.09,216,110,111
4516.71,127,110,111
