Cyclic Buffer for Data Logging on SD Card

I'm working with an ESP32 and an SD card reader to implement a cyclic buffer for logging data in a CSV file (SD library).

To explain, I use a counter variable that increments with each cycle, logging data sequentially:
0
1
2
3
4
5
6
7
8
9
Once the counter reaches the maximum entry limit of 10, my intention is to overwrite the initial values and continue replacing older entries in a loop:
10
11
2
3
4
5
6
7
8
9
However, :sleepy: when entry 10 wraps around and overwrites the first value, all the later entries disappear, and the file ends up containing only:

10

:melting_face:

Here's the current implementation of my code:

#include <SD.h>
#include <SPI.h>

#define FILE_NAME "/data.csv"
#define CS_PIN 5  // Chip select pin for SD card module

const int maximumEntryLimit = 10;  // Maximum entries in the cyclic buffer

int counter = 0;  // Current counter value
int currentPosition = 0;  // Current position in the cyclic buffer
int CycleCounter = 0;  // Cycle counter to manage buffer size

void setup() {
  Serial.begin(115200);  // Initialize serial communication
  while (!Serial) {
    ;  // Wait for serial port to connect
  }

  // Initialize SD card
  if (!SD.begin(CS_PIN)) {
    Serial.println("Initialization failed!");  // Print error message if initialization fails
    return;
  }
  Serial.println("Initialization done.");  // Print success message
}

void loop() {
  File file = SD.open(FILE_NAME, FILE_WRITE);  // Open file for writing
  if (!file) {
    Serial.println("Failed to open file for writing");  // Print error message if file cannot be opened
    return;
  }

  // Check if cyclic buffer limit is reached
  if (CycleCounter > 9) {
    currentPosition = 0;  // Reset position to start of buffer
    CycleCounter = 0;  // Reset cycle counter
  }

  // Move file pointer to current position and write counter value
  file.seek(currentPosition);
  file.println(counter);

  currentPosition = sizeof(counter) + currentPosition;  // Move to next position in buffer

  // Close the file
  file.close();

  // Print current status for debugging
  Serial.print("Counter: ");
  Serial.print(counter);
  Serial.print(", Cursor Position: ");
  Serial.println(currentPosition);

  // Delay before writing the next entry
  delay(2000);

  counter++;  // Increment counter
  CycleCounter++;  // Increment cycle counter
}

sizeof counter won't be the number of characters needed to print the value as ASCII text in the file. It's just 4 all the time (the number of bytes used to represent an int).

Your challenge is that ASCII text does not always have the same length. In the file all the bytes are next to each other, so a file that looks like

1
2
3
4

is actually stored as 1\r\n2\r\n3\r\n4\r\n

So if you come back and println 10 at the start of the file you'll get
10\r\n\r\n3\r\n4\r\n
and you will have overwritten the 2.

If you were to write binary to the file with file.write((const uint8_t*)&counter, sizeof counter); then all the entries would be 4 bytes long, and your approach would work regardless of the counter's value when represented in decimal.
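The fixed-size-record idea can be sketched with an in-memory stand-in for the file (the names and buffer size here are illustrative, not from this thread): with 4-byte binary records, the offset for entry i is simply (i % maxEntries) * sizeof(int32_t), so wrap-around never truncates anything.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Simulated cyclic log of fixed-size binary records in a byte buffer.
const int kMaxEntries = 10;  // slots in the cyclic buffer

// Write one 4-byte record at its wrapped slot, growing the "file" as needed.
void writeEntry(std::vector<char>& file, int32_t counter) {
    size_t pos = (counter % kMaxEntries) * sizeof(int32_t);
    if (file.size() < pos + sizeof(int32_t))
        file.resize(pos + sizeof(int32_t));
    std::memcpy(file.data() + pos, &counter, sizeof(int32_t));
}

// Read back the record stored in a given slot.
int32_t readEntry(const std::vector<char>& file, int slot) {
    int32_t v;
    std::memcpy(&v, file.data() + slot * sizeof(int32_t), sizeof(int32_t));
    return v;
}
```

On the ESP32 the same arithmetic would drive file.seek(pos) followed by file.write((const uint8_t*)&counter, sizeof counter).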


Did something similar like a year ago on a Uno..
UnoSaveConfig
Should be able to port to esp32..
Maybe it helps..

good luck.. ~q


A very odd way to store data in a CSV file. How do you know which value is the first and which is the last when you read the file? There is no indication of which value was written last.


Agreed it's weird, but maybe what matters is only that you have the last 10 values, regardless of the order.


That is how to save storage space on a magnetic drive, as I know from long practice with floppies and HDs of 20 MB or less... but

SD has limited writes and enormous space.
It would be better for your code and the SD card to simply write a new line or fixed-length record per cycle. IMO finding the last one is quicker and easier with fixed-length (block) records, but if your data varies much in length, making every record big enough to hold the largest possible entry would waste a lot of space.

Over-writing SD files is not good. Better to read the old file and generate a new one. Instead of sorting records (not what you are doing but another SD is not HD example), write a file of sorted links which will be shorter.

SD is flash with its own controller that does wear-leveling. Editing what was written will slow down later access to that file. If you want to compare to an older data storage, think how tape gets used.


Thanks.
I'm looking to establish a cyclic logging system that resets upon reaching a defined maximum limit. The goal is to maintain a log of 20,000 entries, capturing
entry number, date, time, temperature, and humidity values at 1-minute intervals.
Instead of overwriting old entries, I'm now considering alternatives, since overwriting might not be ideal, I guess.

On SD, even a 4G card should hold 20 million records with room to spare, no?

Arduino has EEPROM good for about 100,000 writes and you can get serial EEPROM, flash or RAM to hang on the same SPI bus that SD uses. You have many choices.

Beware: some cheap SD modules have crappy 1-resistor "leveling" from 5 V to 3.3 V. In time those eat SD cards!


Before 2000 I used 32-bit Unix Seconds to hold date and time as one value. In hex it makes a uniform 8 chars at the left edge.
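That trick is easy to sketch: a 32-bit Unix timestamp always fits in exactly eight hex digits (the function name here is my own, for illustration):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Format a 32-bit Unix timestamp as a fixed 8-character hex field,
// e.g. 2000-01-01 00:00:00 UTC (946684800) becomes "386D4380".
void hexTimestamp(char* out, uint32_t unixSeconds) {
    snprintf(out, 9, "%08lX", (unsigned long)unixSeconds);
}
```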

Another trick is to log a millis() time-stamp and record the full time and date as a reference in the first line, repeating it maybe once per hour or day. You get shorter records that way.
Arduino clocks may drift; the boards with resonators instead of crystals do!


[image: SD card module]

This is the one I'm using, hope it's good.

I have a DS3231 RTC and an AT24C32 EEPROM connected. While the EEPROM handles all the settings, it seems inadequate for logging the amount of data I need.

I'm not one who can tell you about that module. If it can run at 3.3V then you can also level your wiring to the module itself.

The 74HC4050 hex buffer chip does 5 V to 3 V leveling on 6 channels without a bunch of components; SD needs 3 lines. Don't buy just one, the single-unit price will be high. Get at least 5 for a discount and keep the rest for future needs.


SD... 4G SD is small, holds 4 billion bytes.

How many bytes per entry times how many entries total?
Is that bigger than 4 Billion? Get a 32G SD card, 32 billion bytes.

Why do you think that you need this... cyclic data overwrite?
You seem stuck on what should not be a problem; please, what are you not saying? It's not obvious at all! If you only count to 10 over and over and the last value doesn't need saving, then that is what RAM is for! If you want to save the last number before power-down, then that is what EEPROM is for; it's one number to save. What are you doing that logging one line per entry does not cover? I see no real problem, and one of us is wrong.


Thanks for the enlightenment! :blush:

Each log entry comprises 54 bytes of data. With a 2 GB SD card and logs recorded every minute, the card can store approximately 75 years of continuous data.
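That estimate checks out; assuming a 2 GiB card and fixed 54-byte entries, one per minute:

```cpp
#include <cassert>
#include <cstdint>

// Years of logging a card can hold at one fixed-size entry per minute.
uint64_t estimateYears(uint64_t cardBytes, uint64_t bytesPerEntry) {
    uint64_t entries = cardBytes / bytesPerEntry;  // records that fit
    uint64_t minutesPerYear = 365ULL * 24 * 60;    // 525,600
    return entries / minutesPerYear;
}
```

estimateYears(2ULL * 1024 * 1024 * 1024, 54) comes out at 75 years.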

If one only needs the last 20,000 entries of historical data, it would be more efficient to extract and save that specific portion from the original file into a new file, while keeping the original file for appending new values. :thinking:

so you have your answer :slight_smile:


Or just start a new file every 20K lines?
Like once a month and name it datanameyymm so it sorts right?
~30 days at 1440 minutes = 43200 lines = 2332800 bytes.

I have done these things before, so I am cheating by experience.
pass it on.

I agree, :thinking: logging to a single file is a bad idea. As the file size increases, it demands more memory for handling. Attempting to open and read such large files could exceed available RAM :man_facepalming:, leading to crashes :bomb:, I guess.

It's becoming increasingly clear that this situation is growing more complex. :hot_face:

Opening the file by itself does not allocate more memory than the file handle, so that's not an issue, and the library has a 512-byte buffer/cache for writing to the SD that does not grow as the file grows.

Reading megabytes of data into a buffer would be, indeed :slight_smile:

But it's more manageable to have a number of smaller files, with names containing the date as suggested by @GoForSmoke. I have such a setup too; my daily files are named like 20240625.txt and 20240625.bin.

You can easily build the filename using sprintf().
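For example, a sketch of building such a date-based name (the leading slash matches the SD library's absolute-path convention; the function name is my own):

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Build a filename like "/20240625.txt" so files sort chronologically.
void dateFilename(char* out, size_t len, int year, int month, int day) {
    snprintf(out, len, "/%04d%02d%02d.txt", year, month, day);
}
```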


Thank you for the insight. :smiley:

May I ask, what is the purpose behind writing a binary file along with txt? :thinking:

Binary data takes less space, and a program does not have to translate text back into data. But since a read might not always come back correct, common practice is to have a way to check.
Read up on CRC, Cyclic Redundancy Check, for one way.
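As a concrete example, here is a minimal bit-by-bit CRC-16/CCITT-FALSE (one of many CRC variants); appending it to each record lets a reader detect corruption by recomputing it and comparing:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF.
uint16_t crc16(const uint8_t* data, size_t len) {
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; ++i) {
        crc ^= (uint16_t)data[i] << 8;  // fold in the next byte
        for (int b = 0; b < 8; ++b)     // shift out 8 bits, applying the poly
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

The standard check value for this variant over the ASCII string "123456789" is 0x29B1.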

Text files are bigger, but when a bit is bad you know it! Re-read to see if the read got it wrong.

With Windoze there is a program named WinPAR that generates parity-bit files and can patch bad bits. If you store data for a long time or transmit big files, parity file data is great to have! How that works is a longer read than CRC; the subject falls under ECC, Error Check and Correct. I used that and more transferring big files on the net long ago; it saved quite a few re-tries! WinRAR let us break big files up and reassemble them too; rar and par files were regular for me to archive.
