Urgent: SD card write speeds

One way to optimize your code could be:

sprintf(strtest,"%lu-%u:%hx%hx%hx%hx%hx%hx%hx%hx\n",t,testId,data[0],data[1],data[2],data[3],data[4],data[5],data[6],data[7]);

Leaving out the eight ',' separators, one ';' and one '\r' is 10 characters less for sprintf to process and, more importantly, to write to the file. As the max record size is 52 bytes, this could theoretically improve throughput by ~20% — to be reached by packing more samples into the optimal block size.

The receiving application will have a bit more work parsing the data, but Java is so much faster than the Arduino that you will hardly notice.

Further, you might try %c instead of %hx; that would make the string binary and 8 bytes shorter (42 - 8 = 34).

That said you might skip sprintf altogether by writing a binary file.
First define a struct to hold the data and create an array of it to hold multiple samples.

struct packet
{
  uint32_t t;
  uint16_t id;
  uint8_t data[8];
} buffer[100];

This would contain all essential data in 14 bytes instead of 52 (>70% smaller).

file.write(&buffer[0], 100*sizeof(struct packet)); // write 1400 bytes at once.

As the id never changes, you should consider moving it into the log filename instead of storing it in every record.
On the other hand, you might be better off with a struct size of 16 bytes, as exactly 32 structs would then fit in one 512-byte data sector.

struct packet
{
  uint32_t t;
  uint16_t id;
  uint8_t data[8];
  uint16_t spare; // pad to 16 bytes
} buffer[32];

That makes 16 bytes instead of 52 — still a ~70% reduction.

On the Java side you need to unpack the binary format, ...

my 2 cents.

It is unlikely you will achieve satisfactory high-rate data logging even with the various suggestions people have offered. By satisfactory I mean a high rate without missed samples.

There are two issues that conspire to make this a difficult problem: file system overhead and SD flash rewrite latency. Together they can cause a write plus a flush to take well over 200 ms.

File system overhead consists of the operations required to allocate a cluster and the reads/writes required to do a flush.

Allocating a cluster and updating both copies of the File Allocation Table requires reading/writing four or more 512-byte blocks. Calling flush can result in reading/writing another four blocks.

Most of these are rewrites: a block is read, updated, and written back. Flash can't be rewritten in place without erasing an area and moving a lot of data. SD erase groups are very large, often 128 KB, so this can take a long time.

I have used several techniques to log data at high rates. The simplest is to capture and buffer data in an interrupt routine and write it to the SD in loop(). This works very well on a Mega, where there is a lot of RAM, and it helps a lot on 328s.

To go really fast I use SdFat's functions to create large contiguous files and pre-erase the flash blocks in the file.

I then capture data in an ISR and do raw writes with special raw write functions in SdFat. That's how I did the audio recorder which does 44,100 samples per second.

The audio recorder shows how. It is here: Google Code Archive - Long-term storage for Google Code Project Hosting.

Hi, thanks for your help!

as I announced yesterday, I made huge improvements using the contiguous method that fat16lib uses in his WaveRP project.

additionally, I skipped the sprintf and switched to a method comparable to what robtillaart suggested (although his seems a lot prettier; I'll try your struct method later today!).

for the initialization and usage, I stuck to fat16lib's code example:

uint32_t bgnBlock, endBlock;
uint8_t* pCache;

// number of blocks in the contiguous file
#define BLOCK_COUNT 10000UL

// time to produce a block of data
#define MILLIS_PER_BLOCK 10
...

  // create a contiguous file
  char name[] = "RAWLOG00.TXT";
  if (!file.createContiguous(&root, name, 512UL*BLOCK_COUNT))  error ("file.create");  
  
  // get the location of the file's blocks
  if (!file.contiguousRange(&bgnBlock, &endBlock)) {
    error("contiguousRange failed");
  }
    

  // clear the cache and use it as a 512 byte buffer
  pCache = volume.cacheClear();
  // fill the cache with sixteen 32-byte lines
  memset(pCache, ' ', 512);
  for (uint16_t i = 0; i < 512; i += 32) {
    // put the line number at the end of the line, then CR/LF
    // (note: for i/32 > 9 this prints ':', ';', '<', ... rather than digits)
    pCache[i + 29] = '0' + (i/32);
    pCache[i + 30] = '\r';
    pCache[i + 31] = '\n';
  }
  
   // tell card to setup for multiple block write with pre-erase
  if (!card.erase(bgnBlock, endBlock)) error("card.erase failed");
  if (!card.writeStart(bgnBlock, BLOCK_COUNT)) {
    error("writeStart failed");
  }

but now to the more important part:

uint8_t data[8];

int i = 0;
char semik = ';';
char dPoint = ':';
char slash = '-';

uint32_t t = 0;        // timestamp written below (e.g. from millis())
long testid = 10401L;

void loop(){
  
  data[0] = B01110011;
  data[1] = B01110001;
  data[2] = B01110111;
  data[3] = B01110010;
  data[4] = B01110001;
  data[5] = B01110101;
  data[6] = B01011001;
  data[7] = B01110001;

  // copy the time
  memcpy(pCache + i*32, &t, sizeof t);
  // copy the slash
  pCache[i*32 + sizeof t] = slash;

  // copy the id
  memcpy(pCache + i*32 + sizeof slash + sizeof t, &testid, sizeof testid);

  // copy the dPoint
  pCache[i*32 + sizeof slash + sizeof t + sizeof testid] = dPoint;

  // copy the data
  memcpy(pCache + i*32 + sizeof slash + sizeof t + sizeof testid + sizeof dPoint, &data, sizeof data);

  // copy the semicolon
  pCache[i*32 + sizeof data + sizeof slash + sizeof t + sizeof testid + sizeof dPoint] = semik;

  // write the cache every 16th time (16 records * 32 bytes = 512 bytes)
  i++;
  if ((i % 16) == 0) {
    i = 0;
    if (!card.writeData(pCache)) error("writeData failed");
  }
}

This works! I also changed my parser to accept the new code and data structure. I managed to do the 15 buffered writes in 20 microseconds (!!!), and the 16th (the actual write to the SD card) takes less than 1 millisecond (600-800 microseconds). So it's a massive improvement (10x) — and losing at most 2 CAN messages when actually writing to the SD should be acceptable.

I fixed the following problem — take a look at the next post. I'm leaving this here just for reference.

BUT ...

when trying to put that code into my actual project I encountered a massive problem: it collides with reading the CAN message out of the MCP2515 buffer. As soon as I've read a message from the buffer, writing returns an error 11,FF. If I comment out the receiving, it works -.- ...

I've already reviewed the code that gets the message, as well as trying to free the tCAN message before writing to the SD ...

void loop() {

  if (mcp2515_check_message()) {
    tCAN message;

    // WRITING TO SD HERE WORKS

    // read the message from the MCP2515 buffer
    if (mcp2515_get_message(&message)) {
      // WRITING TO SD HERE FAILS ...
    }
  }
}

I REALLY DON'T want you to debug my code ... but aaaaany suggestion or hint (maybe you've encountered a comparable problem and had to ... I don't know ... touch your toes while writing to solve it ...)

@fat16lib ... what cache is actually used to write the data to the SD? Is it a cache on the SD card, or a cache on the Arduino?

I figured it out!!

... I had to toggle chip select manually before and after writing to the SD card, because it shares the SPI data lines with the CAN controller ...

pinMode(sd_cs,OUTPUT);

...

digitalWrite(sd_cs, LOW);   // select the SD card
card.writeData(pCache);
digitalWrite(sd_cs, HIGH);  // deselect so the bus is free for the CAN controller

...

... phew, that took me a day to fix ... glad I made it though!! Hopefully this will help somebody else encountering a similar problem :-)

thanks again for all your help!

Ciao,

for those who are interested, there was also a discussion on AVRfreaks:
http://www.avrfreaks.net/index.php?name=PNphpBB2&file=printview&t=66583&start=20

I recommend you use an industrial-grade SD card if you want to reduce the risk of data loss.

For example, but there are many others:

http://www.cactus-tech.com/product_05_b.html

Ciao,
Marco.