Multiline reads from SD card?

Step by step, I'm whipping this (personal) project into shape. I now have a (true) binary file on the SD card. The file header and data look like this:

unsigned char version
unsigned int cols
unsigned int rows
char r
char g
char b
char r
char g
char b
ad infinitum (or till EOF is reached)

When I read that back in, I get:

1    // version
44   // cols
1
0
0
48  // rows
0
0
0
148   // r
100   // g
127   // b
144   // r
106   // g
139   // b
ad infinitum (or till EOF is reached)

My questions are:

  • Is there a way to read the four lines that pertain to cols and rows respectively all in one shot? Or must I call .read() four times to get it all?
  • Is there a way to read a full set of r, g, and b together as one, or must I have multiple .read() calls?
  • And while I'm at it, is there a way to read 48 sets of r, g, and b together as one?

See, the r,g,b values are being piped into an LED library which is updating a POV display within microseconds. So I'm trying to figure out the best, fastest way to get the data off the SD card. I don't know that I can read all the data into a variable without blowing up the processor (Need.More.Memor....poof), so I'm going to be reading and updating the LED string simultaneously (or as simultaneously as possible, give or take a few microseconds.)

Is there a way to read the four lines that pertain to cols and rows respectively all in one shot? Or must I call .read() four times to get it all?

There is an overloaded version of the read() method that allows you to specify an array to write to, and the number of bytes to read.

Is there a way to read a full set of r, g, and b together as one, or must I have multiple .read() calls?

Yes.

And while I'm at it, is there a way to read 48 sets of r, g, and b together as one?

Yes.

PaulS:

Is there a way to read the four lines that pertain to cols and rows respectively all in one shot? Or must I call .read() four times to get it all?

There is an overloaded version of the read() method that allows you to specify an array to write to, and the number of bytes to read.

Okay PaulS, I'm going to need a pointer here. I'm not sure where to look or how to make it work. I'm using the SdFat library.

I'm using the SdFat library.

Well, it might have been useful to mention that in your post. Or, to include a link.

PaulS:

I'm using the SdFat library.

Well, it might have been useful to mention that in your post. Or, to include a link.

Yeah, that's totally an oversight on my part. My apologies. I should've said that at the beginning.

But, having said that, I do have a working version after a night of working on both sides of the process. I had a small error on the encoding side on the computer which was causing all kinds of odd stuff when I read it back in on the Arduino side. But once that was fixed, I moved on to doing a continuous read from the SD card and, while slow, it does work. Not knowing how else to do it, I'm doing three consecutive file.read() calls to get the values for r, g, and b before shoving them into the array that the LED library is using.

I suspect once I can figure out how to do the proper reads that things will speed up. It's still nice to see the progress ...

Ok, I figured out I can do a read like this, specific to the LED library:

      for (int myCol = 0; myCol < columns; myCol++) {
        for (int myRow = 0; myRow < NUM_LEDS; myRow ++) {
          myFile.read((char*)(&leds[myRow]), 3);
        }
        LEDS.show();
      }

That fills each CRGB with the three bytes representing r, g, and b respectively. However, it's still taking a long time. Adding a timer that starts right before the inner for() loop and ends after it

      for (int myCol = 0; myCol < columns; myCol++) {
        t = micros();
        for (int myRow = 0; myRow < NUM_LEDS; myRow ++) {
          myFile.read((char*)(&leds[myRow]), 3);
        }
        t = micros() - t;
        LEDS.show();
      }

... across the 43 columns in a file, I get an average of 2,079 microseconds. That's 2,079 microseconds to read 48 * 3 bytes. I don't know how the math works out in terms of the SD speed itself and whether that's what is causing the bottleneck, or whether there's a faster way to get the data ...

If you read all 144 bytes in a column with a single read, you can reduce the average time to 350-500 microseconds per column.

Here are times for a low-cost microSD:

Maximum latency: 3108 usec, Minimum Latency: 108 usec, Avg Latency: 440 usec

Here are results for an industrial full-size SD:

Maximum latency: 2020 usec, Minimum Latency: 108 usec, Avg Latency: 385 usec

These results are from the SdFat bench.ino example with a 144 byte buffer size.

#define BUF_SIZE 144

So I ran the bench test on a couple of cards.

Transcend, 4GB, class 6, FAT16 with default block size (whatever that means within Win7):

Write 129.45 KB/sec
Maximum latency: 215916 usec, Minimum Latency: 108 usec, Avg Latency: 1107 usec

Read 337.77 KB/sec
Maximum latency: 2412 usec, Minimum Latency: 108 usec, Avg Latency: 420 usec

Sandisk, 2GB, unknown class, FAT16 with default block size:

Write 181.53 KB/sec
Maximum latency: 147608 usec, Minimum Latency: 108 usec, Avg Latency: 787 usec

Read 325.94 KB/sec
Maximum latency: 2568 usec, Minimum Latency: 108 usec, Avg Latency: 436 usec

Sandisk, 1GB, unknown class, FAT16 with default block size:

Write 167.39 KB/sec
Maximum latency: 141040 usec, Minimum Latency: 108 usec, Avg Latency: 854 usec

Read 321.48 KB/sec
Maximum latency: 2604 usec, Minimum Latency: 108 usec, Avg Latency: 442 usec

When I force the format on all cards to be FAT16 with a 64KB block size specifically, I get this:
Transcend, 4GB, class 6, FAT16, 64KB block size:

Write 131.41 KB/sec
Maximum latency: 215112 usec, Minimum Latency: 108 usec, Avg Latency: 1090 usec

Read 337.72 KB/sec
Maximum latency: 2412 usec, Minimum Latency: 108 usec, Avg Latency: 420 usec

Sandisk, 2GB, unknown class, FAT16, 64KB block size:

Write 195.43 KB/sec
Maximum latency: 138296 usec, Minimum Latency: 108 usec, Avg Latency: 731 usec

Read 327.46 KB/sec
Maximum latency: 2572 usec, Minimum Latency: 108 usec, Avg Latency: 434 usec

Sandisk, 1GB, unknown class, FAT16, 64KB block size:

Write 188.60 KB/sec
Maximum latency: 140264 usec, Minimum Latency: 108 usec, Avg Latency: 758 usec

Read 327.89 KB/sec
Maximum latency: 2592 usec, Minimum Latency: 108 usec, Avg Latency: 433 usec

So as far as these three cards go, it seems to average 430 usec per read. Which is plenty. I just need to figure out how to squeeze that out of my file.read()s. Looking for help on the C++ side now, to see if I can optimize the data structure in the file. Hopefully I can get this licked later today.

KirAsh4:
So I ran the bench test on a couple of card.
[... snip ...]

Unfortunately, your computer has a faster route to read from the cards. Typical PC interfaces use a parallel interface to the card (4-bit, I think, but it's been a while since I've surfed through that part of the Simplified SD spec) which is clocked faster than SPI. I also think that in order to get the full documentation on how the parallel mode works, one needs to pay the SD Association a membership fee. The SPI interface (while serial and slower) is fully documented for free.

The only benchmarking that will relate at all to real-world numbers would be benchmarking on an Arduino.

Hopefully someone else can prove me wrong... :~

Sembazuru:
The only benchmarking that will relate at all to real-world numbers would be benchmarking on an Arduino.

That is the Arduino benchmark, not the computer. It's a sketch that comes with the SdFat library that first writes a 5MB file and then reads it back.

KirAsh4:

Sembazuru:
The only benchmarking that will relate at all to real-world numbers would be benchmarking on an Arduino.

That is the Arduino benchmark, not the computer. It's a sketch that comes with the SdFat library that first writes a 5MB file and then reads it back.

OK, my bad. Carry on. :slight_smile: