I plan to redo the Wave Shield code, but not with SdFat; it has a custom SD reader. This library started on ATmega168 Arduinos with 1 KB of total RAM and used half blocks, 256 bytes.
I was able to read from two open files, each at about 230 KB/sec, by alternating calls to read(buffer, 512) on each.
With the 1284 I can read from two contiguous files on a cheap SD card at about this speed, roughly 220 KB/sec each. On an industrial SD card, this call to read a block takes 924 micros, or 554 KB/sec:
uint32_t m = micros();
if (!card.readBlock(lbn, buf)) error("readBlock");
Serial.println(micros() - m);
If I read a single stream, I can use multi-block read commands. Then reading a block takes 824 micros or 620 KB/sec.
This is on a 16 MHz AVR with an 8 MHz SPI bus.
On Teensy 3.0 and Due I cache a FAT block and a data block. If you read a multiple of 512 bytes starting on a 512-byte boundary, I read directly into your buffer and don't use the data cache.
Or perhaps a short list of the first dozen cluster addresses for each file? Would that allow rapid seeking within the first many kbytes of the file?
Sound files stream so fast that a dozen clusters is not worth it: twelve 32 KB clusters of 16-bit 44.1 kHz mono audio is only about 4.5 seconds.
I have played with other cache possibilities, but nothing beats contiguous files. I spent my career around the world's largest physics experiments, and most RTOSes used there, like VxWorks, have real-time file extensions that depend on contiguous files.
SD firmware in cameras records video to contiguous files. Contiguous files are required to meet a card's performance spec.
SdFat has had this call starting with the first release:
bool SdBaseFile::createContiguous(SdBaseFile* dirFile, const char* path, uint32_t size)
Create and open a new contiguous file of a specified size.
Note:
This function only supports short DOS 8.3 names. See open() for more information.
Parameters:
[in] dirFile The directory where the file will be created.
[in] path A path with a valid DOS 8.3 file name.
[in] size The desired file size.
Returns:
The value one, true, is returned for success and the value zero, false, is returned for failure. Reasons for failure include: path contains an invalid DOS 8.3 file name, the FAT volume has not been initialized, a file is already open, the file already exists, the root directory is full, or an I/O error occurred.
For the CrossRoads prototype I create a 100 MB file, record, and then truncate with this call:
bool SdBaseFile::truncate(uint32_t length)
Truncate a file to a specified length. The current file position is maintained if it is less than or equal to length; otherwise it is set to end of file.
Parameters:
[in] length The desired length for the file.
Returns:
The value one, true, is returned for success and the value zero, false, is returned for failure. Reasons for failure include: file is read-only, file is a directory, length is greater than the current file size, or an I/O error occurs.
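Putting the two calls together, the create-record-truncate pattern looks roughly like this. This is a sketch against the classic SdFat API (SdFat, SdBaseFile, sd.vwd()); the file name and function are illustrative, error handling is minimal, and it runs only on Arduino hardware with an SD card attached:

```cpp
#include <SdFat.h>

SdFat sd;
SdBaseFile file;

// Sketch only: pre-allocate a contiguous 100 MB file, record into it,
// then trim it to the bytes actually written.
void recordSession() {
  if (!sd.begin(SS, SPI_FULL_SPEED)) return;  // init card and volume

  // Create the contiguous file; fails if REC00.DAT already exists.
  if (!file.createContiguous(sd.vwd(), "REC00.DAT", 100UL * 1024 * 1024)) return;

  // ... recording loop writes data with file.write() here ...

  // Trim the pre-allocation down to what was actually written.
  file.truncate(file.curPosition());
  file.close();
}
```

Pre-allocating up front means the FAT chain never grows during recording, so writes stay on the fast contiguous path; the truncate at the end simply returns the unused clusters.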
Opening files on FAT volumes is slow since the directory entry can be anywhere. There may even be unused entries before the entry to be opened.
My main point is that the end hardware/software product needs to be very agile in playing a series of files with no time gaps.
It must integrate with ISRs for sensors. I have an interrupt-safe file-start function to switch streams from an ISR or normal code, and the call takes about 30 usec.