Okay. I've been testing sending data between Arduino and Max/MSP. I noticed that when I sample at 1000 samples per second the data stream becomes very bursty. In Max/MSP I can see I am receiving the data in 4 KB chunks every ~640 ms. My data packet from the Arduino is 6 bytes long, so that works out to 6 * 8 * 1000 = 48,000 bps (48 kbps). I believe the explanation lies on page 8 here:
http://www.ftdichip.com/Documents/AppNotes/AN232B-04_DataLatencyFlow.pdf
I don't have any more time to spend on this, but it looks like I need to recompile the FTDI driver with a lower USB block request size, or something like that.
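For what it's worth, the numbers roughly line up with the app note's explanation. A quick sanity check (the 62/64 payload ratio comes from the 2 FTDI status bytes per 64-byte USB packet, described in the excerpt below; the 4096-byte block request is the driver default):

```python
# Sketch of the arithmetic behind the observed bursts.
packet_bytes = 6        # Arduino packet size
rate_hz = 1000          # samples per second
payload_rate = packet_bytes * rate_hz      # 6000 bytes/s of payload

# Each 64-byte USB packet carries 62 payload bytes (2 are FTDI status),
# so a 4096-byte USB block request holds 4096 * 62/64 payload bytes.
block_payload = 4096 * 62 // 64            # 3968 bytes

interval = block_payload / payload_rate    # seconds between 4 KB chunks
print(interval)                            # ~0.66 s, close to the observed ~640 ms
```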
3.3 Effect of USB Buffer Size and the Latency Timer on Data Throughput
An effect that is not immediately obvious is the influence that the size of the USB total packet request has on the smoothness of data flow. When a read request is sent to USB, the USB host controller will continue to read 64-byte packets until one of the following conditions is met:
1. It has read the requested size (default is 4 Kbytes).
2. It has received a packet shorter than 64 bytes from the chip.
3. It has been cancelled.
While the host controller is waiting for one of the above conditions to occur, NO data is received by
our driver and hence the user's application. The data, if there is any, is only finally transferred after
one of the above conditions has occurred.
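Those three completion conditions can be sketched as a little model (hypothetical Python, just to illustrate the logic, not how the driver is implemented):

```python
def usb_read_request(packets, request_size=4096, packet_size=64):
    """Model of the host controller loop: accumulate 64-byte packets
    until the requested block size is reached or a short packet arrives.
    `packets` is a sequence of per-USB-packet byte counts."""
    received = 0
    for pkt in packets:
        received += pkt
        if pkt < packet_size:          # condition 2: short packet ends the read
            return received
        if received >= request_size:   # condition 1: requested size reached
            return received
    return received                    # condition 3 (cancel) not modelled

# A steady stream of full packets only returns once 4096 bytes are read;
# a single short packet returns immediately with whatever has accumulated.
print(usb_read_request([64] * 100))    # 4096
print(usb_read_request([64, 10]))      # 74
```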
Normally condition 3 will not occur so we will look at cases 1 and 2. If 64 byte packets are
continually sent back to the host, then it will continue to read the data to match the block size
requested before it sends the block back to the driver. If a small amount of data is sent, or the
data is sent slowly, then the latency timer will take over and send a short packet back to the host
which will terminate the read request. The data that has been read so far is then passed on to the
user's application via the FTDI driver. This shows a relationship between the latency timer, the data
rate and when the data will become available to the user. A condition can occur where if data is
passed into the FTDI chip at such a rate as to avoid the latency timer timing out, it can take a long
time between receiving data blocks. This occurs because the host controller will see 64 byte
packets at the point just before the end of the latency period and will therefore continue to read the
data until it reaches the block size before it is passed back to the user's application.
The rate that causes this will be:
62 / (latency timer) bytes/second
(2 bytes per 64-byte packet are used for status)
For the default values:
62 / 0.016 ≈ 3875 bytes/second ≈ 38.75 KBaud
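In other words, with the default 16 ms latency timer, any stream delivering at least 62 payload bytes per 16 ms window never produces a short packet. The threshold works out as:

```python
# Threshold rate above which the latency timer never fires (FTDI defaults).
payload_per_packet = 62      # 64-byte USB packet minus 2 status bytes
latency_s = 0.016            # default latency timer: 16 ms

threshold = payload_per_packet / latency_s   # bytes/second
print(threshold)                             # 3875.0 bytes/s (~38.75 KBaud)
```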
Therefore if data is received at a rate of 3875 bytes per second (38.75 KBaud) or faster, then the
data will be subject to delays based on the requested USB block length. If data is received at a
slower rate, then there will be less than 62 bytes (64 including our 2 status bytes) available after 16
milliseconds. Therefore a short packet will occur, thus terminating the USB request and passing
the data back. At the limit condition of 38.75 KBaud it will take approximately 1.06 seconds
between data buffers into the user's application (assuming a 4 Kbyte USB block request buffer size).
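That 1.06 s figure falls out directly from the two defaults (a sanity check, assuming the app note's simplified conditions):

```python
# Time between data deliveries at the limit condition (FTDI defaults).
block_request = 4096          # default USB block request size in bytes
rate = 3875                   # bytes/second, i.e. 62 / 0.016 from above

delay = block_request / rate  # seconds between buffers reaching the app
print(round(delay, 2))        # 1.06
```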
To get around this you can either increase the latency timer or reduce the USB block request size. Reducing the USB block request is the preferred method, though a balance between the two may be sought for optimum system response.
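The trade-off can be sketched numerically (a simplified model of the app note's description, with hypothetical names; not driver-accurate):

```python
def worst_case_delay(block_bytes, latency_s, rate):
    """Rough worst-case delay before received data reaches the application.

    If the stream keeps every latency window full (rate >= 62/latency),
    the read only completes once the whole block request is filled;
    otherwise the timer fires and a short packet ends the read.
    """
    if rate >= 62 / latency_s:
        return block_bytes / rate    # block fills before the timer fires
    return latency_s                 # timer fires, data returned early

# A 3875 bytes/s stream with the default 16 ms timer, for a few block sizes:
for block in (4096, 512, 64):
    print(block, round(worst_case_delay(block, 0.016, 3875), 3))
# Smaller block requests give much shorter bursts at the same data rate.
```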