VidorDSP

Hi Loopmasta,
thank you for your input. let's use this thread to define what it would look like...

  1. we can input and output audio in the FPGA through I2S, either passing through the SAM D21 acting as a USB audio device or via an external CODEC (we're thinking about a shield for this). we can also output audio via any FPGA pin, as we have a sigma delta DAC IP running at 100 MHz where you can trade sampling frequency against dynamic range at will (see the sigma delta sketch after this list)

  2. we have SDRAM which can be used to buffer either pre-sampled audio clips or delay lines. SDRAM is not optimal for random access, but with proper caching/buffering in internal RAM this is viable (see the delay line sketch after the list). the memory is clockable up to 140 MHz but in practice we should probably keep it at 100 MHz; that is still a lot of bandwidth, and even accounting for overhead we could read/write up to around 80 MSPS.

  3. for oscillators the easiest thing is a wave lookup table in embedded RAM. each embedded RAM block can hold 512 samples at 16 bit, and we already have a "reader" that can step at a fractional rate so you can play the table back at any frequency you like (see the wavetable sketch after the list). this can generate any waveform, provided 512 samples are enough. we could also use SDRAM for this, but that would be a bit more complex as we'd need buffering to compensate for latency

  4. most logic inside the FPGA can run at high frequency, so a single IP can process multiple channels within a single sample period. for example, if I have a single multiplier running at 100 MHz and my sampling frequency is 50 kHz, I can use that multiplier 2000 times per sample: a 2K tap FIR filter with a single multiplier, or 2K channels mixed each with an individual volume (see the FIR sketch after the list)

  5. to control the whole system we can have a soft processor core running in the FPGA that programs the routing of audio samples across the IP blocks. at audio sampling frequencies this should be relatively trivial (a NIOS processor easily runs at 100 MHz). in parallel the SAM D21 can handle other things such as USB communication, interfacing with sensors/actuators, etc.
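
to make point 1 a bit more concrete, here is a rough behavioral model (plain C++, not the actual IP) of a first-order sigma delta DAC driving a single pin; the 16-bit accumulator width is just an example I picked:

```cpp
#include <cstdint>

// behavioral model of a first-order sigma delta DAC on one FPGA pin.
// the 16-bit accumulator width is an assumption for illustration,
// not necessarily what the actual Vidor IP does.
struct SigmaDeltaDac {
    uint32_t acc = 0;                    // error accumulator

    // call once per 100 MHz clock with an unsigned 16-bit sample;
    // the return value is the 1-bit level to drive on the pin
    bool clock(uint16_t sample) {
        uint32_t sum = acc + sample;     // 17-bit result
        acc = sum & 0xFFFF;              // keep the residual error
        return (sum >> 16) != 0;         // carry bit = pulse out
    }
};
```

averaged over many 100 MHz clocks the density of ones is proportional to the sample value, so a simple RC filter on the pin recovers the audio; at 48 kHz you get roughly 2000 modulator clocks per audio sample, which is exactly where the sampling frequency vs dynamic range trade-off comes from.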
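
for point 2, this is roughly what a delay line in SDRAM could look like, with burst transfers staged through small buffers in internal RAM; the burst length and delay size are made-up numbers:

```cpp
#include <cstdint>
#include <cstring>

// rough model of a delay line kept in SDRAM, accessed in bursts and staged
// through internal block RAM. BURST and DELAY_SAMPLES are illustrative.
constexpr int BURST = 16;                     // samples per SDRAM burst
constexpr uint32_t DELAY_SAMPLES = 1 << 20;   // ~1M samples of delay

int16_t sdram[DELAY_SAMPLES];                 // stand-in for the external SDRAM

struct SdramDelay {
    uint32_t wr = 0;                          // write pointer, burst aligned
    int16_t inBuf[BURST];                     // staging buffers in internal RAM
    int16_t outBuf[BURST];
    int fill = 0;

    // push one input sample, get back the sample delayed by DELAY_SAMPLES
    int16_t process(int16_t in) {
        if (fill == 0)                        // the oldest data sits where we write next
            std::memcpy(outBuf, &sdram[wr], BURST * sizeof(int16_t));
        int16_t out = outBuf[fill];
        inBuf[fill++] = in;
        if (fill == BURST) {                  // a full burst is ready: write it back
            std::memcpy(&sdram[wr], inBuf, BURST * sizeof(int16_t));
            wr = (wr + BURST) % DELAY_SAMPLES;
            fill = 0;
        }
        return out;
    }
};
```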
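
for point 3, the "reader" is essentially a phase accumulator stepping through the table at a fractional rate; something like this, where the 9.23 fixed point split is my assumption:

```cpp
#include <cstdint>
#include <cmath>

// sketch of the wavetable reader: one embedded RAM block holds 512 samples
// at 16 bit, and a fixed point phase accumulator plays it back at any
// fractional rate. the 9.23 split (9 bits index, 23 bits fraction) is assumed.
constexpr int TABLE_LEN = 512;
int16_t wavetable[TABLE_LEN];

struct WavetableOsc {
    uint32_t phase = 0;                       // 9 bits index + 23 bits fraction
    uint32_t step  = 0;

    void setFrequency(double f, double fs) {
        step = (uint32_t)(f / fs * 4294967296.0);   // f/fs of a full 2^32 cycle
    }

    int16_t next() {
        uint32_t idx  = phase >> 23;                          // table index, 0..511
        int64_t  frac = phase & 0x7FFFFF;                     // fractional part
        int32_t a = wavetable[idx];
        int32_t b = wavetable[(idx + 1) & (TABLE_LEN - 1)];
        phase += step;                                        // wraps naturally at 2^32
        return (int16_t)(a + (((b - a) * frac) >> 23));       // linear interpolation
    }
};

// example: load one cycle of a sine and play it back at 440 Hz with fs = 48 kHz
void setupSine(WavetableOsc& osc) {
    for (int i = 0; i < TABLE_LEN; i++)
        wavetable[i] = (int16_t)(32767.0 * std::sin(2.0 * 3.14159265358979 * i / TABLE_LEN));
    osc.setFrequency(440.0, 48000.0);
}
```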
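
and for point 4, a behavioral model of reusing a single multiplier 2000 times per sample (the Q15 coefficient format is just an example):

```cpp
#include <cstdint>

// at 100 MHz and a 50 kHz sample rate there are 2000 clocks per sample, so the
// inner loop below stands in for 2000 sequential uses of one hardware MAC.
constexpr int TAPS = 2000;
int16_t coeff[TAPS];                 // filter coefficients in Q15
int16_t delayLine[TAPS];             // the last TAPS input samples

int16_t firOneSample(int16_t in) {
    // in hardware the delay line would live in a RAM and only a pointer would
    // move; the shift here is just for clarity
    for (int i = TAPS - 1; i > 0; i--) delayLine[i] = delayLine[i - 1];
    delayLine[0] = in;

    int64_t acc = 0;
    for (int i = 0; i < TAPS; i++)   // each iteration == one 100 MHz clock
        acc += (int32_t)coeff[i] * delayLine[i];
    return (int16_t)(acc >> 15);     // back to Q15 scale (no saturation here)
}
```

the same structure mixes 2K channels each with its own volume: replace the delay line with the per-channel samples and the coefficients with the per-channel gains.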

thinking about the way it could work, the core of this system could be an SDRAM scheduler which basically feeds each IP block with packets of data and stores the output packets. each packet would be a "channel", and defining which channel goes to which IP in which sequence would define the routing, just by swapping buffer addresses (rough sketch below). passing through SDRAM means we have to work in bursts, so "sample accurate" may have to be approximated to the burst size... let me know if this would be acceptable..
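
a very rough sketch of the scheduler idea (every name here is made up, and the "IP block" is just a volume control to keep it short):

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// channels live as burst-sized buffers in SDRAM; a routing table says which
// buffer is fed to which IP and where the result goes.
constexpr int BURST = 16;                 // scheduler granularity in samples
int16_t sdram[1 << 20];                   // stand-in for the external SDRAM

struct RouteEntry {
    uint32_t srcAddr;                     // input channel buffer (sample address)
    uint32_t dstAddr;                     // output channel buffer
    int16_t  gainQ8;                      // toy "IP block": just a volume here
};

// one pass moves one burst of samples through every entry; re-routing the
// whole system is just rewriting srcAddr/dstAddr in this table
void schedulerTick(RouteEntry* table, size_t entries) {
    int16_t burst[BURST];
    for (size_t i = 0; i < entries; i++) {
        std::memcpy(burst, &sdram[table[i].srcAddr], sizeof burst);  // read burst
        for (int s = 0; s < BURST; s++)                              // "process" it
            burst[s] = (int16_t)((burst[s] * table[i].gainQ8) >> 8);
        std::memcpy(&sdram[table[i].dstAddr], burst, sizeof burst);  // write burst
    }
}
```

routing is then just a matter of rewriting addresses in the table, and "sample accurate" becomes "burst accurate".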
let's say then that with 80 MSPS and a 40 kHz sampling rate we have 2K channels. these have to be divided by two because each channel has to be both read and written, so it's 1K channels, which means we may have up to 1K "virtual" processing blocks running in parallel. these are rough estimates but they give you an idea of what can be done...

now the fire is started... let's keep discussing...