Programming ADC Registers in RP2040 Connect

Looking at past topics I see a question from Jul '21 asking whether macros for reconfiguring the ADC registers exist. The response was an obnoxious rant about how the person should not be programming registers and how the RP2040 Connect was intended for communication, not ADC. Okay... someone was a bit testy.

Hoping to get a slightly less obnoxious answer here. I've tried everything I can think of to improve the sample rate of the ADC. I've basically got a tight loop that keeps executing analogRead on a single pin and stuffs the result into an array. I cannot imagine getting faster than this, yet the best sample rate I can get is about 80k samples/sec.
According to the RP2040 Connect datasheet, the sample rate at 48 MHz should be around 500k samples/sec. I want to understand how the mbed OS is programming the registers in the RP2040 and whether there is any way to reconfigure them for faster performance.

Any help would be appreciated. I've been crawling through the mbed OS source code and I see places where there are functions to configure various bits in the CS register. But it's not clear how these functions are accessed from the Arduino IDE.

From what I can figure out, the RP2040's ADC clock runs at 48 MHz, derived from the USB PLL, and the ADC takes 96 of those clock cycles to perform one conversion. The conversion time is therefore 96 / 48 MHz = 2 μs per sample (500 kS/s), so your numbers are correct. After each conversion you need to unload the result and restart the converter, and each of those operations takes CPU time that effectively adds to the A/D conversion time. The best speed would come from doing the critical unload-and-restart work in tight assembler code. However, the part does have a DMA controller that can unload the A/D result, store it, and let the A/D keep running. Here is a link that may help: ADC Sampling and FFT on Raspberry Pi Pico - Hackster.io
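
If the Pico SDK headers are reachable from your sketch (I haven't verified that on the Arduino mbed core for the Nano RP2040 Connect; it definitely works on the earlephilhower core), the usual way to get the full 500 kS/s is to put the ADC in free-running mode and let DMA drain the FIFO. A rough sketch of the idea, with placeholder pin and buffer names:

    #include "hardware/adc.h"
    #include "hardware/dma.h"

    #define DATAWINDOWSIZE 1024            // same idea as your buffer
    static uint16_t raw_data_buffer[DATAWINDOWSIZE];

    void capture_block() {
      adc_init();
      adc_gpio_init(26);                   // ADC0 on GPIO26 -- substitute whatever probe0 maps to
      adc_select_input(0);

      // FIFO: enabled, assert DREQ when >=1 sample, no error bit, no byte shift
      adc_fifo_setup(true, true, 1, false, false);
      adc_set_clkdiv(0);                   // 0 = back-to-back conversions, 2 us each (500 kS/s)

      int chan = dma_claim_unused_channel(true);
      dma_channel_config cfg = dma_channel_get_default_config(chan);
      channel_config_set_transfer_data_size(&cfg, DMA_SIZE_16);
      channel_config_set_read_increment(&cfg, false);   // always read the ADC FIFO
      channel_config_set_write_increment(&cfg, true);   // walk through the buffer
      channel_config_set_dreq(&cfg, DREQ_ADC);          // pace transfers off the ADC

      dma_channel_configure(chan, &cfg,
                            raw_data_buffer,            // write address
                            &adc_hw->fifo,              // read address
                            DATAWINDOWSIZE,             // transfer count
                            true);                      // start immediately

      adc_run(true);                                    // free-running conversions
      dma_channel_wait_for_finish_blocking(chan);
      adc_run(false);
      adc_fifo_drain();
      dma_channel_unclaim(chan);
    }

Because the channel is paced by DREQ_ADC, the CPU never touches the data path; the conversions just run back-to-back and the DMA stores each result as it lands in the FIFO.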

Thanks for the response. Below is a snippet of my code:

    for (int i = 0; i < DATAWINDOWSIZE; i++) {
      raw_data_buffer[raw_data_ptr++] = analogRead(probe0);   // advance the buffer index each sample
    }

You are right, there is some overhead. Checking the mbed OS source code, analogRead is essentially a read of the upper 16 bits of a register plus a shift for precision. That cannot be more than a few clocks (probably 2). I would THINK the compiler unrolls this loop since the loop bound is a constant (DATAWINDOWSIZE); assuming it doesn't, that's a few more clocks for the increment and the compare-and-branch. I cannot imagine the overhead per data point is more than, say, 6 clocks. At 133 MHz that's much smaller than the 96 48 MHz clocks it takes to do the ADC conversion. Yet I'm only getting 80k samples/sec, which is about 12.5 μs per sample, so the overhead is taking more than 10 μs. That would mean the overhead is something like 1300 133 MHz clocks!

I suspect that the mbed OS is programming the ADC in continuous mode and using an interrupt to pull the data. There may be some penalty for crossing from the 133 MHz to the 48 MHz clock domain as well. Or perhaps it sets the sample frequency in continuous mode to something other than the maximum.

I would really like to understand how it's actually programming the CS register. I just can't find it in the source code.
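
If the SDK register structs turn out to be visible from a sketch on this core (that's an assumption on my part, not something I've confirmed), what I'd like to try is writing CS directly, roughly like this, using adc_hw and the ADC_CS_* masks from hardware/regs/adc.h to start free-running conversions and then just reading RESULT:

    #include "hardware/adc.h"   // brings in adc_hw and the ADC_CS_* register masks

    void adc_start_free_running() {
      adc_init();                                          // enables the ADC and waits for READY
      hw_write_masked(&adc_hw->cs,
                      0u << ADC_CS_AINSEL_LSB,             // mux channel 0 (GPIO26)
                      ADC_CS_AINSEL_BITS);
      adc_hw->div = 0;                                     // no divider: back-to-back conversions
      hw_set_bits(&adc_hw->cs, ADC_CS_START_MANY_BITS);    // START_MANY = continuous mode
    }

    uint16_t adc_latest_result() {
      return (uint16_t)adc_hw->result;                     // most recently completed conversion
    }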

Thanks for the help! I'll check out that link you gave me.

Many years ago I had to read an 8" DD floppy disk with an 8080; everyone was sure it could not be done without DMA. It wasn't: I did 16 load-store sequences with a hardware wait state until the next read was ready. There was very little headroom, but it worked great in a lot of CP/M boxes. After sixteen reads we had some time to reload the counters, etc. There was a fail-safe timer in case it locked up.
