
Topic: Higher Precision scope (Mac based) (Read 12577 times)



First, I was very impressed by this scope. I don't have a Mac, but from what I saw in the Flash demo, it looks fantastic!

I have a problem. I wanted to create a similar front-end to yours, but on my Windows machine.

When I'm running this Arduino code on my Arduino Pro, I don't get more than 200-250 kbps between the Arduino (through the FTDI chip) and my Java application.

Maybe you have more experience here - what's wrong with this code? (It's a subset of your lines:)

void setup() {
       Serial.begin(1000000);    // open the UART first
       UCSR0A |= 2;              // set U2X0: double-speed UART mode
       // send a message to let the computer know you are alive
       Serial.print("Hello World");
       while (true) {
               // wait until the UART data register is ready for the next byte
               while (!(UCSR0A & (1 << UDRE0)))
                       ;
               //Serial.write((high2 << 5) | (low2 >> 3) | 0x80);
               UDR0 = 0xff;      // write a byte straight to the UART data register
       }
}

Is there anything special that needs to be set in the port's properties, or anything else?



Wait, how are you getting 2 Mbps serial? It seems to me the max the Arduino can handle is 1 Mbps: UBRR = 16000000/(16*1000000) - 1 = 0. What is this UCSR0A register? Thanks!


Jun 12, 2010, 10:21 am Last Edit: Jun 12, 2010, 10:24 am by selfonlypath Reason: 1
I have an iMac G5 running Tiger OS and have successfully implemented Alvaro's open source: the maximum I can get is 115K with an Arduino Mega via the Mac-based Java GUI. It is my understanding that the Arduino does not have real USB but an RS232 link with USB emulation via the FTDI chip, so the bandwidth bottleneck is due to the RS232 side.


The bottleneck should be in the UART itself: it takes 16 clock cycles to send one bit through the UART, so the theoretical maximum is 1,000,000 bps. I suppose you could get a little higher than that because the serial is re-synced every byte, but to go much beyond it I think you would have to do some clever bit-banging of the serial. Of course, this guy could be using some trick I've never heard of...


Setting the U2X0 bit high in the UCSR0A register halves the divider, so it goes from 16 to 8, and you can get 2 Mbps.

But the Arduino serial library already handles that by default, so you could just do Serial.begin(2000000) from the start. (I have never tested it.)
The Atmel datasheet even states a 0.0% error rate at 2 Mbps @ 16 MHz, so that's nice as well.


Oh, very useful to know. I wonder if this works on the attiny2313 as well...


I was not going to reply to this, but here it goes.

Most serial errors occur during reception, not transmission. RS232 is asynchronous, meaning the clock is not sent along with the data. Although both clocks can run at the same frequency with high accuracy, their phase difference is unknown. This can lead to metastability problems (errors). To overcome that, most UARTs oversample on the receiver side (typically 16x). This means that for each bit being extracted, you sample 16 times. If your clock is 16 MHz, your bit rate can hardly go over 1 Mbit/s if you want a clean transmission.

Let me try to explain how the UART receiver works.

As you know, RS232 transmission uses 1 start bit, 7-8 data bits, and 1-2 stop bits, and it might use parity. Let's concentrate on 8N1 (8 data bits, no parity, 1 stop bit).

So the receiver sits idle, sampling 16x faster than its baud rate, and reads a logical '1' on the wire (the idle state). It keeps going until it reads a '0' - the start bit. Here it "synchronizes" its clock.
Let's assume the transmission looks like this: 0 01010101 1 (start bit and stop bit included).

If the start bit's edge arrived at sampling tick N, then the first data bit will be sampled exactly in the middle of its bit period - after 16 + (16/2) ticks, i.e. 1.5 bit times from the edge (this includes the start bit). This gives the best setup and hold margins (setup time is how long a signal must be stable before being sampled; hold time is how long it must stay stable afterwards). The remaining bits are then sampled every 16 ticks.

If your baud rate does not allow for proper center alignment, then you won't meet either setup time or hold time. When you are sampling alternating zeroes and ones you could try resynchronizing the clock phase on each edge, but that is not usually done.

At higher baud rates, capacitance on the transmission lines starts to show, greatly eroding the setup and hold margins. So you need a lot of precision when sampling the data bits.

Hope I shed some light on the subject. Feel free to ask for clarifications.



At some point, it is best to think in terms of Information Theory (Claude Shannon), where you look at bit rate (bits/s), bit error rate (e.g. 10^-5), bandwidth, energy (Eb/N0)... If you send at 1 Mbit/s but get many errors (e.g. 10^-2), it is no use. What Alvaro is doing: CRC-16 plus a smart acknowledge, resending when an error is detected. Over a raw channel of, say, 115 kbit/s, this will in fact deliver far fewer bits/s (say 64 kbit/s), but almost error free (10^-5) - his protocol takes care of the corrupted 115 kbit/s link at the lower layers of the ISO model. In conclusion, it's best to compare apples with apples and oranges with oranges: ask what bits/s you get at a fixed bit error rate, normalized for the overhead bits (CRC, resent frames).


I am bumping this, because it is an existence proof that relatively high sample rates (and serial data rates) are possible. It seems to be a common misconception here that higher ADC rates aren't possible, and the same goes for high serial data rates.

A 360 kHz sample rate is possible (if you raise the ADC clock to 8 MHz), as is a 2 Mbit/s serial rate.


Of course you can use those rates.

But I would not trust ADC at that speed.

2 Mbit/s serial is possible, but I am not sure your AVR can handle it if you're using interrupts. If the FTDI chip's RX FIFO were a bit larger, then I guess you could use it.

selfonlypath did a lot of testing at higher baud rates; he found the error rate too high. I'm using 3 Mbit/s on ZPUino without many errors (also with an FTDI chip), but I have a hardware 2 kbyte FIFO. It helps a lot.


Jan 10, 2011, 10:20 pm Last Edit: Jan 10, 2011, 10:21 pm by Robotbeat Reason: 1
Well, you shouldn't trust any ADC you haven't calibrated and tested yourself. For 3 or 4 bits of precision, I bet it's pretty close to within 1 LSB. Though you don't know until you test it.

As for the error rate... I bet that could be fixed if you worked at it long enough, but losing 0.05% of the data stream is acceptable for some applications, like basic audio work or a basic oscilloscope.


Why is an ADC inherently less reliable at higher speeds? Also, Robotbeat, I've never heard of anyone calibrating an ADC on an AVR... I think if you get to the point where that much precision matters, you should probably use a discrete, dedicated ADC.


Because there are various types of ADC circuitry, and the one used in the AVR chips is a successive-approximation converter with a sample-and-hold stage: a capacitor must charge up to the input voltage before the conversion starts. If you want to sample very fast, that capacitor has less time to charge through your source impedance. Unless the source can supply enough current, the cap won't settle at the true input voltage, and you lose effective ADC resolution.
