Hello
I’m currently learning about clocks and timing, using a stock 5V Uno rev3 (16 MHz) and an ICSP-programmed bare ATmega328P.
I hope someone can enlighten me, as I am somewhat baffled by what I see.
Stock UNO clock and instruction duration
Using the stock Uno 16 MHz crystal, at one instruction per cycle, each instruction should take 62.5 ns. Using direct port manipulation, I see (on a Rigol scope) that each port write takes 125 ns...
QUESTION A: why does a single instruction seem to take 2 clock cycles?
Using Timer1 with no prescaler, I time my code as in the sketch below.
What I see and understand:
- bitSet/bitClear should be 1 cycle each (direct AND/OR on the I/O address)
- bitRead should be 3 cycles (copy the I/O address, AND, shift)
- in total: 12 * 1 cycle + 2 * 3 cycles = 18 cycles
- but the serial monitor prints 38
- my guess: 38 = 18 * 2 + 2
- the "* 2" could be because each instruction takes 2 cycles (as per Question A above)
- the "+ 2" could come from stopping the timer at the end (1 instruction, 2 cycles, as per Question A)
QUESTION B: where can I find the "readable" assembly code, to verify my timing analysis?
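In case it helps frame the question: my understanding is that avr-objdump (part of the avr-gcc toolchain the IDE bundles) can disassemble the compiled sketch. The .elf path below is only a placeholder; I don't know where my IDE's temporary build folder actually is, which is partly why I'm asking.

```shell
# Disassemble the compiled sketch (path is a placeholder; the IDE's
# temporary build folder location varies by OS and IDE version).
avr-objdump -d /path/to/sketch.ino.elf > sketch.lst

# With -S, interleave the C source into the disassembly (needs debug info).
avr-objdump -d -S /path/to/sketch.ino.elf > sketch_annotated.lst
```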
Bare 328P chip (ICSP/SPI programmed)
I read in boards.txt that this entry programs CKDIV8 and sets CKSEL=0b0010 (internal 8 MHz RC):
atmega328.menu.clock.internal1=Internal 1 MHz
atmega328.menu.clock.internal1.bootloader.low_fuses=0x62
atmega328.menu.clock.internal1.bootloader.high_fuses=0xdb
atmega328.menu.clock.internal1.bootloader.extended_fuses=0xfd
atmega328.menu.clock.internal1.build.f_cpu=1000000L
In this configuration the CPU clock is 1 MHz, and each instruction should take 1 µs.
Using direct port manipulation, I see that each port write takes 2 µs...
So Question A still applies.
I also have a test LED blinking with delay(): 1 s on, 1 s off.
In this configuration it blinks right on time.
Then I burned the code again using "Tools / Clock / Internal 8 MHz" (instead of 1 MHz).
According to boards.txt this should unprogram CKDIV8 and keep CKSEL on the internal 8 MHz RC:
atmega328.menu.clock.internal8=Internal 8 MHz
atmega328.menu.clock.internal8.bootloader.low_fuses=0xe2
atmega328.menu.clock.internal8.bootloader.high_fuses=0xdb
atmega328.menu.clock.internal8.bootloader.extended_fuses=0xfd
atmega328.menu.clock.internal8.build.f_cpu=8000000L
After that, using direct port manipulation, each port write still takes 2 µs!
QUESTION C: why does it seem like CKDIV8 is still active? I should be running at 8 MHz!
And the LED now blinks 8x slower (8 s on, 8 s off)... so delay() is misbehaving. That leads me to believe that F_CPU is taken into account "in code" (at compile time) even when the actual hardware clock doesn't match.
QUESTION D: how can I fix this "software" side effect (delay() and the like)?
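One workaround I'm considering, but have not tested: a sketch based on the datasheet's system clock prescaler (CLKPR) description. The idea is that if F_CPU is set to 8 MHz at build time, I can force the hardware prescaler to /1 very early at startup so hardware and software agree, regardless of the CKDIV8 fuse. The timed sequence (CLKPCE first, then the prescaler bits within 4 cycles, interrupts off) is required by the hardware:

```cpp
#include <avr/io.h>
#include <avr/interrupt.h>

// Call once, as early as possible in setup(): override the CKDIV8
// startup prescale so the CPU really runs at the full 8 MHz RC clock.
static void clock_prescale_to_1() {
  uint8_t oldSREG = SREG;
  cli();                  // the two CLKPR writes must not be interrupted
  CLKPR = _BV(CLKPCE);    // enable a prescaler change
  CLKPR = 0;              // prescaler = 1 (must follow within 4 cycles)
  SREG = oldSREG;
}
```

If I understand correctly, avr-libc also ships clock_prescale_set(clock_div_1) in <avr/power.h>, which wraps this same sequence. Of course this only makes the clock match F_CPU; it doesn't explain why the fuse itself didn't change.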
Thanks in advance for your feedback, and have a nice day!
Below is the test code I used for this experiment.
#define DURATION_MS 1000
void setup() {
  Serial.begin(9600);
  // scope output on UNO D9
  bitSet(DDRB, PB1);
  // dummy input on UNO D10
  bitClear(DDRB, PB2);
  // led output on UNO D13
  bitSet(DDRB, PB5);
}
uint8_t foo;
uint8_t bar;
uint16_t count;
void loop() {
  // start timer
  TCCR1A = 0;
  TCCR1B = 0;
  TCCR1C = 0;
  TCNT1H = 0;  // write high byte first (16-bit register, temp-buffered)
  TCNT1L = 0;
  TIFR1 = bit(TOV1);  // flag bits are cleared by writing a 1 to them
  TCCR1B = 1;  // CS10: clock/1, timer running
  // scope timing
  bitSet(PORTB, PB1);
  bitClear(PORTB, PB1);
  bitSet(PORTB, PB1);
  bitClear(PORTB, PB1);
  // reading input
  foo = bitRead(PINB, PB2);
  // scope timing
  bitSet(PORTB, PB1);
  bitClear(PORTB, PB1);
  bitSet(PORTB, PB1);
  bitClear(PORTB, PB1);
  // reading input
  bar = bitRead(PINB, PB2);
  // scope timing
  bitSet(PORTB, PB1);
  bitClear(PORTB, PB1);
  bitSet(PORTB, PB1);
  bitClear(PORTB, PB1);
  // stop timer
  TCCR1B = 0;
  count = TCNT1;  // 16-bit read; avr-gcc reads the low byte first, as required
  Serial.print("tov1=");
  Serial.print(bitRead(TIFR1, TOV1));
  TIFR1 = bit(TOV1);  // clear the overflow flag by writing 1
  Serial.print(" count=");
  Serial.println(count);
  // dummy code so that reads are not "optimized away" by the compiler
  foo += bar;
  // visual delay duration check
  bitSet(PORTB, PB5);
  delay(DURATION_MS);
  bitClear(PORTB, PB5);
  delay(DURATION_MS);
}
