I am facing a dilemma where I need to run a specific function, say, every 200 ms.
200 ms? Or 200.0 ms? Or 200.000 ms? The answer depends on how accurately you need to hit that 200.
If you just need it to the millisecond then you can probably catch it with millis and avoid the overhead of a timer interrupt.
If you need it to the microsecond then you have no choice but to use a 16-bit Timer1 interrupt.
1. Which one is the most accurate way: timer interrupt or millis or others?
The timer interrupt is more accurate as far as firing as close as possible to the target time. But how fast are your encoders really pinging? If you have a couple of ms of lag, maybe you're off by one count.
2. Is there a big difference between the two?
Overhead. An ISR will mess with the timing of your other functions. You'll have to use critical sections to access the shared data. The code is more involved. The rule of thumb with interrupts is: use them when you have to and avoid them when you can.
3. And why is it the most accurate? I would appreciate some insight under the hood.
You can clock the timer from the system clock at 1:1, so on a 16 MHz board you get 62.5 ns resolution. The interrupt fires immediately (as long as an interrupt with higher priority isn't already running). So it can usually catch things the quickest, to within a microsecond or two at worst with good code.
With millis you have to wait for the code to come back around to that statement. If you write good code you can keep that time to a few hundred microseconds, but it's still a tiny amount of lag. For the purpose of reading encoders that aren't being spun by geared-up jet engines or something, you can probably afford that.
4. Is it possible to analyse the loop execution time through some code?
Not without affecting it a little. And sometimes the effects aren't completely intuitive: adding your timing code can cause the compiler to rethink its optimization entirely, and you get completely different code. But most of the time, adding a loop counter to count a few thousand loops and calling micros before and after will get you a pretty good average.
The best way I've come up with is a simple pin toggle on an output pin, done by writing to the input register (look up "Arduino Port Manipulation"), at the top or bottom of loop(). Then I can watch that pin with my scope, or even another Arduino, and get a pretty good idea that way. That almost never affects the timing of the rest of the code, since it is a single register write right at the beginning or end of the function.