Assuming you're using my core, they're implemented the same way they usually are. delay() is two nested loops: the outer one records micros(), then enters the inner loop, which checks micros() until at least 1000 have elapsed, then subtracts one from the remaining millis of delay and adds 1000 to the recorded micros (that's what it should be doing, at least; I'll make sure that's what it's actually doing for 2.0.0). Thus, if interrupts fire during a delay, they do not change the time the delay takes to run, as long as they aren't so long that they break millis. So if you have a delay(1000) and during that time an ISR fires 1000 times taking 50 us each time, the delay will still end about 1 second after you called it. If you violate the conditions documented for proper timekeeping and instead have a terribly written ISR that fires only ten times in that 1000 ms, but each of those takes an insane 5 ms to return, millis will lose about 40 ms (each 5 ms blockage misses roughly 4 millis ticks beyond the one that stays pending in the interrupt flag), and the delay will be more like 1040 ms.
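For reference, this is the shape of the stock Arduino AVR implementation, which mine follows (simplified sketch, not copied verbatim from any core):

```c++
void delay(unsigned long ms) {
  unsigned long start = micros();   // record the starting microsecond count
  while (ms > 0) {
    yield();
    while (ms > 0 && (micros() - start) >= 1000) {
      ms--;                         // one full millisecond has elapsed
      start += 1000;                // advance the reference point
    }
  }
}
```

Because it keeps re-reading micros(), time spent in ISRs still counts toward the delay; the loop just catches up when control returns.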
delayMicroseconds() is very, very different: it is a pure cycle-counting loop written in assembly. That means the time does not tick down while an ISR is running, so the delay stretches unless you disable interrupts. Say delayMicroseconds(1000) gets interrupted 100 us into the delay by an ISR that takes 750 us to run (again, a poorly-written-ISR extreme example for the sake of making the effect more visible). After the ISR returns, the remaining 900 us of the delay still runs, so the total comes out to about 1750 us of wall-clock time.
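The core of it is the classic four-cycle busy loop from the stock AVR core (the per-clock-speed scaling of the argument around it is what varies between cores):

```c++
// Each pass takes exactly 4 CPU cycles (sbiw = 2, brne taken = 2), so at
// 16 MHz one iteration is 0.25 us and the caller multiplies us by 4 before
// entering. An ISR firing here steals cycles without decrementing the
// counter, which is why the delay stretches.
__asm__ __volatile__ (
  "1: sbiw %0, 1" "\n\t"  // subtract 1 from the 16-bit counter: 2 cycles
  "brne 1b"               // loop while counter != 0: 2 cycles when taken
  : "=w" (us)
  : "0" (us)
);
```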
That's the current implementation I'm using on mTC and DxC, and I think it's the same scheme on ATTC. It's derived from the original version, with a few fixes for very short delays in delayMicroseconds(), plus added assembly blocks to handle the weird speeds we support. The first issue happens when delayMicroseconds() is called only a very small number of times: LTO will inline the function. Great - except that the time the call and return instructions take is counted as part of the delay, so when it gets inlined, one call (3 clocks on AVRxt, 4 on AVRe) and one ret (4 clocks) are lost - potentially a whole MICROSECOND. That doesn't really matter when you delay for 100 us, but when you need a 5 us delay it does: the same call gives you 4 us or 5 us depending on decisions the linker makes at the end of the compile process.

The LTO fix is definitely not in a released version of ATTC. The fix is turning the classic variable delay into a non-user-visible _delayMicroseconds() with the noinline attribute, and then having delayMicroseconds() itself be an always_inline stub that uses __builtin_constant_p(delay): if the delay is constant, it passes it to _delay_us() from <util/delay.h>, which is perfect but requires the delay to be a compile-time constant; otherwise it passes it to _delayMicroseconds(). The stub function gets optimized out by LTO - which, while optional in 1.5.x, is mandatory in 2.0.0, because there we can guarantee the compiler supports it (it will pull in the Azduino6 or Azduino7 version of the toolchain), and a huge number of the improvements in 2.0.0 depend on LTO. LTO doesn't just give a 5-25% reduction in code size; it also lets you write non-preprocessor if statements that test whether an argument is constant, even when the constant comes from another compilation unit. That's what makes it possible to give compile errors when non-existent pins are used, and the like.
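In sketch form, the stub scheme looks something like this (the function names match the description above, but the out-of-line body is an illustrative placeholder, not the actual core source):

```c++
#include <util/delay.h>     // _delay_us(): cycle-perfect, but needs a constant

// The real counting loop lives out-of-line; noinline guarantees the
// calibrated call/ret overhead always happens, whatever LTO decides.
__attribute__((noinline)) void _delayMicroseconds(unsigned int us) {
  // placeholder body standing in for the assembly counting loop
  while (us--) {
    _delay_us(1);
  }
}

// The user-facing function is a zero-cost dispatcher that LTO folds away.
__attribute__((always_inline)) static inline void delayMicroseconds(unsigned int us) {
  if (__builtin_constant_p(us)) {
    _delay_us(us);            // compile-time constant: perfect accuracy
  } else {
    _delayMicroseconds(us);   // runtime value: the out-of-line loop
  }
}
```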
A while ago, someone put a huge amount of work into making sure that bizarro clock speeds had essentially zero calculation drift, because he was using 18.whatever MHz crystals (the kind chosen for clean USART baud rates) and needed them to keep time well. Previously, timekeeping at those speeds was less precise because of integer math truncation.
The CORRECT_EXACT_MILLIS stuff makes what's going on in millis harder to follow on ATTC, but it is known to work, and it keeps millis accurate at whackjob clock speeds.
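To see where the drift came from, here's the stock Arduino fractional-millisecond bookkeeping (this is the standard wiring.c scheme, not the CORRECT_EXACT_MILLIS code itself). The >> 3 shift is exactly where precision gets thrown away at oddball clocks; the exact-millis work amounts to doing this same error accumulation without discarding those bits:

```c++
#include <avr/interrupt.h>
#include <Arduino.h>   // for clockCyclesToMicroseconds()

// Each timer overflow adds a whole number of ms plus a fractional
// remainder, carried Bresenham-style so the error cancels over time.
#define MICROSECONDS_PER_TIMER0_OVERFLOW (clockCyclesToMicroseconds(64 * 256))
#define MILLIS_INC (MICROSECONDS_PER_TIMER0_OVERFLOW / 1000)
// the >> 3 squeezes the fraction into a byte - and silently drops the low
// bits, which is the source of the drift at clocks where they are nonzero
#define FRACT_INC ((MICROSECONDS_PER_TIMER0_OVERFLOW % 1000) >> 3)
#define FRACT_MAX (1000 >> 3)

volatile unsigned long timer0_millis;
static unsigned char timer0_fract;

ISR(TIMER0_OVF_vect) {
  unsigned long m = timer0_millis;
  unsigned char f = timer0_fract;
  m += MILLIS_INC;
  f += FRACT_INC;
  if (f >= FRACT_MAX) {   // fractional error reached a full millisecond
    f -= FRACT_MAX;
    m += 1;               // carry it into the millisecond count
  }
  timer0_millis = m;
  timer0_fract = f;
}
```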