This is actually quite interesting.
delay() relies on micros() to work: it keeps reading micros() and uses the elapsed time to decide when enough milliseconds have passed.
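Roughly, delay() in the AVR core looks like the sketch below (simplified; yield() and some housekeeping left out):

```cpp
// Simplified sketch of the core's delay(): it just watches micros()
// and counts off 1000 us chunks until the requested milliseconds are gone.
void delay_sketch(unsigned long ms)
{
    unsigned long start = micros();
    while (ms > 0) {
        // every time another 1000 us of "elapsed time" shows up, one ms is done
        while (ms > 0 && (micros() - start) >= 1000) {
            ms--;
            start += 1000;
        }
    }
}
```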
micros() relies on the timer0 overflow flag to work: it reads the overflow count and TCNT0, and if the timer0 flag is set it assumes one more overflow (roughly a millisecond: 1024 µs at 16 MHz) has happened that the timer0 ISR hasn't booked yet, and adjusts the result accordingly. For that snapshot to be consistent it has to disable interrupts while it reads, knowing that once you return from micros() and interrupts come back on, the pending timer0 ISR runs, bumps the overflow count, and the flag gets cleared.
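For reference, a simplified sketch of what micros() does on a 16 MHz AVR (names taken from the core's wiring.c, details trimmed):

```cpp
extern volatile unsigned long timer0_overflow_count;  // bumped by the timer0 overflow ISR

// Simplified sketch of micros(): timer0 runs with prescaler 64, so 4 us per tick
// and 256 ticks = 1024 us per overflow.
unsigned long micros_sketch(void)
{
    unsigned long m;
    uint8_t oldSREG = SREG, t;

    cli();                      // freeze things so the snapshot is consistent
    m = timer0_overflow_count;  // overflows counted so far (by the timer0 ISR)
    t = TCNT0;                  // current tick within this overflow period
    if ((TIFR0 & _BV(TOV0)) && (t < 255))
        m++;                    // flag pending: an overflow the ISR hasn't counted yet
    SREG = oldSREG;             // restore interrupts; outside an ISR the pending
                                // timer0 ISR now runs and the flag is cleared

    return ((m << 8) + t) * (64 / clockCyclesPerMicrosecond());
}
```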
The side effect of this is that inside an ISR, interrupts stay disabled: when you exit micros() you are still in the ISR, the timer0 ISR never runs, the overflow count never advances, and the timer0 flag is never cleared. So once the timer0 flag is set, every call to micros() thinks another ~1 ms overflow is pending, even though the count underneath never moves, and delay()'s elapsed-time arithmetic falls apart.
What you observe is that delay() in an ISR runs very fast: it gets roughly the first millisecond (the first 256 timer ticks) right, but after that, "time" flies, literally.
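If you want to see the symptom directly, here is a hypothetical little demo (the button on pin 2 and the built-in LED are my assumptions): the "one second" flash inside the ISR lasts only a couple of milliseconds on an Uno.

```cpp
const byte intPin = 2;  // assumed wiring: a button from pin 2 to ground

// Runs with interrupts disabled, so delay(1000) returns almost immediately.
void fastDelayDemo() {
    digitalWrite(LED_BUILTIN, HIGH);
    delay(1000);                     // should hold the LED on for a full second...
    digitalWrite(LED_BUILTIN, LOW);  // ...but turns it off after a millisecond or two
}

void setup() {
    pinMode(LED_BUILTIN, OUTPUT);
    pinMode(intPin, INPUT_PULLUP);
    attachInterrupt(digitalPinToInterrupt(intPin), fastDelayDemo, FALLING);
}

void loop() {}
```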
They could have programmed it differently to avoid this problem. For example, they could have used a static variable to record TCNT0 the last time micros() was called and compare it against the current reading of TCNT0 to decide whether the timer has overflowed (sketched below). That approach does not rely on the timer0 flag being set and cleared, but it requires micros() to be called frequently.
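Something like this, just as a sketch of the idea (not core code):

```cpp
// Track overflows by watching TCNT0 wrap between calls instead of reading TOV0.
// Only works if this gets called at least once per overflow period (1024 us at 16 MHz),
// otherwise missed wraps are undercounted.
unsigned long micros_no_flag(void)
{
    static unsigned long overflows = 0;
    static uint8_t last_t = 0;

    uint8_t oldSREG = SREG;
    cli();
    uint8_t t = TCNT0;
    if (t < last_t)      // counter went backwards: it must have wrapped since last call
        overflows++;
    last_t = t;
    SREG = oldSREG;

    return ((overflows << 8) + t) * (64 / clockCyclesPerMicrosecond());
}
```

The trade-off is exactly the one noted above: the flag-based version tolerates long gaps between calls (the ISR keeps the count honest), while this one pushes the bookkeeping into micros() itself and breaks if a whole overflow period slips by between calls.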