What it actually does is catch a transition followed by a stable period of 7 regular intervals, with the intervals chosen to add up to longer than any expected bounce plus a margin. Dirty buttons would need a longer interval, but for now a transition is confirmed 3.5 ms after the last bounce and no sooner.
It does not consider how long the bouncing went on, typically around 2 ms. It doesn't count bounces, nor can it: such code never sees every little spike that a scope shows, and it shouldn't be concerned with microsecond-or-less detail, as that would be a WASTE of CPU cycles and RAM.
It does what it needs to do and leaves the time and cycles between pin reads to other tasks.
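In code, the core is just shifting each read into a history byte and comparing. Here is a bare-bones sketch of the idea, not my exact code; the pin number, variable names, and serial prints are only for illustration:

```cpp
// One byte of bit history per button plus one byte of debounced state.
// A new read is shifted in every half millisecond; nothing blocks in between.

const uint8_t buttonPin = 2;    // assumes INPUT_PULLUP wiring: HIGH = released
uint8_t history = 0xFF;         // start out "released and stable"
uint8_t pressed = 0;            // debounced state, the second byte per button
uint32_t lastRead = 0;          // micros() timestamp of the last sample

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  Serial.begin(115200);
}

void loop() {
  if (micros() - lastRead >= 500) {   // sample on a regular 0.5 ms beat
    lastRead += 500;
    history = (history << 1) | (uint8_t)digitalRead(buttonPin);

    if (history == 0b10000000) {        // 128: a HIGH then 7 stable LOWs = press
      pressed = 1;
      Serial.println(F("press"));
    } else if (history == 0b01111111) { // 127: a LOW then 7 stable HIGHs = release
      pressed = 0;
      Serial.println(F("release"));
    }
  }
  // other tasks get all the time between reads
}
```

The compare only matches once per transition, because the edge bit shifts out on the next read; no extra flags or timers are needed beyond the history byte.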
I chose half a millisecond after examining scope captures of bounces: short enough to catch some bounce, yet able to confirm a stable period with fewer reads than the more detailed debouncers I wrote from 2012 to 2017, which used more cycles, more code, and twice as much RAM per button.
I chose this method so it would scale to button boxes of 10x10 or more without slowing the average loop speed, reading all of the buttons in less than 2 ms; a sketch of the scaling is below.
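Scaling it is just an array of history bytes updated on the same beat. A rough sketch; readButton() here is a stand-in for however your wiring delivers a raw level for button i (direct pins, shift registers, a scanned matrix):

```cpp
const uint8_t NUM_BUTTONS = 100;   // e.g. a 10x10 box
uint8_t history[NUM_BUTTONS];
uint8_t pressed[NUM_BUTTONS];      // still 2 bytes per button total

// Placeholder: replace with however your hardware reads button i
uint8_t readButton(uint8_t i) {
  return 1;                        // pretend every button reads HIGH (released)
}

void initButtons() {
  for (uint8_t i = 0; i < NUM_BUTTONS; i++) history[i] = 0xFF;  // start released
}

void updateButtons() {             // call once per 0.5 ms tick
  for (uint8_t i = 0; i < NUM_BUTTONS; i++) {
    history[i] = (history[i] << 1) | readButton(i);
    if (history[i] == 128)      pressed[i] = 1;   // confirmed press
    else if (history[i] == 127) pressed[i] = 0;   // confirmed release
  }
}
```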
It doesn't need 7 bits of stability. It could use fewer bits read at 1 ms intervals by masking off the high bits and comparing not to 127 or 128 but to 31 or 32 instead. The stable period would then be 5 ms (5 intervals of 1 ms each) instead of 3.5.
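That variant might look like this (the mask and names are my illustration, not lifted from my code):

```cpp
uint8_t history = 0xFF;
uint8_t pressed = 0;

void sampleButton(uint8_t reading) {      // call every 1 ms with the raw pin level
  history = (history << 1) | reading;
  uint8_t h = history & 0b00111111;       // mask off the two high bits
  if (h == 0b00100000)      pressed = 1;  // 32: a HIGH then 5 stable LOWs = press
  else if (h == 0b00011111) pressed = 0;  // 31: a LOW then 5 stable HIGHs = release
}
```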
It is what it is and not something else. My earlier versions used 4 bytes per pin and more CPU time; this uses 2 bytes and runs leaner. Sorry for the rant, but IMO the code validates itself or it doesn't.
BTW, after suggesting a cap to debounce in hardware, I was informed that doing that can burn switch contacts. I'm not sure about that myself, but I'm not an EE hardware guru.