I'm creating a project using a feature of the core code for my microcontroller that generates precise "waveforms". Basically, it allows me to define the HIGH and LOW durations of a pulse train. The same core code is used for things like making sounds and controlling servos. I'm creating a library, using this core code, to drive a NEMA 17 stepper motor. All is working great.
My question is of a more esoteric nature. I've noticed many threads on the forum where the subject of the relative HIGH and LOW durations is discussed. In particular, Robin2 mentions on some of these threads that a HIGH pulse of even 1 microsecond is good enough. I have not found an explanation of the pros and cons of either option.
Is there any practical reason why I would want to use something at this extreme versus simply using equal HIGH and LOW durations? I am wondering if there are heat or longevity issues for the stepper motor or driver that might favor the 1 µs version. Since Robin2 seems to recommend it (and his rating shows he is an expert), I tend to follow that advice. However, I see some coding efficiencies if equal-length HIGH and LOW durations are used: fewer callback adjustments, since they're all the same length.
I've tried both options and hear and see no differences in the motor's behavior. If it's pertinent, I'm currently driving the STEP pin on an A4988.