The supply voltage I was feeding it from my "brick" was +/-30VDC, give or take a fraction of a volt. The amp is rated for more than this, so it's within hardware limits.
While the amp is capable of 10A, other hardware on the board isn't. For example, current sensing is currently limited to 4A, so charging a lead-acid would be done at 4A or less. Not exactly efficient, I know, but the design has room to evolve if I can get a bigger current sensor.

The algorithms I developed use both constant current and constant voltage, depending on the voltage regime of the battery. In constant-current mode the output voltage is adjusted to whatever holds the desired current, but because lead-acids have very low internal resistance and large Ah capacities, the voltage needed to push 4A sits only fractionally above the terminal voltage. That terminal voltage would gradually rise from whatever depressed level it started at, say 11.5V, up to as high as 15V, where the battery would be desulphating. So lead-acid covers a fairly narrow range of voltages (there's a rough sketch of the CC/CV hand-off below).

For any other chemistry, you're looking at output values anywhere from 0.7V up to 25V for multi-cell packs of nickel chemistries (or lithium, once I incorporate charge balancing). In all cases, though, the output voltage wouldn't drift far from the terminal voltage, or large currents would result.

The most violent behaviour is when shifting from charge to discharge at high frequency, which in theory could mean voltage swings from 25V to 0 and back in the kHz range (also sketched below). When the output is at 0V, the battery discharges and the current flow reverses, which goes back to what I was saying earlier about not wanting any output voltage fighting my battery when it's trying to discharge.
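To make the CC/CV hand-off concrete, here's a minimal sketch of the control scheme described above. It's a toy: the hardware hooks are replaced by a crude simulated lead-acid, the 4A/15V figures are just the numbers from above rather than actual firmware constants, and the bang-bang voltage nudge stands in for whatever regulation the real board does.

```c
#include <stdio.h>

#define I_TARGET  4.0f    /* what the current sensor tops out at    */
#define V_LIMIT  15.0f    /* lead-acid top-of-charge / desulphation */
#define V_STEP    0.01f   /* small output nudge per control pass    */
#define R_INT     0.05f   /* assumed tiny internal resistance, ohms */

/* Crude simulated battery so the sketch runs stand-alone: the
 * terminal voltage creeps upward as charge goes in. */
static float v_batt = 11.5f;          /* depressed starting voltage */

static float charge_current(float v_out)
{
    float i = (v_out - v_batt) / R_INT;
    return i > 0.0f ? i : 0.0f;       /* no reverse current in this toy */
}

int main(void)
{
    float v_out = v_batt;             /* start at the terminal voltage */

    for (int step = 0; step < 200000; step++) {
        float i = charge_current(v_out);

        /* Constant current: nudge the output until 4A flows. With a
         * lead-acid's tiny internal resistance, v_out ends up sitting
         * barely above v_batt, exactly as described above. */
        if (i < I_TARGET)      v_out += V_STEP;
        else if (i > I_TARGET) v_out -= V_STEP;

        /* Constant voltage: once the output reaches the 15V limit it
         * clamps there, and the current tapers as the battery fills. */
        if (v_out > V_LIMIT)   v_out = V_LIMIT;

        v_batt += i * 5e-6f;          /* toy state-of-charge update */

        if (step % 20000 == 0)
            printf("v_batt=%5.2fV  v_out=%5.2fV  i=%4.2fA\n",
                   v_batt, v_out, i);
    }
    return 0;
}
```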
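And since the charge/discharge pulsing is the scariest part, here's an equally rough sketch of it. set_output_voltage() is a hypothetical stand-in for the amp driver, and real firmware would drive this from a hardware timer rather than usleep(); the point is just the shape of the waveform, output high to charge, output at 0V so the battery discharges back into the circuit and the current reverses.

```c
#include <stdio.h>
#include <unistd.h>   /* usleep() -- POSIX; a hardware timer in real firmware */

/* Hypothetical stand-in for the real amp driver call. */
static void set_output_voltage(float v)
{
    printf("output -> %4.1fV\n", v);
}

/* Swing the output between the charge voltage and 0V at roughly
 * 1kHz. During each 0V half-cycle the battery sees no opposing
 * output voltage, so it discharges and the current flow reverses. */
static void pulse_charge_discharge(float v_charge, int cycles)
{
    for (int n = 0; n < cycles; n++) {
        set_output_voltage(v_charge);   /* charge half-cycle    */
        usleep(500);                    /* ~500us high          */
        set_output_voltage(0.0f);       /* discharge half-cycle */
        usleep(500);                    /* ~500us low           */
    }
}

int main(void)
{
    pulse_charge_discharge(25.0f, 5);   /* worst-case 25V-to-0 swings */
    return 0;
}
```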