Tell us more. Your Sallen-Key is presumably a two-pole Butterworth bandpass filter, with corner frequencies at 50 Hz and 1300 Hz. Is that right? If so, you can't expect the filter to completely eliminate signal components that are a little outside the passband. You'll get a rolloff of about 6 dB per octave, so signal components around, say, 2000 Hz won't be reduced by even as much as half, and what remains of them will alias to lower frequencies. If your filter is a two-pole filter of some other kind, it still won't do much better than that.
Unless you have some spectacular number of poles, you won't get a sharp cutoff at the corner frequency anyway. The Nyquist criterion says that the lowest sample rate required to avoid aliasing is twice the highest frequency with significant content in the input signal. If there's significant content in the unfiltered signal at or a little above 1.3 kHz, there'll be significant content in the filtered signal, too, and it will alias. Significance, of course, is in the eye of the beholder - it'll depend on what you're doing with the results. And "a little above" might well mean "between 1.3 kHz and 13 kHz," depending on the sensitivity of the follow-on analysis.
You could quickly check whether this is a source of your difficulty by restoring the original sample frequency, leaving your bandpass filter in place and active, and examining the output of the FHT to see how much content appears outside the nominal passband of the filter. If it's a significant fraction of what's appearing at the in-band frequencies, aliasing could be the problem, or maybe part of the problem.
Another valuable test would be to restore the sample frequency to the original high speed, then reduce it by degrees until the effect becomes noticeable. That might be the lowest sample rate at which you can operate this rig without beefing up the bandpass filter.
But, this is all conjecture. Tell more about your setup. Do some experiments, and tell about the results, too.
Function pingADC() looks to be used as an interrupt service routine, and the wrapper for that seems to be the TimerOne library. I have no knowledge of that library, so I can't comment on whether it's been handled correctly. Intuitively, it appears that the timer is initialized with 250; the library referenced in the Playground shows the argument to the initialization function to be in microseconds, and a 250 µs period yields a frequency of 4 kHz, rather than the 2.5 kHz referenced in the original post. The comment says that the period is 0.1 seconds, which looks to be very wrong. I'll ask that you try not to misdirect us with out-of-date comments next time.
So, maybe I can comment on that code a little, but only intuitively.
I'll note that pingADC() seems unduly long for an interrupt service routine, and that it seems to call a bunch of FHT functions that are certain to take a disastrously long time for an interrupt service routine. I'd prefer to see the ISR note that the ADC buffer is full, and have the data processed in loop(). It may work OK, since it looks like it doesn't call the long routines until the buffer is full, and running a long program at that time won't mess up data acquisition; it will mess with other interrupt-based stuff, though, like millis(), micros(), and Serial.
There's other stuff not to like. It can wait.