How to determine minimum bits per sample?

For sampling, we have the number of bits per sample and the number of samples per second (the sampling rate). The sampling rate can be calculated from the Nyquist theorem: sampling rate >= 2 x highest frequency. Is there a mathematical formula for the minimum number of bits per sample, or is that number determined empirically?

The minimum is one bit per sample: two levels, high or low. With some filtering, that becomes a clean representation of your signal.
Check out "one-bit DAC".

Atmel even has an AVR application note on it.
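
To make that concrete, here's a minimal Python sketch of a first-order delta-sigma (one-bit) modulator; the function name and the sine test signal are my own illustration, not taken from the Atmel note:

    import math

    def delta_sigma_1bit(samples):
        # First-order delta-sigma: accumulate the error between the input
        # and the 1-bit feedback, and output the sign of the accumulator.
        # The *average* of the bit stream then tracks the input signal.
        integrator = 0.0
        feedback = -1.0              # DAC level for a '0' bit
        bits = []
        for x in samples:            # x expected in -1.0 .. +1.0
            integrator += x - feedback
            bit = 1 if integrator >= 0.0 else 0
            feedback = 1.0 if bit else -1.0
            bits.append(bit)
        return bits

    # 1kHz sine, heavily oversampled at 1MHz; low-pass filtering the
    # bit stream (even a simple moving average) recovers the sine.
    fs, f = 1_000_000, 1_000
    signal = [0.5 * math.sin(2 * math.pi * f * n / fs) for n in range(10_000)]
    stream = delta_sigma_1bit(signal)

The trick is that the single bit toggles fast enough that, after filtering, its duty cycle reproduces the signal amplitude.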

The number of bits per sample is up to you, but the more bits, the better it sounds. You can calculate the theoretical noise floor for a given number of bits with the following equation:

SNR (dB) = (number of bits x 6.02) + 1.76

If you know the required signal-to-noise ratio, each additional bit buys you about 6dB over the quantization noise. By that formula, 16 bits gives you around 98dB of dynamic range (often quoted as 96dB if you drop the 1.76dB term).
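
As a quick sanity check, that formula works out like this in Python (the helper name is mine):

    def snr_db(bits):
        # Theoretical SNR of an ideal N-bit quantizer driven by a
        # full-scale sine wave: 6.02 dB per bit plus 1.76 dB.
        return 6.02 * bits + 1.76

    for n in (8, 12, 14, 16, 24):
        print(f"{n:2d} bits -> {snr_db(n):6.1f} dB")
    #  8 bits ->   49.9 dB
    # 16 bits ->   98.1 dB
    # 24 bits ->  146.2 dB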

With audio, we're dealing with human perception, so it's empirical. Nyquist doesn't really answer the sample-rate question by itself either... First, you have to know (empirically) the range of human hearing (around 20kHz), or the range needed for intelligible communication (around 4kHz). Once you know the required frequency range, you can use Nyquist to calculate the minimum sample rate.
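
Putting the two steps together, assuming the bandwidth figures above:

    def min_sample_rate(f_max_hz):
        # Nyquist: sample at least twice the highest frequency
        # you need to capture.
        return 2 * f_max_hz

    print(min_sample_rate(20_000))  # full hearing range -> 40000 Hz
    print(min_sample_rate(4_000))   # intelligible speech -> 8000 Hz (the telephone rate)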

Of course, the empirical tests have already been done. If you ignore the audiophile types, the people who do scientific double-blind testing have found that 16 bits is better than human hearing. That is, if you take a higher-resolution file (say 24-bit/96kHz) and convert it to 16-bit/44.1kHz, nobody can hear the difference in a blind listening test. In most cases, 13 or 14 bits is probably already better than human hearing. A good vinyl record is said to be equal to about 12 bits... But analog and digital defects/weaknesses are different, so they never sound the same, and it's a matter of opinion when they are "equally good" or "equally bad". For example, some audiophiles prefer the sound of vinyl over any digital format/resolution, and that's a matter of taste. With 8 bits (linear) you can clearly hear the quantization noise, so the "answer" lies somewhere between 8 and 16 bits.

If you don't need high fidelity, telephone systems normally get away with 8 bits. But for better sound quality at that bit depth, they use u-Law or A-Law (logarithmic) encoding instead of linear PCM. I'm sure Bell Labs has done tons of research on the minimum bit depth for communications, but I'm more into hi-fi and don't know as much about communications.
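
To give a flavor of what u-Law buys you, here's the textbook u-Law companding curve (mu = 255, as used in North American telephony) in Python. This is the continuous formula, a sketch rather than the exact G.711 byte format:

    import math

    MU = 255  # u-Law parameter

    def ulaw_compress(x):
        # Map a linear sample in -1.0..+1.0 onto the logarithmic u-Law
        # curve, then quantize to 8 bits. Quiet signals get finer steps,
        # which is why 8-bit u-Law beats 8-bit linear PCM for speech.
        y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
        return round(y * 127)

    def ulaw_expand(code):
        # Inverse curve: recover an approximate linear sample.
        y = code / 127
        return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

    # A signal at 1% of full scale already gets code 29 of 127,
    # while half scale gets 111: most codes go to quiet sounds.
    print(ulaw_compress(0.01), ulaw_compress(0.5))  # 29 111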

Just to give you a rough idea, 8 bits is enough for voice and about equal to a telephone land line.
16 bits gives you CD quality.

Those bit depths also depend on the sampling rate:

8 bits @ an 8-10kHz sample rate is about phone-line quality.

16 bits @ a 44.1kHz sample rate, full stereo (both channels sampled and converted back separately), no compression, is CD quality.
A 3-minute song needs 3 x 60 x 44100 x 2 bytes x 2 channels = 31,752,000 bytes of storage (about 30MB).
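
The same arithmetic as a small helper, with the two factors of two (16-bit samples and stereo) spelled out:

    def pcm_bytes(seconds, sample_rate=44_100, bits=16, channels=2):
        # Uncompressed PCM: time x samples/sec x bytes/sample x channels.
        return seconds * sample_rate * (bits // 8) * channels

    print(pcm_bytes(3 * 60))               # CD quality, 3 minutes -> 31752000
    print(pcm_bytes(3 * 60, 8_000, 8, 1))  # phone quality -> 1440000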

When I ripped my audio CDs (about 400 of them) I used a 256 kbps bit rate (not sample rate), joint stereo; that same song comes out around 8 MB.
I tried a 320 kbps bit rate too and couldn't hear any difference; 320 kbps is the highest MP3 rate and the mildest compression (see the figures below).
A lot of stuff on the internet is 128 kbps, and I don't think it sounds as good. I tried side-by-side listening tests, encoding the same song at different bit rates; not fully blind testing, but I gave it a shot.
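
For comparison, nominal MP3 sizes follow directly from the bit rate (real files vary a little with headers and padding; the helper is my own):

    def mp3_bytes(seconds, kbps):
        # MP3 size is just bit rate x time (kbps = kilobits per second).
        return seconds * kbps * 1000 // 8

    cd_kbps = 44_100 * 16 * 2 / 1000          # ~1411 kbps uncompressed
    for kbps in (128, 256, 320):
        size = mp3_bytes(3 * 60, kbps)
        print(f"{kbps} kbps -> {size:,} bytes, {cd_kbps / kbps:.1f}:1 vs. CD")
    # 128 kbps -> 2,880,000 bytes, 11.0:1 vs. CD
    # 256 kbps -> 5,760,000 bytes, 5.5:1 vs. CD
    # 320 kbps -> 7,200,000 bytes, 4.4:1 vs. CD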
Of course, listening to an MP3 player plugged into the car radio totally kills any sound quality, but in a quieter home environment it could be different.