A bunch of questions about playing back sampled sounds

The reason you use an interrupt is so that the task, which in this case is outputting the next sample, gets done at precisely the right moment. Because there is such a small interval between fetches, polling to see if the time is right is not only inefficient, it does not give you fine enough control. The microseconds timer only gets updated every four microseconds for a start, and then the other task or tasks in the state machine have to complete in under the time between samples, which puts a lot of limitations on them. For example it means you can't do an analogRead while playing a sample without disturbing the timing.
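As a minimal sketch of the idea (assuming an ATmega328 at 16MHz, an 8kHz sample rate, a hypothetical samples[] table, and an R-2R DAC on the PORTD pins - none of which come from this thread):

#include <avr/interrupt.h>

const uint8_t samples[] = {128, 140, 152, 140};  // hypothetical 8-bit sample data
volatile uint16_t sampleIndex = 0;

void setup() {
  DDRD = 0xFF;                          // PORTD drives the R-2R ladder DAC
  noInterrupts();
  TCCR1A = 0;
  TCCR1B = (1 << WGM12) | (1 << CS10);  // Timer1 CTC mode, no prescaler
  OCR1A = 1999;                         // 16MHz / 2000 = 8kHz interrupt rate
  TIMSK1 = (1 << OCIE1A);               // enable the compare match interrupt
  interrupts();
}

ISR(TIMER1_COMPA_vect) {                // fires every 125us, on time, regardless
  PORTD = samples[sampleIndex];         // output the next sample
  if (++sampleIndex >= sizeof(samples)) sampleIndex = 0;
}

void loop() {
  // free for other work; the interrupt keeps the sample timing exact
}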

By using an interrupt any other task gets suspended and your sample is changed on time. I would not say that none of the tutorials use this technique, but as I said at the start of the thread there is a lot of crap out there. And an Instructables URL is one way you can spot crap.

Correct me if I'm wrong. Do you mean that many mandatory functions inevitably consume a significant amount of time and it's impossible to achieve 100% perfect timing? Well... at least it is understandable that even delayMicroseconds is accurate enough, but not perfect (I guess due to the clock rate).

Good tutorials though.

Do you mean that many mandatory functions inevitably consume a significant amount of time and it's impossible to achieve 100% perfect timing?

Yes. And that is with a task that contains only the one instruction; you will have to ensure that all your tasks complete in significantly less time than the sample period. Even then there will be jitter of up to the length of your longest task. For example, at an 8kHz sample rate the period is 125µs, so a single 110µs analogRead in the loop can push a sample almost a whole period late. Using a delay is just a "baby" stop gap, used as a demonstration or where you don't want to do anything else while a sound is playing.
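That "baby" version looks something like this (a sketch reusing the same hypothetical samples[] table and PORTD DAC as above); note that it blocks, so nothing else can happen until the sound finishes:

const uint8_t samples[] = {128, 140, 152, 140};  // hypothetical 8-bit sample data

void setup() {
  DDRD = 0xFF;               // PORTD drives the DAC
}

void loop() {
  for (uint16_t i = 0; i < sizeof(samples); i++) {
    PORTD = samples[i];      // output one sample
    delayMicroseconds(125);  // hold the 8kHz timing by busy waiting
    // the PORTD write and loop overhead add to this, so the real
    // period comes out slightly longer than 125us
  }
}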

at least it is understandable that even delayMicroseconds is accurate enough

Any delay is blocking, so your processor cannot do anything else while it is waiting. A jitter of 4µs might not be noticeable at the low quality you have at the moment but it will soon show up as noise on your output. Remember 4µs equates to 64 clock cycles, about the same time as a digitalWrite function call.

Grumpy_Mike:
And that is with a task that contains only the one instruction; you will have to ensure that all your tasks complete in significantly less time than the sample period. Even then there will be jitter of up to the length of your longest task.

Now I understand your point even better. You mean that I have a kind of "time budget" to do other things without compromising the sample rate, right?
If so, then I just have 53 microseconds to do other stuff (the time I'm currently "wasting" in a delay, in order to give the runtime a "time gap").
Is that enough time to do an analogRead and a map call? Because I want to give my current player a new feature: adjusting the pitch (and the speed along with it) with a potentiometer. Yeah, I've had that crazy idea in my head since yesterday. It sounds like something called "circuit bending"...

A jitter of 4µs might not be noticeable at the low quality you have at the moment but it will soon show up as noise on your output.

What kind of noise? A "waveform glitch" or something bad enough to blow up an amplifier?

A standard analogRead() takes 110µs.

. . . but it's a very simple function to take apart - most of the 110µs is a busy wait.

You mean that I have a kind of "time budget" to do other things without compromising the sample rate, right?

Right.

What kind of noise?

Time quantisation noise, sometimes called sample noise. It is noise, that is, signals that do not belong in there.

There are ways round the default analogRead, like setting the A/D into free running mode. But do it right and use interrupts, and let the other stuff that can wait, wait.

Grumpy_Mike:
There are ways round the default analogRead, like setting the A/D into free running mode. But do it right and use interrupts, and let the other stuff that can wait, wait.

How exactly can I do that? Do you mean that the "free running mode" will take fewer CPU cycles than the analogRead function? But how can I retrieve the readings?

And what about the map function? How long does it take?

Do you mean that the "free running mode" will take fewer CPU cycles than the analogRead function?

No. But it will not block your program while you are waiting for the conversion to complete.

But how can I retrieve the readings?

From the analogue conversion register.
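A minimal sketch of what that looks like on an ATmega328, assuming channel A0 and the AVcc reference (the register names come from the data sheet, not from the Arduino API):

void setup() {
  Serial.begin(9600);
  ADMUX  = (1 << REFS0);                  // AVcc reference, channel 0 (A0)
  ADCSRB = 0;                             // trigger source = free running
  ADCSRA = (1 << ADEN) | (1 << ADATE)     // enable the ADC, auto-trigger
         | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0)  // divide-by-128 prescaler
         | (1 << ADSC);                   // kick off the first conversion
}

void loop() {
  int reading = ADC;    // latest result, straight from the conversion
                        // register (ADCL/ADCH), with no busy wait
  Serial.println(reading);
  delay(500);
}

Bear in mind the Arduino core normally owns those registers through analogRead, so mixing this with analogRead calls on other pins needs care.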

You can also make the conversion faster at the expense of a bit of precision:-

// set up fast ADC mode (put in the setup function)
   ADCSRA = (ADCSRA & 0xf8) | 0x04; // prescaler = divide by 16, a 1MHz ADC clock at 16MHz

And what about the map function? How long does it take?

No idea, but it is only a bunch of arithmetic statements for those who can't remember kindergarten maths.
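For reference, the map() function in the Arduino core is essentially this one line of integer arithmetic, so its cost is a long multiply and a long divide:

long map(long x, long in_min, long in_max, long out_min, long out_max) {
  // scale x from the input range onto the output range
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}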

Grumpy_Mike:
You can also make the conversion faster at the expense of a bit of precision:-

// set up fast ADC mode (put in the setup function)
ADCSRA = (ADCSRA & 0xf8) | 0x04; // prescaler = divide by 16, a 1MHz ADC clock at 16MHz

And which value should I put in it if I want just 8-bit precision, instead of the default 10-bit?

There is nothing that returns an 8-bit value; you just have to use the 8 most significant bits of the returned value. Shifting right by two places is the way to do it.

>> is the shift right operator.

Grumpy_Mike:
you just have to use the 8 most significant bits of the returned value. Shifting right by two places is the way to do it.

So do you mean that the ADC always returns a 10-bit value?
If so, please explain a bit more about that line of code that sets the analog input pins to "fast mode", how to retrieve the data with this new configuration, and how to discard the two bits.

So do you mean that the ADC always returns a 10-bit value?

Yes.

If so, please explain a bit more about that line of code that sets the analog input pins to "fast mode",

The A/D works from a clock that controls the timing. That line alters the prescaler division ratio so that the clock runs faster, and so the A/D runs faster.
For full information see section 23 of the ATmega328 data sheet.
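(As a rough check from the data sheet figures: a normal conversion takes about 13 ADC clock cycles. With the default divide-by-128 prescaler the ADC clock is 16MHz / 128 = 125kHz, so a conversion comes out at about 13 × 8µs ≈ 104µs, which is where the 110µs figure earlier in the thread comes from. With divide-by-16 the ADC clock is 1MHz and a conversion drops to roughly 13µs.)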

how to retrieve the data with this new configuration, and how to discard the two bits.

Exactly the same as before, with an analogRead function call. I told you about the shift operation to remove the lower two bits:-

eightBitValue = analogRead(0) >> 2; // keep the 8 most significant bits of the 10-bit result

Is there a way to measure the time that a function takes to execute (in clock "ticks")? Because I don't know exactly how long the analogRead function takes in "fast mode".

And I found a problem with discarding the two bits. For example: if the 10-bit value is 256, 512 or 768, discarding the two most significant bits will result in an 8-bit value of 0 (zero) in all three cases.

I want to clear up all my doubts before putting anything into practice...

And I found a problem with discarding the two bits. For example: if the 10-bit value is 256, 512 or 768, discarding the two most significant bits will result in an 8-bit value of 0 (zero) in all three cases.

Why on Earth would you discard the most significant bits?

Use the micros timer for timing function calls.

That code discards the two least significant bits, not the most. Are you sure you understand what a shift operation does?
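To see it with the numbers from the question (a throwaway sketch, just to show the arithmetic):

void setup() {
  Serial.begin(9600);
  // >> 2 drops the two LEAST significant (rightmost) bits
  Serial.println(256 >> 2);  // 0b0100000000 -> 0b01000000, prints 64
  Serial.println(512 >> 2);  // 0b1000000000 -> 0b10000000, prints 128
  Serial.println(768 >> 2);  // 0b1100000000 -> 0b11000000, prints 192
}

void loop() {}

The three values stay distinct; only the bottom two bits of resolution are lost.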

Grumpy_Mike:
Use the micros timer for timing function calls.

Sorry for my ignorance, but how can I do that?
I mean, I want to execute a function and then print the number of ticks it took, to the serial monitor of course.

That code discards the two least significant bits, not the most. Are you sure you understand what a shift operation does?

Whoops, my bad. You mean the LEAST significant bits, not the MOST significant ones. Very well, it works for me then...

You use micros to set a variable before you go into the routine.
https://www.arduino.cc/en/Reference/Micros

When you come out of it you subtract the value you stored from the current value, to get how long it has taken.

Then you print out that number.
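Putting that together (a minimal sketch that times a single analogRead() call once a second):

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long before = micros();           // store the time before going in
  int value = analogRead(0);                 // the function being measured
  unsigned long taken = micros() - before;   // current value minus stored value
  Serial.print(value);                       // print the reading too, for interest
  Serial.print(" read in ");
  Serial.print(taken);
  Serial.println(" microseconds");
  delay(1000);
}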

What does "4 microseconds resolution" mean?

Is there actually 4 microseconds between every micros count?
Or does the micros count have an accuracy of 4 microseconds (± 4 microseconds)?

What does "4 microseconds resolution" mean?

It means that the count only goes up in steps of 4µs, so any given reading can be out by up to 4µs.
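You can see this for yourself on a 16MHz board: capture a few micros() readings back to back, then print them afterwards so the slow serial prints do not sit between the samples. Every value lands on a multiple of 4:

void setup() {
  Serial.begin(9600);
  unsigned long t[8];
  for (int i = 0; i < 8; i++) {
    t[i] = micros();        // capture first, print later
  }
  for (int i = 0; i < 8; i++) {
    Serial.println(t[i]);   // each value is a multiple of 4
  }
}

void loop() {}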