Pow accuracy and data types

Hi, I'm trying to understand data types and accuracy with root and power functions, for calculating musical frequencies:

double TwelveRootTwo;  // 2^(1/12), the equal-tempered semitone ratio
double Octave;

void setup() {
  Serial.begin(9600);
}

void loop() {
  TwelveRootTwo = pow(2, 1 / 12.0);
  Serial.print("2^(1/12): ");
  Serial.println(TwelveRootTwo, 10);
  Octave = pow(TwelveRootTwo, 12);
  Serial.print("TwelveRootTwo^12: ");
  Serial.println(Octave, 10);
  delay(10000);
}

Octave should be 2.000000000.
It gives: 2.000001192

Actually TwelveRootTwo should be 1.059463094359... it returns 1.0594631433. So that is the problem.

Is this the best it can do?

Type double is not supported and is treated the same as float on AVR-based Arduinos, so you get six, or sometimes seven at best, total digits of accuracy.

I doubt anyone could hear the resulting difference in musical tones.

Really. Less than 1ppm error.

A real instrument won't be that close! And if you play the note twice it will probably vary more than that. :wink:

It would be easier, and computationally more efficient, to make a table for one octave and then multiply (or divide) to get the other octaves.


Yes. Make a table on a real computer, or with a spreadsheet program.

Or do the heavy lifting in your setup() function by running some code that fills in the table at startup.

a7

rpschultz13,

If you really need the extra accuracy, then you could use an Arduino R4 Minima/WiFi.

All good suggestions. Ultimately going to do this on a Teensy, so the extra precision is probably there… even though I agree it probably isn’t necessary.
Thanks!

Double precision is fully supported on the 3.x and 4.x Teensy boards.

Yes, that's about the best it can do with the 32-bit floats that are "double" in avr-gcc.

1.059463094359 (more accurate)
1.0594631433   (AVR result)

Those are identical when rounded to 8 digits of precision (1.0594631); that's slightly better than expected (6 to 7 digits of precision).

Got my Teensy. Changing the TwelveRootTwo to double does indeed give better results:

2^(1/12): 1.0594630944
TwelveRootTwo^12: 2.0000000000
A2 = 110
A3 = 220.00000
A4 = 440.00000

As a reference, my iPhone gives 1.059463094359295

WolframAlpha gives: 1.0594630943592953

Thanks for the help!

Have you considered how long it would take to measure the difference between those as audio frequencies?
At 1 kHz it would be a quarter of an hour before it accumulated a whole cycle of error.


Yes I agree that level of precision isn’t necessary. Partly just gaining experience with programming.

However, it needs to be a float. At the lower frequencies, say guitar low E, 1 Hz is about 20 cents and very noticeable. Most guitarists need to tune to within 5 cents. For reference, there are 100 cents in a semitone (half step).

An exercise, then, could be to calculate the number of cents in this difference:

Octave should be 2.000000000.
It gives: 2.000001192

That would be insignificant by far. I agree that double isn’t needed, but float is absolutely necessary for any frequency calcs. Integers would not work.

Many digital music synths use scaled integers for frequency calculations. As long as you're within a couple cents (about 0.1%), any frequency error is below the threshold of human discernment. You can easily get that level of accuracy with a fixed-point number with, say, 9 or 10 bits in the fractional part. To keep it simple you can have a 16.16 fixed-point number in a 32-bit value (unsigned long on Arduino), and with that you can represent any possible frequency from 0 Hz to 65 kHz with a resolution of 0.026 cents.

That's enough for the whole hearing range of cats or possums (up to 64 kHz) but not for a porpoise (up to 150 kHz). So if you need it for porpoises you can take some of the fractional bits and give them to the integer part (maybe 18.14 fixed-point). Then you can say it's good enough for all intents and porpoises. :smiley:


I don’t understand this. Please explain.

You went a long way for that one :grinning:

16 bits for the integer portion, 16 bits for the fractional part.

Fixed-point is an alternative method to floating-point to store a number with a fractional part. Floating-point has a radix point (known as a "decimal point" when talking about decimal) that can "move" depending on the exponent, but fixed-point keeps the radix point in the same spot. It can be faster than floating point because of that. One downside is it can't represent the same range of numbers as floating point in the general case, but in specific cases with fairly small ranges like frequencies in music, fixed-point can work great.

Also, saying something like "16.16 fixed-point" is just a shorthand way of saying there are 16 integer bits and 16 fractional bits.

See Fixed-point arithmetic - Wikipedia if you want to learn more about it.

...plus it has a floating point processor.