My code uses a timed interrupt every second to update a count, so I have one second between interrupts, which is a nice time frame to work in, as so much can be done in it!
Anyway, I have a couple of sensors that I read, and what I want to do is take as many readings as possible in that one-second time frame and then take the average of all the readings, so my average result is updated every second.
I could use a fancy algorithm, which I haven't ruled out, but I don't think I need to bother. I was thinking of an array:
ARRAY[0] += ADC_DATA;
ARRAY[1] += 1;
That would add all the samples together, with the total number of samples held in the other entry.
AVERAGE = ARRAY[0] / ARRAY[1];
The average is simply the one divided by the other, but what I am worried about is overflow, as at the moment I am not sure how many samples there will be.
Now I could just go with a 64-bit long long, and that's a big number! A 32-bit long would do it, but I want to understand the best way to approach this. The large numbers are unwieldy (not that it's a problem), so I am also considering using a float array with the ADC data divided by 1000.
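Something like this rough sketch is what I have in mind (assuming a 10-bit ADC and a 32-bit unsigned sum; read_adc() is just a placeholder for however the ADC actually gets read):

#include <stdint.h>

extern uint16_t read_adc(void);          /* placeholder for the real ADC read       */

static volatile uint32_t sum   = 0;      /* running total of samples                */
static volatile uint32_t count = 0;      /* number of samples taken this second     */

void sample_loop(void)
{
    sum   += read_adc();                 /* 10-bit sample, 0..1023                  */
    count += 1;                          /* a uint32_t sum only overflows after     */
                                         /* about 4.2 million max-value samples     */
                                         /* (4294967295 / 1023), far more than      */
                                         /* one second allows                       */
}

void one_second_isr(void)                /* the once-per-second timer interrupt     */
{
    uint32_t average = count ? sum / count : 0;   /* integer mean for this second   */
    sum   = 0;
    count = 0;
    (void)average;                       /* send/display the result as needed       */
}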
Can I ask:
A) What are the drawbacks (aside from speed issues) of dividing by 1000 and saving the ADC data as a float?
B) What is the maximum count a float can go to without overflowing?
C) Which way would you use to calculate an average of many samples?
While I don't have the precise answers, as I recall:
The float data type uses 4 bytes of storage and covers a range of approximately ±3.4E38, which is roughly where overflow would take place.
The real disadvantage is that you only get about 7 digits of precision out of the value. Anything beyond that is the software's best guess of the value. Part of your answer will depend upon the "digit-size" of each sample.
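As a small illustration of that 7-digit limit (my own quick example, nothing to do with your code): a float has a 24-bit mantissa, so once a running sum reaches 2^24, adding 1 to it no longer changes the value.

#include <stdio.h>

int main(void)
{
    float sum = 16777216.0f;     /* 2^24, the largest integer a float holds exactly */
    sum += 1.0f;                 /* this addition is rounded away                   */
    printf("%.1f\n", sum);       /* prints 16777216.0, not 16777217.0               */
    return 0;
}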
Unless you need some special type of average, it would seem a simple calculation of the mean would do.
Why are you storing the samples in an array? Will it be dumped to a file or SD card, sent somewhere else over a serial link...what?
I think 7 digits of precision would be OK. I am controlling a mechanical process and there are many inaccuracies within the system; accuracy within 5-10% is just fine.
The reason I am storing them in an array is just for convenience. I could use separate variables, but a three-entry array is nice for me: entry one is the running sum, entry two is the number of samples, and entry three is the average. It's global and nice and easy to check in debug.
Not that any of those arguments are a concrete reason; I suppose the real reason is habit.
At some point I will want to send the values over some comms (SPI or I2C), maybe to an LCD display or whatever, but that's an optional extra!
Is there any reason why it's bad for me to use an array like this?
What number format and range do the sensors return, and how do you read them (SPI, I2C, serial, analogue, etc.)? You should be able to work out the theoretical maximum number of samples you can take in a second for each sensor, and that would determine the data type to store the readings in. Another approach is to keep a running average, which removes the need for arrays. Assuming the sensors return integer numbers (not many do floating point), it's probably better and faster to store/add as integer types until the final floating-point calculation, otherwise you're more likely to introduce errors.
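A rough sketch of the running-average idea (the names are mine, and it assumes the sensor returns a 16-bit integer):

#include <stdint.h>

static float    running_avg = 0.0f;      /* mean of the samples seen so far  */
static uint32_t n           = 0;         /* how many samples are folded in   */

/* Fold one new sample into the mean; no sample array, no large sum. */
void add_sample(uint16_t sample)
{
    n++;
    running_avg += ((float)sample - running_avg) / (float)n;
}

/* Call from the once-per-second interrupt to start a fresh average. */
void reset_average(void)
{
    running_avg = 0.0f;
    n = 0;
}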