You have 10 RA objects of size 60 and 5 RA objects of size 24; that is a lot of memory (at least for an UNO).
Every object has about 10 bytes of local variables plus 4 bytes per element.
15 objects x 10 = 150 bytes
10 x 60 x 4 = 2400 bytes
5 x 24 x 4 = 480 bytes
Total: roughly 3000+ bytes of SRAM.
What board are you using?
If it is an UNO, it does not have that much memory (only 2 KB of SRAM) ==> Get a MEGA (8 KB).
Learning point for me: the RA class has no error handling and no state to check its "health".
Analysis:
The class does not check whether the allocation of the needed "arrays" succeeds, but the code behaves as if it always does.
That means that when memory runs out, only the first RA objects are healthy and the later ones have internal pointers pointing to (probably) NULL.
To make it more robust, the interface of the class could be changed to have a method
bool begin(uint8_t size);
that returns true if size elements could be allocated.
The allocation should then be removed from the constructor.
// this is a breaking interface change
Another alternative is to have a method uint8_t internalSize();
that returns the internal size: zero if the allocation failed, and the allocated size otherwise.
// this would not break the existing interface and would hardly increase the footprint.
Updated the library to version 0.2.04 (0.2.03 was skipped - dev-only version).
To solve the problem of Chrismolloy above I added code to check whether the internal array could be allocated. The previous 0.2.02 version was really opportunistic.
If the array cannot be allocated, the size of the array is set to 0; the new method getSize() can be used to check whether the allocation worked. With an internal size of zero the class cannot accept new values and the average will be NAN, as it cannot be calculated.
The size of the lib increased by about 20 bytes, which is imho acceptable for this "safety net".
I've been using your library, in conjunction with the TinyGPS++ library, in a GPS routine to average the GPS position and help reduce position wander, and had some interesting results. Whilst the longitude seems to function correctly (e.g. 1.01834380), the latitude seems to stop or stall and not continue (example: 51.34103012).
Note that I'm going to 8 decimal places for the LAT/LON position. I wondered if there is a limitation in the RA library that limits the length or size of the number?
Also, would it be capable of taking both + and - numbers?
The RA lib uses floats, which on an AVR-based Arduino are 32-bit IEEE 754 floats. This means a 23-bit mantissa, which equals about 7 significant digits; a latitude like 51.34103012 has 10, so changes in the last decimals simply fall below the precision of a float.
If you are using an ARM-based Arduino, which supports 64-bit doubles, you could patch the library by replacing all float types with double. Should be a good change anyway; it is on my todo list.
Do you have any plans on getting your libraries added to the 1.6.2+ library manager? I imagine it would take quite a bit of work because it seems at first blush that each library should be its own repository (instead of how you have several libraries in one repository).
I think Adafruit has published a script to help with mass publishing of libraries.
One thing to be wary of when using this library is floating point imprecision.
I left a unit running for a week sampling 10 times a second and was very confused when I came back as to why my average (over the last 50 samples) was very different from the actual current input.
Looking into the code, the RunningAverage::addValue function subtracts a double and then adds a double. Adding and subtracting doubles (and floats) can lead to a huge degree of imprecision.
I'm not sure what the best solution is here; calculating the sum total every time getAverage() is called is the obvious one, but that leads to somewhat more function overhead.
You are completely right. Thanks for this observation.
10 samples/second = 864,000 per day = ~6 million per week.
The number of samples approaches the accuracy of the (IEEE 754) float (7 digits) when one reaches 1 million samples (1 day in your case). This means that if the average sample has 3 digits, one will lose 2 digits every time.
The "trick" of removing one double and adding one double is an efficient way to maintain the sum of the array of samples. And yes, when using this trick the rounding errors add up in the sum. In theory, after 100 samples the last two digits are suspect. Need to investigate the effect.
The solution is to (sort and) add all elements of the array every time, giving a constant error. The consequence is slower execution. The sorting improves the sum when the dynamic range of the numbers is large.
A practical solution for the library might be to do a proper sum e.g. every 100 or 1000 times a value is added (new parameter?). The lower this number, the more accurate the sum. The price is a performance penalty once every 100 / 1000 additions. There are several variations possible.
A good value for this number will also depend on the size of the numbers added. Ideally one would have a sort of error counter that estimates the maximum error and, when a threshold is reached, does a recalculation of the sum.
The easiest solution is to redo the function getAverage() and rename the current one to getFastAverage():
// returns the average of the data-set added so far
double RunningAverage::getAverage() const
{
  if (_cnt == 0) return NAN;
  double sum = 0;
  for (uint8_t i = 0; i < _cnt; i++)
  {
    sum += _ar[i];
  }
  return sum / _cnt;
}

double RunningAverage::getFastAverage() const
{
  if (_cnt == 0) return NAN;
  return _sum / _cnt;
}
The following changes were made since 0.2.08
(request aaronblanchard)
getAverage() renamed to getFastAverage() as it is fast but less accurate.
reimplemented getAverage() to be accurate, but a bit slower.
getAverage() sums all elements in the internal buffer and averages them. This solves the remark from aaronblanchard a few posts ago that getFastAverage() drifts from the real value. An example sketch is included to show this drift.
(request by mail)
added getMinInBuffer() to distinguish it from getMin()
added getMaxInBuffer() to distinguish it from getMax()
Note: getMin() and getMax() give the minimum/maximum since the last call to clear(). An example sketch shows the differences between the methods.
The sample sketch adds values between 0.0 and 0.999 in a loop. The maximum difference after 30+ million additions is still relatively low. There will be input streams that are more sensitive to the fastAverage algorithm, so when maximum accuracy is a requirement please use getAverage().
As always, remarks and bug reports are welcome,
Rob