Optimizing SRAM? MEGA, with big datasets.


I’m looking at reading 4 different Sharp IR sensors over a period of 2 seconds at max rate. And I’m doing that repeatedly.

I’m pulling in data from one sensor each millisecond, which gets me about 10 readings per sensor per 1/50th of a second (the maximum read rate of the Sharp IRs). That’s enough for me to get a mode/mean from each and determine what the “true” reading of the sensor is, without having to get too fancy with noise mitigation.

Suggestions? This appears to me to simply be too much data for the Arduino. I am running a MEGA.

I’m concerned that I need a 3D array of bytes and integers, and that I’m going to blow through all of my SRAM and then some. Given that this isn’t the entirety of the project, I’d like to see if there’s a way around this that I’m simply not getting.

I’m happy to use external SRAM modules or the like, but it seems like interfacing them gets very complicated very fast.

Suggestions? Thanks.

You don’t need to keep a large number of samples to compute an accurate average value.
Taking 500 samples per sensor (one every 4 ms for 2 seconds, you said) and then doing math on those samples seems … misguided.

All you need for an average is a sum and a count. Just make sure the data type used to represent the sum is big enough to avoid overflowing during your sample period.

If this is something you’re doing on an ongoing basis and actually want a rolling average, you can get something very close to it with an exponentially decaying average, i.e. new average = (x * old average) + ((1 - x) * new sample). Vary x to control how quickly old samples decay.

Also - if you really do need more memory (SRAM), there are expansion boards for the Mega - for instance:


Sorry, I'll clarify. I know I won't need a huge dataset for the sampling/mean/etc.

But I do want to have the distance from each of 4 sensors, with the time of each reading, for 2 seconds (160 samples if my math is right, at 1/20th of a second per reading).

If you schedule the measurements properly, you can calculate each sample's time from the start time, so the timestamps don't cost any extra RAM.

20 samples per second × 4 sensors × 2 seconds = 160 distances; depending on the accuracy you need, that's 1…4 bytes per distance.

uint16_t samples[4][40];   // 16-bit values 0..65535, e.g. millimeters
uint32_t lastPing;
uint8_t s = 0;

void setup() {
  lastPing = millis();
  s = 0;
}

void loop() {
  // 50 ms interval = 20 samples per second; 40 samples = 2 seconds
  if (millis() - lastPing >= 50 && s < 40) {
    samples[0][s] = ping(1);  // ping(n) returns the distance from sensor n
    samples[1][s] = ping(2);
    samples[2][s] = ping(3);
    samples[3][s] = ping(4);
    s++;                      // advance to the next sample slot
    lastPing += 50;
  }
}

Brilliant, thanks. Yes, since I'm hardcoding checks to make sure the reads take place each millisecond, you're right that I can avoid storing 80 instances of "sampletime" and only keep the initial one. That saves a ton of my data right there :slight_smile: