Time to get more specific.
My actual code is over 2000 lines of object-oriented goodness, so I'll give you a simplified version of what I'm doing.
uint64_t Accumulator;  // The question: should I use a large integer for this, or can I use a float/double?
uint32_t NumSamples;
float ActualValue;

void AccumulateValues()
{
  Accumulator += ReadTCValue() * 1000;  // The reading is floating-point, so I multiply by 1000 to keep 3 decimal places
  NumSamples++;
}

void ApplyValues()
{
  ActualValue = (float)(Accumulator / NumSamples) / 1000.0f;  // Divide out the 1000 we multiplied by earlier (float divide, or the 3 decimal places are lost again)
  NumSamples = 0;
  Accumulator = 0;
}
uint32_t currentTime;

void loop()
{
  currentTime = millis();
  while (millis() - currentTime < 10000)  // Loops as fast as it can until 10 s has passed
  {
    AccumulateValues();  // Accumulates sensor value samples
  }
  ApplyValues();  // Grabs the average of said samples and resets the accumulator and sample counter
  Serial.println(ActualValue);
}
So right now I accumulate values into an integer value. But I'm wondering if a float/double would be able to work in this scenario. That way, I don't have to multiply by 1000 and divide it out later. My concern is that the addition would be screwy on a value whose exponent keeps changing. Like, what happens if you add 2 to a floating-point value of 1.00000000000x10^15, does it even add anything to it if it doesn't register in the variable? Or is that data just lost?
Sorry if it is a bit of a roughly-asked question.
No idea, but the beauty of software development is that you can simply test it 
Like, what happens if you add 2 to a floating-point value of 1.00000000000x10^15, does it even add anything to it if it doesn't register in the variable?
For single precision floats, the value is not changed.
This is a very well understood problem (one of several) with floating point operations. One place to start reading about the topic is here.
Thread title contains "sensor value".
I can't imagine any real-world sensor with an output range REQUIRING a float variable. Just because you have a decimal point in your end result does not mean you need floats for the calculation.
Always use integer math if sensors are involved; it is much faster and does not introduce rounding or conversion errors.
1) I think there is rarely a good reason to use floating-point arithmetic on an (8-bit) Arduino. Sometimes floating point may be easier for the programmer and not so demanding for the Arduino, so it is not worth spending time bending integer arithmetic to the programmer's needs.
2) Accumulator += ReadTCValue()*1000; //This value is floating-point, so I multiply by 1000 to keep 3 decimal places. helps nothing. This is what "floating point" means: it keeps the same number of significant digits regardless of the magnitude of the number. But since it is binary, not decimal, multiplying and dividing by 1000 only costs memory and processor time and adds some small(?) error without ANY gain.
EDIT:
Jonathanese:
Like, what happens if you add 2 to a floating-point value of 1.00000000000x10^15, does it even add anything to it if it doesn't register in the variable? Or is that data just lost?
The data is lost. You need to use a 32-bit "long", a 64-bit "long long", or make your own even longer integer variable to hold such a number.
EDIT 2: sorry, now I understand which value is the float; point 2) is not valid here. But consider multiplying and dividing by 1024 instead: it should be faster, with possibly fewer rounding errors.
uint32_t AccumulatorOnes = 0;
uint32_t AccumulatorBillions = 0;
uint32_t NumSamples = 0;
const uint32_t ONE_BILLION = 1000000000UL;

void AccumulateValues()
{
  AccumulatorOnes += ReadTCValue();
  if (AccumulatorOnes >= ONE_BILLION) {
    AccumulatorOnes -= ONE_BILLION;
    AccumulatorBillions++;
  }
  NumSamples++;
}
Thanks, guys, that really helps!
I'll be using a uint64_t and multiplying the value by 1000. (I'll see about using 1024, but right now I am multiplying by 1000; I also have a multiplier I use for other things, so I multiply by 0.001, which isn't quite as clean when it becomes 0.0009765625.)
so I multiply by 0.001 which isn't quite as clean if it is 0.0009765625.
Neither method is as clean as an integer divide by 1000 or 1024, for which there is no loss of accuracy.
Note: the value "0.001" cannot be represented exactly as a floating point number in the computer.
Multiplying and dividing by a power of two is very easy for (common) computers: as easy as a power of 10 is for you.
While in general it is faster to multiply by a constant than to divide, that is not true for powers of 2. Moreover, I wonder how the compiler handles integer * 0.001. I guess it will either round 0.001 down to 0 and multiply by that, or convert the integer to float, multiply, and convert back. Dividing by 1024, on the other hand, will be easy for it.
I guess
There is no need to guess. The language definition specifies the rules.
In the case of integer*float, the integer is converted to float then the two are multiplied.
jremington:
In the case of integer*float, the integer is converted to float then the two are multiplied.
Which likely removes most of the precision gained by integer arithmetic, ruining the OP's original intent...
BTW, what gives you the original floating-point values? I hope it is not analogRead.