Quick check about division by zero

Considering the atmega328p:
Obviously it won't work, and the value which comes out will be nonsense, but...

...will it cause any "damaging" crash circumstances? The way that errors from over-writing the end of an array can?

Can divide by zero errors ever cause other variables to get corrupted? Can they cause programs to jump location strangely?

What happens with a piece of code such as
float Output = FirstFloat / SecondFloat;
where SecondFloat is a variable taken from sensor readings (so the compiler can never know its value in advance) and it goes to zero?

What about
float Output = FirstFloat / SecondUint8;
where SecondUint8 is a uint8_t that goes to zero, again sensor-derived, so there is no compile-time detection?

I ran tests of these and got nan values printed over serial, so can verify they give the junk values expected, but I can't think how to test whether corruption of other variables is possible.

Also what if a NaN gets fed in to a function like sin or atan2? Again just a corrupt output value, or stack troubles and other variables messed with?

I know that if one tries to average, add, or low-pass filter a NaN value, then it corrupts all future uses of that value.
LPF = (1 - alpha)*ValueIn + alpha*LPF;
has LPF permanently stuck at NaN if ValueIn is ever NaN. But where one is sensing an instantaneous value, can cope with it being a junk value at the times when a divisor does go to zero, and then fully replaces the value of the variable from fresh readings later, is there any harm in letting divide by zero occur?

Thanks

There is never any reason for it to occur. Always test first and do something sensible. Example:

if (x != 0) result = y / x; else result = 0;


But do that and you end up with all the meaningful parts of the code completely obscured by masses of
if (thing != 0)

If you’re seeing nan, it means the runtime has caught whatever math error occurred, and there should be no further repercussions.

However, if the calculation is under the covers, and is used as an index or pointer etc., then all bets are off. The program is likely in uncharted territory.

This all comes down to programming style.

Don't be silly.


Yes, and?

In the environment I work in, every return value has to be checked and handled, divide-by-zero errors are simply not allowed to occur, and numerous other safety checks apply. Yes, probably 75% of the code is error checking; that's what has to happen sometimes.

It will "soft brick" boards with native USB functionality. The reason is that the USB stack that produces the USB CDC serial port used to upload sketches to these boards runs on the same microcontroller as the sketch. So if the sketch crashes, the board no longer produces a serial port, which prevents you from uploading a working sketch.

The board can easily be recovered from this "soft bricked" state if you know the correct technique to activate the bootloader, but new users who are not yet aware of the technique might get the impression that their board is dead.

