
Topic: How long can my Arduino work continuously? (Read 16614 times)

egamez

Good afternoon everyone. I'm planning a project that basically has a 9V motor, a 9V computer-style fan, and a temperature sensor. It will have two setups: the motor and fan on at the same time, or one on while the other is off, in both cases with the sensor measuring. So the question is very simple: how long do you think the Arduino will work continuously, in both cases, without breaking? I'm looking for an estimate, because I don't have the money to run my Arduino UNO until it burns out and then buy another one. So please, how long do you think the Arduino will work continuously, in a safe way, day after day, over and over?

Thank you so much!!

JChristensen

As long as it is operated within its specified limits, at least years, more likely decades. No different than any other electronic device, and an Arduino is probably simpler than most these days, so fewer things to fail. I'd expect the motor or fan to fail far sooner.

retrolefty


Quote
Good afternoon everyone. I'm planning a project that basically has a 9V motor, a 9V computer-style fan, and a temperature sensor. It will have two setups: the motor and fan on at the same time, or one on while the other is off, in both cases with the sensor measuring. So the question is very simple: how long do you think the Arduino will work continuously, in both cases, without breaking? I'm looking for an estimate, because I don't have the money to run my Arduino UNO until it burns out and then buy another one. So please, how long do you think the Arduino will work continuously, in a safe way, day after day, over and over?

Thank you so much!!


I too think it can run for decades as long as it's not subjected to miswiring, bad power or other external causes. The vast majority of actual board/chip failures are most likely human-error situations, where a user subjects the board/chip to higher voltages or current flows than it is designed to handle.

Lefty

Coding Badly


At room temperature, the Flash is essentially guaranteed for 100 years...
Quote
3.4 Data Retention
Reliability Qualification results show that the projected data retention failure rate is much less than 1 PPM over 20 years at 85°C or 100 years at 25°C.


My guess is that the Flash is the most likely component to fail under normal operating conditions.

drjiohnsmith

There are two parts to this.

Electro-mechanical: yep, the thing should keep going for decades.

BUT: it's a small bit of silicon, with a few electrons per bit in the program. One strike from a stray neutron, an ESD spike or whatever, and the program will glitch. That glitch might land somewhere the program can recover from, or it might land somewhere that causes the micro to 'crash'.

And over the years, the code will glitch at some point.
     


retrolefty

#5
Dec 14, 2013, 02:47 pm Last Edit: Dec 14, 2013, 02:49 pm by retrolefty Reason: 1

Quote
There are two parts to this.

Electro-mechanical: yep, the thing should keep going for decades.

BUT: it's a small bit of silicon, with a few electrons per bit in the program. One strike from a stray neutron, an ESD spike or whatever, and the program will glitch. That glitch might land somewhere the program can recover from, or it might land somewhere that causes the micro to 'crash'.

And over the years, the code will glitch at some point.
   




Interesting... not convinced... but interesting.

Early dynamic memory chips were vulnerable to random cosmic-ray bombardment causing 'soft errors'.

JChristensen


Quote
Interesting... not convinced... but interesting.


It definitely is an issue; ECC memory is not uncommon. I have no idea how much of an issue it is in the microcontroller world, or how it might be mitigated in MCUs. For smaller processors, where there may not be as much incentive to miniaturize, just sticking to older lithography technology might go a long way, physically larger memory cells being less susceptible.
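The ECC idea mentioned above works by storing data with extra parity bits, so that a single flipped bit can be located and corrected on readback. As a toy illustration of the principle in plain C++ (a Hamming(7,4) code; real ECC memory uses wider SECDED codes implemented in hardware, and this is not how any particular MCU does it):

```cpp
#include <cstdint>

// Encode 4 data bits (d3..d0) into a 7-bit Hamming(7,4) codeword.
// Positions are numbered 1..7; parity bits sit at positions 1, 2, 4,
// data bits at positions 3, 5, 6, 7.
uint8_t hammingEncode(uint8_t data) {
    uint8_t d0 = data & 1, d1 = (data >> 1) & 1,
            d2 = (data >> 2) & 1, d3 = (data >> 3) & 1;
    uint8_t p1 = d0 ^ d1 ^ d3;  // parity over positions 3, 5, 7
    uint8_t p2 = d0 ^ d2 ^ d3;  // parity over positions 3, 6, 7
    uint8_t p4 = d1 ^ d2 ^ d3;  // parity over positions 5, 6, 7
    return (p1 << 0) | (p2 << 1) | (d0 << 2) |
           (p4 << 3) | (d1 << 4) | (d2 << 5) | (d3 << 6);
}

// Correct at most one flipped bit and return the 4 data bits.
uint8_t hammingDecode(uint8_t code) {
    // Syndrome = XOR of the 1-based positions of all set bits:
    // 0 for a valid codeword, otherwise the position of the bad bit.
    uint8_t syndrome = 0;
    for (int pos = 1; pos <= 7; ++pos)
        if ((code >> (pos - 1)) & 1) syndrome ^= pos;
    if (syndrome) code ^= 1 << (syndrome - 1);  // flip the bad bit back
    return ((code >> 2) & 1) | (((code >> 4) & 1) << 1) |
           (((code >> 5) & 1) << 2) | (((code >> 6) & 1) << 3);
}
```

Any single-bit upset in the 7-bit word (whatever caused it) decodes back to the original 4 data bits; two simultaneous flips defeat this simple code, which is why real ECC adds a further parity bit to at least detect double errors.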

retrolefty



Quote
Interesting... not convinced... but interesting.

It definitely is an issue; ECC memory is not uncommon. I have no idea how much of an issue it is in the microcontroller world, or how it might be mitigated in MCUs. For smaller processors, where there may not be as much incentive to miniaturize, just sticking to older lithography technology might go a long way, physically larger memory cells being less susceptible.


I suspect it's not an issue at all with most modern microcontroller chips, as I think all their RAM is static and can run at any clock speed the chip is rated for, whereas dynamic RAM had to be 'refreshed' (usually requiring only a read) on a banked basis on a fixed time schedule.

I recall the old Z-80 chip had internal support for automatically refreshing any attached dynamic memory. It would 'cycle-steal' memory cycles to cause a read of the next 'row' address needed, and the address counter would then just increment until rolling over to start again. I forget how much overhead this took out of a running program (1%... 10%?), but it was the reason many used only the more expensive 'static RAM' chips over the cheaper and denser dynamic memory chips.

I seem to recall the original Apple computer, which utilized dynamic RAM, used an already-defined video line or frame counter to perform memory refreshing, thus requiring no additional hardware support.


Lefty

Grumpy_Mike


Quote
It's a small bit of silicon, with a few electrons per bit in the program. One strike from a stray neutron, an ESD spike or whatever, and the program will glitch. That glitch might land somewhere the program can recover from, or it might land somewhere that causes the micro to 'crash'.

And over the years, the code will glitch at some point.

No, that is simply wrong. If it were true, then no machine could run for very long.
My first embedded microcontroller project controlled a ham radio repeater. That ran 24/7 for 23 years before it was replaced, and even then it was still working.

drjiohnsmith

It does happen, and it is a problem.

How often has one had to reset a computer, or seen your TV go 'silly' or the washing machine go funny? I bet everyone who has a few micros around the house has had a few funnies over the years.

If it does not matter to your design, then you're OK, but random non-programmed behaviour cannot be ignored.

If you decide a random crash is not a problem, then that's a good solution; most Arduinos, I would suggest, are like this: if it goes wrong, the user cycles the power.

Ways to mitigate include:
   power-cycle/reset the processor regularly,
   have an external watchdog timer that resets the processor if an error is detected.

There are many other harder and simpler methods which might suit your application, but these two are a good starting point.

As an example, I for one would not have a processor controlling a large motor without there being a limit system in place that kicks in if the Arduino system, for whatever reason, goes wrong.
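The watchdog pattern described above can be sketched in plain C++ as a simulation: a counter that a timer keeps advancing and that healthy application code must clear ("pat") before it expires; if the code hangs, the expiry forces a reset. On a real AVR-based Arduino the same job is done in hardware by the WDT peripheral (`wdt_enable()` / `wdt_reset()` in `<avr/wdt.h>`); the class below only models the behaviour so the logic can be seen and tested on a PC.

```cpp
// Software model of a watchdog timer. Hypothetical names, for illustration.
struct Watchdog {
    int timeout;             // ticks allowed between pats
    int counter = 0;
    bool resetFired = false;
    explicit Watchdog(int t) : timeout(t) {}
    void pat()  { counter = 0; }   // healthy code calls this from its main loop
    void tick() {                  // driven by a timer (an ISR on real hardware)
        if (++counter > timeout) resetFired = true;
    }
};
```

A program that keeps patting the watchdog runs indefinitely; one that hangs stops patting, the counter runs past the timeout, and the watchdog forces a reset, which is exactly the recovery the post recommends.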

   

retrolefty


Quote
It does happen, and it is a problem.

How often has one had to reset a computer, or seen your TV go 'silly' or the washing machine go funny? I bet everyone who has a few micros around the house has had a few funnies over the years.

If it does not matter to your design, then you're OK, but random non-programmed behaviour cannot be ignored.

If you decide a random crash is not a problem, then that's a good solution; most Arduinos, I would suggest, are like this: if it goes wrong, the user cycles the power.

Ways to mitigate include:
   power-cycle/reset the processor regularly,
   have an external watchdog timer that resets the processor if an error is detected.

There are many other harder and simpler methods which might suit your application, but these two are a good starting point.

As an example, I for one would not have a processor controlling a large motor without there being a limit system in place that kicks in if the Arduino system, for whatever reason, goes wrong.


A random crash is just that, a random undiagnosed crash. Until one has evidence of the root cause of any given 'crash', blaming it on a cosmic-ray collision is just idle speculation. Interestingly, I believe one of the early micros (RCA 1802?) was available in a 'radiation-hardened' package and was used in a lot of satellite applications for operating in such hostile environments.

drjiohnsmith

Agreed.

A random crash is normally the constraint on how long a processor will run.

You just need to decide how you cope with said event happening, no matter what the cause was.

westfw

Most often, a microprocessor "thing" will work continuously until a BUG in YOUR PROGRAM causes it to do something wrong.
There are some common causes: timer variable overflows (using int instead of long can cause problems at ~32 or ~65 seconds; failing to use unsigned long can cause problems after about 25 days; failing to handle unsigned long overflows correctly can cause problems at about 50 days) and "memory leaks" are the big things that will cause a program that seems to be working fine to fail at some later time.

After that, the next most common causes are probably power supply issues, dust, and other environmental factors (corrosion, rats chewing on wires, etc.).

Cosmic rays may be able to cause problems, as may flash memory decay due to other factors, but those are far from the most likely failures!
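The ~50-day overflow trap is usually avoided by doing elapsed-time math with unsigned subtraction rather than comparing absolute times. A minimal sketch in plain C++ (`intervalElapsed` is an illustrative helper, not an Arduino API; `uint32_t` stands in for the `unsigned long` returned by `millis()`):

```cpp
#include <cstdint>

// Wraparound-safe elapsed-time check. Arduino's millis() returns an
// unsigned 32-bit count that rolls over after about 49.7 days. The
// unsigned subtraction below still yields the correct elapsed time
// across the rollover, whereas a comparison such as
//   millis() > last + interval
// misbehaves near the wrap point.
bool intervalElapsed(uint32_t now, uint32_t last, uint32_t interval) {
    return now - last >= interval;  // unsigned arithmetic wraps correctly
}
```

For example, with last = 0xFFFFFF00 (just before the rollover) and now = 0x00000100 (just after it), now - last evaluates to 512, so a 500-tick interval correctly reads as elapsed even though now is numerically smaller than last.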

drjiohnsmith

Totally agree: the actual board is not the limiting factor in how long an Arduino can work continuously.




mauried

Reading the OP's post, the Arduino is going to be controlling a motor and a small fan. Do you think they will last longer than the Arduino will?
