Good afternoon everyone. I'm planning a project that basically has a 9 V motor, a 9 V computer-style fan, and a temperature sensor. It will have two setups: either the motor and fan on at the same time, or one on while the other is off, with the sensor measuring in both cases. So the question is very simple: how long do you think the Arduino will run, in both cases, without breaking? I'm looking for an estimate because I don't have the money to run my Arduino UNO until it burns out and then buy another one. So please, people, how long do you think the Arduino can run continuously, in a safe way, then do the same thing again the next day, over and over?
As long as it is operated within its specified limits, at least years, more likely decades. No different than any other electronic device, and an Arduino is probably simpler than most these days, so fewer things to fail. I'd expect the motor or fan to fail far sooner.
egamez:
So the question is very simple: how long do you think the Arduino will run, in both cases, without breaking?
Thank you so much!!
I too think it can run for decades as long as it's not subjected to miswiring, bad power, or other external causes. The vast majority of actual board/chip failures are most likely human-error situations where a user subjects the board/chip to higher voltages or current flows than it is designed to handle.
At room temperature, the Flash is essentially guaranteed for 100 years...
3.4 Data Retention
Reliability Qualification results show that the projected data retention failure rate is much less than 1 PPM over 20 years at 85°C or 100 years at 25°C.
My guess is that's the most likely component to fail under normal operating conditions.
Electrically / mechanically, yes, the thing should keep going for decades.
BUT:
It's a small bit of silicon, with only a few electrons per bit holding the program state.
One strike from a stray neutron, an ESD spike, or whatever, and the program will glitch.
That glitch might land somewhere the program can recover from,
or it might land somewhere that causes the micro to crash.
And over the years, the code will glitch at some point.
It definitely is an issue; ECC memory is not uncommon. I have no idea how much of an issue it is in the microcontroller world or how it might be mitigated in MCUs. For smaller processors, where there may not be as much incentive to miniaturize, simply sticking with older lithography might go a long way, since physically larger memory cells are less susceptible.
I suspect it's not an issue at all with most modern microcontroller chips, as I think all their RAM is static and can run at any clock speed the chip is rated for, whereas dynamic RAM had to be 'refreshed' (usually only requiring a read) on a banked basis on a fixed time schedule. I recall the old Z-80 chip had internal support for automatically refreshing any attached dynamic memory: it would 'cycle-steal' memory cycles to read the next 'row' address needed, and the address counter would then increment until rolling over to start again. I forget how much overhead this took out of a running program (1%...10%?), but it was the reason many used only the more expensive static RAM chips over the cheaper and denser dynamic memory chips.
I seem to recall the original Apple computer, which used dynamic RAM, reused an already-existing video line or frame counter to perform memory refresh, thus requiring no additional hardware support.
drjiohnsmith:
It's a small bit of silicon, with only a few electrons per bit holding the program state.
One strike from a stray neutron, an ESD spike, or whatever, and the program will glitch.
And over the years, the code will glitch at some point.
No, that is simply wrong. If it were true, then no machine could run for very long.
My first embedded microcontroller project controlled a ham radio repeater. It ran 24/7 for 23 years before it was replaced, and even then it was still working.
How often has one had to reset a computer, or had the TV go 'silly' or the washing machine go funny?
I bet everyone who has a few micros around the house has had a few funnies over the years.
If it does not matter to your design, then you're OK,
but random, non-programmed behaviour cannot be ignored.
If you decide a random crash is not a problem, then that's a good solution; most Arduinos, I would suggest, are like this: if it goes wrong, the user cycles the power.
Ways to mitigate include:
power-reset the processor regularly;
have an external watchdog timer that resets the processor if an error is detected (the AVR's on-chip watchdog is a simpler variant; see the sketch below).
There are many other harder and simpler methods which might suit your application, but these two are a good starting point.
As an example, I for one would not have a processor controlling a large motor without there being a limit system in place that kicks in if the Arduino system goes wrong for whatever reason.
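To make this concrete, here is a minimal sketch of the on-chip variant, assuming an UNO / ATmega328P running the Optiboot bootloader (older bootloaders can get stuck in a reset loop once the watchdog fires); the 2-second timeout is only an example:

#include <avr/wdt.h>

void setup() {
  wdt_disable();          // keep the watchdog off while setting up
  // ... configure pins, motor driver, fan, sensor here ...
  wdt_enable(WDTO_2S);    // reset the board if loop() stalls for ~2 s
}

void loop() {
  wdt_reset();            // "pat" the watchdog once per pass
  // ... normal motor / fan / temperature work here ...
}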
A random crash is just that: a random, undiagnosed crash. Until one has evidence of the root cause of any given 'crash', blaming it on a cosmic-ray collision is just idle speculation. Interestingly, I believe one of the early micros (the RCA 1802?) was available in a radiation-hardened package and was used in a lot of satellite applications, operating in exactly that kind of hostile environment.
Most often, a microprocessor "thing" will work continuously until a BUG in YOUR PROGRAM causes it to do something wrong.
There are some common causes: timer variable overflows (using int instead of long can cause problems at ~32 or ~65 seconds; failing to use unsigned long can cause problems after about 25 days; failing to handle unsigned long overflow correctly can cause problems at about 50 days) and "memory leaks" are the big things that will cause a program that seems to be working fine to fail at some later time.
After that, the next most common causes are probably power supply issues, dust, and other environmental factors (corrosion, rats chewing on wires, etc.).
Cosmic rays may be able to cause problems, as may flash memory decay due to other factors, but those are far from the most likely failures!
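To illustrate the timer-overflow point: the usual cure is to keep all time bookkeeping in unsigned long and compare with subtraction, which stays correct across the ~50-day millis() rollover. A minimal sketch; the one-second interval and the built-in LED are just stand-ins for the real periodic work:

const unsigned long intervalMs = 1000UL;  // placeholder interval
unsigned long lastMs = 0;
bool ledOn = false;

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  unsigned long now = millis();
  if (now - lastMs >= intervalMs) {   // unsigned subtraction is rollover-safe
    lastMs = now;
    ledOn = !ledOn;                   // stand-in for the real periodic work
    digitalWrite(LED_BUILTIN, ledOn);
  }
}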
One thing that has not been mentioned for "always on" systems is that it's a good idea to use the watchdog to reset your system in case of a software crash (yes, it happens, if your system is complex enough for Murphy's law to apply...).
Yes, it is somewhat a paradox of embedded systems that it can be (MUCH) better to crash and restart than to go on operating incorrectly. Arguably, long-duration applications should consider resetting themselves "often" rather than worrying about the possible bugs that might crop up when millis() overflows. After all, a complete reset only takes a couple of seconds.
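One way to act on the "reset yourself often" idea, with the same bootloader caveat as the watchdog sketch earlier in the thread: track uptime with millis() and, at a convenient moment, let the watchdog expire on purpose. The 24-hour threshold is purely illustrative:

#include <avr/wdt.h>

const unsigned long restartAfterMs = 24UL * 60UL * 60UL * 1000UL;  // ~24 h, arbitrary

void setup() {
  // ... normal setup ...
}

void loop() {
  // ... normal work ...
  if (millis() >= restartAfterMs) {   // time for a planned clean restart
    wdt_enable(WDTO_15MS);            // shortest watchdog timeout
    while (true) { }                  // spin until the watchdog resets the MCU
  }
}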
68tjs:
Don't worry about the microcontroller; on an Arduino board the weakest components are the capacitors.
Having followed along with posters' problems for around five years now, I would say the weakest component is the AVR chip's output pins, due to lack of electronics knowledge and experience. Subjecting Arduino pins to voltages below zero or above 5.5 VDC is probably a close second. So the weakest component by far is really the user.
retrolefty:
I would say the weakest component is the AVR chip's output pins, due to lack of electronics knowledge and experience.
If you do not follow the manufacturer's instructions, anything is possible.
Output currents (sink or source) are strictly specified by Atmel.
The current limits quoted by Arduino are misleading -> they correspond to the "Absolute Maximum Ratings".
As for reliability, the rule for a capacitor is not to exceed half the maximum voltage specified by the manufacturer.
I have an UNO R2 with 25-volt capacitors, which is suitable for use between 8 V and 12 V.
Now Arduino uses 16-volt capacitors on all its boards, which, in my opinion, makes the boards unreliable over several years.
We can give no estimate without accelerated-aging tests.
Without those tests, the only message that can be delivered is: "we think it can work..., but we cannot rule out that it may fail."