I'm working on a project which is a data-logger. It aggregates values and every 5 minutes produces a CSV entry, which I was going to store, null-terminated, in an I2C EEPROM. Each CSV entry is between 50 and 67 chars long. Periodically I was going to have it attempt a WiFi link, and when successful it will transfer the data to a server.
To keep track I was storing the address that the next entry will start at, at address 0. That way when returning from reset the queue was always in a known state.
After reading some posts on wear-levelling I realised that rewriting this address every time an entry was added, or whenever the queue was cleared, could be setting me up to wear out this cell early. The spec on the AT24C256 says it's good for 1 million write cycles, or about 9 years of punishment in this application. The first 70 or so addresses (entry #1 in the queue) would also get similar wear.
I'm not certain I need to be considering wear-levelling at all granted that's a long time, but in the experience of those who've done this before does EEPROM break down sooner? And if so, is there any (hopefully simple) algorithm I can use for moving both the index and the queue about the EEPROM to extend the life?
I was considering for example using alternate ends of the EEPROM address space and growing the queue backwards from the end half the time.
Don't be alarmed, no puppies get harmed if I lose some data, but I'd be very interested to hear your thoughts, thanks ! Geoff
It would be better if the length were a fixed 64 bytes, as it would then fit in a whole page. Can you compact the data a bit? If not, can you post a sample of how your data is built up? Maybe we'll see some ways.
32K EEPROM / 64 = 512 entries x 5 minutes = 2560 minutes > 42 hours of data before overwriting starts
Note that you can use up to eight 32K EEPROMs, which can hold 2 weeks of data at that rate (given that the CSV is <= 64 bytes)
To keep track I was storing the address that the next entry will start at, at address 0. That way when returning from reset the queue was always in a known state
The problem will be that this entry is written fairly often!
You already gave a hint for how to handle the wear: use multiple indexes. And yes, as EEPROMs are written in 32-byte pages (or multiples), you have to reuse pages.
The trick is to create multiple index slots, e.g. use 16 pages to write the start and end address of the new data. When you need to update the index, read all 16 slots and overwrite the one holding the smallest value. That slot now holds the highest value, and a different one becomes the lowest, so you have created a circular buffer of 16 pages, increasing the endurance by a factor of 16.
Rob that sounds like a brilliant plan. Thanks so much for the extensive explanation. It's early days yet so I'll do my best to massage the entries into 64 bytes as per your advice. Since the queue is FIFO by nature, the start of it will move down the EEPROM and eventually loop back to the low addresses, so my initial thought that the first entry location would also get a lot of wear isn't accurate.
When the last entry in the queue wraps to before the first one on the EEPROM, I won't be able to simply overwrite the smallest value in the index, so I'll have to give that some additional thought, but I think I'm on track now.
Another strategy, with no admin in EEPROM, that is also interesting:
1 - Fill the EEPROM with a defined pattern (0xAA) so you can see how far it is filled and find the first free entry.
2 - Keep an up-to-date pointer to this free address in RAM as admin.
3 - When a new entry is made, this pointer in RAM is updated to the next free address.
This is your main modus operandi.
4 - When the WiFi connection is made, all records are sent to the server, and the entries are overwritten with 0xAA except for the last one *.
==> There is always one entry in EEPROM.
Note: to be robust you must { read, send, overwrite } per record.
5 - When the system resets/reboots, scan the (whole) EEPROM to find the last entry made and update the pointer in RAM; then you can start writing new entries again.
The last entry is not erased from EEPROM to serve as an anchor for new entries. The server should be able to recognize it as an existing entry.
This implies that every EEPROM address is written twice as often as in the previous scheme, but the write operations are distributed 100% evenly, so the wear is spread evenly. On the other hand, it takes some time to erase entries and to sync the admin after a reset. Also, you must take care that the address pointer wraps around in time, and you need some EEPROM-full detection (read before you write, or keep two pointers, start/end, in RAM).
Actually since all entries are time/date stamped the API on the web server ignores duplicates so resending the anchor entry when the queue is empty is not a bad thing. This version also more effectively spreads the wear.
I'm still giving the first one more thought though, as it should be faster and avoids the unnecessary transfer. Most importantly, with the first option I should be able to adapt it for more general use as a text file on the EEPROM: store strings in the queue once and implement functions to iterate through the data without fear that any of it could be records that should have been deleted...
Thanks for your help. I'll post results as this develops.
robtillaart:
And yes as EEPROMS are written in 32bytes pages (or multiple) you have to reuse pages
The trick is to create multiple index slots, e.g. use 16 pages to write the start and end address of the new data.
For clarification, will I get the same levelling impact if I use sequential bytes for this rather than a page per index entry?
This description indicates I'd be using a page to store a single index value (ie 32 rather than 2 bytes). Am I right in thinking you meant 16x 2 byte entries meaning each index consumes just one 32 byte page?
Actually since all entries are time/date stamped the API on the web server ignores duplicates so resending the anchor entry when the queue is empty is not a bad thing. This version also more effectively spreads the wear.
Are you storing the date/time with each sample? If so, you could just scan the EEPROM for the newest date and then overwrite the next entry. You could then loop through the EEPROM with only one erase/write per data save.
This description indicates I'd be using a page to store a single index value (ie 32 rather than 2 bytes).
Writing a byte to an EEPROM works as follows:
read the appropriate page of 32 bytes,
change the appropriate byte within the page,
write the whole page back.
So from the point of view of wear it does not matter whether you write one byte or 32 bytes to a page.
There was once a wrapper proposed around the byte writing function that looked like this:
Now that I understand a write always impacts a whole page regardless of whether I write a byte or a page, I agree: the first method, with the rotating indexes for start and end of queue, would in my view be too wasteful of the available EEPROM space, using 1024 bytes (if my maths is right, 32 pages at 32 bytes) even when using 64k EEPROMs. So the 2nd way, modified as per Jason's idea, is what I intend to use. I've run a simulation of the data logger and the CSV records range between 38 and 49 bytes. If I allow these to start mid-page rather than on page boundaries I'll still get wear-levelling benefits by spacing them over the whole ROM as we're discussing, but I'll be able to store a minimum of 4.5 days of logging.
I've also looked at storing the data as int & long data points rather than CSV text, and that compresses it further, down to a fixed length of 23 bytes including a null terminator. If I use the idea of not forcing these to start on a page boundary I'll fit over 9 and a half days into the queue.
I'm hoping I've understood you correctly, and therefore the above is workable.
Thanks to you both for your thoughts and suggestions, Geoff
I was browsing Atmel's app notes the other day, and noticed
AVR101: High Endurance EEPROM Storage
(5 pages, revision A, updated 9/02)
This Application Note describes how to make safe, high endurance, parameter storage in EEPROM, insuring no wear-out of the memory.
Haven't read it, not sure if there's anything in it that hasn't already been discussed here, so FWIW.
Haven't read it, not sure if there's anything in it that hasn't already been discussed here, so FWIW.
Thanks Jack, I certainly had not seen that. At a high level this illustrates the first method outlined above, though here we'd need two status buffers, one each for the start and end of the list of stored data.
Just a number of points that have been overlooked:
The figures for R/W endurance are only guesstimates.
Bits can get randomly flipped as the page ages.
Zap the whole EEPROM a couple of times and check it when the system is configured; mark any 'bad' pages by back-filling them with a known value, and don't re-use them.
It depends how critical the data is and if you can tolerate minor corruption, but I would recommend some sort of page based checksum and only write 'large' sections of data, since writing part of a page ages the whole page.
Also, if data integrity is critical then you are going to need some sort of page-exclusion algorithm for when a page goes bad; just because the manufacturer says it's 100k cycles, who exactly is going to record and check it?
Whilst the Atmel app note is a start, they failed to mention checksumming and 'magic' values.