Random numbers based on the temperature sensor?

I noticed that the TrueRandom library takes over analog pin 0 (apparently including turning it briefly into an output to make more noise, IIRC) and also has some issues with failing some statistical tests (or has that been fixed?)

Since I have some ideas that require real random UUIDs, and possibly the use of analog channel 0, I thought I'd try to fix the problems with TrueRandom. I use the temperature sensor (tested on a 32u4; the code is there for the 328 but I don't have one on hand to test) and a simple XOR-into-the-circulating-buffer algorithm with a configurable strength parameter, driven by an (almost certainly bogus) entropy estimator.
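To make the idea concrete, here's a simplified host-side sketch in plain C of what an XOR-into-the-circulating-buffer loop with a strength parameter might look like. All the names (`EntropyPool`, `pool_mix`, `pool_byte`, `stub_sensor`) are hypothetical, not the actual library code, and the estimator shown is the crude "count samples that changed" kind, just to illustrate the control flow:

```c
#include <stdint.h>
#include <string.h>

#define POOL_SIZE 16

/* Hypothetical pool: raw sensor bytes are XORed into a circulating
 * buffer; "strength" controls how much estimated entropy must be
 * folded in before a byte is released. */
typedef struct {
    uint8_t pool[POOL_SIZE];  /* circulating buffer */
    uint8_t pos;              /* next byte to mix into */
} EntropyPool;

/* Crude entropy estimate: a sample earns credit only if it differs
 * from the previous one.  (Almost certainly bogus as a real
 * estimator, as admitted above -- shown only for the control flow.) */
static int earned_credit(uint8_t prev, uint8_t cur) { return prev != cur; }

/* Fold one raw sample into the pool with a rotate-left-and-XOR step. */
static void pool_mix(EntropyPool *p, uint8_t sample) {
    p->pool[p->pos] ^= (uint8_t)((sample << 1) | (sample >> 7));
    p->pos = (uint8_t)((p->pos + 1) % POOL_SIZE);
}

/* Produce one output byte: keep folding samples until at least
 * `strength` of them looked noisy to the estimator. */
uint8_t pool_byte(EntropyPool *p, uint8_t (*read_sensor)(void), int strength) {
    int credit = 0;
    uint8_t prev = read_sensor();
    while (credit < strength) {
        uint8_t cur = read_sensor();
        pool_mix(p, cur);
        credit += earned_credit(prev, cur);
        prev = cur;
    }
    return p->pool[(p->pos + POOL_SIZE - 1) % POOL_SIZE];  /* last byte mixed */
}

/* Stand-in sensor for host testing: a counter, so every sample differs. */
static uint8_t stub_sensor(void) {
    static uint8_t n = 0;
    return n++;
}
```

On the real chip `read_sensor` would be the temperature-channel ADC read; the stub is only there so the skeleton runs on a PC.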

The results seem to pass all the tests (ent and rng-tools' rngtest), and my data rate is 100 bytes per second at strength=24 and 500 bytes per second at strength=8, both of which pass the tests on my Arduino.

Strength 4 fails miserably, though.
I put my code here; is there any chance that someone with more knowledge in this area than me could take a look?
The code may or may not run, GitHub still confuses me at times, but the writeup is there.

How many bytes per second do you need? What if you built an interrupt service routine (set up using one of the timers; a watchdog reset wouldn't be fast enough if you wanted 1000 samples per second, but if you were only going for once a second then the watchdog would work too) to increment a variable every few clock cycles? At 16 MHz a single byte would overflow many, many times in any given few milliseconds, so it would be nearly impossible to tell where the variable will land between 0 and 255.

wes000000:
How many bytes per second do you need? What if you built an interrupt service routine (maybe using the watchdog timer in interrupt mode) to increment a variable every few clock cycles? At 16 MHz a single byte would overflow many, many times in any given few milliseconds, so it would be nearly impossible to tell where the variable will land between 0 and 255.

That's a really great idea actually.

freshRand still seems to have a bit of non-randomness that shows up only after running for hours and hours.
I'm not sure if it's the whitener, something about the temperature, or the estimator. The readme file has been edited accordingly.

I amended my previous post; the watchdog wouldn't be fast enough if you wanted 1000 bytes per second, but a timer interrupt could be used.

I actually may work on some code this weekend and see if I can get it working reliably.

I think the delay is too short after setting the MUX. Also, the first ADC value after setting the MUX might be wrong. Try throwing away the first ADC value and running another conversion.
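The discard-the-first-conversion suggestion can be sketched as a tiny helper. This is host-testable C with a fake `analogRead()` standing in for the hardware (both names here are hypothetical); the fake models an ADC whose first conversion after a MUX change is stale:

```c
#include <stdint.h>

/* Fake analogRead() for host testing: models hardware where the first
 * conversion after switching ADMUX returns a stale value. */
static int first_after_mux = 1;
static int fake_analog_read(uint8_t channel) {
    (void)channel;
    if (first_after_mux) {
        first_after_mux = 0;
        return 1023;  /* bogus first conversion */
    }
    return 512;       /* settled reading */
}

/* Convert twice and throw away the first result. */
int analog_read_settled(uint8_t channel) {
    fake_analog_read(channel);         /* discard first conversion */
    return fake_analog_read(channel);  /* keep the second */
}
```

In an actual sketch you'd call the real `analogRead()` twice the same way after changing channels.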

You won't be getting the randomness from the temperature sensor, but from USB power noise, I think.

Certainly an ATmega running off a battery can show rock-steady ADC outputs, whereas off a DC-DC converter there is LSB noise in the ADC.

http://code.google.com/p/avr-hardware-random-number-generation/

MarkT:
Certainly an ATmega running off a battery can show rock-steady ADC outputs, whereas off a DC-DC converter there is LSB noise in the ADC.

That's a good point. In theory the entropy estimator would catch it and it would just take a crazy amount of time per byte, but I should test that theory.

@Caltoa
I will try discarding the first value. I started out with the PIC line, so I don't know much about the low-level details of the AVR.

The watchdog is starting to sound like the better option here; the tests on Google Code all pass quite nicely. The only concern would be if you actually wanted to use the watchdog for its intended purpose. What's the general opinion there? It seems like a bad idea to use the WDT for fault detection and normal service in the same program.

EternityForest:
I noticed that the TrueRandom library takes over analog pin 0 (apparently including turning it briefly into an output to make more noise, IIRC) and also has some issues with failing some statistical tests (or has that been fixed?)

From what I've read the basic technique is sound. But the technique requires a "large" capacitance relative to the processor's clock speed. The capacitance of the AVR analog-to-digital sample-and-hold capacitor is simply too low for the technique to work. Adding an external capacitor and resistor overcomes the problem (but adds two external components).

I also recall there is a bug in the code.

https://web.archive.org/web/20090205182836/http://home.comcast.net/~orb/index.html

EternityForest:
I use the temperature sensor...

In my experimenting that does not work well. Simply running more current through the processor (e.g. lighting an LED) made the internal temperature sensor readings much more stable.

On some processors (like the ATtiny85) the readings were always even.

If the air around the processor is still, the readings were very stable.

EternityForest:
What's the general opinion there?

You are on the lunatic fringe of what people do with AVR processors. There is no general opinion.

It seems like a bad idea to use the WDT for fault detecting and normal service in the same program.

Then don't do it. If you need both features, either use an external circuit for generating entropy or an external watchdog.

Just wanted to update this thread: I have done quite a bit of work on a random number generator library of my own and have documentation, ready-built Arduino libraries, code examples, and images for beginners here: http://electronicsbug.wordpress.com/arduino-libraries/

Wow! That's a really cool library.

It seems that using the timing drift is the way to go for anyone who doesn't need the timers.
I've been working on my library too, and it seems to pass all the tests except for brief periods when a few numbers are more likely; these periods usually last about a minute, occur a few times an hour or less, and are very difficult to reproduce.
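One quick host-side way to hunt for that kind of transient bias (the helper name `max_byte_count` is made up for this sketch) is to slide a window over the generator's output and watch the most common byte's frequency; for uniform data of length n each value should get roughly n/256 hits, so a maximum several times that flags a suspect window:

```c
#include <stdint.h>
#include <stddef.h>

/* Return the count of the most frequent byte value in buf[0..n-1].
 * A value far above n/256 suggests a window where a few numbers
 * are more likely than they should be. */
size_t max_byte_count(const uint8_t *buf, size_t n) {
    size_t counts[256] = {0};
    size_t best = 0;
    for (size_t i = 0; i < n; i++) {
        counts[buf[i]]++;
        if (counts[buf[i]] > best)
            best = counts[buf[i]];
    }
    return best;
}
```

Running this over logged output in, say, one-minute windows would localize when the bias appears, which might help tell the whitener apart from the sensor.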

What I'm doing right now is adding the micros() value into an uninitialized variable that should hold whatever stack garbage was there before, then XORing a few thousand temperature sensor readings into one rotating byte until the entropy estimation says it's enough.

I'm pretty sure stack garbage, exact timing, and several hundred temperature readings are enough, but I need to do a bunch more testing to make sure the bug is gone.

Or I could do the easy thing and change my code to not use timer2 or the WDT, but what fun is that? :stuck_out_tongue: