Do you have a link to the stuff you are using to calculate the entropy value?

If it is producing true random numbers, could it be made faster by masking blocks of random numbers together, seeing as each block is truly random?

I am using a Python script I wrote to perform the calculations used by the ent program (http://www.fourmilab.ch/hotbits/statistical_testing/stattest.html) by John Walker. My script (attached) performs the same tests, except for the pi calculation, combines several of the options, and produces a couple of charts of the data. I am not sure where you would obtain blocks of random numbers to mask with those produced by this algorithm, so I can't comment on whether it would be faster. If you need non-cryptographically-secure random numbers that still have useful properties, at faster speeds, the best method is to use this library to re-seed the avr-libc random function whenever it has a value available. Here is a sample sketch to illustrate what I mean:

#include <Entropy.h>

void setup()
{
  Entropy.Initialize();
  randomSeed(Entropy.random());
}

void loop()
{
  // Re-seed whenever a fresh truly random value is ready
  if (Entropy.Available() > 0)
    randomSeed(Entropy.random());

  // Use the normal random function for getting random numbers
  // ie. some_value = random();
}
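If you want to sanity-check the numbers yourself, the chi-square portion of what my script computes amounts to the following. This is only a minimal Python sketch of the byte-frequency test ent documents; my actual script also computes entropy, the arithmetic mean, and serial correlation.

```python
from collections import Counter

def chi_square(data: bytes) -> float:
    """Chi-square statistic of the observed byte frequencies against a
    uniform distribution over the 256 possible byte values (255 degrees
    of freedom), as reported by ent."""
    expected = len(data) / 256.0
    counts = Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

# A perfectly uniform sample scores 0; genuinely random data should
# land near the mean of the 255-degrees-of-freedom distribution.
print(chi_square(bytes(range(256)) * 10))   # 0.0
```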

I have attached my capture; it is still a binary file. I still have to convert it to strings for you, but you can grab it now if you want.

I set up my Mega to gather 7 KB blocks at a time before sending them to the PC.
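Converting a binary capture like that to strings is straightforward; something along these lines would do it (the helper name, the bytes-per-line count, and the commented-out file names are just illustration, not from the attached script):

```python
def to_hex_lines(data: bytes, per_line: int = 32):
    """Render raw bytes as lines of two-digit hex values."""
    return [data[i:i + per_line].hex() for i in range(0, len(data), per_line)]

# Hypothetical file names; substitute the actual capture file:
# with open("capture.bin", "rb") as f:
#     print("\n".join(to_hex_lines(f.read())))

print(to_hex_lines(b"\x00\xff\x10", per_line=2))   # ['00ff', '10']
```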

I am attaching the results of the tests for your data. This is the first sample that shows some concern: the p-value for the chi-square test is only 0.0189. Can you send me the full text on the SMD chip you used for this--the label on the chip? If the mechanism is producing truly random numbers, we should occasionally get samples whose p-values fall outside the normally acceptable range, as George Marsaglia says himself in his DIEHARD series of tests:

#
NOTE: Most of the tests in DIEHARD return a p-value, which
should be uniform on [0,1) if the input file contains truly
independent random bits. Those p-values are obtained by
p=F(X), where F is the assumed distribution of the sample
random variable X---often normal. But that assumed F is just
an asymptotic approximation, for which the fit will be worst
in the tails. Thus you should not be surprised with
occasional p-values near 0 or 1, such as .0012 or .9983.
When a bit stream really FAILS BIG, you will get p's of 0 or
1 to six or more places. By all means, do not, as a
Statistician might, think that a p < .025 or p> .975 means
that the RNG has "failed the test at the .05 level". Such
p's happen among the hundreds that DIEHARD produces, even
with good RNG's. So keep in mind that " p happens".
#
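Marsaglia's point is easy to see with a quick simulation: if p-values from a good generator really are uniform on [0,1), then roughly 5% of them will fall below .025 or above .975 purely by chance. A plain-Python sketch (not part of my test script):

```python
import random

random.seed(1)
trials = 10_000
# Simulate p-values from a perfect RNG: uniform draws on [0, 1).
# Count how many look "suspicious" at the .05 level anyway.
extreme = sum(1 for _ in range(trials)
              if not 0.025 <= random.random() <= 0.975)
print(extreme / trials)   # about 0.05
```

So a single 0.0189 is not, by itself, a failure; what would worry me is p-values pinned at 0 or 1, or a pattern of extreme values across many samples.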