Unformatted write to SD

I did a test and the good news is reading the ADC in Noise Reduction Mode in the middle of an SD block transfer seems to work OK.

I did not write rawAnalogReadWithSleep(), but it does use Noise Reduction Mode.

I added the code between the slashes to the write loop (two bytes are sent for each pass to slightly speed the transfer):

  for (uint16_t i = 0; i < 512; i += 2) {
    while (!(SPSR & (1 << SPIF)));
    //////////////////////////////////////////////////////////////
    int rawAnalogReadWithSleep();  // function declaration (prototype), not a call
    if (i == 300) {
      rawAnalogReadWithSleep();
      Serial.print('.');
    }
    /////////////////////////////////////////////////////////////
    SPDR = buf[i];
    while (!(SPSR & (1 << SPIF)));
    SPDR = buf[i + 1];
  }

I ran this sketch:

#include <SdFat.h>
#include <avr/sleep.h> 
SdFat sd;
SdFile file;

ISR(ADC_vect) { } 

int rawAnalogReadWithSleep() {
  // Generate an interrupt when the conversion is finished
  ADCSRA |= _BV(ADIE);

  // Enable Noise Reduction Sleep Mode
  set_sleep_mode(SLEEP_MODE_ADC);
  sleep_enable();

  // Any interrupt will wake the processor including the millis interrupt so we have to...
  // Loop until the conversion is finished
  do
  {
    // The following line of code is only important on the second pass.  For the first pass it has no effect.
    // Ensure interrupts are enabled before sleeping
    sei();
    // Sleep (MUST be called immediately after sei)
    sleep_cpu();
    // Checking the conversion status has to be done with interrupts disabled to avoid a race condition
    // Disable interrupts so the while below is performed without interruption
    cli();
  }
  // Conversion finished?  If not, loop.
  while( ( (ADCSRA & (1<<ADSC)) != 0 ) );

  // No more sleeping
  sleep_disable();
  // Enable interrupts
  sei();

  // The Arduino core does not expect an interrupt when a conversion completes so turn interrupts off
  ADCSRA &= ~ _BV( ADIE );

  // Return the conversion result
  return( ADC );
} 

void setup() {
  Serial.begin(9600);
  // set up the ADC
  analogRead(0);
  if (!sd.begin()) sd.initErrorHalt();
  if (!file.open("ADC_TEST.TXT", O_RDWR | O_CREAT | O_AT_END)) {
    sd.errorHalt("opening ADC_TEST.TXT for write failed");
  }
  for (uint16_t i = 0; i < 50000; i++) {
    file.println(i);
    if (file.writeError) sd.errorHalt("print");
  }
  file.close();
  Serial.println("Done");
}
void loop() {}

Lots of dots get printed and the file has the correct content. The file has 660 blocks.

The bad news is that I can't find proof on the web that Noise Reduction Mode helps.

I tried various tests comparing analogRead() with rawAnalogReadWithSleep() above. If there is an improvement it is really small. Other factors overwhelm the difference between the two functions. Clean power to the Arduino makes a huge difference.

One curious result is that the two functions return slightly different results. You need to average 1000 measurements to see the difference.

My test setup was very crude so it is not definitive, but until I see proof I won't believe Noise Reduction Mode is worth the pain.

I hope you prove Noise Reduction Mode gives more accuracy.

Edit:
I did some more work, and oversampling and Noise Reduction Mode are extremely frustrating. If I work hard to reduce noise, oversampling will not work, since you need noise for oversampling. I don't trust oversampling with the 10-bit AVR ADC; it's too easy to fool yourself.

If you need more accuracy, an external ADC seems like a far better approach.

Thanks for testing that function. There are a couple of things that appear sub-optimal.

Was the second ADC call a typo? Wouldn't this double read add jitter?

int rawAnalogReadWithSleep();
if (i == 300) {
//? rawAnalogReadWithSleep();
Serial.print('.');
}

do
{  
  sei();
  sleep_cpu();
  cli();
}
 // Conversion finished?  If not, loop.
  while( ( (ADCSRA & (1<<ADSC)) != 0 ) );

Now I'm confused. The interrupt triggers, and the ISR only gets called, when the conversion IS complete. What is he waiting for here, another not-quite-complete ADC conversion?

What appears sub-optimal? I just stuck the lines in the loop to test sleep, not as a test of jitter.

I didn't post the Noise Reduction tests.

Appears your C++ is a bit rusty. The first statement is a function declaration (prototype) so the library will compile. I could have put it anywhere before the call.

  int rawAnalogReadWithSleep();

This is the call:

  rawAnalogReadWithSleep();

The "no-op" ISR is necessary to field the wake-up interrupt.

Most of the time in the loop is spent sleeping. The person that wrote this function is allowing for wake-up by interrupts other than the ADC. If the ADC is not done, the function goes back into ADC Noise Reduction Mode.

"no-op" ISRs are not uncommon. Sometimes they clear a flag or cause other status change. They are very fast since no context needs to be saved. I use one to clear a timer flag in the 100,000 sample per second logger.

I did more testing on the ADC Noise Reduction Mode. I used a high resolution DAC to generate a ramp. The DAC is on a well designed shield on the Arduino I was testing.

I got the noise so low with just analogRead() that I couldn't do oversampling. For a number of DAC steps the 10-bit Arduino ADC always gives the same value. I don't need noise reduction, I need noise injection to make oversampling work.

Appears your C++ is a bit rusty. The first statement is a function declaration (prototype) so the library will compile. I could have put it anywhere before the call.

Of course. More like my eyes are so tired I can't even see straight after all this reading up and poring over all this Arduino pseudo-code and pages of libs full of #ifs and #buts and DOXYGEN garbage just to boil down to one-liner assembler code. It's enough to make my head spin.

Sorry for having thought you could have made a mistake. :wink:

The "no-op" ISR is necessary to field the wake-up interrupt.

Yes, I appreciate that. The Arduino mumbo-jumbo ends up, after 2 or 3 pages of conditionals, converting the empty ISR into an inline reti.

The person that wrote this function is allowing for wake-up by interrupts other than the ADC. If the ADC is not done, the function goes back into ADC Noise Reduction Mode.

Ah OK. I was not thinking outside the simple case we were looking at. Indeed, another ISR could have just run, leaving the CPU active and the ADC conversion incomplete.

Makes sense.

I got the noise so low with just analogRead() that I couldn't do oversampling. For a number of DAC steps the 10-bit Arduino ADC always gives the same value. I don't need noise reduction, I need noise injection to make oversampling work.

Are you concluding that you are not getting any noise in the 10th bit, even without going into idle? It would be great if NR mode was not even needed.

I don't need noise reduction, I need noise injection to make oversampling work.
Wouldn't the internal reference help? Those ref sources are usually noisy enough :slight_smile: Or an external zener diode, without a blocking capacitor.

The tenth bit is the same for 1000 reads with analogRead() for about three out of four steps when I generate a ramp with a 12-bit DAC.

Oversampling just gives the 10-bit values, not extra bits, because for oversampling to work you need:

• The signal-component of interest should not vary significantly during a conversion.
• There should be some noise present in the signal.
• The amplitude of the noise should be at least 1 LSB.

See http://www.atmel.com/Images/doc8003.pdf.
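The no-noise failure mode described in the app note is easy to reproduce in a quick desktop simulation (plain C++, not Arduino; the ideal mid-tread quantizer model, the 2.445 V test level, and the 1024-sample count are my choices, not from the app note):

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Ideal mid-tread 10-bit quantizer with a 5 V reference (a model, not a real AVR).
int quantize10(double volts) {
    long code = std::lround(volts / 5.0 * 1023.0);
    if (code < 0) code = 0;
    if (code > 1023) code = 1023;
    return static_cast<int>(code);
}

// Average 1024 conversions of a DC level, optionally adding Gaussian noise.
double oversample(double volts, double noiseRmsVolts) {
    std::mt19937 rng(42);  // fixed seed so the result is repeatable
    std::normal_distribution<double> noise(0.0, noiseRmsVolts > 0 ? noiseRmsVolts : 1.0);
    double sum = 0;
    for (int i = 0; i < 1024; i++) {
        double n = (noiseRmsVolts > 0) ? noise(rng) : 0.0;
        sum += quantize10(volts + n);
    }
    return sum / 1024.0;
}
```

2.445 V sits at fractional code 500.25. With zero noise every conversion returns 500 and the average stays exactly 500, gaining nothing; with about 1 LSB of Gaussian noise the average lands near 500.25, recovering the fraction.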

If you power the Arduino with USB or a cheap wall wart there is plenty of noise.

I use a real power supply - one with a three prong grounded plug and low ripple/noise. This supply is good for lots of amps so it's real overkill.

The DAC is on the Arduino using a shield Limor Fried gave me as a prototype. She is good with ground planes and filtering. I even put a big, 100K, resistor between the DAC and ADC and still didn't get noise. I did get noise when I added about two feet of wire in addition to the resistor.

I would love to see a good case study that shows when ADC Noise Reduction Mode is needed and helps.

I would love to see a theoretical study that shows how adding noise to your signal improves the S/N ratio! I get the gut feeling that it is violating the second law somewhere along the line :wink:

While dithering can reduce the quantisation error, the premise of the method is that the noise is gaussian. However, 4 or 16 samples will not provide sufficient sampling of the population to ensure that the gaussian distribution is well represented. The sampling error is not accounted for.

Also, arbitrary noise samples like bits of wire or internal interference of an AVR are unlikely to be all that "white".

Introducing a clean cyclic signal seems more legit to me and was the way I was taught to do this.

Of course, at best, this can only reduce the 0.5 LSB of quantisation error. The other 1.5 LSB of accumulated errors (gain , linearity, etc.) will still prevent the result from being of 10b accuracy.

This is why I was cautious about ENOB. It is only a measure of resolution, not accuracy.

The tenth bit is the same for 1000 reads with analogRead() for about three out of four steps when I generate a ramp with a 12-bit DAC.

If I'm interpreting that correctly, you are saying that about 25% of the 1000 readings were different in the LSB. If half of them were different, that would be 1 LSB of noise. So off the top of my head that sounds like 0.5 LSB.

It remains to be determined whether they are both noisy or one is 0.5 LSB better.

What happens is that three of the steps in the 12-bit DAC result in the same value for all reads with the 10-bit AVR ADC.

The fourth step results in two values but this doesn't mean there is 1 LSB of noise. Often most of the readings are one of the values.

There is always some voltage where a tiny change will result in the next code. At this point about half the readings will be n and half will be n+1 even with very low noise.

1 LSB for the 10-bit AVR ADC with a 5 V reference means greater than 5 V/1023 of noise. You really should have more like twice that for oversampling to work. You need to get more than one code for multiple reads at every voltage.
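To put numbers on that arithmetic, here is a trivial helper (plain C++, not Arduino; the function name is mine, and it uses the Vref/1023 convention from the posts above):

```cpp
#include <cassert>
#include <cmath>

// One LSB of a 10-bit AVR ADC, in volts, for a given reference voltage.
// Uses the Vref/1023 convention from the discussion above.
double lsbVolts(double vref) {
    return vref / 1023.0;
}
```

With a 5 V reference one LSB is about 4.9 mV, so by the "more like twice that" rule of thumb you would want roughly 10 mV RMS of noise before oversampling can do anything.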

You really need to read about ADCs, all manufacturers have good app notes on oversampling.

Almost all app notes have charts to illustrate this concept and statements like this.

In this example, the actual voltage falls between two steps of the 12-bit ADC resolution and there is no noise riding on the signal. It is easy to see the problem. With no noise on the signal, the ADC result for each conversion will be the same. Averaging produces no effective gain in resolution.

You really need to read about ADCs, all manufacturers have good app notes on oversampling.

I'm familiar with dithering, as I said, and I've read the AVR app note that you linked to. You suggested I should not rely too heavily on spec sheets. I certainly do not regard manufacturers' app notes as rigorous theoretical works, though they are usually very valuable.

I am questioning the idea of decreasing the S/N of the input signal as a means of improving its measured S/N, especially when the source of that noise is likely to be far from a flat spectrum.

Sure, any noise that you add to a low-noise signal will produce different levels in subsequent samples. Whether it does so in a statistically neutral way or whether it biases each oversampled result in an arbitrary way is what needs to be considered.

Adding truly white noise and grouping hundreds of samples would be valid, since the averaging would effectively reduce the noise component that was added at the same time as removing most of the 0.5LSB quantisation noise.

If the noise is not gaussian or it is insufficiently sampled you cannot expect to remove it effectively by averaging (oversampling). You will be injecting one or two LSB of uncharacterised noise in the hope of reducing 0.5LSB of quantisation.

If your signal is "too clean" it would seem preferable to inject a clean cyclic signal than arbitrary environmental noise of unknown quality that will almost certainly be far from gaussian.

Anyway, we're rather drifting off topic. Hopefully I will have some hardware today so I can start testing using your libs and examples. 8)

What happens is that three of the steps in the 12-bit DAC result in the same value for all reads with the 10-bit AVR ADC.

The fourth step results in two values but this doesn't mean there is 1 LSB of noise. Often most of the readings are one of the values.

Here's an interesting test. How about you post me the two sets of data and I try to identify which is which? 1024 points would be preferable.

The data looks like 10-bit stair-steps with a little fuzz every once in a while. It should look like 12-bit stair-steps. You can't see the fact that the input was from a 12-bit DAC and you can't recover the fact by averaging data.

I'm not going to waste any more of my time.

It's time for you to go back to school and learn what every young EE knows about digital converters. I work with lots of EE students and they know this stuff.

There are plenty of free sources on the web.

There are newer books but this is a great book and it's free http://www.analog.com/library/analogDialogue/archives/39-06/data_conversion_handbook.html.

You're going to have more problems using a multiplexed AVR ADC to do oversampling on multiple channels. So learn a little first.

Good luck.

I'm aware channel swapping will introduce a bunch of other issues. But the channel-swapping application is a lot less exigent than your audio context. Like I said earlier, I'm just interested in learning the limits of this hardware, to establish what it's good for. This project won't be pushing it to its limits.

Thanks for testing N/R mode. I was concerned that not using it would have had a heavier penalty. I'm still rather surprised they would have gone to that trouble if it brings no discernible benefit.

The SD logging tests provide most of what I need and will save a huge amount of time and effort.

I decided to post the DAC/ADC test data. The DAC is an MCP4921 with a 5 V ref: http://ww1.microchip.com/downloads/en/DeviceDoc/22248a.pdf.

I got the noise really low now, as you will see in the attached file. Only about one reading varied in the set of 64 that I took at each DAC value.

Here is the sketch that generated the data

#include <McpDac.h>
void setup() {
  Serial.begin(9600);
  mcpDacInit();
  // step the DAC through codes 2000 to 2023
  for (uint16_t i = 2000; i < 2024; i++) {
    mcpDacSend(i);
    // 64 analogRead() samples at this DAC value
    for (uint16_t j = 0; j < 64; j++) {
      Serial.print(i);
      Serial.write(',');
      Serial.print(j);
      Serial.write(',');
      Serial.println(analogRead(0));
    }
    delay(500);
  }
}
void loop() {}

The sketch loads values from 2000 through 2023 into the DAC. It then does 64 reads with analogRead() for that DAC value.
There are three columns: DAC value, reading #, ADC value.

The Arduino I used has a large offset error so it reads about 3 counts low. This is not uncommon for an AVR ADC. Be sure to calibrate your Arduino http://www.atmel.com/images/doc2559.pdf.

2001 on the DAC should be 500 on the ADC (2001*1023/4095 = 499.9). The ADC reads 497.
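The expected reading works out like this (trivial helper in plain C++; the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Map a 12-bit DAC code to the ideal 10-bit ADC code when both share
// the same 5 V reference: code * 1023 / 4095.
double expectedAdc(unsigned dacCode) {
    return dacCode * 1023.0 / 4095.0;
}
```

expectedAdc(2001) is about 499.9; subtract the roughly 3-count offset of that particular board and you get the observed 497.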


Edit: I have now attached files taken with three Arduinos. Arduinos two and three have the fluctuation property since the DAC/ADC values line up just right.

This does not mean there is more noise with Arduino two and three. If you don't understand, read the app notes, you claim to be an engineer so this should be easy.

dac_adc1.csv (19.3 KB)

dac_adc2.csv (19.3 KB)

dac_adc3.csv (19.3 KB)

I missed this.

I would love to see a theoretical study that shows how adding noise to your signal improves the s/n ratio! I get the gut felling that is violating the second law somewhere on the line

You're funny, wrong but funny. Time to read the ADC theory.

Note that this averaging is possible only if the signal contains perfect equally distributed noise (i.e. if the A/D is perfect and the signal's deviation from an A/D result step lies below the threshold, the conversion result will be as inaccurate as if it had been measured by the low-resolution core A/D and the oversampling benefits will not take effect).

The above means adding noise can improve accuracy with enough oversampling. Too little noise will result in lower resolution.
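The decimation recipe in the AVR app note (doc8003/AVR121) is: to gain n extra bits, accumulate 4^n samples and right-shift the sum by n. A sketch of that scheme, assuming the noise condition above is actually met:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Decimate 4^extraBits raw 10-bit samples into one (10 + extraBits)-bit
// result, per the scheme in Atmel app note AVR121.
uint32_t decimate(const std::vector<uint16_t>& samples, unsigned extraBits) {
    assert(samples.size() == (1u << (2 * extraBits)));  // need exactly 4^n samples
    uint32_t sum = 0;
    for (uint16_t s : samples) sum += s;
    return sum >> extraBits;
}
```

For 12 bits from a 10-bit ADC: 16 samples, shift by 2. Sixteen identical readings of 512 give 2048 (just 512 << 2, no new information); a 12/4 mix of 500s and 501s gives 2001, i.e. 500.25 expressed at 12-bit resolution.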

This is why I never use oversampling, too much can go wrong. Better to use a more accurate ADC.

This is why I was cautious about ENOB. It is only a measure of resolution, not accuracy.

Wrong again.

ENOB specifies the number of bits in the digitized signal above the noise floor; this is accuracy. A 12-bit ADC has 12 bits of resolution but may not be accurate to 12 bits.
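For reference, the standard conversion is ENOB = (SINAD − 1.76 dB) / 6.02 dB (the formula is standard in the ADC literature; the function name is my own):

```cpp
#include <cassert>
#include <cmath>

// Effective number of bits from a measured SINAD (in dB):
// ENOB = (SINAD - 1.76) / 6.02
double enob(double sinadDb) {
    return (sinadDb - 1.76) / 6.02;
}
```

An ideal 10-bit converter has SINAD = 6.02 * 10 + 1.76 = 61.96 dB, so enob(61.96) returns 10; any added noise or distortion lowers SINAD and therefore ENOB.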

Are you really an EE?

Maybe I expect too much. In physics I work with EEs that design ADCs and other IC parts.

I worked on the CERN Atlas experiment that discovered the Higgs Boson. The front end electronics used ASICs (Application Specific Integrated Circuit) designed by CERN engineers. This is necessary for low noise, high speed, and these parts must be Radiation Hard.

I expect EEs to know theory.

ENOB specifies the number of bits in the digitized signal above the noise floor, this is accuracy.

No it's not, it's resolution. Your Arduino with a 3 LSB offset is less accurate. Does it have any fewer ENOBs because of that offset?

This is a quote from Analog Devices (not mine).

ENOB specifies the number of bits in the digitized signal above the noise floor, this is accuracy.

Here is another definition, not mine:

The effective number of bits (ENOB) is a way of quantifying the quality of an analog to digital conversion. A higher ENOB means that voltage levels recorded in an analog to digital conversion are more accurate.

Here is another from Analog Devices.

Resolution. An N-bit binary converter has N digital data inputs (DAC) or N digital data outputs (ADC). A converter that satisfies this criterion is said to have a resolution of N bits.

Resolution has nothing to do with accuracy. It's the number of bits an ADC outputs or the number of bits of input to a DAC.

DC accuracy involves these (From Analog Devices), not resolution.

The static absolute accuracy of a DAC can be described in terms of three fundamental kinds of errors: offset errors, gain errors, and integral nonlinearity.

Another quote from Analog Devices.

The traditional static specifications such as differential nonlinearity (DNL) and integral nonlinearity (INL) are most certainly reflected in the ac performance.

That's why ENOB is a better measure for quality of signal measurements. It combines all factors regarding accuracy of the measurement.

For simple DC measurements, non-linearity is the big deal. Offset errors are easy to calibrate. Many ADCs like the MCP3421 do it automatically.

From Microchip for the MCP3421

Self Calibration of Internal Offset and Gain Per Each Conversion.

It's true offset errors may not affect AC performance. You must compensate for the AVR offset errors for DC measurements.

This is such basic stuff for an EE. Are you really an EE? If so when did you go to school?

This a quote from Analog Devices...

So having started by telling me to "ignore" their data sheets, you now wish to use them in an appeal to authority argument.

A higher ENOB means that voltage levels recorded in an analog to digital conversion are more accurate.

You seem to be reading something more into that than is stated. Yes, better resolution can contribute to better accuracy; that does not mean resolution IS accuracy, which is what you seem to be implying it says.

Resolution has nothing to do with accuracy.

Wrong again, as you would say. Resolution is one factor that contributes to (or limits) accuracy.

You may increase resolution by dithering and oversampling, thereby improving accuracy within the limit of the quantisation error you had previously. If you have 0.5 LSB of quantisation error you cannot improve the quantisation error by more than 0.5 LSB. Neither will oversampling correct non-linearity or gain errors, which make up the major part of the 1.5 LSB of errors degrading the basic 10-bit sample.

You may also remove gaussian noise by oversampling. But if your noise is not gaussian it will NOT be correctly removed. There will remain a bias in the result, the sign and magnitude of which you will not be able to know. If you do not know the nature of the noise you are injecting by waving a piece of wire around, you cannot know what will be left after oversampling. You will therefore have degraded the S/N of the result (notwithstanding the limited improvement gained by dithering).
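That bias is easy to demonstrate with a one-sided noise source in simulation (plain C++, my own toy model; a uniform 0 to 2 LSB noise term stands in for non-gaussian pickup from a bit of wire):

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Average 1024 quantized readings of a DC level with one-sided
// (uniform, 0..2 LSB) injected noise -- a crude stand-in for
// non-gaussian environmental pickup. 10-bit ADC, 5 V reference.
double biasedAverage(double volts) {
    const double lsb = 5.0 / 1023.0;
    std::mt19937 rng(7);  // fixed seed for repeatability
    std::uniform_real_distribution<double> noise(0.0, 2.0 * lsb);
    double sum = 0;
    for (int i = 0; i < 1024; i++) {
        long code = std::lround((volts + noise(rng)) / lsb);
        if (code < 0) code = 0;
        if (code > 1023) code = 1023;
        sum += code;
    }
    return sum / 1024.0;
}
```

For 2.445 V (true code 500.25) the average comes out near 501.25, not 500.25: the 1 LSB mean of the injected noise survives the averaging intact, as a bias the measurement cannot distinguish from signal.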

That is why I suggested that, since you seem to have got a very clean signal, a suitable cyclic perturbation signal would be better than arbitrary, uncharacterised noise.

This is so basic, I don't see why I am having to lay this out for a third time.

If you see something wrong with my argument I'd be more impressed if you could say where it was wrong rather than pretending app. notes are some kind of text book on methodology or saying "bah! go back to school" and questioning my credentials.

I deliberately left that last comment short. Yet you ignore it. I'll ask again...

ENOB specifies the number of bits in the digitized signal above the noise floor, this is accuracy.

Your Arduino with a 3 LSB offset is less accurate. Does it have any fewer ENOBs because of that offset?

The quotes are by Walt Kester; he is the recognized authority on converters. His book is the standard reference: http://www.amazon.com/Data-Conversion-Handbook-Analog-Devices/dp/0750678410.

Actually the 3 LSB offset will reduce ENOB. The reason is that ENOB is based on how well an ADC digitizes a full-scale sine wave. The 3 LSB offset error will distort the wave for low values, returning zero when the voltage is not zero.

That's why ENOB is so useful. Almost any fault in an ADC will lower ENOB.

So I will stick with people like Walt Kester for information on ADCs, not your guesses.

That is why I suggested that, since you seem to have got a very clean signal, a suitable cyclic perturbation signal would be better than arbitrary, uncharacterised noise.

You don't listen. I said I don't do oversampling, I get a better converter. I never suggested a way to inject noise for oversampling. I just said oversampling won't work because the noise levels are so low.

I played with resistors and pickup by longer wires between the source and ADC to see how sensitive it was.

Adding a cyclic perturbation signal seems like a really bad idea. You should just spend $5 for a better ADC.

This is so basic, I don't see why I am having to lay this out for a third time.

I ignore you because what you are saying is at odds with recognized authorities.

You don't even know the definition of resolution for an ADC. Show me a link to a definition where it's anything other than the number of bits output by the converter.

I have spent my career at the finest science labs in the world. Some of the world's best analog and mixed signal engineers are in these labs. As a scientist I depended on them and learned from them.

Who are you? What is the basis of your authority? You never answered whether you are really an EE.