Unformatted write to SD

What happens is that three of the steps in the 12-bit DAC result in the same value for all reads with the 10-bit AVR ADC.

The fourth step results in two values but this doesn't mean there is 1 LSB of noise. Often most of the readings are one of the values.

There is always some voltage where a tiny change will result in the next code. At this point about half the readings will be n and half will be n+1 even with very low noise.

1 LSB for the 10-bit AVR ADC with a 5 V reference means more than 5 V/1023 of noise. You really should have more like twice that for oversampling to work. You need to get more than one code over multiple reads at every voltage.

You really need to read about ADCs, all manufacturers have good app notes on oversampling.

Almost all app notes have charts to illustrate this concept and statements like this.

In this example, the actual voltage falls between two steps of the 12-bit ADC resolution and there is no
noise riding on the signal. It is easy to see the problem. With no noise on the signal, the ADC result for
each conversion will be the same. Averaging produces no effective gain in resolution.


I'm familiar with dithering, as I said, and I've read the AVR app note that you linked to. You suggested I should not rely too heavily on spec sheets. I certainly do not regard manufacturers' app notes as rigorous theoretical works, though they are usually very valuable.

I am questioning the idea of decreasing the S/N of the input signal as a means of improving its measured S/N, especially when the source of that noise is likely to be far from flat in spectrum.

Sure, any noise that you add to a low-noise signal will produce different levels in subsequent samples. Whether it does so in a statistically neutral way, or whether it biases each oversampled result in an arbitrary way, is what needs to be considered.

Adding truly white noise and grouping hundreds of samples would be valid, since the averaging would effectively reduce the noise component that was added at the same time as removing most of the 0.5LSB quantisation noise.

If the noise is not Gaussian, or it is insufficiently sampled, you cannot expect to remove it effectively by averaging (oversampling). You will be injecting one or two LSB of uncharacterised noise in the hope of reducing 0.5 LSB of quantisation error.

If your signal is "too clean", it would seem preferable to inject a clean cyclic signal rather than arbitrary environmental noise of unknown quality that will almost certainly be far from Gaussian.

Anyway, we're rather drifting off topic. Hopefully I will have some hardware today so I can start testing using your libs and examples. 8)

What happens is that three of the steps in the 12-bit DAC result in the same value for all reads with the 10-bit AVR ADC.

The fourth step results in two values but this doesn't mean there is 1 LSB of noise. Often most of the readings are one of the values.

Here's an interesting test. How about you post me the two sets of data and I try to identify which is which? 1024 points would be preferable.

The data looks like 10-bit stair-steps with a little fuzz every once in a while. It should look like 12-bit stair-steps. You can't see the fact that the input was from a 12-bit DAC and you can't recover the fact by averaging data.

I'm not going to waste any more of my time.

It's time for you to go back to school and learn what every young EE knows about digital converters. I work with lots of EE students and they know this stuff.

There are plenty of free sources on the web.

There are newer books but this is a great book and it's free http://www.analog.com/library/analogDialogue/archives/39-06/data_conversion_handbook.html.

You're going to have more problems using a multiplexed AVR ADC to do oversampling on multiple channels. So learn a little first.

Good luck.

I'm aware channel swapping will introduce a bunch of other issues. But the channel-swapping application is a lot less exigent than your audio context. Like I said earlier, I'm just interested in learning the limits of this hardware, to establish what it's good for. This project won't be pushing it to its limits.

Thanks for testing N/R mode. I was concerned that not using it would have had a heavier penalty. I'm still rather surprised they would have gone to that trouble if it brings no discernible benefit.

The SD logging tests provide most of what I need and will save a huge amount of time and effort.

I decided to post the DAC/ADC test data. The DAC is an MCP4921 with a 5 V ref. http://ww1.microchip.com/downloads/en/DeviceDoc/22248a.pdf.

I've got the noise really clean now, as you will see in the attached file. Only about one reading varied in the set of 64 that I took at each DAC value.

Here is the sketch that generated the data:

#include <McpDac.h>
void setup() {
  Serial.begin(9600);
  mcpDacInit();
  // Step the DAC through codes 2000 to 2023.
  for (uint16_t i = 2000; i < 2024; i++) {
    mcpDacSend(i);
    // Take 64 ADC readings at each DAC code and print CSV rows.
    for (uint16_t j = 0; j < 64; j++) {
      Serial.print(i);
      Serial.write(',');
      Serial.print(j);
      Serial.write(',');
      Serial.println(analogRead(0));
    }
    delay(500);
  }
}
void loop() {}

The sketch loads values from 2000 through 2023 into the DAC. It then does 64 reads with analogRead() for that DAC value.
There are three columns: DAC value, reading #, ADC value.

The Arduino I used has a large offset error so it reads about 3 counts low. This is not uncommon for an AVR ADC. Be sure to calibrate your Arduino http://www.atmel.com/images/doc2559.pdf.

2001 on the DAC should be 500 on the ADC (2001*1023/4095 = 499.9). The ADC reads 497.

Edit: I have now attached files taken with three Arduinos. Arduinos two and three have the fluctuation property since the DAC/ADC values line up just right.

This does not mean there is more noise with Arduinos two and three. If you don't understand, read the app notes; you claim to be an engineer, so this should be easy.

dac_adc1.csv (19.3 KB)

dac_adc2.csv (19.3 KB)

dac_adc3.csv (19.3 KB)

I missed this.

I would love to see a theoretical study that shows how adding noise to your signal improves the S/N ratio! I get the gut feeling that it's violating the second law somewhere along the line.

You're funny; wrong, but funny. Time to read the ADC theory.

Note that this averaging is possible only if the signal contains perfect equally distributed noise (i.e. if the A/D is perfect and the signal's deviation from an A/D result step lies below the threshold, the conversion result will be as inaccurate as if it had been measured by the low-resolution core A/D and the oversampling benefits will not take effect).

The above means adding noise can improve accuracy with enough oversampling. Too little noise will result in lower resolution.

This is why I never use oversampling, too much can go wrong. Better to use a more accurate ADC.

This is why I was cautious about ENOB. It is only a measure of resolution, not accuracy.

Wrong again.

ENOB specifies the number of bits in the digitized signal above the noise floor, this is accuracy. A 12-bit ADC has 12 bits of resolution but may not be accurate to 12-bits.

Are you really an EE?

Maybe I expect too much. In physics I work with EEs that design ADCs and other IC parts.

I worked on the CERN Atlas experiment that discovered the Higgs Boson. The front end electronics used ASICs (Application Specific Integrated Circuit) designed by CERN engineers. This is necessary for low noise, high speed, and these parts must be Radiation Hard.

I expect EEs to know theory.

ENOB specifies the number of bits in the digitized signal above the noise floor, this is accuracy.

No it's not, it's resolution. Your Arduino with a 3 LSB offset is less accurate. Does it have any fewer ENOBs because of that offset?

This is a quote from Analog Devices (not mine).

ENOB specifies the number of bits in the digitized signal above the noise floor, this is accuracy.

Here is another definition, not mine:

The effective number of bits (ENOB) is a way of quantifying the quality of an analog to digital conversion. A higher ENOB means that voltage levels recorded in an analog to digital conversion are more accurate.

Here is another from Analog Devices.

Resolution. An N-bit binary converter has N digital data inputs (DAC) or N digital data outputs (ADC). A converter that satisfies this criterion is said to have a resolution of N bits.

Resolution has nothing to do with accuracy. It's the number of bits an ADC outputs or the number of bits of input to a DAC.

DC accuracy involves these (From Analog Devices), not resolution.

The static absolute accuracy of a DAC can be described in terms of three fundamental kinds of errors: offset errors, gain errors, and integral nonlinearity.

Another quote from Analog Devices.

The traditional static specifications such as differential nonlinearity (DNL) and integral nonlinearity (INL) are most certainly reflected in the ac performance.

That's why ENOB is a better measure for quality of signal measurements. It combines all factors regarding accuracy of the measurement.

For simple DC measurements, non-linearity is the big deal. Offset errors are easy to calibrate. Many ADCs like the MCP3421 do it automatically.

From Microchip for the MCP3421

Self Calibration of Internal Offset and Gain Per Each Conversion.

It's true offset errors may not affect AC performance. You must compensate for the AVR offset errors for DC measurements.

This is such basic stuff for an EE. Are you really an EE? If so when did you go to school?

This a quote from Analog Devices...

So having started by telling me to "ignore" their data sheets, you now wish to use them in an appeal to authority argument.

A higher ENOB means that voltage levels recorded in an analog to digital conversion are more accurate.

You seem to be reading something more into that than is stated. Yes, better resolution can contribute to better accuracy; that does not mean resolution IS accuracy, which is what you seem to be implying it says.

Resolution has nothing to do with accuracy.

Wrong again, as you would say. Resolution is one factor that contributes to (or limits) accuracy.

You may increase resolution by dithering and oversampling, thereby improving accuracy within the limit of the quantisation error you had previously. If you have 0.5 LSB of quantisation error you cannot improve the result by more than 0.5 LSB. Neither will oversampling correct non-linearity or gain errors, which make up the major part of the 1.5 LSB of error degrading the basic 10-bit sample.

You may also remove Gaussian noise by oversampling. But if your noise is not Gaussian it will NOT be correctly removed. There will remain a bias in the result, the sign and magnitude of which you will not be able to know. If you do not know the nature of the noise you are injecting by waving a piece of wire around, you cannot know what will be left after oversampling. You will therefore have degraded the S/N of the result (notwithstanding the limited improvement gained by dithering).

That is why I suggested that, since you seem to have got a very clean signal, a suitable cyclic perturbation signal would be better than arbitrary, uncharacterised noise.

This is so basic, I don't see why I am having to lay this out for a third time.

If you see something wrong with my argument, I'd be more impressed if you could say where it was wrong, rather than pretending app notes are some kind of textbook on methodology or saying "bah! go back to school" and questioning my credentials.

I deliberately left that last comment short. Yet you ignore it. I'll ask again...

ENOB specifies the number of bits in the digitized signal above the noise floor, this is accuracy.

Your Arduino with a 3 LSB offset is less accurate. Does it have any fewer ENOBs because of that offset?

The quotes are by Walt Kester; he is the recognized authority on converters. His book is the standard reference http://www.amazon.com/Data-Conversion-Handbook-Analog-Devices/dp/0750678410.

Actually, the 3 LSB offset will reduce ENOB. The reason is that ENOB is based on how well an ADC digitizes a full-scale sine wave. The 3 LSB offset error will distort the wave for low values, returning zero when the voltage is not zero.

That's why ENOB is so useful. Almost any fault in an ADC will lower ENOB.

So I will stick with people like Walter Kester for information on ADCs, not your guesses.

That is why I suggested that, since you seem to have got a very clean signal, a suitable cyclic perturbation signal would be better than arbitrary, uncharacterised noise.

You don't listen. I said I don't do oversampling; I get a better converter. I never suggested a way to inject noise for oversampling. I just said oversampling won't work because the noise levels are so low.

I played with resistors and pickup by longer wires between the source and ADC to see how sensitive it was.

Adding a cyclic perturbation signal seems like a really bad idea. You should just spend $5 for a better ADC.

This is so basic, I don't see why I am having to lay this out for a third time.

I ignore you because what you are saying is at odds with recognized authorities.

You don't even know the definition of resolution for an ADC. Show me a link to a definition where it's anything other than the number of bits output by the converter.

I have spent my career at the finest science labs in the world. Some of the world's best analog and mixed signal engineers are in these labs. As a scientist I depended on them and learned from them.

Who are you? What is the basis of your authority? You never answered whether you are really an EE.

Actually, the 3 LSB offset will reduce ENOB. The reason is that ENOB is based on how well an ADC digitizes a full-scale sine wave. The 3 LSB offset error will distort the wave for low values, returning zero when the voltage is not zero.

But only if you are sampling within 3 LSB of the extremes of the dynamic range! This is clearly a contrived response that avoids admitting you were wrong on the basic question. Of course that is the reason you avoided it the first time I asked.

If that is your level of argument we are clearly not going to get any further on the technical points so I see no point in further discussion.

I thank you again for contributing the code and the wealth of useful information you were able to provide about SD cards earlier in the thread.

You may wish to review what ATLAS says about whether they have discovered the Higgs Boson.

http://www.atlas.ch/news/2012/latest-results-from-higgs-search.html

" In the weeks and months ahead, ATLAS will better measure these properties, enabling a clearer picture to emerge about whether this particle is the Higgs Boson, or the first of a larger family of such particles, or something else entirely."

Best wishes to you.

But only if you are sampling within 3 LSB of the extremes of the dynamic range!

I don't think you meant that. Here is the definition of dynamic range for an ADC:

Dynamic Range
Typically expressed in dB, dynamic range is defined as the range between the noise floor of a device and its specified maximum output level. An ADC's dynamic range is the range of signal amplitudes which the ADC can resolve; an ADC with a dynamic range of 60dB can resolve signal amplitudes from x to 1000x. Dynamic range is important in communication applications, where signal strengths vary dramatically. If the signal is too large, it over-ranges the ADC input. If the signal is too small, it gets lost in the converter's quantization noise.

I think you meant full scale of the ADC. Here is the definition of ENOB:

Effective Number Of Bits (ENOB)
ENOB specifies the dynamic performance of an ADC at a specific input frequency and sampling rate. An ideal ADC's error consists only of quantization noise. As the input frequency increases, the overall noise (particularly in the distortion components) also increases, thereby reducing the ENOB and SINAD. (See 'Signal-to-Noise and Distortion Ratio (SINAD).') ENOB for a full-scale, sinusoidal input waveform is computed from:

ENOB = (SINAD -1.76)/6.02

Note full-scale. If you use a smaller signal, you would get the wrong (smaller) answer.

Want to bet on the Higgs? I may be wrong; not all properties have been verified. On the other hand, not all group information is public, so you might want to think before you bet. The correct decays have been seen, but now the equivalent of oversampling is happening to be totally sure. I hope you read the latest paper, just submitted to Physics Letters B, when it's published.

You need to lighten up a bit and so do I. Here's How to Lighten Up: 15 Steps (with Pictures) - wikiHow. This is a summary:

  1. Stop assuming you know everything. Nobody knows everything.

  2. Stop exaggerating. Exaggerating about your abilities, qualifications, knowledge, hobbies etc. is soon tiresome.

  3. Let go of things. It's OK to lose an argument; it's OK to make mistakes.

  4. Laugh.

  5. Delegate.

  6. Stop being so rules focused.

You're welcome for the code and any information you can use. I will try to lighten up also.

You need to lighten up a bit and so do I. Here's How to Lighten Up: 15 Steps (with Pictures) - wikiHow. This is a summary:

Good idea, nice summary.

I'll keep an eye out for news on the Higgs. I'm sure all public statements are being rigorously cautious. We don't want another round like the superluminal neutrinos :wink:

Finally got my SD hardware from Holland; it took 13 days in the post!!

So now I can start some real work.

I also found out that the standard analogRead() is fairly inefficient with 16-bit reads:

It also has what seems like a spurious 1000 µs delay!

// ### WTF???
// without a delay, we seem to read from the wrong channel
//	delay(1);
...

//	low  = ADCL;
//	high = ADCH;
//	return (high << 8) | low;
//	return (ADCL | ADCH << 8);
	return ADC;  // shrinks 8 ops to 3 ops!  compiler does the correct read order

I added a similar mod to your test code AnalogIsrLogger:

#if RECORD_EIGHT_BITS
  uint8_t d = ADCH;
#else  // RECORD_EIGHT_BITS
//  uint8_t low = ADCL;
//  uint8_t high = ADCH;
//  uint16_t d = (high << 8) | low;
  uint16_t d = ADC;
#endif  // RECORD_EIGHT_BITS

Thanks, I meant to check if the compile order was correct for 16-bit access but never got back to it.

The strange 1 ms delay was removed a long time ago.

I did a check on 1.0.1 running this sketch and it performs as expected for a 125 kHz ADC clock.

void setup() {
  Serial.begin(9600);
  // Time two consecutive analogRead() calls with micros().
  uint32_t t0 = micros();
  uint16_t v0 = analogRead(0);
  uint32_t t1 = micros();
  uint16_t v1 = analogRead(0);
  uint32_t t2 = micros();
  Serial.println(t1 - t0);
  Serial.println(t2 - t1);
}
void loop() {}

It prints

212
112

The first call takes 26.5 ADC clock cycles, a bit more than 25 required by the hardware.

The second takes 14 ADC clock cycles, again a bit longer than the time required by the hardware.

I did some more ADC tests that you might look at http://arduino.cc/forum/index.php/topic,120004.0.html.

These tests show how important calibration is and examine Noise Reduction Mode.

The strange 1 ms delay was removed a long time ago.

Curious. I only got into Arduino a few weeks back and downloaded the 1.0.1 software directly (not a distro package because my distro did not have 1.0.1) and the lines I posted were present in that bundle.

Non-modified files are dated 22 May 2012.

I have set up PWM on pin 3 using timer2 but found that even a nominally "empty" ISR on that clock broke Serial.print() pretty fast, though sometimes it did get one or two chars across. The link was lost and the USB device disappeared on the Linux host.

That connection is expendable but I anticipate similar issues with SD.

I need to control external hardware that probably needs some simple maths and an adjustment to the PWM duty cycle, something like 20-100 times per second.

What would be required to have another ISR running without breaking SDlib? Is that possible?

Thanks for any pointers you can give.

BTW, I've estimated the maths and one port write: the ISR should take < 4 µs.