[SOLVED] Determine CPU speed in software?

Can the CPU speed of an Arduino or AVR be determined somehow in software?

Do the AVR chips contain some stable reference, some fixed-frequency source, that you could compare operational timings against?

Can the CPU speed of an Arduino or AVR be determined somehow in software?

The CPU speed is a constant. There is a #define, F_CPU, that the board configuration sets to the clock frequency in Hz.

But, why do you think you need to know this?

Do the AVR chips contain some stable reference

Yes. The number of pins, for example, is a constant.

What are you trying to determine?

amine2: Can the CPU speed of an Arduino or AVR be determined somehow in software?

When you say "determined", do you mean dictated, changed, or discovered?

A client wants to change the clock speed (swap the crystal/oscillator) and have his program work the same way; he wants the program to still run accurately, and he asked if this would be possible. The thing is, for this to be possible you must compare against some reference. If I had, say, an exact 1 MHz clock on pin 0 that never changed, then I might be able to approximate the clock speed by calculation and comparison. But this guy wants to use a standalone AVR or Arduino and still manage to do this. To be honest I don't know how to do it, so I want to know whether it's impossible or not.

Well, you can run it on any crystal that can produce a clock speed that is within the AVR spec. But AVR timers will run at a different frequency in that case.

Obviously, all timing software takes the AVR clock speed into account. The IDE does that automatically when you make a board selection (some boards have a 16 MHz crystal and some have 8 MHz).

In a nutshell.

If you want to customize it, you're on your own I believe.

The watchdog runs off an internal 128 kHz oscillator that is separate from the processor clock. So I think it should be possible to generate a watchdog timer interrupt at specified intervals and then compare that to some timer based on the processor clock, like millis(). I think it would work. Not 100% sure though.

But if you change the crystal, you already know the exact frequency. Then the above would be pointless.

You would know but the code wouldn't.

I don't understand the point of being able to swap crystals in the field without reprogramming. But assuming there is such a point I think the WDT would allow one to auto adjust for the new frequency.

I know all that, and I know that the CPU frequency is defined; I have used that in many programs. But this guy wants to change the crystal and still have accurate timings, and that's the issue. I was going to say at first that this is impossible, but then I remembered hearing someone talk about an internal clock and fuse settings for it and such. That's the thing that left me wondering.

Exactly, jboyton, something like that, yes. aarg: without an internal reference frequency the whole process would be pointless. I also don't understand it, mate, but if the client needs it... the client gets it.

I never said anything like, "without an internal frequency". What the bleep is that about? Is this about millis() timings? You could modify the millis() code.

millis(), micros() and delay() do use the defined CPU frequency, I think. But this guy wants to shut the chip down, change the oscillator, boot it up, and still get near-correct timings. The clock frequency would have to be determined at boot-up; if the code or the AVR could somehow detect an internal fixed frequency, I think that would be possible.

How often will he be changing the crystal? I would recommend a socket for the crystal. :)

jboyton’s idea is the simplest. The watchdog timer is independent of processor speed.

Example:

#include <avr/sleep.h>
#include <avr/wdt.h>

volatile bool wdtFired;

// watchdog interrupt
ISR (WDT_vect) 
{
   wdt_disable();  // disable watchdog
   wdtFired = true;
}  // end of WDT_vect
 
void setup () 
{ 
  noInterrupts ();           // timed sequence follows
  MCUSR = 0;     
  // allow changes, disable reset
  WDTCSR = bit (WDCE) | bit (WDE);
  // set interrupt mode and the shortest interval
  WDTCSR = bit (WDIE);    // set WDIE, prescaler bits all zero = 16 ms delay
  wdt_reset();  // pat the dog
  wdtFired = false;
  interrupts ();             

  unsigned long startTime = micros ();
  while (!wdtFired)
    { }  // wait for watchdog
  unsigned long endTime = micros ();

  Serial.begin (115200);
  Serial.println ();
  Serial.print (F("Time taken = "));
  Serial.println (endTime - startTime);
}

void loop ()  { }

On my 16 MHz processor (for which this was compiled) I got:

Time taken = 15956

So, around 16 ms taken for the watchdog, after a 16 ms delay was requested. Thus the board speed is as expected. But if I were to switch the fuses over to the 8 MHz internal oscillator, micros() only gets time to count half-way:

Time taken = 7984

Thus the figure returned (multiplied by 1000, of course) is an approximation of the clock speed in Hz.

Whatever the crystal is changed to, the change needs to be known at compile time, by making a change to the configuration file for the board. That change primarily affects F_CPU, the compile-time clock-frequency constant.

Regardless, all the magic for dealing with the crystal change happens at compile time, not at run time. Charge your customer for knowing this, but don't tell him.

aarg: yeah :) that would be a good idea, thank you. Nick: thank you very much (y), I understand the concept now. Paul: yeah :D thank you, man.

[SOLVED]

PaulS: Whatever the crystal is changed to, the change needs to be known at compile time, by making a change to the configuration file for the board.

Well, not necessarily. For one thing, things like register settings for baud rate are calculated at run-time (because the baud rate is a variable) based on a known constant at compile-time (F_CPU).

You could easily rewrite to make the handful of lines of code that calculate the register setting dependent on a clock speed, determined at run-time.

A simple thing would be just to have a factor. E.g. with code compiled for 16 MHz, multiply the requested baud rate by 1 if we are actually running at 16 MHz, by 2 if we are running at 8 MHz, and so on.

Similarly for delay(). With the clock actually at 8 MHz but the code compiled for 16 MHz, each millis() count lasts two real milliseconds, so for a one second delay you would call delay(500) rather than delay(1000).

If the libraries themselves made a runtime decision, that could avoid a lot of problems with people who accidentally set the "divide clock by 8" bit, or use the internal oscillator rather than an external crystal.

Thank you guys - this worked for me.

I received some pro mini modules not clearly marked with their frequency and this allowed me to check that I had been sent the correct part rather than the low cost part.