SeveralThingsAtTheSameTimeRev1 as a state machine

I have used Robin2's SeveralThingsAtTheSameTimeRev1 as a base to write a state machine. I wanted to see how the state machine version performs compared to Robin's code.

From what I can tell, it runs through the loop faster overall. I don't have a servo motor, so I added LEDs to indicate the speed and direction of the motor. Also, the button LED now toggles only on a HIGH-to-LOW transition of the switch.

Any comment on the code would be appreciated.

ServeralThingsStaterev1.ino (14.9 KB)

wanted to compare how the state machine ran compared to Robin's code.

Robin's code is a state machine, so what other state machine are you talking about?

These other files are Robin's and my state machine code with timestamps in them. The printout shows the current time through loop(), the minimum time, and the maximum time.

ServeralThingsAtSameTimeRev2.ino (11.1 KB)

SeveralThingsStateRev2.ino (16.1 KB)

MikeLittle:
I have used Robin2's SeveralThingsAtTheSameTimeRev1 as a base to write a state machine.

Why?

You seem to have added about 100 lines of code which IMHO will just make the project more difficult for a newbie to understand. I tried to write my code so it was easy to understand. I made no attempt to optimize its performance.

Another purpose of my Thread was to provide code that others can use and change to suit their own needs. However it would be too time consuming to analyze the differences between your version and mine. If your version meets your needs that's just great.

If you are not using the same outputs (i.e. servos) you can't compare the code performance. You don't have to have servos connected to your Arduino in order to test code with the Servo library.

...R

Robin, can you tell me what the button LED code is supposed to do? From reading it, and running it on my Mega, it toggles at the sampling rate as long as you hold the switch closed. I changed mine to toggle only on the transition, and it requires the switch to go HIGH again before allowing another toggle.
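The transition-only behaviour described above can be modelled without any hardware. This is a minimal sketch of the idea (my reading of the change, with illustrative names, not code lifted from the attached .ino files): the LED flips only on a HIGH-to-LOW edge of the switch, and the switch must return HIGH before another toggle is allowed.

```cpp
#include <cassert>

// Hardware-free model of edge-triggered toggling. On the Arduino,
// update() would be fed the debounced result of digitalRead().
struct ButtonToggle {
    int  lastReading = 1;   // HIGH: switch open (input with pull-up)
    bool ledState    = false;

    // Feed one debounced sample per call; returns the LED state.
    bool update(int reading) {
        if (lastReading == 1 && reading == 0) {  // falling edge only
            ledState = !ledState;
        }
        lastReading = reading;
        return ledState;
    }
};
```

Holding the button down keeps `reading` LOW, so no further edges are seen and the LED no longer toggles at the sampling rate.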

Why?

Your code is a very good learning device. A lot of people have looked at it, and hopefully learned that delay is a bad idea. I am attempting to share doing the same thing in a different way. That's all.

In my college days, we trained on the 8051, using a teletype and paper tape to load assembler language programs. And we had a class on building bit splice machines from scratch. I haven't done a whole lot of programming since (more hardware design), and had to deal with programmers complaining about not having enough RAM for their programs to run on boards already designed and prototyped.

Thinking about what the code is doing, and how well it is doing it is kind of second nature. I know that building s state machine requires some overhead to structure the code, and I wanted to see how that compared to code written in-line, like yours. I know state machines take a little to get use to. So I wanted to start with a good base (your code), and go from there.

My code copies your code's functionality. Like I said, I don't have a servo motor to run, so I added some LEDs to show the status of what the motor was supposed to be doing (sweep out or in, different speeds). It repeats the same pattern your code does when I add that code into yours.

A fundamental difference in my code is that it is based on a 10 ms heartbeat. Everything that happens can be done around that heartbeat; the fastest interval is the servoFastInterval.
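The heartbeat idea can be sketched like this (an illustration of the technique with made-up names and intervals, not code from the posted sketch): each task keeps a down-counter measured in heartbeat ticks; every 10 ms tick decrements the counters, and a task runs only when its counter hits zero and is reloaded.

```cpp
#include <cassert>

const unsigned SERVO_TICKS = 2;   // e.g. a 20 ms servo update interval
const unsigned BLINK_TICKS = 50;  // a 500 ms LED blink interval

unsigned servoCount = SERVO_TICKS;
unsigned blinkCount = BLINK_TICKS;
unsigned servoRuns  = 0;
unsigned blinkRuns  = 0;

// Called once per 10 ms tick (on the Arduino, when millis() crosses
// the next 10 ms boundary). On every other pass, loop() does nothing.
void onHeartbeat() {
    if (--servoCount == 0) { servoCount = SERVO_TICKS; ++servoRuns; }
    if (--blinkCount == 0) { blinkCount = BLINK_TICKS; ++blinkRuns; }
}
```

Between ticks, loop() falls straight through, which is where the very short pass times come from.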

As far as code size goes, your code uses 5,514 bytes of program space; mine uses 5,732 bytes. I know size has nothing to do with how well or how fast it will run.

When I added timestamping to the code, I was surprised by the difference. Your code cycles through the loop in around 56 to 60 microseconds. I see a minimum of 48, with a max of 168; the max comes from the call to myservo.write(). My code cycles through the loop in typically 4 to 8 microseconds, with a max of 148 microseconds.
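For reference, the min/max numbers above can be collected with a few lines of instrumentation; this is a guess at the general shape (not the code actually used in the attached sketches):

```cpp
#include <cassert>
#include <climits>

// Tracks the shortest and longest loop() pass seen so far. On the
// Arduino, record() would be fed micros()-at-bottom minus
// micros()-at-top of loop(); here it takes the elapsed value directly.
struct LoopTimer {
    unsigned long minUs = ULONG_MAX;
    unsigned long maxUs = 0;

    void record(unsigned long elapsedUs) {
        if (elapsedUs < minUs) minUs = elapsedUs;
        if (elapsedUs > maxUs) maxUs = elapsedUs;
    }
};
```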

The code does nothing for the majority of the time. I was just surprised at how much the time spent doing nothing differed.

edit - 8051

And we had a class on building bit splice machines from scratch

sp. "bit-slice"

MikeLittle:
Robin, can you tell me what the button LED code is supposed to do? From reading it, and running it on my Mega, it toggles at the sampling rate as long as you hold the switch closed.

Yes, I guess it does work like that. I was assuming the user would just push the button briefly to change the state. The interval is just to avoid switch bounce.

A fundamental difference in my code is that it is based on a 10 ms heartbeat. Everything that happens can be done around that heartbeat

That sounds like you are adding extra code to have the same effect. What's wrong with 1 ms steps?

Everyone has his/her own view about how to do things. I like to keep things as simple as possible. It's been my experience that when I added bells and whistles that seemed like a good idea in one project they just got in the way for another project.

And there is a huge difference between the approach to coding that would be necessary where software is being developed by a big team rather than by a single person.

Another key factor is how familiar you are with coding idioms. Things that are blatantly obvious to some are extraordinarily complex for others. I do not like programming in C/C++ so I am not familiar with its eccentricities. On my PC I use Python or Ruby.

...R

I think the heartbeat test is what speeds up your loop() execution. But you pay a price in latency for any of the enclosed state machine functions. If any of those, or other functions, had to be serviced more frequently you would see a huge increase in loop() execution time.

Another way to say it, it's very fast at doing nothing. :slight_smile:

I am a reluctant C user as well. When I was designing a PC-based phone system we were looking at an RTOS (back in the early '90s), so I have some of that background as well.

Redid some of the timestamping to get a better look at what is going on. I timestamped each pass through the function calls in loop() and printed it out.

In Robin's code, it basically takes around 56 to 60 microseconds to run through the code. The servo call kicks it up to 156 to 164 microseconds.

My code has a much wider spread in the times. If the internal counter hasn't zeroed out, it returns immediately. For example, one time through took 12 microseconds and the next took 124 microseconds; I'm guessing that was the servo.

Robin's code basically runs everything all the time. Mine, on the other hand, only runs what needs to run at the time, so it is very hard to nail down a firm value. The 4 to 8 microsecond times were from the 10 ms timer in the main loop: if it hadn't timed out, the code just cycled through and did nothing (this probably happened 90% of the time). One can take advantage of this idle time, but needs to remember that sometimes things do need to be done, like the servo or printing. That's the bear in RTOSes, or in this case, a heartbeat. Nothing would happen if I exceeded the heartbeat time (nothing critical, anyway). And besides, I am looking at a max of 132 microseconds against a 10 ms timer.

MikeLittle:
In Robin's code, it basically takes around 56 to 60 microseconds to run through the code. The servo call kicks it up to 156 to 164 microseconds.

Considering that I made no attempt to optimize my code for speed I am very pleased with that.

...R

What you have done with the scheduling actually has the effect of lowering the priority of certain tasks. It's a wise policy if you know in advance that certain tasks do not need such frequent service. However, it cannot improve latency. Sooner or later a heartbeat block gets serviced, and the tasks that follow it have to wait. So it only improves average processor availability: real-time bandwidth is improved but not latency. That means that unless there is a producer-consumer processing model using queues, where waits can be tolerated, there is no value in such scheduling and it is simpler to just service everything in turn.

I understand. Every time you start running things on some kind of clock, you have to worry about missing things. This is true even if you are running off the system clock.

My interest in this was to set up an FSM and compare it with an example of straight-line coding. I also wanted to reduce the number of millis() comparisons and use simple decrements to mark time instead. I suppose I could enter each state machine every time through and check whether a change of state is in order. If I were dealing with monitoring tight events, I would do that. But in this case, we are blinking LEDs on half-second intervals.
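The two timing styles being traded off can be put side by side; this is an illustrative sketch only, not code from either attached sketch:

```cpp
#include <cassert>

// Style 1: a millis()-comparison check, evaluated on every pass of
// loop(). The unsigned subtraction handles millis() rollover.
bool millisStyleDue(unsigned long now, unsigned long &previous,
                    unsigned long interval) {
    if (now - previous >= interval) {
        previous += interval;
        return true;
    }
    return false;
}

// Style 2: a simple down-counter, decremented once per heartbeat tick,
// so the compare runs only at the tick rate rather than on every pass.
bool counterStyleDue(unsigned &ticksLeft, unsigned reload) {
    if (--ticksLeft == 0) {
        ticksLeft = reload;
        return true;
    }
    return false;
}
```

Both fire at the same rate; the difference is how often the check itself is executed, which is where the shorter typical pass times come from.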

The servo move does have a 10 millisecond update. I could move it to the top of the function calls in the main loop to avoid any skewing from the other function calls. But again, the control of the motor is off-board, so there is no need for the processor to respond any faster.

It all boils down to what needs to be done, and can you do it in the time required.

MikeLittle:
It all boils down to what needs to be done, and can you do it in the time required.

Indeed.

And when you get close to the limits of the chosen microprocessor there will be no "one size fits all".

...R