void vATaskFunction( void *pvParameters )
{
    for( ;; )
    {
        /* -- Task application code here. -- */
    }

    /* Tasks must not attempt to return from their implementing
    function or otherwise exit. In newer FreeRTOS ports,
    attempting to do so will result in a configASSERT() being
    called if it is defined. If it is necessary for a task to
    exit, have the task call vTaskDelete( NULL ) to ensure
    its exit is clean. */
    vTaskDelete( NULL );
}
Here comes #2, a task that does nothing on Core 0.
TaskHandle_t Task2; //->Crash2

void Crash2(void* pv) {
    delay(10000);    // time between continued crashes
    while (true)
        ;            // do nothing, don't return
}

void setup() {
    xTaskCreatePinnedToCore(
        Crash2,      // Task function.
        "Crash2",    // name of task.
        10000,       // Stack size of task
        NULL,        // parameter of the task
        1,           // priority of the task
        &Task2,      // Task handle to keep track of created task
        0);          // pin task to core 0 <=======================
}
void loop() {
}
Lesson learned:
Never pin a long running task to core 0.
Core 0 does essential housekeeping and event handling. Blocking these tasks will make the watchdog bark and the system reboot.
Solutions:
don't pin a task, and in particular not to core 0
don't set "Arduino Runs On" to Core 0
insert yield() or delay() to force a task switch (a sketch follows this list)
have other tasks running at the same or a higher priority
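For the yield()/delay() point, here is a minimal sketch (my variant, not from the original post) of the same Core 0 task with a blocking delay, so the IDLE task can run and reset the task watchdog:

void NoCrash2(void* pv) {              // hypothetical name; created exactly like Crash2 above
    for (;;) {
        // do a small amount of work here ...
        vTaskDelay(pdMS_TO_TICKS(10)); // blocks this task, giving the IDLE task on core 0 CPU time
    }
}

With that change the task can stay pinned to core 0 without starving the IDLE task that feeds the watchdog.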
Error log (truncated):
E (30094) task_wdt: Task watchdog got triggered. The following tasks did not reset the watchdog in time:
E (30094) task_wdt: - IDLE (CPU 0)
E (30094) task_wdt: Tasks currently running:
E (30094) task_wdt: CPU 0: Crash2
E (30094) task_wdt: CPU 1: loopTask
E (30094) task_wdt: Aborting.
Next one is a bit tricky. It's about using the handle of a deleted task. The task handle can(!) become invalid after the task is deleted, but not always.
If a task terminates itself, the handle seems to remain valid: the task state reads eDeleted and using the handle does no harm. Presumably no cleanup of the task memory has been performed yet; FreeRTOS frees a self-deleted task's stack and TCB later, from the IDLE task.
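A small sketch of that observation (task and handle names are mine, for illustration only):

TaskHandle_t TaskSelf;   // hypothetical handle, only for this illustration

void SelfDelete(void* pv) {
    vTaskDelete(NULL);   // task deletes itself
}

void setup() {
    Serial.begin(115200);
    xTaskCreate(SelfDelete, "SelfDelete", 10000, NULL, 1, &TaskSelf);
}

void loop() {
    // In the observation above this keeps printing eDeleted and does no harm,
    // although the handle is technically dangling once the memory is freed.
    Serial.println((int)eTaskGetState(TaskSelf));
    delay(1000);
}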
If a task is deleted from outside, its handle can become invalid. Not always, but at least in this special case:
TaskHandle_t Task3; //->Crash3

void Crash3(void* pv) {
L:
    for (int i = 10; i > 0; i--) {
        Serial.println(i);
        //vTaskSuspend(NULL); // wait for resume
        delay(200);
    }
    //all done
    goto L; //try again
}

void setup() {
    Serial.begin(115200);
    xTaskCreate(
        Crash3,      // Task function.
        "Crash3",    // name of task.
        10000,       // Stack size of task
        NULL,        // parameter of the task
        1,           // priority of the task
        &Task3);     // Task handle to keep track of created task
}
void loop() {
    /*
    eTaskState ts3 = eTaskGetState(Task3);
    Serial.print("loop "); Serial.println(ts3);
    */
    //kill task 3
    delay(3000);
    Serial.println("deleting...");
    vTaskDelete(Task3); //crash here on second occurrence!
    Serial.println("deleted!");
    delay(1000);
}
Lesson learned:
Don't use the handle of a deleted task.
Don't delete an already-deleted task a second time.
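A defensive pattern (my suggestion, not from the example above) is to clear the handle right after deleting, so a second delete becomes a harmless no-op:

// Hypothetical helper: delete a task at most once and forget its handle.
void SafeDelete(TaskHandle_t &th) {
    if (th != NULL) {
        vTaskDelete(th);
        th = NULL;   // the old handle must never be used again
    }
}
// In loop(): SafeDelete(Task3); // second and later calls do nothing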
The first part of that statement does not follow from the example, and is generally bad advice. Your example code does not crash if the task is pinned to Core 1. However, if the task is created with no core affinity, then it's quite possible that the task will eventually run on Core 0 and crash. You can never tell when this will happen because, aside from trivial code like yours, the dynamics of context switching are far too complex to predict.
I'd go so far as to say that until you become significantly more skilled in working in this programming paradigm, you should always pin tasks you create to Core 1.
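For reference, that just means passing 1 as the last argument of the creation call from the first example (a sketch, not code from the thread):

xTaskCreatePinnedToCore(Crash2, "Crash2", 10000, NULL, 1, &Task2, 1); // pinned to core 1, where loopTask runs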
Additionally, the "Don't Pin" advice makes no sense when working with ESP32 variants that have a single core.
Finally, with the tight while loop that you've written for testing:
while(true)
;
there's a chance that it will be completely optimized away, since the compiler can see it has no observable effects. This depends on the exact structure of the code and is again too complex to analyze. So you should replace it with something like this:
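The suggested replacement is not preserved in this capture; judging from the volatile discussion further down, it was presumably something along these lines (a reconstruction, not the original code):

// Reconstruction: the usual intent is that accesses to a volatile variable
// count as observable behaviour, so the compiler is not free to drop the loop.
// (Whether that fully holds for a local variable is debated below.)
volatile uint32_t x = 0;
while (true)
    x++;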
I disagree: the observable effect is that you never reach the end of the function. It's not equivalent to not having it there in the first place, so no, it won't be optimized away.
To be fair, I think it still could, because the implementation may assume that any thread will eventually terminate, make a call to a library I/O function, access or modify a volatile object, or perform a synchronization operation or an atomic operation, none of which we do there...
Is the affinity affected by the "Arduino Runs On" setting? I did not test that yet.
That's exactly when it makes sense. Who says the function for pinning tasks is available at all on single-core chips?
there's a chance that it will be completely optimized away
First of all, an optimizing compiler will find a variable that is not used any further and will eliminate such "dead" code.
I came across such tricks in a workstation presentation, where a loop took 8 seconds on the fastest machine, 8 minutes on the next one, and on the last machine I aborted the program after an hour. The timing depended only on the compiler settings.
In my example the task will crash the system anyway: either it trips the watchdog or it crashes on return. In addition, a task with an empty loop will consume runtime and block other tasks. That's what I want to point out: several reasons for not writing tasks according to that pattern.
Hmmm. As it stands, the compiler cannot distinguish tasks from ordinary functions and consequently cannot prevent a task from returning. Otherwise the compiler could insert a (hidden) vTaskDelete(NULL); at the end of the function, or the system could silently clean up a returning task.
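As a sketch of that idea (my illustration): start the task through a small wrapper that deletes it if the body ever returns:

void Crash2Wrapped(void* pv) {
    Crash2(pv);          // the real task code
    vTaskDelete(NULL);   // only reached if the body returns; exit cleanly instead of crashing
}
// Create the task with Crash2Wrapped instead of Crash2.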
In the specific case where x is a local variable within a function and not used elsewhere in that function, the volatile keyword alone might not be sufficient to prevent the compiler from optimizing the variable away, and so ...
I'll disagree on that one. The compiler has no way of knowing that the volatile variable isn't memory-mapped I/O (for example). Reviewing the Herb Sutter video now (very end of Part 2). It seems volatile variable accesses are "unoptimizable" ... although perhaps ordinary loads could be moved around them.