Weird scheduling issue when using spinlocks in critical section with multiple cores

sgmustadio
Posts: 4
Joined: Thu Mar 04, 2021 4:33 am

Weird scheduling issue when using spinlocks in critical section with multiple cores

Postby sgmustadio » Sat Mar 06, 2021 11:54 pm

This may or may not be related to my watchdog timer issue in this thread (viewtopic.php?f=2&t=19752), but it seems different enough that I decided to post it here anyway.

I'm working with the following demo code (it's meant to illustrate how spinlocks work).

Code:

// Core definitions (assuming you have dual-core ESP32)
static const BaseType_t pro_cpu = 0;
static const BaseType_t app_cpu = 1;

// Settings
static const TickType_t task_hog = 200;  // Time (ms) hogging the CPU
static const TickType_t task_delay = 100; // Time (ms) each task blocks itself

// Pins (change this if your Arduino board does not have LED_BUILTIN defined)
static const int led_pin = LED_BUILTIN;

// Globals
static portMUX_TYPE spinlock = portMUX_INITIALIZER_UNLOCKED;

//*****************************************************************************
// Functions

// Hogs the processor for roughly the given number of milliseconds
// (the 40000-NOP inner loop is a rough per-millisecond calibration; no promises).
static void hog_delay(uint32_t ms) {
  for (uint32_t i = 0; i < ms; i++) {
    for (uint32_t j = 0; j < 40000; j++) {
      asm("nop");
    }
  }
}

//*****************************************************************************
// Tasks

// Task in Core 0
void doTask0(void *parameters) {
  
  // Do forever
  while (1) {

    // Do some long critical section (this is bad practice)
    portENTER_CRITICAL(&spinlock);
    Serial.println("Task 0 critical section...");
    hog_delay(task_hog);
    Serial.println("...done");
    portEXIT_CRITICAL(&spinlock);

    // Yield processor for a while
    vTaskDelay(task_delay / portTICK_PERIOD_MS);
  }
}

// Task in Core 1
void doTask1(void *parameters) {
  
  // Configure LED pin
  pinMode(led_pin, OUTPUT);

  // Do forever
  while (1) {

    // Toggle LED
    digitalWrite(led_pin, !digitalRead(led_pin));
    
    // Yield processor for a while
    vTaskDelay(task_delay / portTICK_PERIOD_MS);
  }
}

//*****************************************************************************
// Main (runs as its own task with priority 1 on core 1)

void setup() {
  
  // Configure Serial
  Serial.begin(115200);

  // Wait a moment to start (so we don't miss Serial output)
  vTaskDelay(1000 / portTICK_PERIOD_MS);
  Serial.println();
  Serial.println("---FreeRTOS Multicore Demo---");

  // Start Task 0 (in core 0)
  xTaskCreatePinnedToCore(doTask0,
                          "Task 0",
                          1024,
                          NULL,
                          1,
                          NULL,
                          pro_cpu);

  // Start Task 1 (in core 1)
  xTaskCreatePinnedToCore(doTask1,
                          "Task 1",
                          1024,
                          NULL,
                          2,
                          NULL,
                          app_cpu);

  // Delete "setup and loop" task
  vTaskDelete(NULL);
}

void loop() {
  // Execution should never get here
}
If I comment out the following lines:

Code:

//portENTER_CRITICAL(&spinlock);
//portEXIT_CRITICAL(&spinlock);
the code works as expected. I see
"Task 0 critical section...
...done"
being printed to the console and the LED pin toggling every 100 ms.
[Scope capture: scope-01.png]
However, when I leave the critical section lines uncommented (there is now a critical section in task 0), the LED begins to exhibit odd behavior. It stays on for about 5 ms, turns off, and remains off for about 295 ms.
[Scope capture: scope-02.png]
My understanding is that portENTER_CRITICAL() stops the scheduler and disables interrupts in the current core. In the demo above, that should only stop interrupts (and scheduler) in core 0. Core 1 should be unaffected, which means I should see the LED toggling at the regular 100 ms rate (like it was before putting in the critical section lines), right? Is there something I missed regarding using both cores in the ESP32?
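
For reference, my mental model of the intended usage is something like this sketch (the shared counter is just a made-up example):

Code:

// Sketch of how I understand spinlocks are meant to be used: keep the
// critical section as short as possible. The shared counter is hypothetical.
static portMUX_TYPE my_mux = portMUX_INITIALIZER_UNLOCKED;
static volatile uint32_t shared_count = 0;

void bumpCounter() {
  portENTER_CRITICAL(&my_mux);  // mask interrupts on this core, take the lock
  shared_count++;               // touch shared state as briefly as possible
  portEXIT_CRITICAL(&my_mux);   // release the lock, restore interrupts
}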

ESP_Sprite
Posts: 9730
Joined: Thu Nov 26, 2015 4:08 am

Re: Weird scheduling issue when using spinlocks in critical section with multiple cores

Postby ESP_Sprite » Sun Mar 07, 2021 6:21 am

Silly question, but could your ESP32 be crashing when you leave the critical sections uncommented? Asking because you do a serial print inside the critical section, which I think is not allowed.
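
If you need the output, one workaround (a sketch, using the names from your code) is to do only the protected work inside the section and print after exiting:

Code:

// Sketch: keep Serial out of the critical section.
portENTER_CRITICAL(&spinlock);
hog_delay(task_hog);                             // protected work only
portEXIT_CRITICAL(&spinlock);
Serial.println("Task 0 critical section done");  // safe: the lock is released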

sgmustadio
Posts: 4
Joined: Thu Mar 04, 2021 4:33 am

Re: Weird scheduling issue when using spinlocks in critical section with multiple cores

Postby sgmustadio » Sun Mar 07, 2021 3:11 pm

In some instances, yes, it was crashing with a "Guru Meditation Error: Core 1 panic'ed (LoadProhibited)" when I had Serial.print statements in the other core (inside a similar critical section). I didn't know that Serial usage was not allowed inside critical sections, so thank you for that tip.

I removed all Serial.print lines and tried toggling GPIO pins in the different tasks (each in a different core). In Core 0, I perform the following sequence in a task:
  • Enter critical section
  • Set pin 12 high
  • Hog CPU for 200 ms (for loop)
  • Set pin 12 low
  • Exit critical section
  • Yield CPU for 100 ms
In Core 1, I perform the following sequence in a separate task:
  • Toggle pin 13
  • Yield CPU for 100 ms
Here is the code:

Code:

// Core definitions (assuming you have dual-core ESP32)
static const BaseType_t pro_cpu = 0;
static const BaseType_t app_cpu = 1;

// Settings
static const TickType_t task_hog = 200;  // Time (ms) hogging the CPU
static const TickType_t task_delay = 100; // Time (ms) each task blocks itself

// Pins
static const int pin_0 = 12;
static const int pin_1 = 13;

// Globals
static portMUX_TYPE spinlock = portMUX_INITIALIZER_UNLOCKED;

//*****************************************************************************
// Functions

// Hogs the processor for roughly the given number of milliseconds
// (the 40000-NOP inner loop is a rough per-millisecond calibration; no promises).
static void hog_delay(uint32_t ms) {
  for (uint32_t i = 0; i < ms; i++) {
    for (uint32_t j = 0; j < 40000; j++) {
      asm("nop");
    }
  }
}

//*****************************************************************************
// Tasks

// Task in Core 0
void doTask0(void *parameters) {
  
  // Configure pin
  pinMode(pin_0, OUTPUT);

  // Do forever
  while (1) {

    // Do some long critical section (this is bad practice)
    portENTER_CRITICAL(&spinlock);
    digitalWrite(pin_0, HIGH);
    hog_delay(task_hog);
    digitalWrite(pin_0, LOW);
    portEXIT_CRITICAL(&spinlock);

    // Yield processor for a while
    vTaskDelay(task_delay / portTICK_PERIOD_MS);
  }
}

// Task in Core 1
void doTask1(void *parameters) {
  
  // Configure pin
  pinMode(pin_1, OUTPUT);

  // Do forever
  while (1) {

    // Toggle pin 13
    digitalWrite(pin_1, !digitalRead(pin_1));
    
    // Yield processor for a while
    vTaskDelay(task_delay / portTICK_PERIOD_MS);
  }
}

//*****************************************************************************
// Main (runs as its own task with priority 1 on core 1)

void setup() {

  // Start Task 0 (in core 0)
  xTaskCreatePinnedToCore(doTask0,
                          "Task 0",
                          1024,
                          NULL,
                          1,
                          NULL,
                          pro_cpu);

  // Start Task 1 (in core 1)
  xTaskCreatePinnedToCore(doTask1,
                          "Task 1",
                          1024,
                          NULL,
                          1,
                          NULL,
                          app_cpu);

  // Delete "setup and loop" task
  vTaskDelete(NULL);
}

void loop() {
  // Execution should never get here
}
Without the critical section, both pins toggle as expected. When I include the critical section lines, pin 13 no longer behaves as expected. Instead of toggling every 100 ms, it turns on for 5 ms and then turns off for 295 ms. Here is the capture from a logic analyzer:
[Logic analyzer capture: logic-01.png]
Any help troubleshooting this problem would be appreciated!

ESP_Sprite
Posts: 9730
Joined: Thu Nov 26, 2015 4:08 am

Re: Weird scheduling issue when using spinlocks in critical section with multiple cores

Postby ESP_Sprite » Mon Mar 08, 2021 10:39 am

Ah, I think I can explain the behaviour.

To start, FreeRTOS has an internal timer variable. This is a simple uint32 which increases every portTICK_PERIOD_MS milliseconds. What drives it is a hardware timer that triggers an interrupt; the variable is incremented in that interrupt handler. (This only happens on the pro cpu. There's a timer/interrupt for the app cpu as well, but it doesn't handle the timer variable.)
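
In rough pseudo-C, it's something like this (a conceptual sketch, not the actual sources; the names are made up):

Code:

// Conceptual sketch only -- not the actual FreeRTOS/ESP-IDF code.
volatile uint32_t tick_count = 0;    // FreeRTOS's internal timer variable

void IRAM_ATTR tick_timer_isr() {    // fires every portTICK_PERIOD_MS ms,
  tick_count++;                      // but only on the pro cpu
  // ...then unblock any task whose wake-up tick has now been reached...
}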

Calls like vTaskDelay depend on this timer. Effectively, what you're saying is 'take the timer variable, add this number of ticks, and wake me up again when the timer variable equals that value'. However, because you hog CPU0 in a section with interrupts disabled, the timer doesn't increase, and the task does not wake up. That's why your GPIO level doesn't change during the critical section.
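
So a delay boils down to roughly this (again a sketch; block_until_tick is a made-up name):

Code:

// Sketch of what vTaskDelay(n) amounts to (block_until_tick is hypothetical):
void delay_ticks(uint32_t n) {
  uint32_t wake_at = tick_count + n;  // target value of the timer variable
  block_until_tick(wake_at);          // if tick_count never advances (tick
}                                     // interrupt masked), we never wake up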

But why does it change immediately after? Well, even if FreeRTOS can't do the timer handling it needs to while interrupts are off, it does know that a bunch of ticks are 'missing' when the critical section ends. What it tries to do is 'fast forward' the timer variable to catch up with real time, and as it does this, it also goes through all the actions that were supposed to happen during those ticks.
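
Roughly (still a sketch with made-up names):

Code:

// Sketch of the catch-up once the tick interrupt runs again:
while (ticks_missed > 0) {
  tick_count++;                      // 'fast forward' one tick...
  unblock_tasks_due_at(tick_count);  // ...and run whatever was due at it
  ticks_missed--;                    // almost no real time passes per loop,
}                                    // so pending delays expire very quickly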

So that's why your GPIO changes immediately after the critical section ends: FreeRTOS notices it should have unblocked that task and immediately does so. It also explains why the GPIO changes again 5 ms later: that vTaskDelay is called while FreeRTOS (on the other core) is still 'fast forwarding' the timer, so the delay ends much sooner than expected.

sgmustadio
Posts: 4
Joined: Thu Mar 04, 2021 4:33 am

Re: Weird scheduling issue when using spinlocks in critical section with multiple cores

Postby sgmustadio » Mon Mar 08, 2021 8:43 pm

Thank you! That helps to make sense of what's going on. I moved the critical section to Core 1, and it worked as expected (as Core 0 could still update the timer variable). I also changed the toggle time to 30 ms for Core 0 so it's more noticeable on the scope.

Next, I added critical section markers around the pin toggle in Core 0 to see what would happen (full code below). My expectation is that the pin in Core 1 would toggle as before (high for 200 ms in a critical section, low for 100 ms in the blocked state), and the pin in Core 0 would stay steady for 200 ms (while spinning) but toggle freely (3 times) while Core 1 was in the blocked state.

However, this does not seem to happen. When a core is spinning while waiting for a spinlock, are interrupts enabled or disabled? I would have thought they would remain enabled until the lock was obtained, but it seems they are disabled (as shown by my scope capture). The Core 1 behavior also seems to indicate that the "fast forwarding" happens while Core 0 is still spinning on the lock. Can you verify that this is the case? I'm having a hard time figuring out exactly how the "fast forwarding" of the timer variable works.
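
From the capture, my guess at the enter-critical sequence is something like this (made-up names; the real code lives in the ESP-IDF port layer):

Code:

// My guess (a sketch) at what portENTER_CRITICAL does, based on the capture:
void enter_critical(portMUX_TYPE *mux) {
  disable_interrupts_on_this_core();  // interrupts go off BEFORE the spin...
  while (!try_take_spinlock(mux)) {
    ;                                 // ...so the core spins with interrupts
  }                                   // masked, and tick handling stalls too
}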

Full code:

Code:

// Core definitions (assuming you have dual-core ESP32)
static const BaseType_t pro_cpu = 0;
static const BaseType_t app_cpu = 1;

// Settings
static const TickType_t task_hog = 200;  // Time (ms) hogging the CPU in Task 1
static const TickType_t task_0_delay = 30; // Time (ms) Task 0 blocks itself
static const TickType_t task_1_delay = 100; // Time (ms) Task 1 blocks itself

// Pins
static const int pin_0 = 12;
static const int pin_1 = 13;

// Globals
static portMUX_TYPE spinlock = portMUX_INITIALIZER_UNLOCKED;

//*****************************************************************************
// Functions

// Hogs the processor for roughly the given number of milliseconds
// (the 40000-NOP inner loop is a rough per-millisecond calibration; no promises).
static void hog_delay(uint32_t ms) {
  for (uint32_t i = 0; i < ms; i++) {
    for (uint32_t j = 0; j < 40000; j++) {
      asm("nop");
    }
  }
}

//*****************************************************************************
// Tasks

// Task in Core 0
void doTask0(void *parameters) {

  // Configure pin
  pinMode(pin_0, OUTPUT);

  // Do forever
  while (1) {

    // Toggle pin 12
    portENTER_CRITICAL(&spinlock);
    digitalWrite(pin_0, !digitalRead(pin_0));
    portEXIT_CRITICAL(&spinlock);
    
    // Yield processor for a while
    vTaskDelay(task_0_delay / portTICK_PERIOD_MS);
  }
}

// Task in Core 1
void doTask1(void *parameters) {

  // Configure pin
  pinMode(pin_1, OUTPUT);

  // Do forever
  while (1) {

    // Do some long critical section (this is bad practice)
    portENTER_CRITICAL(&spinlock);
    digitalWrite(pin_1, HIGH);
    hog_delay(task_hog);
    digitalWrite(pin_1, LOW);
    portEXIT_CRITICAL(&spinlock);
    
    // Yield processor for a while
    vTaskDelay(task_1_delay / portTICK_PERIOD_MS);
  }
}

//*****************************************************************************
// Main (runs as its own task with priority 1 on core 1)

void setup() {

  // Start Task 0 (in core 0)
  xTaskCreatePinnedToCore(doTask0,
                          "Task 0",
                          1024,
                          NULL,
                          1,
                          NULL,
                          pro_cpu);

  // Start Task 1 (in core 1)
  xTaskCreatePinnedToCore(doTask1,
                          "Task 1",
                          1024,
                          NULL,
                          1,
                          NULL,
                          app_cpu);

  // Delete "setup and loop" task
  vTaskDelete(NULL);
}

void loop() {
  // Execution should never get here
}
Scope capture (pin 12 is on top, pin 13 is on bottom):
[Logic analyzer capture: logic-02.png]
