
Transparent memory sharing (malloc, free) between ESP32 cores with ESP-IDF ?

Posted: Mon Oct 08, 2018 1:45 pm
by hitagure
Can the memory management functions like malloc and free be called by the two cores of the ESP32 transparently, without causing any problems, under ESP-IDF? Or should we manage a semaphore ourselves?

In general, are the resource management mechanisms of FreeRTOS / ESP-IDF valid for enforcing mutual exclusion between cores? Is there a low-level atomic instruction, e.g. "test and set"?

Re: Transparent memory sharing (malloc, free) between ESP32 cores with ESP-IDF ?

Posted: Mon Oct 08, 2018 11:13 pm
by ESP_Angus
Yes, malloc and free themselves are thread-safe and multi-core safe (they have their own internal concurrency primitives).
hitagure wrote: In general, is the management of resources by FreeRtos / ESP-IDF valid for managing a critical exclusion between cores?
Yes, standard FreeRTOS primitives are SMP safe in ESP-IDF. This is the recommended way to manage resources.
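As an example of the recommended approach, a resource touched by tasks on both cores can be guarded with an ordinary FreeRTOS mutex. A minimal ESP-IDF sketch (task and variable names are illustrative), using the standard xSemaphoreCreateMutex / xSemaphoreTake / xSemaphoreGive API and xTaskCreatePinnedToCore to place one task on each core:

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/semphr.h"

static SemaphoreHandle_t s_lock;   /* guards s_shared_counter */
static int s_shared_counter;       /* resource shared by both cores */

static void counter_task(void *arg)
{
    for (;;) {
        /* Block until the mutex is available; the same call is safe
         * from either core because IDF FreeRTOS primitives are SMP-safe. */
        if (xSemaphoreTake(s_lock, portMAX_DELAY) == pdTRUE) {
            s_shared_counter++;
            xSemaphoreGive(s_lock);
        }
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

void app_main(void)
{
    s_lock = xSemaphoreCreateMutex();
    /* One instance of the task on each core. */
    xTaskCreatePinnedToCore(counter_task, "cnt0", 2048, NULL, 5, NULL, 0);
    xTaskCreatePinnedToCore(counter_task, "cnt1", 2048, NULL, 5, NULL, 1);
}
```

While a task is blocked in xSemaphoreTake, the scheduler runs other ready tasks, which is what makes a mutex the right tool for resources held longer than a few cycles.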

You may find some of this guide to the difference between "vanilla" and "IDF" FreeRTOS helpful:
https://docs.espressif.com/projects/esp ... s-smp.html
hitagure wrote: Is there a low level atomic instruction e.g. "test and set" ?
There are also lower-level spinlock "muxes" which can be used where a resource is only locked for a short period of time (these have much less overhead than a full FreeRTOS semaphore or mutex):
https://docs.espressif.com/projects/esp ... l-sections
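Concretely, these muxes are portMUX_TYPE spinlocks used through the portENTER_CRITICAL / portEXIT_CRITICAL macros. A minimal ESP-IDF sketch (variable names are illustrative) protecting a pair of variables that are also updated from the other core:

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

/* Spinlock protecting the two variables below. */
static portMUX_TYPE s_mux = portMUX_INITIALIZER_UNLOCKED;
static uint32_t s_count;
static uint32_t s_last_value;

void update_shared(uint32_t value)
{
    /* Disables interrupts on this core and spins until the lock is
     * released by the other core -- so keep the section very short. */
    portENTER_CRITICAL(&s_mux);
    s_count++;
    s_last_value = value;
    portEXIT_CRITICAL(&s_mux);
}
```

From an interrupt handler, the portENTER_CRITICAL_ISR / portEXIT_CRITICAL_ISR variants are used with the same mux.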

There is also an xtensa "test and set" instruction, and an inline function which wraps this (this is how the muxes are implemented):
https://github.com/espressif/esp-idf/bl ... cro.h#L275

As a rule of thumb, if a resource needs exclusive access for a very short time (tens or low hundreds of clock cycles) or has very low contention, use a mux (spinlock) or custom test and set. If a resource is held for longer, use FreeRTOS primitives (semaphores, mutexes, queues, etc.), which allow a lower priority task to run while a higher priority task is blocked waiting for the resource.