Synchronizing access to file from different tasks
Hi everybody!
I want to know what the best practice is for synchronizing access to a file from different tasks.
I'm accessing a single file from different tasks and want to guarantee that each file access is atomic (another task has to wait until the first task finishes its file access cycle: open file -> read/write file -> close file). I'm using a mutex for this, but I suspect that this is not the right way...
Re: Synchronizing access to file from different tasks
For me, I would also explicitly use a mutex. The term "mutex" is short for "mutual exclusion", which seems to describe exactly the effect you are looking for. If task A is working on the file, task B should block until task A indicates it is no longer using the file.
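Something like this is what I mean - just a sketch, where the file path and record handling are placeholders for whatever your application actually does:

#include <stdio.h>
#include <stdbool.h>
#include "freertos/FreeRTOS.h"
#include "freertos/semphr.h"

static SemaphoreHandle_t s_file_mutex;   // created once at startup

void file_access_init(void)
{
    s_file_mutex = xSemaphoreCreateMutex();
}

// Every task goes through a wrapper like this, so the whole
// open -> write -> close cycle happens under the mutex.
bool append_record(const void *record, size_t len)
{
    bool ok = false;
    if (xSemaphoreTake(s_file_mutex, portMAX_DELAY) == pdTRUE) {
        FILE *f = fopen("/spiffs/data.bin", "ab");
        if (f != NULL) {
            ok = (fwrite(record, 1, len, f) == len);
            fclose(f);
        }
        xSemaphoreGive(s_file_mutex);
    }
    return ok;
}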
Free book on ESP32 available here: https://leanpub.com/kolban-ESP32
Re: Synchronizing access to file from different tasks
I'm using a mutex for that in my app. I also take the same mutex in my reboot routine to make sure that an error which triggers a reboot doesn't reboot in the middle of a file write. And if that is hung, a watchdog will eventually perform the reboot.
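Roughly like this - just a sketch of the idea, reusing the same mutex that guards the file writes:

#include "freertos/FreeRTOS.h"
#include "freertos/semphr.h"
#include "esp_system.h"

extern SemaphoreHandle_t s_file_mutex;   // the mutex used for file access

void safe_reboot(void)
{
    // Block until any file access cycle is finished; if something is hung
    // and the mutex never comes back, the watchdog reboots us anyway.
    xSemaphoreTake(s_file_mutex, portMAX_DELAY);
    esp_restart();
}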
John A
Re: Synchronizing access to file from different tasks
MUTEX was my first thought too.
How are you accessing the file: sequential append or random access?
Using the mutex for the file in effect creates a lock with file granularity. If you are accessing the data randomly, a lock at a finer granularity could be used (with other supporting code).
For sequential appends I could imagine a non-locking single-queue writer.
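Something with this shape, for example - just a sketch, where the record type, queue depth, and file name are made up:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "freertos/task.h"

typedef struct {
    uint8_t payload[32];        // placeholder record layout
} log_record_t;

static QueueHandle_t s_log_queue;

// Producers in any task: no lock, just post the record to the queue.
bool log_append(const log_record_t *rec)
{
    return xQueueSend(s_log_queue, rec, pdMS_TO_TICKS(100)) == pdTRUE;
}

// Single writer task: the only code that ever touches the file.
static void writer_task(void *arg)
{
    log_record_t rec;
    for (;;) {
        if (xQueueReceive(s_log_queue, &rec, portMAX_DELAY) == pdTRUE) {
            FILE *f = fopen("/spiffs/log.bin", "ab");
            if (f != NULL) {
                fwrite(&rec, sizeof(rec), 1, f);
                fclose(f);
            }
        }
    }
}

void log_init(void)
{
    s_log_queue = xQueueCreate(8, sizeof(log_record_t));
    xTaskCreate(writer_task, "file_writer", 4096, NULL, 5, NULL);
}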
IT Professional, Maker
Santiago, Dominican Republic
Re: Synchronizing access to file from different tasks
Thanks to everybody for replies!
In my case the mutex works fine, but I noticed that sometimes the file becomes corrupted. A dump analysis showed that 4 or 8 extra bytes were written somewhere in the middle of the file and all the records after that point were shifted accordingly. Since the file has a strongly typed structure, all records after this glitch become invalid.
At first I thought this corruption happened when some other task was writing to the file, and so I added the mutex to synchronize file access. But now I see at least 2 problems with that theory:
1. The corruption happens in a part of the file that is written as a single large block (for example, I write 800 bytes with a single fwrite call and get 4 garbage bytes in the middle of those 800 bytes).
2. The uncorrupted data is shifted, so it looks like someone inserted garbage bytes in the middle of the block. The block size does not change in this case, so if we have 4 extra bytes in the middle, then 4 bytes at the end are lost.
Maybe the problem is not in task synchronization, but somewhere else...
Re: Synchronizing access to file from different tasks
What kind of file system is that, FAT or SPIFFS, and what is the storage, an SD card (SDMMC or SPI?) or flash?
Corruption by 2 bytes would have pointed to a potential DMA issue (such as when one attempts to use an unaligned DMA buffer), but a 4-byte shift means that this is likely something else.
Can you reduce this issue to some (preferably small) piece of code which you can share?
Re: Synchronizing access to file from different tasks
I'm using a SPIFFS filesystem located in flash. The SPIFFS partition size is 128 KB, and it holds only 3 files, about 10-12 KB in total.
Today I was able to catch the moment when the file got corrupted.
The code was performing some file content manipulation (reading and writing). These manipulations completed successfully and the file was closed. Immediately after that I got an exception which caused the ESP to reboot. After the reboot the file contents were corrupted.
At the moment I can't associate this exception directly with the file operations, but I think the reboot caused the file corruption. If the VFS or SPIFFS driver handles file operations asynchronously, this could result in file corruption in such a situation.
As for a code sample - the code is large and complicated. I'll try to make some readable code snippet to show how I work with files...
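In the meantime, one thing I'm going to try (just a guess on my part) is to explicitly flush and fsync before closing, so any buffered data is pushed down to SPIFFS before anything else gets a chance to reboot the chip. A sketch of what I mean:

#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

static bool write_block(const char *path, const void *buf, size_t len)
{
    FILE *f = fopen(path, "r+b");   // open mode depends on the use case
    if (f == NULL) {
        return false;
    }
    bool ok = (fwrite(buf, 1, len, f) == len);
    ok = ok && (fflush(f) == 0);          // flush the stdio buffer
    ok = ok && (fsync(fileno(f)) == 0);   // push it down to the filesystem
    fclose(f);
    return ok;
}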
Re: Synchronizing access to file from different tasks
A little bit more information about the exception that caused the ESP to reboot.
It was an assert at line 1442 of freertos/queue.c, inside the xQueueGenericReceive function. I'm not using this function directly, but it is used by the xSemaphoreTake macro, which I do use in my code - in particular with the mutex that synchronizes file access. I have no idea what could be wrong with this function call.
Re: Synchronizing access to file from different tasks
Such a crash typically indicates memory corruption, where something overwrites a member of the semaphore (queue) structure. Can you please try setting the heap checking level to "comprehensive" in menuconfig?
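You can also trigger the heap integrity check manually around the suspect code paths, something like this (sketch):

#include "esp_heap_caps.h"
#include "esp_log.h"

// Call this before/after the file writes; with comprehensive checking
// enabled it reports a corrupted heap block close to where it happened.
static void check_heap(const char *where)
{
    if (!heap_caps_check_integrity_all(true)) {
        ESP_LOGE("heapchk", "heap corruption detected at %s", where);
    }
}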
Re: Synchronizing access to file from different tasks
Yes, I also think it's worth enabling extended heap and stack checking. I hope this helps to locate the source of the problem.
But what about the file corruption? Is there any way to protect the data from corruption?
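One idea I'm considering myself (untested) is to never update the file in place: write the new contents to a temporary file and then replace the old one, so that a crash in the middle of a write leaves the previous copy intact. Roughly:

#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

// Untested sketch. Note: SPIFFS may refuse to rename onto an existing
// name, so the old file is removed first; that leaves a small window,
// but one much smaller than the whole write.
static bool replace_file(const char *path, const void *buf, size_t len)
{
    const char *tmp = "/spiffs/update.tmp";   // placeholder temp name

    FILE *f = fopen(tmp, "wb");
    if (f == NULL) {
        return false;
    }
    bool ok = (fwrite(buf, 1, len, f) == len)
              && (fflush(f) == 0)
              && (fsync(fileno(f)) == 0);
    fclose(f);
    if (!ok) {
        return false;
    }
    unlink(path);                 // SPIFFS rename limitation, see above
    return rename(tmp, path) == 0;
}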