
wear leveling support in FATFS

Posted: Tue Aug 04, 2020 3:37 pm
by timredfern
Hello,

I'm trying to configure FATFS wear leveling support for a product.

I'm trying to verify wear leveling by reading back the partition data and confirming that writing the same file more than one time leaves multiple copies in flash storage.

With NVS and SPIFFS partitions, this is the case.
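
For reference, here's a rough sketch of the check I'm doing: dump the partition to a file, then count occurrences of the marker string ("dump.bin" is just a placeholder for the partition dump):

    /* Sketch: count occurrences of a marker string in a raw partition dump.
     * "dump.bin" is a placeholder for a dump of the FATFS partition. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *marker = "written using ESP-IDF";
        FILE *f = fopen("dump.bin", "rb");
        if (!f) { perror("dump.bin"); return 1; }

        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);

        char *buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) return 1;
        fclose(f);

        int count = 0;
        size_t mlen = strlen(marker);
        for (long i = 0; i + (long)mlen <= size; i++) {
            if (memcmp(buf + i, marker, mlen) == 0) {
                printf("match at offset 0x%lx\n", i);
                count++;
            }
        }
        printf("%d occurrence(s)\n", count);
        free(buf);
        return 0;
    }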

Looking at the wear_levelling example in ESP-IDF v3.3, the sample application continuously reboots, mounts the FAT filesystem and rewrites the file /spiflash/hello.txt. When I make a copy of the flash storage used by the FATFS partition, I would expect to see multiple copies of the string "written using ESP-IDF". However, I only see the string once.
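
For context, the core of that example boils down to something like this (paraphrased from memory against the esp_vfs_fat API; see the example source for the exact code):

    #include "esp_vfs_fat.h"
    #include "esp_system.h"
    #include "wear_levelling.h"

    static wl_handle_t s_wl_handle = WL_INVALID_HANDLE;

    void mount_and_write(void)
    {
        const esp_vfs_fat_mount_config_t mount_config = {
            .max_files = 4,
            .format_if_mount_failed = true,
            .allocation_unit_size = CONFIG_WL_SECTOR_SIZE,
        };
        /* Mount the FATFS partition with the wear_levelling layer underneath */
        ESP_ERROR_CHECK(esp_vfs_fat_spiflash_mount("/spiflash", "storage",
                                                   &mount_config, &s_wl_handle));

        /* Rewrite the same file on every boot */
        FILE *f = fopen("/spiflash/hello.txt", "wb");
        fprintf(f, "written using ESP-IDF %s\n", esp_get_idf_version());
        fclose(f);
    }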

Am I missing something?

Tim

Re: wear leveling support in FATFS

Posted: Wed Aug 05, 2020 1:37 pm
by timredfern
I think what I'm seeing is that FATFS performs wear levelling at the block level and zeroes the rest of the block, although it often still writes to the same block.

I think the wear levelling algorithm used with FATFS performs far worse, in both speed and wear protection, than the ones used for NVS and SPIFFS.

Re: wear leveling support in FATFS

Posted: Sat Aug 08, 2020 8:40 am
by ESP_igrr
Hi timredfern,

For a small number of write/erase operations with fatfs+wear_levelling, you will indeed find that most operations land in the same physical sectors. As the number of operations increases, the writes will start to be distributed over adjacent sectors in Flash. I think with the default configuration used in IDF, the threshold is around 2k write operations per sector.

When the total number of write operations approaches the theoretical maximum (max flash endurance of 100k cycles per sector times the number of sectors in the partition), erase cycle distributions for SPIFFS and FAT+WL will be very similar.
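
To make the effect concrete, here's a toy model (explicitly not the actual wear_levelling algorithm, just an illustration of the behaviour described above): assume the logical-to-physical sector mapping shifts by one position every ~2k erases. Repeated writes to one logical sector then stay in one physical sector at first, and spread over the whole partition as the write count grows:

    /* Toy model, NOT the real IDF wear_levelling algorithm: the
     * logical->physical mapping shifts by one sector every
     * SHIFT_THRESHOLD erases. */
    #include <stdio.h>

    #define NUM_SECTORS     16
    #define SHIFT_THRESHOLD 2000   /* rough per-sector figure from above */

    int main(void)
    {
        long erase_count[NUM_SECTORS] = {0};
        int offset = 0;
        long total_writes = 100000;    /* try 100, 10000, 1000000, ... */

        for (long i = 0; i < total_writes; i++) {
            /* the application always rewrites logical sector 0 */
            erase_count[offset]++;
            if ((i + 1) % SHIFT_THRESHOLD == 0) {
                offset = (offset + 1) % NUM_SECTORS;  /* mapping moves on */
            }
        }

        for (int s = 0; s < NUM_SECTORS; s++) {
            printf("sector %2d: %ld erases\n", s, erase_count[s]);
        }
        return 0;
    }

With total_writes = 100, every erase lands in one sector; with 100000 or more, the counts even out across all sectors.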

Note that if you are doing a small number of write operations each time the application runs, it is important to de-initialize the FATFS and wear_levelling libraries before restarting.
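
In code, that de-initialization is a single call before the restart (sketch; "/spiflash" and s_wl_handle match the mount sketch earlier in the thread):

    /* Unmount FATFS and release the wear_levelling handle
     * before restarting. */
    ESP_ERROR_CHECK(esp_vfs_fat_spiflash_unmount("/spiflash", s_wl_handle));
    esp_restart();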

For illustration, here are the erase cycle distributions for FAT+WL at different total numbers of write operations (100, 1000, 10000, 100000, 1000000, 3000000, 10000000):