Thanks! That's really helpful. I will try to make my own partition table and see if it works. If it doesn't work reasonably well, I will start over with the FAT filesystem instead.

kolban wrote: If we look here:
http://esp-idf.readthedocs.io/en/latest ... ables.html
We find the documentation on the partition table mechanism. Looking further, we find a partition called "nvs". This appears to be where in flash the NVS data is kept. It appears we could increase the size of that partition, but beware: it must not overlap other partitions. You may have to declare the NVS partition as being somewhere else in flash space if you need a large amount of storage.
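For illustration, here is a minimal sketch of a custom partitions.csv with the "nvs" partition enlarged from the default 0x6000 bytes to 0x10000 bytes. It assumes the default single-factory-app layout; the offsets shown are examples chosen so that nothing overlaps and the app partition stays 0x10000-aligned, so adapt them to your own flash size and partition set:

# Name,   Type, SubType, Offset,  Size
nvs,      data, nvs,     0x9000,  0x10000
phy_init, data, phy,     0x19000, 0x1000
factory,  app,  factory, 0x20000, 1M

Because the app partition moves in this sketch, both the partition table and the application have to be reflashed after such a change.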
ESP_ERR_NVS_NOT_ENOUGH_SPACE when using nvs_set_blob
Re: ESP_ERR_NVS_NOT_ENOUGH_SPACE when using nvs_set_blob
Re: ESP_ERR_NVS_NOT_ENOUGH_SPACE when using nvs_set_blob
Hi, are you suggesting that I use one very long binary object to store all the data? I considered that approach before, but when I get new data I would need to load all the values saved previously and write the new one back together with them. That seems very redundant. I will try to resize the partition table first. Thanks!

WiFive wrote: You can resize the nvs partition, but it's probably better to just store the data as raw binary, because using 64 bytes to store 20 bytes is not efficient.
Re: ESP_ERR_NVS_NOT_ENOUGH_SPACE when using nvs_set_blob
I managed to resize the partition table and got all the pairs stored. Thanks for your help!
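For anyone following the same route, here is a sketch of the sdkconfig entries that select a custom CSV (set via menuconfig under "Partition Table"), assuming the file is named partitions.csv in the project directory:

CONFIG_PARTITION_TABLE_CUSTOM=y
CONFIG_PARTITION_TABLE_CUSTOM_FILENAME="partitions.csv"

After changing these, the partition table has to be rebuilt and flashed to the device again.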
Re: ESP_ERR_NVS_NOT_ENOUGH_SPACE when using nvs_set_blob
Why would you need to load them? You just need to keep a pointer to the next write location.

nooooooob wrote: Hi, are you suggesting that I use one very long binary object to store all the data? I considered that approach before, but when I get new data I would need to load all the values saved previously and write the new one back together with them. That seems very redundant. I will try to resize the partition table first. Thanks!

WiFive wrote: You can resize the nvs partition, but it's probably better to just store the data as raw binary, because using 64 bytes to store 20 bytes is not efficient.
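To make the raw-binary idea concrete, here is a minimal C sketch of an append-only log on a dedicated data partition. Everything specific here is an assumption for illustration: the partition label "storage", the 20-byte record size, and the helper names storage_init/storage_append; the region is also assumed to have been erased once (for example with esp_partition_erase_range) before the first write.

#include <stdint.h>
#include <stddef.h>
#include "esp_err.h"
#include "esp_partition.h"

/* Record size assumed from this thread: roughly 20 bytes per entry. */
#define RECORD_SIZE 20

static const esp_partition_t *s_part;   /* raw data partition        */
static size_t s_next_offset;            /* next free write location  */

/* Locate the (assumed) "storage" data partition and restore the write
 * pointer from a record count kept elsewhere, e.g. one NVS integer. */
esp_err_t storage_init(size_t used_records)
{
    s_part = esp_partition_find_first(ESP_PARTITION_TYPE_DATA,
                                      ESP_PARTITION_SUBTYPE_ANY, "storage");
    if (s_part == NULL) {
        return ESP_ERR_NOT_FOUND;
    }
    s_next_offset = used_records * RECORD_SIZE;
    return ESP_OK;
}

/* Append one record at the current write pointer; earlier records are
 * never read back. The flash region must already be erased. */
esp_err_t storage_append(const uint8_t record[RECORD_SIZE])
{
    if (s_part == NULL || s_next_offset + RECORD_SIZE > s_part->size) {
        return ESP_ERR_NO_MEM;
    }
    esp_err_t err = esp_partition_write(s_part, s_next_offset,
                                        record, RECORD_SIZE);
    if (err == ESP_OK) {
        s_next_offset += RECORD_SIZE;
    }
    return err;
}

The only state kept across writes is the offset of the next free slot, which is exactly the "pointer to the next write location" described above.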