Compression using miniz
Posted: Fri Feb 08, 2019 10:26 am
Hi,
I am trying to run the test sketch for miniz compression. However, my heap space is too small (the total available size is bigger, but it is fragmented), and I was wondering how to reduce the size used by
Code: Select all
tdefl_compressor
without breaking the algorithm. I am also looking into zlib.
Here is the short code that I am trying to run:
Code: Select all
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "rom/miniz.h"
#include "unity.h" //testing
#include "stdlib.h"
#include "stdio.h"
#define DATASIZE (1024*1)
extern "C"{
#include "wifi_synch.h"
}
extern "C" void app_main()
{
    int x;
    char b;
    char *inbuf, *outbuf;
    tdefl_compressor *comp;
    tinfl_decompressor *decomp;
    tdefl_status status;
    size_t inbytes = 0, outbytes = 0, inpos = 0, outpos = 0, compsz;
    printf("Allocating compressor (%d bytes)\n", sizeof(tdefl_compressor));
    comp = (tdefl_compressor *) malloc(sizeof(tdefl_compressor));
    TEST_ASSERT(comp != NULL);
    printf("Allocating data buffer and filling it with semi-random data\n");
    inbuf = (char *) malloc(DATASIZE * sizeof(char));
    TEST_ASSERT(inbuf != NULL);
    srand(0);
    for (x = 0; x < DATASIZE; x++) {
        inbuf[x] = (x & 1) ? rand() & 0xff : 0;
    }
    int free_space = heap_caps_get_free_size(MALLOC_CAP_8BIT);
    printf("Free space is %d\n", free_space);
    outbuf = (char *) malloc(DATASIZE);
    TEST_ASSERT(outbuf != NULL);
    printf("Compressing...\n");
    status = tdefl_init(comp, NULL, NULL, TDEFL_WRITE_ZLIB_HEADER | 1500);
    TEST_ASSERT(status == TDEFL_STATUS_OKAY);
    // Feed the whole input buffer through the compressor; loop until all of it
    // has been consumed (inpos tracks the total number of input bytes used).
    while (inpos != DATASIZE) {
        outbytes = DATASIZE - outpos;
        inbytes = DATASIZE - inpos;
        tdefl_compress(comp, &inbuf[inpos], &inbytes, &outbuf[outpos], &outbytes, TDEFL_FINISH);
        printf("...Compressed %d into %d bytes\n", inbytes, outbytes);
        inpos += inbytes; outpos += outbytes;
    }
    compsz = outpos;
    free(comp);
    // Wipe the original input buffer so the check below can't pass by accident
    for (x = 0; x < DATASIZE; x++) {
        inbuf[x] = 0;
    }
    free(inbuf);
    // Reuse the compressed data as input for decompression
    inbuf = outbuf;
    outbuf = (char *) malloc(DATASIZE);
    TEST_ASSERT(outbuf != NULL);
    printf("Reinflating...\n");
    decomp = (tinfl_decompressor *) malloc(sizeof(tinfl_decompressor));
    TEST_ASSERT(decomp != NULL);
    tinfl_init(decomp);
    inpos = 0; outpos = 0;
    // Decompress until all compsz compressed bytes have been consumed
    while (inpos != compsz) {
        outbytes = DATASIZE - outpos;
        inbytes = compsz - inpos;
        tinfl_decompress(decomp, (const mz_uint8 *)&inbuf[inpos], &inbytes, (mz_uint8 *)outbuf, (mz_uint8 *)&outbuf[outpos], &outbytes, TINFL_FLAG_PARSE_ZLIB_HEADER);
        printf("...Decompressed %d into %d bytes\n", inbytes, outbytes);
        inpos += inbytes; outpos += outbytes;
    }
    printf("Checking if same...\n");
    srand(0);
    for (x = 0; x < DATASIZE; x++) {
        b = (x & 1) ? rand() & 0xff : 0;
        if (outbuf[x] != b) {
            printf("Pos %x: %hhx!=%hhx\n", x, outbuf[x], b);
            TEST_ASSERT(0);
        }
    }
    printf("Great Success!\n");
    free(inbuf);
    free(outbuf);
    free(decomp);
    while (1) {
        vTaskDelay(100);
    }
}
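Coming back to the memory question above: before trying to shrink anything, I want to confirm that the failing allocation of tdefl_compressor is really down to fragmentation (heap_caps_get_free_size() in the test only reports the total free size), and check whether PSRAM is an option on my board. This is only a rough sketch of what I have in mind, not something I have tested:
Code: Select all
#include <stdio.h>
#include <stdlib.h>
#include "esp_heap_caps.h"
#include "rom/miniz.h"

// Compare the total free heap with the largest single block malloc() could
// actually return. If the largest block is smaller than sizeof(tdefl_compressor),
// the allocation fails even though the total free size looks big enough.
static void check_heap_for_compressor(void)
{
    size_t total   = heap_caps_get_free_size(MALLOC_CAP_8BIT);
    size_t largest = heap_caps_get_largest_free_block(MALLOC_CAP_8BIT);
    printf("free: %u bytes, largest block: %u bytes, needed: %u bytes\n",
           (unsigned) total, (unsigned) largest, (unsigned) sizeof(tdefl_compressor));
}

// If the module has PSRAM (and it is enabled in menuconfig), the compressor
// state could live there instead of internal RAM; slower, but it avoids the
// fragmented internal heap. Falls back to a normal malloc() otherwise.
static tdefl_compressor *alloc_compressor(void)
{
    tdefl_compressor *comp = (tdefl_compressor *) heap_caps_malloc(
            sizeof(tdefl_compressor), MALLOC_CAP_SPIRAM | MALLOC_CAP_8BIT);
    if (comp == NULL) {
        comp = (tdefl_compressor *) malloc(sizeof(tdefl_compressor));
    }
    return comp;
}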
I am trying to compress sensor readings. My plan is to continuously read sensor data into a heap buffer of, say, 32 kB and, each time it fills up to about 16 kB, compress that chunk and write it to external SPI flash. Alternatively, I could write everything to external flash uncompressed and compress the ~16 MB of data in one go before uploading it to the cloud. What do you think is better? I don't know how long it takes to compress 16 MB worth of data.
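For the chunked variant, what I have in mind is roughly this: keep one tdefl_compressor alive, feed it each 16 kB block with TDEFL_NO_FLUSH as the sensor buffer fills up, and only pass TDEFL_FINISH for the very last block, writing whatever output appears to flash as I go. write_chunk_to_spi_flash() is just a placeholder for my flash writer, and the esp_timer_get_time() calls are only there so I can measure how long each chunk takes; again just a sketch, not tested:
Code: Select all
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
#include "rom/miniz.h"
#include "esp_timer.h"

static tdefl_compressor *s_comp;    // allocated once, e.g. with malloc() or from PSRAM
static uint8_t s_outbuf[4096];      // small staging buffer for compressed output

// Placeholder for whatever actually writes compressed bytes to external SPI flash.
extern void write_chunk_to_spi_flash(const uint8_t *data, size_t len);

// Call once before the first chunk; same flags as in the test above.
static bool compressor_start(void)
{
    s_comp = (tdefl_compressor *) malloc(sizeof(tdefl_compressor));
    if (s_comp == NULL) {
        return false;
    }
    return tdefl_init(s_comp, NULL, NULL, TDEFL_WRITE_ZLIB_HEADER | 1500) == TDEFL_STATUS_OKAY;
}

// Compress one chunk of sensor data. Pass last=true for the final chunk so the
// zlib stream gets terminated; intermediate chunks use TDEFL_NO_FLUSH.
static bool compress_chunk(const uint8_t *chunk, size_t len, bool last)
{
    size_t inpos = 0;
    int64_t t0 = esp_timer_get_time();
    while (inpos < len || last) {
        size_t inbytes = len - inpos;
        size_t outbytes = sizeof(s_outbuf);
        tdefl_status st = tdefl_compress(s_comp, chunk + inpos, &inbytes,
                                         s_outbuf, &outbytes,
                                         last ? TDEFL_FINISH : TDEFL_NO_FLUSH);
        if (outbytes > 0) {
            write_chunk_to_spi_flash(s_outbuf, outbytes);
        }
        inpos += inbytes;
        if (st == TDEFL_STATUS_DONE) {
            break;                      // final chunk: stream fully flushed
        }
        if (st != TDEFL_STATUS_OKAY) {
            return false;               // compression error
        }
    }
    printf("chunk of %u bytes took %lld us\n",
           (unsigned) len, (long long) (esp_timer_get_time() - t0));
    return true;
}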
Also, what compression ratio can be expected at a normal compression level (6 for zlib)?
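I guess the ratio depends entirely on how repetitive the sensor data is, so I will probably just measure it on real readings; in the test above one extra line right after compsz is set would already show it:
Code: Select all
// right after "compsz = outpos;" in the test above
printf("Compressed %d -> %d bytes (%.1f%% of original)\n",
       DATASIZE, (int) compsz, 100.0 * (double) compsz / (double) DATASIZE);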