HW-accelerated compression in future silicon
Posted: Mon Jan 29, 2024 2:22 pm
by DrMickeyLauer
Besides integrating a modern CAN-FD controller (I lamented about that before), I wonder whether integrating hardware-accelerated data compression would be feasible, perhaps with a dedicated cache RAM or a fast path to the PSRAM.
Re: HW-accelerated compression in future silicon
Posted: Mon Jan 29, 2024 3:28 pm
by MicroController
It may be feasible - but, IMO, it's not going to happen.
Just imagine if there were some dedicated cache/RAM for specialty function X. The first question on the forum would be "how can we use the dedicated extra RAM for general purposes?" - Same thing for functionality: I'd rather have 50 MHz more of general computing power than dedicated hardware for a specialty function which I can easily implement in software using the extra 50 MHz - and those MHz remain available for any other purpose my application may need.
However, looking at the S3's SIMD instruction set, I feel a few sensible additions could be made which would, among other things, support data compression - like vector->scalar min()/max() operations, or a vector x scalar -> scalar "find()".
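To make that concrete, here is a rough, purely illustrative C sketch of the kind of scalar inner loop an LZ77-style matcher spends its time in (the function name and shape are mine, not anything from the S3 ISA or a particular library). A vector x scalar -> scalar "find" instruction could compare a whole vector register of window bytes per step instead of one byte per iteration:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical example: the byte-search loop an LZ77-style compressor runs
 * to find match candidates in its history window. A vector "find" primitive
 * could compare, say, 16 window bytes against 'target' per instruction
 * instead of one per loop iteration. */
static size_t find_byte(const uint8_t *window, size_t len, uint8_t target)
{
    for (size_t i = 0; i < len; i++) {
        if (window[i] == target) {
            return i;   /* index of first candidate match */
        }
    }
    return len;         /* not found */
}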
Re: HW-accelerated compression in future silicon
Posted: Tue Jan 30, 2024 12:01 am
by ESP_Sprite
The additional question is: is HW-accelerated compression worth doing? I haven't really seen a use case where e.g. zlib or miniz is used but is too slow to be practical. The largest issue I see with compression is that on a 'bare' ESP32 chip it uses a fair bit of RAM, but that is generally fixed by adding external PSRAM nowadays.
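For reference, this is roughly what the software path looks like today with miniz's zlib-style one-shot API (a sketch only; it assumes the single-file miniz library is compiled into the project, and the buffer sizes are illustrative):

#include <stdio.h>
#include <string.h>
#include "miniz.h"   /* assumes the single-file miniz library is part of the build */

int main(void)
{
    const char *src = "payload payload payload payload payload";
    mz_ulong src_len = (mz_ulong)strlen(src);

    unsigned char dst[256];   /* for arbitrary input, size via mz_compressBound(src_len) */
    mz_ulong dst_len = sizeof(dst);

    /* One-shot deflate; miniz mirrors zlib's compress()/uncompress() API.
     * The working memory this needs is the "fair bit of RAM" mentioned above. */
    if (mz_compress(dst, &dst_len, (const unsigned char *)src, src_len) != MZ_OK) {
        printf("compression failed\n");
        return 1;
    }
    printf("compressed %lu -> %lu bytes\n",
           (unsigned long)src_len, (unsigned long)dst_len);
    return 0;
}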
Re: HW-accelerated compression in future silicon
Posted: Wed Jan 31, 2024 10:28 am
by MicroController
I too feel that CPU power may not necessarily be an issue in your case; it may actually be more of an API thing.
Check out Heatshrink, for example, which lets you incrementally push pieces of data in, so that the CPU load for an incoming stream of messages is spread along the stream.
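As a sketch of that push-style pattern (assuming heatshrink built with static allocation, i.e. HEATSHRINK_DYNAMIC_ALLOC=0; the function name compress_chunk and the emit callback are mine, so check the actual header for exact signatures):

#include <stddef.h>
#include <stdint.h>
#include "heatshrink_encoder.h"   /* github.com/atomicobject/heatshrink */

/* Sketch of heatshrink's incremental sink/poll usage. Call
 * heatshrink_encoder_reset(&hse) once before the first chunk, feed each
 * incoming message through compress_chunk(), and at end of stream call
 * heatshrink_encoder_finish() and keep polling until it reports done. */
static heatshrink_encoder hse;

void compress_chunk(const uint8_t *chunk, size_t chunk_len,
                    void (*emit)(const uint8_t *out, size_t out_len))
{
    size_t sunk = 0;
    while (sunk < chunk_len) {
        size_t n = 0;
        heatshrink_encoder_sink(&hse, (uint8_t *)&chunk[sunk],
                                chunk_len - sunk, &n);
        sunk += n;

        /* Drain whatever compressed output is ready right now, so the work
         * is spread across the message stream instead of done in one burst. */
        uint8_t out[64];
        HSE_poll_res pres;
        do {
            size_t out_len = 0;
            pres = heatshrink_encoder_poll(&hse, out, sizeof(out), &out_len);
            if (out_len > 0) {
                emit(out, out_len);   /* e.g. write to flash or a socket */
            }
        } while (pres == HSER_POLL_MORE);
    }
}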