lib/uzlib: Add memory-efficient, streaming LZ77 compression support.

The compression algorithm implemented in this commit uses much less
memory than the standard implementation, which builds a hash table and
keeps a large look-back window. In particular, the algorithm here
doesn't allocate a hash table to store indices into the history of
previously seen text. Instead it does a brute-force search of the
history text to find a match for the compressor. This is slower (linear
search vs hash-table lookup), but with a small enough history (e.g. 512
bytes) it's not that slow, and a small history does not hurt the
compression ratio too much.

To give some concrete numbers comparing memory use between the
approaches:

- Standard approach: in-place compression; all text to compress must be
  in RAM (or at least memory addressable), plus an additional 16k bytes
  of RAM for the hash-table pointers pointing into the text.

- The approach in this commit: streaming compression; only a limited
  amount of previous text must be in RAM (user selectable, defaulting
  to 512 bytes).

To compress, say, 1k of data, the standard approach requires all of
that data to be in RAM, plus an additional 16k of RAM for the
hash-table pointers. With this commit, you only need the 1k of data in
RAM; or, if the data is streamed from a file (or elsewhere), you could
get away with as little as 256 bytes of RAM for the sliding history and
still get very decent compression.

In summary: because the standard algorithm needs so much RAM for
compression, it's not really suitable for microcontrollers, so the
approach taken in this commit is to minimise RAM usage as much as
possible while keeping acceptable performance (speed and compression
ratio).
Signed-off-by: Damien George <damien@micropython.org>
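
As a concrete sketch of the search described above (illustrative only:
the function name and parameters below are hypothetical helpers, not
the API added by this commit):

    // Brute-force longest-match search over a small history window.
    // hist holds the last hist_len bytes seen before src[0]; on
    // success *dist is set to the match distance.
    static size_t find_longest_match(const unsigned char *hist,
        size_t hist_len, const unsigned char *src, size_t src_len,
        size_t *dist) {
        size_t best_len = 0;
        *dist = 0;
        for (size_t d = 1; d <= hist_len; ++d) { // linear scan, no hash table
            size_t len = 0;
            while (len < src_len && len < 258) { // 258: DEFLATE max match
                // The first d bytes come from the history; after that
                // the match overlaps the source itself (legal in LZ77).
                unsigned char prev = len < d ? hist[hist_len - d + len]
                                             : src[len - d];
                if (prev != src[len]) {
                    break;
                }
                ++len;
            }
            if (len >= 3 && len > best_len) { // 3: DEFLATE min match
                best_len = len;
                *dist = d;
            }
        }
        return best_len;
    }

A compressor driver would emit a match (via uzlib_match below) when
this returns a length of 3 or more, and fall back to a literal byte
otherwise.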

/*
 * Routines in this file are based on:
 * Zlib (RFC1950 / RFC1951) compression for PuTTY.
 *
 * PuTTY is copyright 1997-2014 Simon Tatham.
 *
 * Portions copyright Robert de Bath, Joris van Rantwijk, Delian
 * Delchev, Andreas Schultz, Jeroen Massar, Wez Furlong, Nicolas Barry,
 * Justin Bradford, Ben Harris, Malcolm Smith, Ahmad Khalifa, Markus
 * Kuhn, Colin Watson, and CORE SDI S.A.
 *
 * Optimised for MicroPython:
 * Copyright (c) 2023 by Jim Mussared
 *
 * Permission is hereby granted, free of charge, to any person
 * obtaining a copy of this software and associated documentation files
 * (the "Software"), to deal in the Software without restriction,
 * including without limitation the rights to use, copy, modify, merge,
 * publish, distribute, sublicense, and/or sell copies of the Software,
 * and to permit persons to whom the Software is furnished to do so,
 * subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be
 * included in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE
 * FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
 * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
 * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 */

#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* ----------------------------------------------------------------------
 * Zlib compression. We always use the static Huffman tree option.
 * Mostly this is because it's hard to scan a block in advance to
 * work out better trees; dynamic trees are great when you're
 * compressing a large file under no significant time constraint,
 * but when you're compressing little bits in real time, things get
 * hairier.
 *
 * I suppose it's possible that I could compute Huffman trees based
 * on the frequencies in the _previous_ block, as a sort of
 * heuristic, but I'm not confident that the gain would balance out
 * having to transmit the trees.
 */
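
/* Append the nbits low-order bits of "bits" to the output stream,
 * least-significant bit first, passing each completed byte to the
 * dest_write_cb callback. */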
static void outbits(uzlib_lz77_state_t *state, unsigned long bits, int nbits)
{
    assert(state->noutbits + nbits <= 32);
    state->outbits |= bits << state->noutbits;
    state->noutbits += nbits;
    while (state->noutbits >= 8) {
        state->dest_write_cb(state->dest_write_data, state->outbits & 0xFF);
        state->outbits >>= 8;
        state->noutbits -= 8;
    }
}
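
/* DEFLATE transmits each Huffman code most-significant bit first,
 * while outbits() packs bits least-significant bit first, so every
 * code is bit-reversed before being written. The table and helper
 * below reverse a nibble and a byte; e.g. mirrorbyte(0x30) == 0x0c. */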
static const unsigned char mirrornibbles[16] = {
    0x0, 0x8, 0x4, 0xc, 0x2, 0xa, 0x6, 0xe,
    0x1, 0x9, 0x5, 0xd, 0x3, 0xb, 0x7, 0xf,
};
static unsigned int mirrorbyte(unsigned int b) {
    return mirrornibbles[b & 0xf] << 4 | mirrornibbles[b >> 4];
}
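
/* Integer log base 2: returns floor(log2(x)) for x >= 1, and 0 for
 * x == 0. Used below to split lengths and distances into a Huffman
 * code plus extra bits. */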
static int int_log2(int x) {
    int r = 0;
    while (x >>= 1) {
        ++r;
    }
    return r;
}
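
/* Emit the static-Huffman code for literal byte c. For example, 'A'
 * (65) falls in the 0-143 range, so its code is 0x30 + 65 = 0x71
 * (8 bits), which is mirrored to 0x8e so that the LSB-first bit
 * writer transmits the code MSB first. */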
static void uzlib_literal(uzlib_lz77_state_t *state, unsigned char c)
{
    if (c <= 143) {
        /* 0 through 143 are 8 bits long starting at 00110000. */
        outbits(state, mirrorbyte(0x30 + c), 8);
    } else {
        /* 144 through 255 are 9 bits long starting at 110010000. */
        outbits(state, 1 + 2 * mirrorbyte(0x90 - 144 + c), 9);
    }
}
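
/* Emit the static-Huffman encoding of a match: a length code with its
 * extra bits, then a distance code with its extra bits. For example,
 * uzlib_match(state, 3, 3) emits length code 257 (7-bit code 0000001,
 * meaning length 3) followed by distance code 2 (5-bit code 00010,
 * meaning distance 3); neither code takes extra bits. */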
static void uzlib_match(uzlib_lz77_state_t *state, int distance, int len)
{
    assert(distance >= 1 && distance <= 32768);
    distance -= 1;

    while (len > 0) {
        int thislen;

        /*
         * We can transmit matches of lengths 3 through 258
         * inclusive. So if len exceeds 258, we must transmit in
         * several steps, with 258 or less in each step.
         *
         * Specifically: if len >= 261, we can transmit 258 and be
         * sure of having at least 3 left for the next step. And if
         * len <= 258, we can just transmit len. But if len == 259
         * or 260, we must transmit len-3.
         */
        thislen = (len > 260 ? 258 : len <= 258 ? len : len - 3);
        len -= thislen;

        assert(thislen >= 3 && thislen <= 258);

        thislen -= 3;
        int lcode = 28;
        int x = int_log2(thislen);
        int y;
        if (thislen < 255) {
            if (x) {
                --x;
            }
            y = (thislen >> (x ? x - 1 : 0)) & 3;
            lcode = x * 4 + y;
        }

        /*
         * Transmit the length code. 256-279 are seven bits
         * starting at 0000000; 280-287 are eight bits starting at
         * 11000000.
         */
        if (lcode <= 22) {
            outbits(state, mirrorbyte((lcode + 1) * 2), 7);
        } else {
            outbits(state, mirrorbyte(lcode + 169), 8);
        }

        /*
         * Transmit the extra bits.
         */
        if (thislen < 255 && x > 1) {
            int extrabits = x - 1;
            int lmin = (thislen >> extrabits) << extrabits;
            outbits(state, thislen - lmin, extrabits);
        }

        x = int_log2(distance);
        y = (distance >> (x ? x - 1 : 0)) & 1;

        /*
         * Transmit the distance code. Five bits starting at 00000.
         */
        outbits(state, mirrorbyte((x * 2 + y) * 8), 5);

        /*
         * Transmit the extra bits.
         */
        if (x > 1) {
            int dextrabits = x - 1;
            int dmin = (distance >> dextrabits) << dextrabits;
            outbits(state, distance - dmin, dextrabits);
        }
    }
}
void uzlib_start_block(uzlib_lz77_state_t *state)
{
    // Final block (0b1)
    // Static huffman block (0b01)
    outbits(state, 3, 3);
}
void uzlib_finish_block(uzlib_lz77_state_t *state)
{
    // Close block (0b0000000)
    // Make sure all bits are flushed (0b0000000)
    outbits(state, 0, 14);
}
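
/*
 * Illustrative usage sketch, not from the original file. It assumes
 * only the uzlib_lz77_state_t fields used above (outbits, noutbits,
 * dest_write_cb, dest_write_data) and that dest_write_cb takes
 * (void *data, uint8_t byte); the example_* names are hypothetical.
 * It produces a complete raw DEFLATE stream for "abcabc": three
 * literals followed by a match of length 3 at distance 3.
 */
#ifdef UZLIB_LZ77_EXAMPLE
static void example_write_byte(void *data, uint8_t byte) {
    // Append each completed output byte at the caller's cursor.
    uint8_t **cursor = (uint8_t **)data;
    *(*cursor)++ = byte;
}

static size_t example_compress_abcabc(uint8_t *buf) {
    uint8_t *cursor = buf;
    uzlib_lz77_state_t state;
    memset(&state, 0, sizeof(state)); // outbits = 0, noutbits = 0
    state.dest_write_cb = example_write_byte;
    state.dest_write_data = &cursor;

    uzlib_start_block(&state);
    uzlib_literal(&state, 'a');
    uzlib_literal(&state, 'b');
    uzlib_literal(&state, 'c');
    uzlib_match(&state, 3, 3);  // copy "abc" starting 3 bytes back
    uzlib_finish_block(&state); // end-of-block code and final bit flush

    return (size_t)(cursor - buf); // number of bytes produced
}
#endif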