The qstr_last_chunk is not collected by the garbage collector. This relies
on the invariant that the qstr_pool_t also references qstr_last_chunk. If
an exception is raised while allocating the qstr_pool_t, qstr_last_chunk
has to be invalidated so that it does not become a dangling reference at
the next garbage collection.
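A sketch of the fix (modeled on py/qstr.c; names and control flow are
assumptions):

    // A failed pool allocation raises MemoryError.  Without the new
    // pool to reference it, the last chunk will be reclaimed at the
    // next collection, so the reference must be cleared before raising.
    qstr_pool_t *pool = m_new_obj_var_maybe(qstr_pool_t, const char *, new_alloc);
    if (pool == NULL) {
        MP_STATE_VM(qstr_last_chunk) = NULL;
        QSTR_EXIT();
        mp_raise_type(&mp_type_MemoryError);
    }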
Signed-off-by: Emilie Feral <emilie.feral@numworks.com>
The idea here is that there's a moderate amount of ROM used up by exception
text. Obviously we try to keep the messages short, and the code can enable
terse errors, but it still adds up. Listed below is the total string data
size for various ports:
    bare-arm: 2860 bytes
    minimal:  2876 bytes
    stm32:    8926 bytes (PYBV11)
    cc3200:   3751 bytes
    esp32:    5721 bytes
This commit implements compression of these strings. It takes advantage of
the fact that these strings are all 7-bit ASCII: the 128 most frequently
used words are extracted from the messages and stored packed (dropping
their null terminators), and (0x80 | index) is then used inside strings to
refer to these common words. Spaces are automatically added around words,
saving more bytes. This happens transparently in the build process,
mirroring the steps that are used to generate the QSTR data. The
MP_COMPRESSED_ROM_TEXT macro wraps any literal string that should be
compressed, and it is automatically decompressed in
mp_decompress_rom_string.
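A minimal sketch of the decoding idea, assuming a simplified table layout
(the real table is generated by py/makecompresseddata.py, and its packing
and spacing rules differ in detail):

    #include <string.h>

    // Hypothetical 3-entry word table; the real table holds the 128
    // most common words, packed back-to-back without null terminators.
    static const char *const words[] = { "invalid", "argument", "length" };

    // Bytes < 0x80 are literal ASCII; (0x80 | i) expands to words[i]
    // with a space on each side (simplified versus the real rules).
    static void decompress(const unsigned char *src, char *dst) {
        while (*src != '\0') {
            unsigned char b = *src++;
            if (b < 0x80) {
                *dst++ = (char)b;
            } else {
                const char *w = words[b & 0x7f];
                *dst++ = ' ';
                while (*w != '\0') {
                    *dst++ = *w++;
                }
                *dst++ = ' ';
            }
        }
        *dst = '\0';
    }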
There are many schemes that could be used for the compression, and some are
included in py/makecompresseddata.py for reference (space, Huffman, ngram,
common word). Testing showed that the common-word compression gives the
best results, even before counting the added code cost of a Huffman
decoder. This might be slightly counter-intuitive, but the data is
extremely repetitive at the word level, and a byte-level entropy coder
can't exploit that as efficiently. Ideally one would combine both
approaches, but for now the common-word approach is the one that is used.
For additional comparison, the size of the raw data compressed with gzip
and zlib is calculated, as a sort of proxy for a lower entropy bound. With
this scheme we come within 15% on stm32, and 30% on bare-arm (i.e. we use
x% more bytes than the data compressed with gzip -- not counting the code
overhead of a decoder, and how this would be hypothetically implemented).
The feature is disabled by default and can be enabled by setting
MICROPY_ROM_TEXT_COMPRESSION at the Makefile-level.
When threads and the GIL are enabled, the qstr mutex is not needed. The
qstr_mutex field is never used in this case because of these definitions:
    #if MICROPY_PY_THREAD && !MICROPY_PY_THREAD_GIL
    #define QSTR_ENTER() mp_thread_mutex_lock(&MP_STATE_VM(qstr_mutex), 1)
    #define QSTR_EXIT() mp_thread_mutex_unlock(&MP_STATE_VM(qstr_mutex))
    #else
    #define QSTR_ENTER()
    #define QSTR_EXIT()
    #endif
So, we can completely remove qstr_mutex everywhere when MICROPY_PY_THREAD
&& MICROPY_PY_THREAD_GIL (the case where QSTR_ENTER/QSTR_EXIT expand to
nothing).
A string longer than the allowed qstr length can arise in many places, for
example in the parser with very long variable names. Without an explicit
check that the length is within range (as done in this patch) the code
would exhibit crashes and strange behaviour with truncated strings.
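The guard is a single range check at intern time; roughly (the exception
type and message here are assumptions):

    // Reject strings whose length cannot be stored in the qstr length
    // field (255 bytes with the default 1-byte length).
    if (len >= (1 << (8 * MICROPY_QSTR_BYTES_IN_LEN))) {
        QSTR_EXIT();
        mp_raise_msg(&mp_type_RuntimeError, "name too long");
    }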
So long as the input qstr identifier is valid (below the maximum number of
qstrs) the function will always return a valid pointer. This patch
eliminates the "return 0" dead-code.
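For reference, a sketch of the resulting shape (assuming the function in
question is find_qstr in py/qstr.c; the structure is approximate):

    const byte *find_qstr(qstr q) {
        // The first pool has total_prev_len == 0, so for any valid q
        // this loop terminates at the pool that contains it, and the
        // function always returns a valid pointer -- no fallback needed.
        qstr_pool_t *pool = MP_STATE_VM(last_pool);
        while (q < pool->total_prev_len) {
            pool = pool->prev;
        }
        return pool->qstrs[q - pool->total_prev_len];
    }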
The technique of using alloca is how dotted import names are composed in
mp_import_from and mp_builtin___import__, so use the same technique in the
compiler. This puts less pressure on the heap (only the stack is used if
the qstr already exists, and if it doesn't exist then the standard qstr
block memory is used for the new qstr rather than a separate chunk of the
heap) and reduces overall code size.
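A sketch of the technique, with illustrative variable names:

    // Compose "pkg.name" on the C stack.  qstr_from_strn copies the
    // bytes into the shared qstr storage only if the string is not
    // already interned, so the heap is untouched in the common case.
    size_t dest_len = pkg_len + 1 + name_len;
    char *dest = alloca(dest_len);
    memcpy(dest, pkg, pkg_len);
    dest[pkg_len] = '.';
    memcpy(dest + pkg_len + 1, name, name_len);
    qstr q = qstr_from_strn(dest, dest_len);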
In both parse.c and qstr.c, an internal chunking allocator tidies up
by calling m_renew to shrink an allocated chunk to the size used, and
assumes that the chunk will not move. However, when MICROPY_ENABLE_GC
is false, m_renew calls the system realloc, which does not guarantee
this behaviour. Environments where realloc may return a different
pointer include:
(1) mbed-os with MBED_HEAP_STATS_ENABLED (which adds a wrapper around
malloc & friends; this is where I was hit by the bug);
(2) valgrind on linux (how I diagnosed it).
The fix is to call m_renew_maybe with allow_move=false.
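In sketch form (variable names are illustrative):

    // Request a shrink that must not move the block.  If the underlying
    // allocator cannot resize in place it returns NULL, in which case
    // we simply keep the original, slightly oversized chunk.
    byte *shrunk = m_renew_maybe(byte, chunk, alloc, used, false);
    if (shrunk != NULL) {
        chunk = shrunk;
        alloc = used;
    }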
When there are C files to be (re)compiled, they are all passed first to
the preprocessor. QSTR references are extracted from the preprocessed
output and split per original C file. Then all available qstr files
(including those generated previously) are concatenated together. The
output file is written only if the resulting content has changed (causing
an almost global rebuild, to pick up potentially renumbered qstrs);
otherwise it is left untouched, to avoid spurious rebuilds. The related
make rules are split to minimise the number of commands executed in the
common interim case (when some C files were updated but no qstrs changed).
The config variable MICROPY_MODULE_FROZEN is now made of two separate
parts: MICROPY_MODULE_FROZEN_STR and MICROPY_MODULE_FROZEN_MPY. This
allows having none, either, or both of frozen strings and frozen mpy
files (aka frozen bytecode).
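Illustrative configuration, with the combined flag derived from the two
parts (the exact form in py/mpconfig.h may differ):

    // Enable either or both kinds of frozen module support.
    #define MICROPY_MODULE_FROZEN_STR (1)
    #define MICROPY_MODULE_FROZEN_MPY (1)

    // Convenience flag: true if any kind of frozen module is enabled.
    #define MICROPY_MODULE_FROZEN \
        (MICROPY_MODULE_FROZEN_STR || MICROPY_MODULE_FROZEN_MPY)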
This patch makes configurable, via MICROPY_QSTR_BYTES_IN_HASH, the
number of bytes used for a qstr hash. It was originally fixed at 2
bytes and still defaults to 2 bytes. Setting it to 1 byte saves ROM
and RAM at the small expense of more hash collisions.
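For example, in a port's mpconfigport.h (the space-saving setting just
described):

    // 1-byte qstr hashes: save ROM/RAM, accept more hash collisions.
    #define MICROPY_QSTR_BYTES_IN_HASH (1)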
Prior to this patch every interned string lived in its own malloc'd
chunk. On average this wastes N/2 bytes per interned string, where N is
the number of bytes in a quantum of the memory allocator (16 bytes on
32-bit archs).
With this patch interned strings are concatenated into the same malloc'd
chunk when possible. Such chunks are enlarged in place when possible,
and shrunk to fit when a new chunk is needed.
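A simplified sketch of the strategy (the real logic lives in
qstr_from_strn in py/qstr.c; the names and chunk-size policy here are
assumptions):

    // chunk/chunk_alloc/chunk_used track the shared chunk that new
    // qstr data is appended to.
    if (chunk == NULL || chunk_used + n_bytes > chunk_alloc) {
        // First try to enlarge the current chunk in place.
        byte *grown = (chunk == NULL) ? NULL
            : m_renew_maybe(byte, chunk, chunk_alloc,
                chunk_used + n_bytes, false);
        if (grown != NULL) {
            chunk_alloc = chunk_used + n_bytes;
        } else {
            // Can't grow in place: shrink the old chunk to what was
            // actually used, then start a fresh chunk.
            if (chunk != NULL) {
                m_renew_maybe(byte, chunk, chunk_alloc, chunk_used, false);
            }
            chunk_alloc = MAX(QSTR_CHUNK_SIZE, n_bytes);
            chunk = m_new(byte, chunk_alloc);
            chunk_used = 0;
        }
    }
    byte *dest = chunk + chunk_used; // space for the new qstr's data
    chunk_used += n_bytes;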
RAM savings with this patch are highly varied, but should always show an
improvement (unless only 3 or 4 strings are interned). The new version
typically uses about 70% of the previous memory for the qstr data, and can
lead to savings of around 10% of the total memory footprint of a running
script.
Costs about 120 bytes of code size on Thumb2 archs (depending on how many
calls to gc_realloc are made).
Prior to this patch all constant string/bytes objects were interned by
the compiler, and this led to crashes when the qstr was too long
(noticeable now that qstr length storage defaults to 1 byte).
With this patch, long string/bytes objects are never interned, and are
referenced directly as constant objects within generated code using
load_const_obj.
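Sketched compiler behaviour (the condition and emitter calls are
illustrative, not the exact code in py/compile.c):

    if (len < (1 << (8 * MICROPY_QSTR_BYTES_IN_LEN))) {
        // Short enough to intern: load by qstr as before.
        EMIT_ARG(load_const_str, qstr_from_strn(str, len));
    } else {
        // Too long to intern: create a constant object and reference
        // it directly from the generated code.
        EMIT_ARG(load_const_obj, mp_obj_new_str(str, len));
    }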
This new config option sets the fixed number of bytes used to store the
length of each qstr. Previously this was hard-coded to 2 but, as per
issue #1056, this is considered overkill since no one needs identifiers
longer than 255 bytes.
With this patch the number of bytes for the length is configurable, and
defaults to 1 byte. The configuration option filters through to the
makeqstrdata.py script.
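For example, in a port's mpconfigport.h (the default just described):

    // 1-byte length field: identifiers are limited to 255 bytes.
    #define MICROPY_QSTR_BYTES_IN_LEN (1)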
Code size savings going from 2 to 1 byte:
- unix x64 down by 592 bytes
- stmhal down by 1148 bytes
- bare-arm down by 284 bytes
Also has RAM savings, and will be slightly more efficient in execution.
This patch consolidates all global variables in py/ core into one place,
in a global structure. Root pointers are all located together to make
GC tracing easier and more efficient.
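The rough shape of the result (the field layout here is illustrative;
see py/mpstate.h for the real definition):

    // All of the py/ core's globals live in a single structure, with
    // the GC root pointers grouped so they can be traced as one block.
    typedef struct _mp_state_ctx_t {
        mp_state_vm_t vm;   // interpreter state, including root pointers
        mp_state_mem_t mem; // heap/GC state
    } mp_state_ctx_t;

    extern mp_state_ctx_t mp_state_ctx;

    #define MP_STATE_VM(x) (mp_state_ctx.vm.x)
    #define MP_STATE_MEM(x) (mp_state_ctx.mem.x)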
gc.enable/disable are now the same as CPython: they just control whether
automatic garbage collection is enabled or not. If disabled, you can
still allocate heap memory, and initiate a manual collection.
Applied blanket-wide to all .c and .h files. Some files originating from
ST are difficult to deal with (license-wise) so the header was left out
of those.
Also merged modpyb.h, modos.h, modstm.h and modtime.h in stmhal/.
The autogenerated header files have been moved about, and an extra
include dir has been added, which means you can give a custom
BUILD=newbuilddir option to make and everything "just works".
Also tidied up the way the different Makefiles build their
include-directory flags.