MP_BC_CALL_FUNCTION will leave the result on the Python stack, so that
result must be discarded by MP_BC_POP_TOP.
Signed-off-by: Damien George <damien@micropython.org>
Usage:
mpy-tool.py -o merged.mpy --merge mod1.mpy mod2.mpy
The constituent .mpy files are executed sequentially when the merged file
is imported, and they all use the same global namespace.
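For illustration, a rough Python analogue of what importing the merged file
does (hypothetical code, not the actual loader): each constituent module is
executed in turn against a single shared globals dict.

    def run_merged(module_codes):
        # each constituent module runs in sequence, sharing one
        # global namespace (an illustration of the merge semantics)
        shared_globals = {}
        for code in module_codes:
            exec(code, shared_globals)
        return shared_globals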
Instead of encoding 4 zero bytes as placeholders for the simple_name and
source_file qstrs, and storing the qstrs after the bytecode, store the
qstrs at the location of these 4 bytes. This saves 4 bytes per bytecode
function stored in a .mpy file (for example lcd160cr.mpy drops by 232
bytes, i.e. 4 bytes for each of its 58 functions). The resulting code size
is also slightly reduced on ports that use this feature.
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information. This part of the prelude now begins with a
single variable-length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section". After decoding this variable-length
unsigned integer it's possible to skip over one or both of these sections
very easily.
This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
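As a rough sketch of the idea (the exact bit layout used in the real prelude
is not reproduced here), the two section sizes could be packed into one
variable-length unsigned integer like this:

    def encode_prelude_sizes(n_info, n_cell):
        # interleave the bits of the two sizes into a single integer
        # (illustrative layout only)
        val = 0
        shift = 0
        while n_info or n_cell:
            val |= ((n_cell & 1) << shift) | ((n_info & 1) << (shift + 1))
            n_cell >>= 1
            n_info >>= 1
            shift += 2
        # emit as a varuint: 7 bits per byte, high bit set means "more"
        out = bytearray()
        while True:
            byte = val & 0x7F
            val >>= 7
            out.append(byte | (0x80 if val else 0))
            if val == 0:
                return bytes(out)

A decoder does the reverse: read the varuint, de-interleave the bits, and it
then knows how many bytes to skip to jump over either section.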
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
As a result, functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte in 80% of cases.
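The sketch below shows the same idea for the 6 prelude numbers (the variable
names and the interleaving order here are illustrative; only the overall
approach is taken from the description above). Because bits are taken from
the low end of all 6 numbers first, functions with small values produce
short encodings:

    def encode_prelude_sig(n_state, n_exc_stack, scope_flags,
                           n_pos_args, n_kwonly_args, n_def_args):
        nums = [n_state, n_exc_stack, scope_flags,
                n_pos_args, n_kwonly_args, n_def_args]
        val = 0
        shift = 0
        while any(nums):
            # take one bit from each number per round
            for i in range(len(nums)):
                val |= (nums[i] & 1) << shift
                nums[i] >>= 1
                shift += 1
        # emit as a varuint: 7 bits per byte, high bit set means "more"
        out = bytearray()
        while True:
            byte = val & 0x7F
            val >>= 7
            out.append(byte | (0x80 if val else 0))
            if val == 0:
                return bytes(out)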
Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of this
opcode would be between 0x01 and 0x0f and be correctly loaded/saved/frozen
simply as an undefined opcode).
This patch fixes this bug by correctly accounting for the additional byte.
Previously, when linking qstr objects in native code for ARM Thumb, the
index into the machine code was being incremented by 4, not 8. It should
be 8 to account for the size of the two machine instructions movw and movt.
This patch makes sure the index into the machine code is incremented by the
correct amount for all variations of qstr linking.
See issue #4829.
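As an illustration of the sizes involved (not mpy-tool.py's actual code): a
Thumb-2 movw/movt pair is two 4-byte instructions, so after processing a
qstr linked that way the scan position must advance by 8, while a
hypothetical single-instruction reference would advance by 4.

    def advance_after_qstr_link(index, is_movw_movt_pair):
        # movw (4 bytes) + movt (4 bytes) = 8 bytes of machine code
        return index + (8 if is_movw_movt_pair else 4)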
Fixes errors in the tool when 1) linking qstrs in native ARM-M code; 2)
freezing multiple files some of which use native code and some which don't.
Fixes issue #4829.
The qstr window size is not log-2 encoded; it's just the actual number (but
in mpy-tool.py this didn't lead to an error because the size is just used
to truncate the window so it doesn't grow arbitrarily large in memory).
Addresses issue #4635.
When encoded in the mpy file, if qstr <= QSTR_LAST_STATIC then store two
bytes: 0, static_qstr_id. Otherwise encode the qstr as usual (either with
string data or a reference into the qstr window).
Reduces mpy file size by about 5%.
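A sketch of the save-side rule (the helper and the non-static encodings here
are assumptions; only the "0, static_qstr_id" pair comes from the
description above):

    QSTR_LAST_STATIC = 200  # placeholder value for illustration

    def save_qstr(out, window, qstr_id, qstr_str):
        if qstr_id <= QSTR_LAST_STATIC:
            # static qstr: a 0 marker byte followed by the static qstr id
            out.append(0)
            out.append(qstr_id)
        elif qstr_id in window:
            # reference into the sliding qstr window (assumed encoding)
            out.append(window.index(qstr_id) + 1)
        else:
            # full string data, length-prefixed (assumed encoding)
            data = qstr_str.encode()
            out.append(len(data))
            out.extend(data)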
Instead of emitting two bytes in the bytecode marking where the linked qstr
should be written, that placeholder is now replaced by the actual qstr data,
or a reference into the qstr window.
Reduces mpy file size by about 10%.
This is an implementation of a sliding qstr window used to reduce the
number of qstrs stored in a .mpy file. The window size is configured to 32
entries which takes a fixed 64 bytes (16-bits each) on the C stack when
loading/saving a .mpy file. It remembers the most recent 32 qstrs so they
don't need to be stored again in the .mpy file. The qstr window
uses a simple least-recently-used mechanism to discard the least recently
used qstr when the window overflows (similar to dictionary compression).
This scheme only needs a single pass to save/load the .mpy file.
Reduces mpy file size by about 25% with a window size of 32.
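A minimal Python sketch of such a window (the real implementation keeps
16-bit qstr values in a fixed-size C array; the class below only illustrates
the least-recently-used behaviour):

    class QstrWindow:
        def __init__(self, size=32):
            self.size = size
            self.entries = []  # most recently used entry at index 0

        def lookup(self, qstr):
            # return the window index on a hit (which the saver can encode
            # as a small reference), else None; either way the qstr becomes
            # the most recently used entry
            if qstr in self.entries:
                idx = self.entries.index(qstr)
                self.entries.pop(idx)
                self.entries.insert(0, qstr)
                return idx
            self.entries.insert(0, qstr)
            if len(self.entries) > self.size:
                self.entries.pop()  # discard the least recently used qstr
            return None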
POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a
JUMP. So this optimisation reduces code size, and RAM usage of bytecode by
two bytes for each try-except handler.
If you happen to have only a really simple frozen file that doesn't contain
any new qstrs, then the generated frozen_mpy.c file contains an empty
enumeration, which causes a C compile-time error.
Following an equivalent fix to py/bc.c. The reason the incorrect values
for the opcode constants were not previously causing a bug is that they
were never being used: these opcodes always have qstr arguments so the part
of the code that was comparing them would never be reached.
Thanks to @malinah for finding the problem and providing the initial patch.
The first dynamic qstr pool is double the size of the 'alloc' field of
the last const qstr pool. The built in const qstr pool
(mp_qstr_const_pool) has a hardcoded alloc size of 10, meaning that the
first dynamic pool is allocated space for 20 entries. The alloc size
must be less than or equal to the actual number of qstrs in the pool
(the 'len' field) to ensure that the first dynamically created qstr
triggers the creation of a new pool.
When modules are frozen a second const pool is created (generally
mp_qstr_frozen_const_pool) and linked to the built in pool. However,
this second const pool had its 'alloc' field set to the number of qstrs
in the pool. When freezing a large quantity of modules this can result
in thousands of qstrs being in the pool. This means that the first
dynamically created qstr results in a massive allocation. This commit
sets the alloc size of the frozen qstr pool to 10, or to the number of
qstrs in the pool if that number is less than 10. The result is that the
allocation behaviour when a dynamic qstr is created is identical with and
without frozen code.
Note that there is the potential for a slight memory inefficiency if the
frozen modules have less than 10 qstrs, as the first few dynamic
allocations will have quite a large overhead, but the geometric growth
soon deals with this.
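Expressed as a small sketch (the 10 and the doubling rule come from the text
above; the helper names are illustrative):

    def frozen_pool_alloc(n_frozen_qstrs):
        # cap the frozen pool's advertised 'alloc' field
        return min(n_frozen_qstrs, 10)

    def first_dynamic_pool_alloc(last_const_pool_alloc):
        # the first dynamic pool doubles the last const pool's alloc
        return 2 * last_const_pool_alloc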
This patch allows the following code to run without allocating on the heap:
super().foo(...)
Before this patch such a call would allocate a super object on the heap and
then load the foo method and call it right away. The super object is only
needed to perform the lookup of the method and not needed after that. This
patch makes an optimisation to allocate the super object on the C stack and
discard it right after use.
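A slightly fuller example of the pattern that benefits (plain Python,
nothing beyond what the description above implies):

    class Base:
        def foo(self, x):
            return x + 1

    class Child(Base):
        def foo(self, x):
            # the temporary super object used only for this lookup can now
            # live on the C stack instead of being heap-allocated
            return super().foo(x) * 2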
Changes in code size due to this patch are:
bare-arm: +128
minimal: +232
unix x64: +416
unix nanbox: +364
stmhal: +184
esp8266: +340
cc3200: +128