Instead of encoding 4 zero bytes as placeholders for the simple_name and
source_file qstrs, and storing the qstrs after the bytecode, store the
qstrs at the location of these 4 bytes. This saves 4 bytes per bytecode
function stored in a .mpy file (for example lcd160cr.mpy drops by 232
bytes, being 4 bytes for each of its 58 functions). The resulting code size is
also slightly reduced on ports that use this feature.
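In miniature the change looks something like the following sketch; the offsets,
widths and endianness here are illustrative assumptions, not the exact .mpy
layout:
    # Illustrative only: two qstr values written directly into the 4 bytes
    # that were previously left as zero placeholders in the prelude.
    prelude = bytearray(8)
    simple_name_qstr, source_file_qstr = 0x0123, 0x0456
    # Before: these 4 bytes stayed zero and the qstrs were appended after the
    # bytecode.  Now the qstrs are stored in place.
    prelude[0:2] = simple_name_qstr.to_bytes(2, 'little')
    prelude[2:4] = source_file_qstr.to_bytes(2, 'little')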
Prior to this commit, when unwinding through an active finally the stack
was not being correctly popped/folded, which resulted in the VM crashing
for complicated unwinding of nested finallys.
This should be fixed with this commit, and more tests for return/break/
continue within a finally have been added to exercise this.
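The kind of code exercised by the new tests looks roughly like this
(illustrative snippet, not a verbatim copy of the added tests):
    def f():
        for i in range(2):
            try:
                try:
                    break            # unwinds through both active finallys
                finally:
                    print('inner finally')
            finally:
                print('outer finally')
        return 'done'

    print(f())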
This check follows CPython's behaviour, because 'import *' always populates
the globals with the imported names, not locals.
Since omitting the check is safe (it doesn't lead to a crash or undefined
behaviour), the check is only enabled when MICROPY_CPYTHON_COMPAT is set.
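For example, CPython 3 rejects the following at compile time, and so does
MicroPython when the check is enabled:
    def f():
        from math import *   # SyntaxError: star-import not allowed at function scope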
Fixes issue #5121.
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information. This part of the prelude now begins with a
single variable-length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section". After decoding this variable unsigned
integer it's possible to skip over one or both of these sections very
easily.
This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
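A sketch of the decoding side, assuming a standard 7-bits-per-byte varuint;
the way the two sizes are packed inside that integer is simplified here and is
not the exact encoding used:
    def decode_varuint(buf, i):
        # Variable-length unsigned integer: 7 data bits per byte, high bit
        # set means another byte follows.
        n = shift = 0
        while True:
            b = buf[i]
            i += 1
            n |= (b & 0x7F) << shift
            shift += 7
            if not (b & 0x80):
                return n, i

    def skip_prelude_part2(buf, i):
        sizes, i = decode_varuint(buf, i)
        # Hypothetical split of the two byte-sizes out of the single integer;
        # the real format packs them differently.
        n_source_info, n_closure = sizes & 0xFF, sizes >> 8
        return i + n_source_info + n_closure   # offset just past both sections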
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
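The idea, roughly (the actual interleaving order was chosen from the entropy
analysis and differs from this simple round-robin sketch):
    def pack6(nums):
        # Round-robin one bit from each of the 6 numbers until all are
        # consumed; the result is then emitted as a variable-length unsigned
        # integer, so if all 6 numbers are small everything fits in 1 byte.
        nums = list(nums)
        out = pos = 0
        while any(nums):
            for i in range(6):
                out |= (nums[i] & 1) << pos
                nums[i] >>= 1
                pos += 1
        return out

    print(pack6([1, 1, 0, 0, 1, 0]))   # 19: fits in one byte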
From the beginning of this project the RAISE_VARARGS opcode was named and
implemented following CPython, where it has an argument (to the opcode)
counting how many args the raise takes:
raise # 0 args (re-raise previous exception)
raise exc # 1 arg
raise exc from exc2 # 2 args (chained raise)
In the bytecode this operation therefore takes 2 bytes, one for
RAISE_VARARGS and one for the number of args.
This patch splits this opcode into 3, where each is now a single byte.
This reduces bytecode size by 1 byte for each use of raise. Every byte
counts! It also has the benefit of reducing code size (on all ports except
nanbox).
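A sketch of the emitter side after the split; the opcode names and byte values
below are placeholders, not the real MicroPython ones:
    RAISE_LAST, RAISE_OBJ, RAISE_FROM = 0x5C, 0x5D, 0x5E   # placeholder values

    def emit_raise(bytecode, n_args):
        # Before: bytecode.extend([RAISE_VARARGS, n_args])  -- 2 bytes
        # After: one dedicated single-byte opcode per arity  -- 1 byte
        bytecode.append((RAISE_LAST, RAISE_OBJ, RAISE_FROM)[n_args])

    bc = bytearray()
    emit_raise(bc, 1)   # compiling `raise exc`
    print(len(bc))      # 1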
To make progress towards MicroPython supporting Python 3.5, adding the
matmul operator is important because it's a really "low level" part of the
language, requiring a new token and modifications to the grammar.
It doesn't make sense to make it configurable because 1) it would make the
grammar and lexer complicated/messy; 2) no other operators are
configurable; 3) it's not a feature that can be "dynamically plugged in"
via an import.
And matmul can be useful as a general-purpose user-defined operator; it
doesn't have to be just for numpy use.
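For example, any class can overload the new operator via __matmul__, with no
dependency on numpy:
    class Mat:
        def __init__(self, rows):
            self.rows = rows

        def __matmul__(self, other):
            # naive matrix multiply, just enough to show the operator hook
            return Mat([[sum(a * b for a, b in zip(row, col))
                         for col in zip(*other.rows)] for row in self.rows])

    m = Mat([[1, 2], [3, 4]])
    print((m @ m).rows)   # [[7, 10], [15, 22]]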
Based on work done by Jim Mussared.
Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of this
opcode would be between 0x01 and 0x0f and be correctly loaded/saved/frozen
simply as an undefined opcode).
This patch fixes this bug by correctly accounting for the additional byte.
With this patch alignment is done relative to the start of the buffer that
is being unpacked, not the raw pointer value, as per CPython.
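For reference, this is the CPython behaviour being matched: with native
formats the pad bytes are computed from offset 0 of the buffer (sizes shown
are for a typical platform):
    import struct

    print(struct.calcsize('BI'))    # 8: native alignment pads before the 'I'
    print(struct.calcsize('<BI'))   # 5: '<' disables alignment entirely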
Fixes issue #3314.
With this patch exceptions that are re-raised have improved tracebacks
(less confusing, and matching CPython), and it makes re-raise slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Also general VM performance is not measurably affected.
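An example of the re-raise pattern affected (a bare `raise` inside an except
block re-raises the active exception):
    def f():
        raise ValueError('original')

    try:
        try:
            f()
        except ValueError:
            raise            # re-raise: traceback stays pointed at f()
    except ValueError as e:
        print('caught re-raised:', e)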
Partially fixes issue #2928.
With this patch exception tracebacks that go through a finally are improved
(less confusing, and matching CPython), and it makes finallys slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Partially fixes issue #2928.
- Split 'qemu-arm' from 'unix' for generating tests.
- Add frozen module to the qemu-arm test build.
- Add test that reproduces the requirement to half-word align native
function data.
Enabled via MICROPY_PY_URE_DEBUG, disabled by default (but enabled on unix
coverage build). This is a rarely used feature that costs a lot of code
(500-800 bytes flash). Debugging of regular expressions can be done
offline with other tools.
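Usage sketch, on the assumption that the feature is exposed from Python as a
DEBUG flag passed to ure.compile (check the module for the exact spelling):
    import ure

    # With MICROPY_PY_URE_DEBUG enabled, compiling with the debug flag dumps
    # the compiled regex program; without it the flag simply doesn't exist.
    r = ure.compile('(ab)+c', ure.DEBUG)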
As per PEP 485, this function appeared in Python 3.5. Configured via
MICROPY_PY_MATH_ISCLOSE which is disabled by default, but enabled for the
ports which already have MICROPY_PY_MATH_SPECIAL_FUNCTIONS enabled.
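The PEP 485 definition is abs(a - b) <= max(rel_tol * max(abs(a), abs(b)),
abs_tol), with rel_tol defaulting to 1e-09 and abs_tol to 0.0, for example:
    import math

    print(math.isclose(1.0, 1.0 + 1e-10))           # True  (within rel_tol)
    print(math.isclose(1.0, 1.001))                 # False
    print(math.isclose(0.0, 1e-10, abs_tol=1e-9))   # True  (abs_tol needed near 0)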
Prior to this patch the amount of free space in an array (including
bytearray) was not being maintained correctly for the case of slice
assignment which changed the size of the array. Under certain cases (as
encoded in the new test) it was possible that the array could grow beyond
its allocated memory block and corrupt the heap.
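The kind of resizing slice assignment involved looks like this (illustrative,
not the exact regression test):
    b = bytearray(10)
    b[:4] = bytes(100)    # slice assignment that grows the array
    b[4:] = b'xx'         # then one that shrinks it again
    print(len(b))         # 6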
Fixes issue #4127.
JSON requires that keys of objects be strings. CPython will therefore
automatically quote simple types (NoneType, bool, int, float) when they are
used directly as keys in JSON output. To prevent subtle bugs and emit
compliant JSON, MicroPython should at least test for such keys so they
aren't silently let through. Since doing the actual quoting costs about the
same as raising an exception, that's what this patch implements.
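For example, CPython quotes such keys so the output remains valid JSON:
    import json

    print(json.dumps({1: 'a', None: 'b', 2.5: 'c'}))
    # {"1": "a", "null": "b", "2.5": "c"}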
Fixes issue #4790.
misc_aes.py and misc_mandel.py are adapted from sources in this repository.
misc_pystone.py is the standard Python pystone test. misc_raytrace.py is
written from scratch.
This benchmarking test suite is intended to be run on any MicroPython
target. As such all tests are parameterised with N and M: N is the
approximate CPU frequency (in MHz) of the target and M is the approximate
amount of heap memory (in kbytes) available on the target. When running
the benchmark suite these parameters must be specified and then each test
is tuned to run on that target in a reasonable time (<1 second).
The test scripts are not standalone: they require adding some extra code at
the end to run the test with the appropriate parameters. This is done
automatically by the run-perfbench.py script, in such a way that imports
are minimised (so the tests can be run on targets without filesystem
support).
To interface with the benchmarking framework, each test provides a
bm_params dict and a bm_setup function, with the latter taking a set of
parameters (chosen based on N, M) and returning a pair of functions, one to
run the test and one to get the results.
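The shape of a test module under this interface is roughly as follows (the
workload and parameter tuples here are invented for illustration):
    bm_params = {
        (50, 25): (1, 1000),         # (N, M) -> params tuned for a small target
        (1000, 1000): (10, 100000),  # ... and for a PC-class target
    }

    def bm_setup(params):
        nloop, niter = params
        state = []

        def run():
            total = 0
            for _ in range(nloop):
                total = sum(i * i for i in range(niter))
            state.append(total)

        def get_result():
            # (normalisation factor for the score, output checked against CPython)
            return nloop * niter, state[-1]

        return run, get_result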
When running the test the number of microseconds taken by the test is
recorded. Then this is converted into a benchmark score by inverting it
(so higher number is faster) and normalising it with an appropriate factor
(based roughly on the amount of work done by the test, e.g. the number of
iterations).
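Schematically (the exact scaling used by run-perfbench.py may differ):
    def score(time_us, norm):
        # invert so that a higher number means faster, then scale by the
        # per-test normalisation factor returned by get_result()
        return 1e6 * norm / time_us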
Test outputs are also compared against a "truth" value, computed by running
the test with CPython. This provides a basic way of making sure the test
actually ran correctly.
Each test is run multiple times and the results averaged and standard
deviation computed. This is output as a summary of the test.
To make comparisons of performance across different runs the
run-perfbench.py script also includes a diff mode that reads in the output
of two previous runs and computes the difference in performance. Reports
are given as a percentage change in performance with a combined standard
deviation, to give an indication of whether the noise in the benchmarking is
less than the effect being measured.
Example invocations for PC, pyboard and esp8266 targets respectively:
$ ./run-perfbench.py 1000 1000
$ ./run-perfbench.py --pyboard 100 100
$ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25