This commit removes all parts of code associated with the existing
MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE optimisation option, including the
-mcache-lookup-bc option to mpy-cross.
This feature originally provided a significant performance boost for Unix,
but could not be enabled for MCU targets (due to frozen bytecode), and
added significant extra complexity to generating and distributing .mpy
files.
The equivalent performance gain is now provided by the combination of
MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE (which has
been enabled on the unix port in the previous commit).
It's hard to provide precise performance numbers, but tests have been run
on a wide variety of architectures (x86-64, ARM Cortex, Aarch64, RISC-V,
xtensa) and they all generally agree on the qualitative improvements seen
by the combination of MICROPY_OPT_LOAD_ATTR_FAST_PATH and
MICROPY_OPT_MAP_LOOKUP_CACHE.
For example, on a "quiet" Linux x64 environment (i3-5010U @ 2.10GHz) the
change from CACHE_MAP_LOOKUP_IN_BYTECODE to LOAD_ATTR_FAST_PATH combined
with MAP_LOOKUP_CACHE is:
diff of scores (higher is better)
N=2000 M=2000 bccache -> attrmapcache diff diff% (error%)
bm_chaos.py 13742.56 -> 13905.67 : +163.11 = +1.187% (+/-3.75%)
bm_fannkuch.py 60.13 -> 61.34 : +1.21 = +2.012% (+/-2.11%)
bm_fft.py 113083.20 -> 114793.68 : +1710.48 = +1.513% (+/-1.57%)
bm_float.py 256552.80 -> 243908.29 : -12644.51 = -4.929% (+/-1.90%)
bm_hexiom.py 521.93 -> 625.41 : +103.48 = +19.826% (+/-0.40%)
bm_nqueens.py 197544.25 -> 217713.12 : +20168.87 = +10.210% (+/-3.01%)
bm_pidigits.py 8072.98 -> 8198.75 : +125.77 = +1.558% (+/-3.22%)
misc_aes.py 17283.45 -> 16480.52 : -802.93 = -4.646% (+/-0.82%)
misc_mandel.py 99083.99 -> 128939.84 : +29855.85 = +30.132% (+/-5.88%)
misc_pystone.py 83860.10 -> 82592.56 : -1267.54 = -1.511% (+/-2.27%)
misc_raytrace.py 21490.40 -> 22227.23 : +736.83 = +3.429% (+/-1.88%)
This shows that the new optimisations are at least as good as the existing
inline-bytecode-caching, and are sometimes much better (because the new
ones apply caching to a wider variety of map lookups).
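As a rough illustration (a hypothetical example, not taken from the benchmark
suite), the kind of code that benefits is anything doing repeated attribute or
global-name lookups in a loop:

    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    def length_sq(points):
        total = 0
        for p in points:
            total += p.x * p.x + p.y * p.y  # repeated attribute loads hit the cache
        return total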
The new optimisations can also benefit code generated by the native
emitter, because they apply to the runtime rather than the generated code.
The improvement for the native emitter when LOAD_ATTR_FAST_PATH and
MAP_LOOKUP_CACHE are enabled is (same Linux environment as above):
diff of scores (higher is better)
N=2000 M=2000 native -> nat-attrmapcache diff diff% (error%)
bm_chaos.py 14130.62 -> 15464.68 : +1334.06 = +9.441% (+/-7.11%)
bm_fannkuch.py 74.96 -> 76.16 : +1.20 = +1.601% (+/-1.80%)
bm_fft.py 166682.99 -> 168221.86 : +1538.87 = +0.923% (+/-4.20%)
bm_float.py 233415.23 -> 265524.90 : +32109.67 = +13.756% (+/-2.57%)
bm_hexiom.py 628.59 -> 734.17 : +105.58 = +16.796% (+/-1.39%)
bm_nqueens.py 225418.44 -> 232926.45 : +7508.01 = +3.331% (+/-3.10%)
bm_pidigits.py 6322.00 -> 6379.52 : +57.52 = +0.910% (+/-5.62%)
misc_aes.py 20670.10 -> 27223.18 : +6553.08 = +31.703% (+/-1.56%)
misc_mandel.py 138221.11 -> 152014.01 : +13792.90 = +9.979% (+/-2.46%)
misc_pystone.py 85032.14 -> 105681.44 : +20649.30 = +24.284% (+/-2.25%)
misc_raytrace.py 19800.01 -> 23350.73 : +3550.72 = +17.933% (+/-2.79%)
In summary, compared to MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE, the new
MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE options:
- are simpler;
- take less code size;
- are faster (generally);
- work with code generated by the native emitter;
- can be used on embedded targets with a small and constant RAM overhead;
- allow the same .mpy bytecode to run on all targets.
See #7680 for further discussion, and #7653 for a discussion about
simplifying mpy-cross options.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This implements (most of) the PEP-498 spec for f-strings and is based on
https://github.com/micropython/micropython/pull/4998 by @klardotsh.
It is implemented in the lexer as a syntax translation to `str.format`:
f"{a}" --> "{}".format(a)
It also supports:
f"{a=}" --> "a={}".format(a)
This is done by extracting the arguments into a temporary vstr buffer,
then after the string has been tokenized, the lexer input queue is saved
and the contents of the temporary vstr buffer are injected into the lexer
instead.
There are four main limitations:
- raw f-strings (`fr` or `rf` prefixes) are not supported and will raise
`SyntaxError: raw f-strings are not supported`.
- literal concatenation of f-strings with adjacent strings will fail
"{}" f"{a}" --> "{}{}".format(a) (str.format will incorrectly use
the braces from the non-f-string)
f"{a}" f"{a}" --> "{}".format(a) "{}".format(a) (cannot concatenate)
- PEP-498 requires the full parser to understand the interpolated
argument; however, because this runs entirely in the lexer it cannot
resolve nested braces in expressions like
f"{'}'}"
(a workaround is sketched after this list).
- The !r, !s, and !a conversions are not supported.
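To illustrate the nested-brace limitation (a hypothetical example), a brace
inside the interpolated expression cannot be handled, but binding the
expression to a name first works:

    # f"{'}'}"           # not supported: the lexer cannot resolve the nested braces
    closing = '}'
    print(f"{closing}")  # workaround: prints }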
Includes tests and cpydiffs.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Per CPython, everything that comes after the command, module or file
argument is not an option for the interpreter itself. Hence the processing
of options should stop when encountering those, and the remainder be passed
as sys.argv. Note the latter was already the case for a module or file but
not for a command.
This fixes issues like 'micropython myfile.py -h' showing the help and
exiting instead of passing '-h' as sys.argv[1], and likewise for
'-X <something>' being treated as a special option no matter where it
occurs on the command line.
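For example (illustrative, with a hypothetical myfile.py):

    # myfile.py
    import sys
    print(sys.argv)

    # Running `micropython myfile.py -h` now prints ['myfile.py', '-h'] rather
    # than showing the interpreter's help text and exiting.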
The syntax matches CPython and the semantics are equivalent except that,
unlike CPython, MicroPython allows using := to assign to comprehension
iteration variables, because disallowing this would take a lot of code to
check for it.
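A brief example of the new syntax:

    data = [1, 2, 3, 4]
    while (n := len(data)) > 2:
        data.pop()
    print(n)  # 2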
The new compile-time option MICROPY_PY_ASSIGN_EXPR selects this feature and
is enabled by default, following MICROPY_PY_ASYNC_AWAIT.
This adds the Python files in the tests/ directory to be formatted with
./tools/codeformat.py. The basics/ subdirectory is excluded for now so we
aren't changing too much at once.
In a few places `# fmt: off`/`# fmt: on` was used where the code had
special formatting for readability or where the test was actually testing
the specific formatting.
When this variable is set to a non-empty string it triggers the REPL after a
command/module/file finishes running.
The Python file is given without the file extension because the cmdline:
parser in run-tests splits on spaces, so we can't use the -c option since
`import os` can't be written without a space.
This commit adds backward-word, backward-kill-word, forward-word,
forward-kill-word sequences for the REPL, with bindings to Alt+F, Alt+B,
Alt+D and Alt+Backspace respectively. It is disabled by default and can be
enabled via MICROPY_REPL_EMACS_WORDS_MOVE.
Further enabling MICROPY_REPL_EMACS_EXTRA_WORDS_MOVE adds extra bindings
for these new sequences: Ctrl+Right, Ctrl+Left and Ctrl+W.
The features are enabled on unix micropython-coverage and micropython-dev.
This check follows CPython's behaviour, because 'import *' always populates
the globals with the imported names, not locals.
Since it's safe to do this (doesn't lead to a crash or undefined behaviour)
the check is only enabled for MICROPY_CPYTHON_COMPAT.
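For example, with MICROPY_CPYTHON_COMPAT enabled the following is now rejected
at compile time:

    def f():
        from math import *  # now a SyntaxError: 'import *' is not allowed inside a function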
Fixes issue #5121.
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information. This part of the prelude now begins with a
single variable-length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section". After decoding this variable unsigned
integer it's possible to skip over one or both of these sections very
easily.
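For reference, a minimal Python sketch of the general variable-length unsigned
integer format used by the bytecode (7 data bits per byte, high bit set means
another byte follows):

    def decode_varuint(buf, offset):
        value = 0
        while True:
            b = buf[offset]
            offset += 1
            value = (value << 7) | (b & 0x7F)
            if not (b & 0x80):
                return value, offset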
This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
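A rough Python sketch of the general idea (the bit layout here is illustrative
only, not the exact interleaving chosen for the prelude):

    def encode_interleaved(nums):
        # Interleave one bit of each number per round, least significant first.
        nums = list(nums)
        value = 0
        shift = 0
        while any(nums):
            for i in range(len(nums)):
                value |= (nums[i] & 1) << shift
                nums[i] >>= 1
                shift += 1
        return value  # then stored as a single variable-length unsigned integer

    def decode_interleaved(value, count):
        nums = [0] * count
        bit = 0
        while value >> (bit * count):
            for i in range(count):
                nums[i] |= ((value >> (bit * count + i)) & 1) << bit
            bit += 1
        return nums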
As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
From the beginning of this project the RAISE_VARARGS opcode was named and
implemented following CPython, where it has an argument (to the opcode)
counting how many args the raise takes:
raise # 0 args (re-raise previous exception)
raise exc # 1 arg
raise exc from exc2 # 2 args (chained raise)
In the bytecode this operation therefore takes 2 bytes, one for
RAISE_VARARGS and one for the number of args.
This patch splits this opcode into 3, where each is now a single byte.
This reduces bytecode size by 1 byte for each use of raise. Every byte
counts! It also has the benefit of reducing code size (on all ports except
nanbox).
To make progress towards MicroPython supporting Python 3.5, adding the
matmul operator is important because it's a really "low level" part of the
language, being a new token and modifications to the grammar.
It doesn't make sense to make it configurable because 1) it would make the
grammar and lexer complicated/messy; 2) no other operators are
configurable; 3) it's not a feature that can be "dynamically plugged in"
via an import.
And matmul can be useful as a general-purpose user-defined operator; it
doesn't have to be just for numpy use.
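For example, a user-defined class can implement the operator via __matmul__:

    class Vec:
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __matmul__(self, other):
            return self.x * other.x + self.y * other.y  # dot product

    print(Vec(1, 2) @ Vec(3, 4))  # 11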
Based on work done by Jim Mussared.
POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a
JUMP. So this optimisation reduces code size, and RAM usage of bytecode by
two bytes for each try-except handler.
This patch fixes a bug in the VM when breaking within a try-finally. The
bug has to do with executing a break within the finally block of a
try-finally statement. For example:
def f():
    for x in (1,):
        print('a', x)
        try:
            raise Exception
        finally:
            print(1)
            break
        print('b', x)
f()
Currently in uPy the above code will print:
a 1
1
1
segmentation fault (core dumped) micropython
Not only is there a seg fault, but the "1" in the finally block is printed
twice. This is because when the VM executes a finally block it doesn't
really know if that block was executed due to a fall-through of the try (no
exception raised), or because an exception is active. In particular, for
nested finallys the VM has no idea which of the nested ones have active
exceptions and which are just fall-throughs. So when a break (or continue)
is executed it tries to unwind all of the finallys, when in fact only some
may be active.
It's questionable whether break (or return or continue) should be allowed
within a finally block, because they implicitly swallow any active
exception, but nevertheless it's allowed by CPython (although almost never
used in the standard library). And uPy should at least not crash in such a
case.
The solution here relies on the fact that exception and finally handlers
always appear in the bytecode after the try body.
Note: there was a similar bug with a return in a finally block, but that
was previously fixed in b735208403.
The way it was written previously, the variable x was not an implicit
nonlocal; it was just a normal local (but the compiler has a bug which
incorrectly makes it a nonlocal).
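To illustrate the distinction (a generic example, not the test itself): a
variable only becomes an implicit nonlocal when a nested function refers to it:

    def outer():
        x = 1  # plain local: no nested function uses it
        def inner():
            return 2
        return inner()

    def outer2():
        x = 1  # implicit nonlocal: inner() closes over it
        def inner():
            return x
        return inner()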
This patch changes the way REPL autocomplete finds matches. It now probes
the target object for all qstrs via mp_load_method_maybe to look for a
match with the given input string. Similar to how the builtin dir()
function works, this new algorithm now finds all methods and instances of
user-defined classes, including attributes of their parent classes. This
helps a lot at the REPL prompt for user discovery and for autocompleting
names, even for derived classes.
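For example, inherited attributes are now suggested, consistent with dir():

    class Base:
        def base_method(self):
            pass

    class Derived(Base):
        pass

    d = Derived()
    print('base_method' in dir(d))  # True; tab-completing on d now lists it too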
The downside is that this new algorithm is slower than the previous one,
and in particular will be slower the more qstrs there are in the system.
But because REPL autocomplete is primarily used in an interactive way it is
not that important to make it fast, as long as it is "fast enough" compared
to human reaction.
On a slow microcontroller (CPU running at 16MHz) the autocomplete time for
a list of 35 names in the outer namespace (pressing tab at a bare prompt)
takes about 160ms with this algorithm, compared to about 40ms for the
previous implementation (this time includes the actual printing of the
names as well). This time of 160ms is very reasonable especially given the
new functionality of listing all the names.
This patch also decreases code size by:
bare-arm: +0
minimal x86: -128
unix x64: -128
unix nanbox: -224
stm32: -88
cc3200: -80
esp8266: -92
esp32: -84
This is to allow placing reverse ops immediately after normal ops, so
they can be tested as one range (an optimisation for the reverse ops
introduced in the next patch).
It starts a dichotomy of mp_binary_op_t values which can't appear in the
bytecode. Another reason to move it is to have the values of OP_* and
OP_INPLACE_* nicely adjacent. This will also be needed for OP_REVERSE_*,
to be introduced soon.
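For context, these opcode groups correspond to the three flavours of binary
operation visible at the Python level (illustrative sketch):

    class Num:
        def __init__(self, v):
            self.v = v
        def __add__(self, other):   # normal op: a + b
            return Num(self.v + other)
        def __radd__(self, other):  # reverse op: b + a when b lacks __add__
            return Num(other + self.v)
        def __iadd__(self, other):  # in-place op: a += b
            self.v += other
            return self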
This patch refactors the handling of the special super() call within the
compiler. It removes the need for a global (to the compiler) state variable
which keeps track of whether the subject of an expression is super. The
handling of super() is now done entirely within one function, which makes
the compiler a bit cleaner and makes it easier to add more optimisations to
super calls.
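For reference, the construct being special-cased is the (zero-argument) super
call, e.g.:

    class A:
        def f(self):
            return 1

    class B(A):
        def f(self):
            return super().f() + 1  # compiled as a special super call

    print(B().f())  # 2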
Changes to the code size are:
bare-arm: +12
minimal: +0
unix x64: +48
unix nanbox: -16
stmhal: +4
cc3200: +0
esp8266: -56
Previous to this patch any non-interned str/bytes objects would create a
special parse node that held a copy of the str/bytes data. Then in the
compiler this data would be turned into a str/bytes object. This actually
led to 2 copies of the data, one in the parse node and one in the object.
The parse node's copy of the data would be freed at the end of the compile
stage but nevertheless it meant that the peak memory usage of the
parse/compile stage was higher than it needed to be (by an amount equal to
the number of bytes in all the non-interned str/bytes objects).
This patch changes the behaviour so that str/bytes objects are created
directly in the parser and the object stored in a const-object parse node
(which already exists for bignum, float and complex const objects). This
reduces peak RAM usage of the parse/compile stage, simplifies the parser
and compiler, and reduces code size by about 170 bytes on Thumb2 archs,
and by about 300 bytes on Xtensa archs.
The output might contain more than one line ending in 5b, so properly skip
everything until the next known point.
This fixes test failures in appveyor debug builds.