Commit Graph

394 Commits

Dan Halbert
2c795acf1e py/compile.c: add missing line for native labels in await 2023-10-24 15:39:26 -04:00
Scott Shawcroft
f13ea9a49f Fix async tests by adding back __await__ use. Remove u* lookup 2023-10-23 16:13:11 -07:00
Dan Halbert
c0a4abc03c Fix merge bugs; remove shared/tinyusb/* 2023-10-19 16:02:42 -04:00
Damien George
606ec9bfb1 py/compile: Fix async for's stack handling of iterator expression.
Prior to this fix, async for assumed the iterator expression was a simple
identifier, and used that identifier as a local to store the intermediate
iterator object.  This is incorrect behaviour.

This commit fixes the issue by keeping the iterator object on the stack as
an anonymous local variable.

Fixes issue #11511.

Signed-off-by: Damien George <damien@micropython.org>
2023-07-13 13:50:50 +10:00
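
A hedged sketch of the kind of code affected by the fix above: the iterator
expression is a call, not a simple identifier, so the compiler must keep the
intermediate iterator object as an anonymous local on the stack.  ARange and
the CPython-style asyncio runner are assumptions made for illustration only.

    import asyncio  # assumed runner; MicroPython ports typically use asyncio/uasyncio

    class ARange:
        # minimal async iterable, defined purely for this example
        def __init__(self, n):
            self.i = 0
            self.n = n

        def __aiter__(self):
            return self

        async def __anext__(self):
            if self.i >= self.n:
                raise StopAsyncIteration
            self.i += 1
            return self.i - 1

    async def main():
        total = 0
        async for x in ARange(3):  # iterator expression is not a bare name
            total += x
        print(total)  # 0 + 1 + 2 == 3

    asyncio.run(main())
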
Damien George
1b980c9dbe py/compile: Remove over-eager optimisation of tuples as if condition.
When a tuple is the condition of an if statement, it's only possible to
optimise that tuple away when it is a constant tuple (ie all its elements
are constants), because if it's not constant then the elements must be
evaluated in case they have side effects (even though the resulting tuple
will always be "true").

The code before this change handled the empty tuple OK (because it doesn't
need to be evaluated), but it discarded non-empty tuples without evaluating
them, which is incorrect behaviour (as shown by the updated test).

This optimisation is anyway rarely applied because it's not common Python
coding practice to write things like `if (): ...` and `if (1, 2): ...`, so
removing this optimisation completely won't affect much code, if any.

Furthermore, when MICROPY_COMP_CONST_TUPLE is enabled, constant tuples are
already optimised by the parser, so expressions with constant tuples like
`if (): ...` and `if (1, 2): ...` will continue to be optimised properly
(and so when this option is enabled the code that's deleted in this commit
is actually unreachable when the if condition is a constant tuple).

Signed-off-by: Damien George <damien@micropython.org>
2023-05-03 13:21:18 +10:00
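
A minimal sketch of why a non-empty tuple condition cannot simply be
discarded: its elements may have side effects and must be evaluated, even
though the resulting tuple is always truthy.  The helper f() is hypothetical.

    calls = []

    def f():
        calls.append("f")
        return 0

    if (f(), f()):  # always true, but both calls must still be evaluated
        pass

    print(calls)  # ['f', 'f']
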
Damien George
b1229efbd1 all: Fix spelling mistakes based on codespell check.
Signed-off-by: Damien George <damien@micropython.org>
2023-04-27 18:03:06 +10:00
Damien George
7c1584aef1 py/compile: Fix scope of assignment expression target in comprehensions.
When := is used in a comprehension the target variable is bound to the
parent scope, so it's either a global or a nonlocal.  Prior to this commit
that was handled by simply using the parent scope's id_info for the
target variable.  That's completely wrong because it uses the slot number
for the parent's Python stack to store the variable, rather than the slot
number for the comprehension.  This will in most cases lead to incorrect
behaviour or memory faults.

This commit fixes the scoping of the target variable by explicitly
declaring it a global or nonlocal, depending on whether the parent is the
global scope or not.  Then the id_info of the comprehension can be used to
access the target variable.  This fixes a lot of cases of using := in a
comprehension.

Code size change for this commit:

       bare-arm:    +0 +0.000%
    minimal x86:    +0 +0.000%
       unix x64:  +152 +0.019% standard
          stm32:   +96 +0.024% PYBV10
         cc3200:   +96 +0.052%
        esp8266:  +196 +0.028% GENERIC
          esp32:  +156 +0.010% GENERIC[incl +8(data)]
         mimxrt:   +96 +0.027% TEENSY40
     renesas-ra:   +88 +0.014% RA6M2_EK
            nrf:   +88 +0.048% pca10040
            rp2:  +104 +0.020% PICO
           samd:   +88 +0.033% ADAFRUIT_ITSYBITSY_M4_EXPRESS

Fixes issue #10895.

Signed-off-by: Damien George <damien@micropython.org>
2023-03-09 12:13:12 +11:00
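
A minimal sketch of the scoping rule fixed above: the target of := inside a
comprehension binds in the enclosing scope (here the global scope), per
PEP 572.

    values = [y := x * 2 for x in range(3)]
    print(values)  # [0, 2, 4]
    print(y)       # 4 -- y is visible in the enclosing scope
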
Damien George
2283b6d68f py: Pass in address to compiled module instead of returning it.
This change makes it so the compiler and persistent code loader take a
mp_compiled_module_t* as their last argument, instead of returning this
struct.  This eliminates a duplicate context variable for all callers of
these functions (because the context is now stored in the
mp_compiled_module_t by the caller), and also eliminates any confusion
about which context to use after the mp_compile_to_raw_code or
mp_raw_code_load function returns (because there is now only one context,
that stored in mp_compiled_module_t.context).

Reduces code size by 16 bytes on ARM Cortex-based ports.

Signed-off-by: Damien George <damien@micropython.org>
2022-12-08 12:27:23 +11:00
Damien George
c0fa903d6b py/compile: Support large integers in inline-asm data directive.
Fixes issue #8956.

Signed-off-by: Damien George <damien@micropython.org>
2022-07-26 12:24:50 +10:00
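
A hedged sketch of the data directive concerned, assuming a port with the
inline Thumb assembler enabled; the function and label names are made up for
the example.  The commit above allows the embedded literal to be a large
integer such as 0xFFFFFFFF.

    import micropython

    @micropython.asm_thumb
    def marker():
        b(END)               # skip over the embedded data
        data(4, 0xFFFFFFFF)  # 32-bit literal, previously rejected as too large
        label(END)
        mov(r0, 0)           # return 0
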
Damien George
e85a096302 py/emit: Remove logic to detect last-emit-was-return-value.
This optimisation to remove dead code is not as good as it could be.

Signed-off-by: Damien George <damien@micropython.org>
2022-06-20 22:28:18 +10:00
Damien George
cbad559366 py/compile: Give the compiler a hint about num nodes being non-zero.
Without this, newer versions of gcc (eg 11.2.0) used with -O2 can warn
about `q_ptr` being maybe uninitialized, because it doesn't know that there
is at least one qstr being written in to this (alloca'd) memory.

As part of this, change the type of `n` to `size_t` so the compiler knows
it's unsigned and can generate better code.

Code size change for this commit:

       bare-arm:   -28 -0.049%
    minimal x86:    -4 -0.002%
       unix x64:    +0 +0.000%
    unix nanbox:   -16 -0.003%
          stm32:   -24 -0.006% PYBV10
         cc3200:   -32 -0.017%
        esp8266:    +8 +0.001% GENERIC
          esp32:   -52 -0.003% GENERIC
            nrf:   -24 -0.013% pca10040
            rp2:   -32 -0.006% PICO
           samd:   -28 -0.020% ADAFRUIT_ITSYBITSY_M4_EXPRESS

Signed-off-by: Damien George <damien@micropython.org>
2022-06-08 14:59:43 +10:00
Damien George
d4d53e9e11 py/emitnative: Access qstr values using indirection table qstr_table.
This changes the native emitter to access qstr values using the qstr
indirection table qstr_table, but only when generating native code that
will be saved to a .mpy file.  This makes the resulting native code fully
static, ie it does not require any fix-ups or rewriting when it is
imported.

The performance of native code is more or less unchanged.  Benchmark
results on PYBv1.0 (using --via-mpy and --emit native) are:

N=100 M=100          baseline -> this-commit     diff      diff% (error%)
bm_chaos.py            407.16 ->     411.85 :   +4.69 =  +1.152% (+/-0.01%)
bm_fannkuch.py         100.89 ->     101.20 :   +0.31 =  +0.307% (+/-0.01%)
bm_fft.py             3521.17 ->    3441.72 :  -79.45 =  -2.256% (+/-0.00%)
bm_float.py           6707.29 ->    6644.83 :  -62.46 =  -0.931% (+/-0.00%)
bm_hexiom.py            55.91 ->      55.41 :   -0.50 =  -0.894% (+/-0.00%)
bm_nqueens.py         5343.54 ->    5326.17 :  -17.37 =  -0.325% (+/-0.00%)
bm_pidigits.py         603.89 ->     632.79 :  +28.90 =  +4.786% (+/-0.33%)
core_qstr.py            64.18 ->      64.09 :   -0.09 =  -0.140% (+/-0.01%)
core_yield_from.py     313.61 ->     311.11 :   -2.50 =  -0.797% (+/-0.03%)
misc_aes.py            654.29 ->     659.75 :   +5.46 =  +0.834% (+/-0.02%)
misc_mandel.py        4205.10 ->    4272.08 :  +66.98 =  +1.593% (+/-0.01%)
misc_pystone.py       3077.79 ->    3128.39 :  +50.60 =  +1.644% (+/-0.01%)
misc_raytrace.py       388.45 ->     393.71 :   +5.26 =  +1.354% (+/-0.01%)
viper_call0.py         576.83 ->     566.76 :  -10.07 =  -1.746% (+/-0.05%)
viper_call1a.py        550.39 ->     540.12 :  -10.27 =  -1.866% (+/-0.11%)
viper_call1b.py        438.32 ->     432.09 :   -6.23 =  -1.421% (+/-0.11%)
viper_call1c.py        442.96 ->     436.11 :   -6.85 =  -1.546% (+/-0.08%)
viper_call2a.py        536.31 ->     527.37 :   -8.94 =  -1.667% (+/-0.04%)
viper_call2b.py        378.99 ->     377.50 :   -1.49 =  -0.393% (+/-0.08%)

Signed-off-by: Damien George <damien@micropython.org>
2022-05-23 15:43:06 +10:00
Damien George
8588525868 py/compile: De-duplicate constant objects in module's constant table.
The recent rework of bytecode made all constants global with respect to the
module (previously, each function had its own constant table).  That means
the constant table for a module is shared among all functions/methods/etc
within the module.

This commit adds support to the compiler to de-duplicate constants in this
module constant table.  So if a constant is used more than once -- eg 1.0
or (None, None) -- then the same object is reused for all instances.

For example, if there is code like `print(1.0, 1.0)` then the parser will
create two independent constants 1.0 and 1.0.  The compiler will then (with
this commit) notice they are the same and only put one of them in the
constant table.  The bytecode will then reuse that constant twice in the
print expression.  That allows the second 1.0 to be reclaimed by the GC, and
also means the constant table has one less entry, saving a word.

Signed-off-by: Damien George <damien@micropython.org>
2022-05-18 15:23:11 +10:00
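
A small sketch of the effect described above: with de-duplication, both 1.0
literals share one entry in the module's constant table, so the two loads
yield the same object (on builds that include this change).

    def same_constant():
        x = 1.0
        y = 1.0
        return x is y

    print(same_constant())  # True when the constants are de-duplicated
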
Damien George
90682f43af py/compile: Allow new qstrs to be allocated at all compiler passes.
Prior to this commit, all qstrs were required to be allocated (by calling
mp_emit_common_use_qstr) in the MP_PASS_SCOPE pass (the first one).  But
this is an unnecessary restriction, which is lifted by this commit.
Lifting the restriction simplifies the compiler because it can allocate
qstrs in later passes.

This also generates better code, because in some cases (eg when a variable
is closed over) the scope of an identifier is not known until a bit later
and then the identifier no longer needs its qstr allocated in the global
table.

Code size is reduced for all ports with this commit.

Signed-off-by: Damien George <damien@micropython.org>
2022-05-17 23:39:22 +10:00
Damien George
1fb01bd6c5 py/emitnative: Put a pointer to the native prelude in child_table array.
Some architectures (like esp32 xtensa) cannot read byte-wise from
executable memory.  This means the prelude for native functions -- which is
usually located after the machine code for the native function -- must be
placed in separate memory that can be read byte-wise.  Prior to this commit
this was achieved by enabling N_PRELUDE_AS_BYTES_OBJ for the emitter and
MICROPY_EMIT_NATIVE_PRELUDE_AS_BYTES_OBJ for the runtime.  The prelude was
then placed in a bytes object, pointed to by the module's constant table.

This behaviour is changed by this commit so that a pointer to the prelude
is stored either in mp_obj_fun_bc_t.child_table, or in
mp_obj_fun_bc_t.child_table[num_children] if num_children > 0.  The reasons
for doing this are:

1. It decouples the native emitter from runtime requirements: the emitted
   code no longer needs to know whether the system it runs on can read
   byte-wise from executable memory.

2. It makes all ports have the same emitter behaviour: there is no longer
   the N_PRELUDE_AS_BYTES_OBJ option.

3. The module's constant table is now used only for actual constants in the
   Python code.  This allows further optimisations to be done with the
   constants (eg constant deduplication).

Code size change for those ports that enable the native emitter:
   unix x64:   +80 +0.015%
      stm32:   +24 +0.004% PYBV10
    esp8266:   +88 +0.013% GENERIC
      esp32:   -20 -0.002% GENERIC[incl -112(data)]
        rp2:   +32 +0.005% PICO

Signed-off-by: Damien George <damien@micropython.org>
2022-05-17 16:44:49 +10:00
Damien George
e52f14d057 py/parse: Factor obj extract code to mp_parse_node_extract_const_object.
Signed-off-by: Damien George <damien@micropython.org>
2022-04-14 22:44:56 +10:00
Damien George
bd556b6996 py: Fix compiling and decoding of *args at large arg positions.
There were two issues with the existing code:

1. "1 << i" is computed as a 32-bit number so would overflow when
   executed on 64-bit machines (when mp_uint_t is 64-bit).  This meant that
   *args beyond 32 positions would not be handled correctly.

2. star_args must fit as a positive small int so that it is encoded
   correctly in the emitted code.  MP_SMALL_INT_BITS is too big because it
   overflows a small int by 1 bit.  MP_SMALL_INT_BITS - 1 does not work
   because it produces a signed small int which is then sign extended when
   extracted (even by mp_obj_get_int_truncated), and this sign extension
   means that any position arg after *args is also treated as a star-arg.
   So the maximum bit position is MP_SMALL_INT_BITS - 2.  This means that
   MP_OBJ_SMALL_INT_VALUE() can be used instead of
   mp_obj_get_int_truncated() to get the value of star_args.

These issues are fixed by this commit, and a test added.

Signed-off-by: Damien George <damien@micropython.org>
2022-04-01 09:20:42 +11:00
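
A hedged sketch of the case fixed above: a * unpacking placed after more than
32 positional arguments, whose bit position previously overflowed the 32-bit
"1 << i" computation on 64-bit builds.

    def f(*args):
        return len(args)

    tail = [100, 101]
    print(f(0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
            10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
            20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
            30, 31, 32, *tail))  # star arg at position 33 -> prints 35
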
David Lechner
783b1a868f py/runtime: Allow multiple *args in a function call.
This is a partial implementation of PEP 448 to allow unpacking multiple
star args in a function or method call.

This is implemented by changing the emitted bytecodes so that both
positional args and star args are stored as positional args.  A bitmap is
added to indicate if an argument at a given position is a positional
argument or a star arg.

In the generated code, this new bitmap takes the place of the old star arg.
It is stored as a small int, so this means only the first N arguments can
be star args where N is the number of bits in a small int.

The runtime is modified to interpret this new bytecode format while still
trying to perform as few memory reallocations as possible.

Signed-off-by: David Lechner <david@pybricks.com>
2022-03-31 16:59:30 +11:00
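
A minimal sketch of the PEP 448 subset added above: more than one * unpacking
in a single call.

    def collect(*args):
        return args

    print(collect(1, *[2, 3], 4, *[5, 6]))  # (1, 2, 3, 4, 5, 6)
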
David Lechner
1e99d29f36 py/runtime: Allow multiple **args in a function call.
This is a partial implementation of PEP 448 to allow multiple ** unpackings
when calling a function or method.

The compiler is modified to encode the argument as a None: obj key-value
pair (similar to how regular keyword arguments are encoded as str: obj
pairs).  The extra object that was pushed on the stack to hold a single **
unpacking object is no longer used and is removed.

The runtime is modified to decode this new format.

Signed-off-by: David Lechner <david@pybricks.com>
2022-03-31 16:54:00 +11:00
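
A minimal sketch of the PEP 448 subset added above: more than one **
unpacking in a single call, mixed with an ordinary keyword argument.

    def collect(**kwargs):
        return kwargs

    d1 = {"a": 1}
    d2 = {"c": 3}
    print(collect(**d1, b=2, **d2))  # {'a': 1, 'b': 2, 'c': 3}
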
Damien George
df9a412206 py/compile: Only show raw code that is bytecode.
Signed-off-by: Damien George <damien@micropython.org>
2022-03-30 16:31:53 +11:00
Damien George
538c3c0a55 py: Change jump opcodes to emit 1-byte jump offset when possible.
This commit introduces the following changes:

- All jump opcodes are changed to have variable length arguments, of either
  1 or 2 bytes (previously they were fixed at 2 bytes).  In most cases only
  1 byte is needed to encode the short jump offset, saving bytecode size.

- The bytecode emitter now selects 1 byte jump arguments when the jump
  offset is guaranteed to fit in 1 byte.  This is achieved by checking if
  the code size changed during the last pass and, if it did (if it shrank),
  then requesting that the compiler make another pass to get the correct
  offsets of the now-smaller code.  This can continue multiple times until
  the code stabilises.  The code can only ever shrink so this iteration is
  guaranteed to complete.  In most cases no extra passes are needed, the
  original 4 passes are enough to get it right by the 4th pass (because the
  2nd pass computes roughly the correct labels and the 3rd pass computes
  the correct size for the jump argument).

This change to the jump opcode encoding reduces .mpy files and RAM usage
(when bytecode is in RAM) by about 2% on average.

The performance of the VM is not impacted, at least not beyond the
measurement accuracy of the performance benchmark suite.

Code size is reduced for builds that include a decent amount of frozen
bytecode.  ARM Cortex-M builds without any frozen code increase by about
350 bytes.

Signed-off-by: Damien George <damien@micropython.org>
2022-03-28 15:41:38 +11:00
Damien George
1692cad673 py/showbc: Remove global variables and make DECODE_PTR work correctly.
The bytecode state variables mp_showbc_code_start and mp_showbc_constants
have been removed and made local variables passed into the various
functions.

As part of this, the DECODE_PTR macro is fixed so it extracts the relevant
pointer from the child_table (a regression introduced in
f2040bfc7e).

Signed-off-by: Damien George <damien@micropython.org>
2022-03-16 11:59:46 +11:00
Damien George
962ad8622e py/parse: Handle check for target small-int size in parser.
This means that all constants for EMIT_ARG(load_const_obj, obj) are created
in the parser (rather than some in the compiler).

Signed-off-by: Damien George <damien@micropython.org>
2022-03-16 00:41:10 +11:00
Damien George
3c7cab4e98 py/parse: Put const bytes objects in parse tree as const object.
Instead of as an intermediate qstr, which may unnecessarily intern the data
of the bytes object.

Signed-off-by: Damien George <damien@micropython.org>
2022-03-16 00:41:10 +11:00
Damien George
f2040bfc7e py: Rework bytecode and .mpy file format to be mostly static data.
Background: .mpy files are precompiled .py files, built using mpy-cross,
that contain compiled bytecode functions (and can also contain machine
code). The benefit of using an .mpy file over a .py file is that they are
faster to import and take less memory when importing.  They are also
smaller on disk.

But the real benefit of .mpy files comes when they are frozen into the
firmware.  This is done by loading the .mpy file during compilation of the
firmware and turning it into a set of big C data structures (the job of
mpy-tool.py), which are then compiled and downloaded into the ROM of a
device.  These C data structures can be executed in-place, ie directly from
ROM.  This makes importing even faster because there is very little to do,
and also means such frozen modules take up much less RAM (because their
bytecode stays in ROM).

The downside of frozen code is that it requires recompiling and reflashing
the entire firmware.  This can be a big barrier to entry, slows down
development time, and makes it harder to do OTA updates of frozen code
(because the whole firmware must be updated).

This commit attempts to solve this problem by providing a solution that
sits between loading .mpy files into RAM and freezing them into the
firmware.  The .mpy file format has been reworked so that it consists of
data and bytecode which is mostly static and ready to run in-place.  If
these new .mpy files are located in flash/ROM which is memory addressable,
the .mpy file can be executed (mostly) in-place.

With this approach there is still a small amount of unpacking and linking
of the .mpy file that needs to be done when it's imported, but it's still
much better than loading an .mpy from disk into RAM (although not as good
as freezing .mpy files into the firmware).

The main trick to make static .mpy files is to adjust the bytecode so any
qstrs that it references now go through a lookup table to convert from
local qstr number in the module to global qstr number in the firmware.
That means the bytecode does not need linking/rewriting of qstrs when it's
loaded.  Instead only a small qstr table needs to be built (and put in RAM)
at import time.  This means the bytecode itself is static/constant and can
be used directly if it's in addressable memory.  Also the qstr string data
in the .mpy file, and some constant object data, can be used directly.
Note that the qstr table is global to the module (ie not per function).

In more detail, in the VM what used to be (schematically):

    qst = DECODE_QSTR_VALUE;

is now (schematically):

    idx = DECODE_QSTR_INDEX;
    qst = qstr_table[idx];

That allows the bytecode to be fixed at compile time and not need
relinking/rewriting of the qstr values.  Only qstr_table needs to be linked
when the .mpy is loaded.

Incidentally, this helps to reduce the size of bytecode because what used
to be 2-byte qstr values in the bytecode are now (mostly) 1-byte indices.
If the module uses the same qstr more than two times then the bytecode is
smaller than before.

The following changes are measured for this commit compared to the
previous (the baseline):
- average 7%-9% reduction in size of .mpy files
- frozen code size is reduced by about 5%-7%
- importing .py files uses about 5% less RAM in total
- importing .mpy files uses about 4% less RAM in total
- importing .py and .mpy files takes about the same time as before

The qstr indirection in the bytecode has only a small impact on VM
performance.  For stm32 on PYBv1.0 the performance change of this commit
is:

diff of scores (higher is better)
N=100 M=100             baseline -> this-commit  diff      diff% (error%)
bm_chaos.py               371.07 ->  357.39 :  -13.68 =  -3.687% (+/-0.02%)
bm_fannkuch.py             78.72 ->   77.49 :   -1.23 =  -1.563% (+/-0.01%)
bm_fft.py                2591.73 -> 2539.28 :  -52.45 =  -2.024% (+/-0.00%)
bm_float.py              6034.93 -> 5908.30 : -126.63 =  -2.098% (+/-0.01%)
bm_hexiom.py               48.96 ->   47.93 :   -1.03 =  -2.104% (+/-0.00%)
bm_nqueens.py            4510.63 -> 4459.94 :  -50.69 =  -1.124% (+/-0.00%)
bm_pidigits.py            650.28 ->  644.96 :   -5.32 =  -0.818% (+/-0.23%)
core_import_mpy_multi.py  564.77 ->  581.49 :  +16.72 =  +2.960% (+/-0.01%)
core_import_mpy_single.py  68.67 ->   67.16 :   -1.51 =  -2.199% (+/-0.01%)
core_qstr.py               64.16 ->   64.12 :   -0.04 =  -0.062% (+/-0.00%)
core_yield_from.py        362.58 ->  354.50 :   -8.08 =  -2.228% (+/-0.00%)
misc_aes.py               429.69 ->  405.59 :  -24.10 =  -5.609% (+/-0.01%)
misc_mandel.py           3485.13 -> 3416.51 :  -68.62 =  -1.969% (+/-0.00%)
misc_pystone.py          2496.53 -> 2405.56 :  -90.97 =  -3.644% (+/-0.01%)
misc_raytrace.py          381.47 ->  374.01 :   -7.46 =  -1.956% (+/-0.01%)
viper_call0.py            576.73 ->  572.49 :   -4.24 =  -0.735% (+/-0.04%)
viper_call1a.py           550.37 ->  546.21 :   -4.16 =  -0.756% (+/-0.09%)
viper_call1b.py           438.23 ->  435.68 :   -2.55 =  -0.582% (+/-0.06%)
viper_call1c.py           442.84 ->  440.04 :   -2.80 =  -0.632% (+/-0.08%)
viper_call2a.py           536.31 ->  532.35 :   -3.96 =  -0.738% (+/-0.06%)
viper_call2b.py           382.34 ->  377.07 :   -5.27 =  -1.378% (+/-0.03%)

And for unix on x64:

diff of scores (higher is better)
N=2000 M=2000        baseline -> this-commit     diff      diff% (error%)
bm_chaos.py          13594.20 ->  13073.84 :  -520.36 =  -3.828% (+/-5.44%)
bm_fannkuch.py          60.63 ->     59.58 :    -1.05 =  -1.732% (+/-3.01%)
bm_fft.py           112009.15 -> 111603.32 :  -405.83 =  -0.362% (+/-4.03%)
bm_float.py         246202.55 -> 247923.81 : +1721.26 =  +0.699% (+/-2.79%)
bm_hexiom.py           615.65 ->    617.21 :    +1.56 =  +0.253% (+/-1.64%)
bm_nqueens.py       215807.95 -> 215600.96 :  -206.99 =  -0.096% (+/-3.52%)
bm_pidigits.py        8246.74 ->   8422.82 :  +176.08 =  +2.135% (+/-3.64%)
misc_aes.py          16133.00 ->  16452.74 :  +319.74 =  +1.982% (+/-1.50%)
misc_mandel.py      128146.69 -> 130796.43 : +2649.74 =  +2.068% (+/-3.18%)
misc_pystone.py      83811.49 ->  83124.85 :  -686.64 =  -0.819% (+/-1.03%)
misc_raytrace.py     21688.02 ->  21385.10 :  -302.92 =  -1.397% (+/-3.20%)

The code size change is (firmware with a lot of frozen code benefits the
most):

       bare-arm:  +396 +0.697%
    minimal x86: +1595 +0.979% [incl +32(data)]
       unix x64: +2408 +0.470% [incl +800(data)]
    unix nanbox: +1396 +0.309% [incl -96(data)]
          stm32: -1256 -0.318% PYBV10
         cc3200:  +288 +0.157%
        esp8266:  -260 -0.037% GENERIC
          esp32:  -216 -0.014% GENERIC[incl -1072(data)]
            nrf:  +116 +0.067% pca10040
            rp2:  -664 -0.135% PICO
           samd:  +844 +0.607% ADAFRUIT_ITSYBITSY_M4_EXPRESS

As part of this change the .mpy file format version is bumped to version 6.
And mpy-tool.py has been improved to provide a good visualisation of the
contents of .mpy files.

In summary: this commit changes the bytecode to use qstr indirection, and
reworks the .mpy file format to be simpler and allow .mpy files to be
executed in-place.  Performance is not impacted too much.  Eventually it
will be possible to store such .mpy files in a linear, read-only, memory-
mappable filesystem so they can be executed from flash/ROM.  This will
essentially be able to replace frozen code for most applications.

Signed-off-by: Damien George <damien@micropython.org>
2022-02-24 18:08:43 +11:00
Damien George
e6850838cd py/parse: Simplify parse nodes representing a list.
This commit simplifies and optimises the parse tree in-memory
representation of lists of expressions, for tuples and lists, and when
tuples are used on the left-hand-side of assignments and within del
statements.  This reduces memory usage of the parse tree when such code is
compiled, and also reduces the size of the compiler.

For example, (1,) was previously the following parse tree:

    expr_stmt(5) (n=2)
      atom_paren(45) (n=1)
        testlist_comp(146) (n=2)
          int(1)
          testlist_comp_3b(149) (n=1)
            NULL
      NULL

and with this commit is now:

    expr_stmt(5) (n=2)
      atom_paren(45) (n=1)
        testlist_comp(146) (n=1)
          int(1)
      NULL

Similarly, (1, 2, 3) was previously:

    expr_stmt(5) (n=2)
      atom_paren(45) (n=1)
        testlist_comp(146) (n=2)
          int(1)
          testlist_comp_3c(150) (n=2)
            int(2)
            int(3)
      NULL

and is now:

    expr_stmt(5) (n=2)
      atom_paren(45) (n=1)
        testlist_comp(146) (n=3)
          int(1)
          int(2)
          int(3)
      NULL

Signed-off-by: Damien George <damien@micropython.org>
2021-09-10 14:09:44 +10:00
Jeff Epler
f2dbc91022 py/compile: Raise an error on async with/for outside an async function.
A simple reproducer is:

   async for x in (): x

Before this change, it would cause an assertion error in mpy-cross and
micropython-coverage.
2021-05-30 10:38:48 +10:00
Damien George
d4b706c4d0 py: Add option to compile without any error messages at all.
This introduces a new option, MICROPY_ERROR_REPORTING_NONE, which
completely disables all error messages.  To be used in cases where
MicroPython needs to fit in very limited systems.

Signed-off-by: Damien George <damien@micropython.org>
2021-04-27 23:51:52 +10:00
Jonathan Hogg
37e1b5c891 py/compile: Don't await __aiter__ special method in async-for.
MicroPython's original implementation of __aiter__ was correct for an
earlier (provisional) version of PEP492 (CPython 3.5), where __aiter__ was
an async-def function.  But that changed in the final version of PEP492 (in
CPython 3.5.2) where the function was changed to a normal one.  See
https://www.python.org/dev/peps/pep-0492/#why-aiter-does-not-return-an-awaitable
See also the note at the end of this subsection in the docs:
https://docs.python.org/3.5/reference/datamodel.html#asynchronous-iterators
And for completeness the BPO: https://bugs.python.org/issue27243

To be consistent with the Python spec as it stands today (and now that
PEP492 is final) this commit changes MicroPython's behaviour to match
CPython:  __aiter__ should return an async-iterable object, but is not
itself awaitable.

The relevant tests are updated to match.

See #6267.
2020-07-25 00:58:18 +10:00
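
A sketch of the final PEP 492 semantics adopted above: __aiter__ is a plain
(non-async) method that returns the async iterator, and only __anext__ is
awaited by the async-for machinery.  The Ticker class is illustrative only.

    class Ticker:
        def __init__(self, n):
            self.i = 0
            self.n = n

        def __aiter__(self):  # note: not "async def"
            return self

        async def __anext__(self):
            if self.i >= self.n:
                raise StopAsyncIteration
            self.i += 1
            return self.i
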
Damien George
f2e267da68 py/compile: Implement PEP 526, syntax for variable annotations.
This addition to the grammar was introduced in Python 3.6.  It allows
annotating the type of a variable, like:

    x: int = 123
    s: str

The implementation in this commit is quite simple and just ignores the
annotation (the int and str bits above).  The reason to implement this is
to allow Python 3.6+ code that uses this feature to compile under
MicroPython without change, and for users to use type checkers.

In the future viper could use this syntax as a way to give types to
variables, which is currently done in a bit of an ad-hoc way, eg
x = int(123).  And this syntax could potentially be used in the inline
assembler to define labels in a way that's easier to read.
2020-06-16 23:18:01 +10:00
Damien George
131b0de70a py/grammar.h: Consolidate duplicate sub-rules for :test and =test. 2020-06-16 23:18:01 +10:00
Damien George
1783950311 py/compile: Implement PEP 572, assignment expressions with := operator.
The syntax matches CPython and the semantics are equivalent except that,
unlike CPython, MicroPython allows using := to assign to comprehension
iteration variables, because disallowing this would take a lot of code to
check for it.

The new compile-time option MICROPY_PY_ASSIGN_EXPR selects this feature and
is enabled by default, following MICROPY_PY_ASYNC_AWAIT.
2020-06-16 22:02:24 +10:00
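
A minimal sketch of the assignment-expression syntax enabled above.

    data = [1, 2, 3, 4]
    if (n := len(data)) > 3:
        print(n)                                    # 4
    squares = [s for x in data if (s := x * x) > 4]
    print(squares)                                  # [9, 16]
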
Damien George
0fd91e39b1 py/compile: Convert scope test to SCOPE_IS_COMP_LIKE macro.
This macro can be used elsewhere.
2020-06-16 21:42:37 +10:00
Damien George
172fc040aa py/parse: Make mp_parse_node_extract_list return size_t instead of int.
Because this function can only return non-negative values, and having the
correct return type gives more information to the caller.
2020-05-09 00:55:44 +10:00
stijn
84fa3312cf all: Format code to add space after C++-style comment start.
Note: the uncrustify configuration is explicitly set to 'add' instead of
'force' in order not to alter the comments which use extra spaces after //
as a means of indenting text for clarity.
2020-04-23 11:24:25 +10:00
Jim Mussared
def76fe4d9 all: Use MP_ERROR_TEXT for all error messages. 2020-04-05 15:02:06 +10:00
Jim Mussared
85858e72df py/objexcept: Allow compression of exception message text.
The decompression of error-strings is only done if the string is accessed
via printing or via er.args.  Tests are added for this feature to ensure
the decompression works.
2020-04-05 15:02:06 +10:00
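
A small sketch of the behaviour described above: the (possibly compressed)
message text is only materialised when the exception is printed or its .args
are read.

    try:
        [].pop()
    except IndexError as er:
        print(er.args)  # reading .args forces decompression of the text
        print(er)       # printing does too
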
Jim Mussared
a9a745e4b4 py: Use preprocessor to detect error reporting level (terse/detailed).
Instead of compiler-level if-logic.  This is necessary to know what error
strings are included in the build at the preprocessor stage, so that string
compression can be implemented.
2020-04-05 14:11:51 +10:00
Damien George
69661f3343 all: Reformat C and Python source code with tools/codeformat.py.
This is run with uncrustify 0.70.1, and black 19.10b0.
2020-02-28 10:33:03 +11:00
Petr Viktorin
e6c9800645 py/compile: Allow 'return' outside function in minimal builds.
A 'return' statement at module/class level is not correct Python, but
nothing terribly bad happens when it's allowed.  So remove the check unless
MICROPY_CPYTHON_COMPAT is on.

This is similar to MicroPython's treatment of 'import *' in functions
(except 'return' has unsurprising behavior if it's allowed).
2020-02-06 00:41:55 +11:00
Petr Viktorin
57c18fdd38 py/compile: Coalesce error message for break/continue outside loop.
To reduce code size.
2019-11-21 12:13:11 +11:00
Damien George
9adedce42e py: Add new Xtensa-Windowed arch for native emitter.
Enabled via the configuration MICROPY_EMIT_XTENSAWIN.
2019-10-05 13:44:53 +10:00
Petr Viktorin
25a9bccdee py/compile: Disallow 'import *' outside module level.
This check follows CPython's behaviour, because 'import *' always populates
the globals with the imported names, not locals.

Since it's safe to do this (doesn't lead to a crash or undefined behaviour)
the check is only enabled for MICROPY_CPYTHON_COMPAT.

Fixes issue #5121.
2019-10-04 16:46:47 +10:00
Josh Lloyd
7d58a197cf py: Rename MP_QSTR_NULL to MP_QSTRnull to avoid intern collisions.
Fixes #5140.
2019-09-26 16:04:56 +10:00
Damien George
14e203282a py/compile: Use calculation instead of switch to convert token to op. 2019-09-26 14:37:26 +10:00
Milan Rossa
310b3d1b81 py: Integrate sys.settrace feature into the VM and runtime.
This commit adds support for sys.settrace, allowing Python handlers to be
installed to trace the execution of Python code.  The interface follows CPython
as closely as possible.  The feature is disabled by default and can be
enabled via MICROPY_PY_SYS_SETTRACE.
2019-08-30 16:44:12 +10:00
Damien George
c7c6703950 py/compile: Improve the line numbering precision for lambdas.
Prior to this patch the line number for a lambda would be "line 1" if the
body of the lambda contained only a simple expression (with no line number
stored in the parse node).  Now the line number is always reported
correctly.
2019-08-30 16:43:46 +10:00
Damien George
af20c2ead3 py: Add global default_emit_opt variable to make emit kind persistent.
mp_compile no longer takes an emit_opt argument; rather, this setting is now
provided by the global default_emit_opt variable.

Now, when -X emit=native is passed as a command-line option, the emitter
will be set for all compiled modules (including imports), not just the
top-level script.

In the future there could be a way to also set this variable from a script.

Fixes issue #4267.
2019-08-28 12:47:58 +10:00
Milan Rossa
ae6fe8b43c py/compile: Improve the line numbering precision for comprehensions.
The line number for comprehensions is now always reported as the correct
global location in the script, instead of just "line 1".
2019-08-19 23:50:30 +10:00
Jun Wu
089c9b71d1 py: remove "if (0)" and "if (false)" branches.
Prior to this commit, building the unix port with `DEBUG=1` and
`-finstrument-functions` would fail with an error like
"control reaches end of non-void function".  This change fixes that by
removing the problematic "if (0)" branches.  Not all branches affect
compilation, but they are all removed for consistency.
2019-05-06 18:28:28 +10:00