Constant expressions like "2 ** 3" will now be folded, and the special form
"X = const(2 ** 3)" will now compile because the argument to const() is now a
constant.
Fixes issue #5865.
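For example, with folding in place the following (using a hypothetical
constant name) compiles, because the argument reduces to the literal 1024:

    from micropython import const

    _BUF_SIZE = const(2 ** 10)  # folded at compile time to const(1024)

    print(_BUF_SIZE)            # 1024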
Only the "==" operator was tested by the test suite in for such arguments.
Other comparison operators like "<" take a different path in the code so
need to be tested separately.
This commit implements a more complete replication of CPython's behaviour
for equality and inequality testing of objects. This addresses the issues
discussed in #5382 and a few other inconsistencies. Improvements over the
old code include:
- Support for returning non-boolean results from comparisons (as used by
numpy and others).
- Support for non-reflexive equality tests.
- Preferential use of __ne__ methods and MP_BINARY_OP_NOT_EQUAL binary
operators for inequality tests, when available.
- Fallback to op2 == op1 or op2 != op1 when op1 does not implement the
(in)equality operators.
The scheme here makes use of a new flag, MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST,
in the flags word of mp_obj_type_t to indicate if various shortcuts can or
cannot be used when performing equality and inequality tests. Currently
four built-in classes have the flag set: float and complex are
non-reflexive (since nan != nan) while bytearray and frozenset instances
can equal other builtin class instances (bytes and set respectively). The
flag is also set for any new class defined by the user.
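As a rough illustration of the behaviours listed above (these snippets behave
the same way under CPython):

    nan = float('nan')
    print(nan == nan)                  # False: float equality is non-reflexive
    print(bytearray(b'ab') == b'ab')   # True: bytearray can equal bytes

    class C:
        def __eq__(self, other):
            return [42]                # non-boolean result is returned as-is
        def __ne__(self, other):
            return 'ne'                # __ne__ is preferred for != when defined

    print(C() == 0)                    # [42]
    print(C() != 0)                    # ne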
This commit also includes a more comprehensive set of tests for the
behaviour of (in)equality operators implemented in special methods.
This commit adds a generator test for throwing into a nested exception, and
one for using yield-from with a pending exception cleanup.  Both these
tests currently fail on the native emitter, and are simplified versions of
native test failures from uasyncio in #5332.
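A rough sketch of the first kind of test (not the actual test code from this
commit): throwing into a generator that is suspended inside a nested exception
handler.

    def gen():
        try:
            yield 1
        except ValueError:
            try:
                yield 2
            except ValueError:
                yield 3

    g = gen()
    print(next(g))              # 1
    print(g.throw(ValueError))  # 2: thrown into the outer handler
    print(g.throw(ValueError))  # 3: thrown into the nested handler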
Instances of the slice class are passed to __getitem__() on objects when the
user indexes them with a slice.  In practice, most of the work (other than
passing the slice on untouched) is to determine what the slice means in the
context of an array dimension of a particular length.  Since Python 2.3
there has been a method on the slice class, indices(), that takes a
dimension length and returns the real start, stop and step, accounting for
missing or negative values in the slice spec. This commit implements such
an indices() method on the slice class.
It is configurable at compile-time via MICROPY_PY_BUILTINS_SLICE_INDICES; it
is disabled by default and enabled on the unix, stm32 and esp32 ports.
This commit also adds new tests for slice indices and for slicing unicode
strings.
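When enabled, indices() behaves as in CPython, for example:

    s = slice(None, None, -1)
    print(s.indices(5))   # (4, -1, -1): reversed traversal of a length-5 sequence

    s = slice(1, 100, 2)
    print(s.indices(5))   # (1, 5, 2): stop is clamped to the sequence length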
Allows assigning attributes on class instances that implement their own
__setattr__. Both object.__setattr__ and super(A, b).__setattr__ will work
with this commit.
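For example, this common delegation pattern now works (a minimal sketch):

    class A:
        def __setattr__(self, name, value):
            # custom logic could go here, then defer to the default behaviour
            object.__setattr__(self, name, value)

    a = A()
    a.x = 1
    print(a.x)  # 1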
Because CPython 3.8.0 now produces different output:
- basics/parser.py: CPython does not allow '\\\n' as input.
- import/import_override: CPython imports _io.
This commit adds a sys.implementation.mpy entry when the system supports
importing .mpy files. This entry is a 16-bit integer which encodes two
bytes of information from the header of .mpy files that are supported by
the system being run: the second and third bytes, i.e. the .mpy version, and
the flags and native architecture.  This allows the supported .mpy file format
to be determined dynamically by code, and also lets the user find it out by
inspecting this value.  It's further possible to dynamically detect if the
system
supports importing .mpy files by `hasattr(sys.implementation, 'mpy')`.
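For example (the split of the 16-bit value into low byte = version and high
byte = flags/architecture is an assumption based on the description above):

    import sys

    if hasattr(sys.implementation, 'mpy'):
        sys_mpy = sys.implementation.mpy
        print('mpy version:', sys_mpy & 0xFF)   # assumed: low byte is the version
        print('mpy flags/arch:', sys_mpy >> 8)  # assumed: high byte is flags/arch
    else:
        print('this port cannot import .mpy files')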
Replace the is_running field with a tri-state variable to indicate
running/not-running/pending-exception.
Update tests to cover the various cases.
This allows cancellation in uasyncio even if the coroutine hasn't been
executed yet.  Fixes #5242.
Prior to this commit, when unwinding through an active finally the stack
was not being correctly popped/folded, which resulted in the VM crashing
for complicated unwinding of nested finallys.
This should be fixed with this commit, and more tests for return/break/
continue within a finally have been added to exercise this.
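The new tests exercise unwinding of this general shape (an illustrative
sketch, not the exact test code):

    def f():
        for _ in range(1):
            try:
                try:
                    break           # unwinds through both finallys
                finally:
                    print('inner finally')
            finally:
                print('outer finally')
        return 'done'

    print(f())  # prints: inner finally, outer finally, done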
Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of this
opcode would be between 0x01 and 0x0f and be correctly loaded/saved/frozen
simply as an undefined opcode).
This patch fixes this bug by correctly accounting for the additional byte.
With this patch alignment is done relative to the start of the buffer that
is being unpacked, not the raw pointer value, as per CPython.
Fixes issue #3314.
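For example, with native alignment the pad byte before the 'H' field is
computed from that field's offset within the buffer, not from the memory
address the data happens to occupy (values shown for a little-endian build):

    import struct

    data = b'\x01\x00\x34\x12'         # 'b' value, one pad byte, then 'H' value
    print(struct.unpack('@bH', data))  # (1, 4660), i.e. (1, 0x1234)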
Prior to this patch the amount of free space in an array (including
bytearray) was not being maintained correctly for the case of slice
assignment which changed the size of the array. Under certain cases (as
encoded in the new test) it was possible that the array could grow beyond
its allocated memory block and corrupt the heap.
Fixes issue #4127.
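The failing cases involved slice assignments that change the length of the
array, for example (illustrative, not the exact test):

    b = bytearray(4)
    b[1:2] = bytes(100)   # slice assignment that grows the array
    print(len(b))         # 103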
Reuse the implementation for bytes since it works the same way regardless
of the underlying type.  This method is added for CPython compatibility of
bytearray but, to keep the code simple and small, array.array now also has a
working decode() method, which is non-standard but doesn't hurt.
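For example (the array.array case is MicroPython-specific):

    import array

    print(bytearray(b'hello').decode())      # hello
    print(array.array('b', b'hi').decode())  # hi (non-standard extension)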
This allows figuring out the number of bytes in the memoryview object as
len(memview) * memview.itemsize.
The feature is enabled via MICROPY_PY_BUILTINS_MEMORYVIEW_ITEMSIZE and is
disabled by default.
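With the option enabled, the total size in bytes can be computed as described
above:

    import array

    a = array.array('i', [0, 1, 2, 3])
    m = memoryview(a)
    print(len(m) * m.itemsize)  # 16 on a port where 'i' items are 4 bytes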
This patch fixes a bug in the VM when breaking within a try-finally. The
bug has to do with executing a break within the finally block of a
try-finally statement. For example:
    def f():
        for x in (1,):
            print('a', x)
            try:
                raise Exception
            finally:
                print(1)
                break
            print('b', x)
    f()
Currently in uPy the above code will print:
    a 1
    1
    1
    segmentation fault (core dumped)  micropython
Not only is there a seg fault, but the "1" in the finally block is printed
twice. This is because when the VM executes a finally block it doesn't
really know if that block was executed due to a fall-through of the try (no
exception raised), or because an exception is active. In particular, for
nested finallys the VM has no idea which of the nested ones have active
exceptions and which are just fall-throughs. So when a break (or continue)
is executed it tries to unwind all of the finallys, when in fact only some
may be active.
It's questionable whether break (or return or continue) should be allowed
within a finally block, because they implicitly swallow any active
exception, but nevertheless it's allowed by CPython (although almost never
used in the standard library). And uPy should at least not crash in such a
case.
The solution here relies on the fact that exception and finally handlers
always appear in the bytecode after the try body.
Note: there was a similar bug with a return in a finally block, but that
was previously fixed in b735208403.