This is an implementation of a sliding qstr window used to reduce the
number of qstrs stored in a .mpy file. The window size is configured to 32
entries, which takes a fixed 64 bytes (16 bits each) on the C stack when
loading/saving a .mpy file. It allows the most recent 32 qstrs to be
remembered so they don't need to be stored again in the .mpy file. The qstr window
uses a simple least-recently-used mechanism to discard the least recently
used qstr when the window overflows (similar to dictionary compression).
This scheme only needs a single pass to save/load the .mpy file.
Reduces mpy file size by about 25% with a window size of 32.
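As a rough illustration only, a Python sketch of the window mechanism described
above (the real implementation is a fixed-size C array of 16-bit entries, and
details such as how an entry is bumped are simplified here):

class QstrWindow:
    def __init__(self, size=32):
        self.entries = []  # most recently used at the front
        self.size = size

    def access(self, q):
        # if q is in the window, only its index needs to go in the .mpy;
        # otherwise the full qstr must be stored and q enters the window
        if q in self.entries:
            i = self.entries.index(q)
            self.entries.insert(0, self.entries.pop(i))  # bump to front
            return i
        if len(self.entries) == self.size:
            self.entries.pop()  # discard the least recently used entry
        self.entries.insert(0, q)
        return None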
POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a
JUMP. So this optimisation reduces code size, and RAM usage of bytecode by
two bytes for each try-except handler.
This patch fixes a bug in the VM when breaking within a try-finally. The
bug has to do with executing a break within the finally block of a
try-finally statement. For example:
def f():
    for x in (1,):
        print('a', x)
        try:
            raise Exception
        finally:
            print(1)
            break
        print('b', x)
f()
f()
Currently in uPy the above code will print:
a 1
1
1
segmentation fault (core dumped) micropython
Not only is there a seg fault, but the "1" in the finally block is printed
twice. This is because when the VM executes a finally block it doesn't
really know if that block was executed due to a fall-through of the try (no
exception raised), or because an exception is active. In particular, for
nested finallys the VM has no idea which of the nested ones have active
exceptions and which are just fall-throughs. So when a break (or continue)
is executed it tries to unwind all of the finallys, when in fact only some
may be active.
It's questionable whether break (or return or continue) should be allowed
within a finally block, because they implicitly swallow any active
exception, but nevertheless it's allowed by CPython (although almost never
used in the standard library). And uPy should at least not crash in such a
case.
The solution here relies on the fact that exception and finally handlers
always appear in the bytecode after the try body.
Note: there was a similar bug with a return in a finally block, but that
was previously fixed in b735208403.
All exceptions that unwind through the async-with must be caught, and
BaseException is the top-level exception class, which includes Exception and others.
Fixes issue #4552.
As mentioned in #4450, `websocket` was experimental with a single intended
user, `webrepl`. Therefore, we'll make this change without a weak
link `websocket` -> `uwebsocket`.
Instead of assuming that the method is a bytecode object, and only
supporting load of __name__, make the operation generic by delegating the
load to the method object itself. Saves a bit of code size and fixes the
case of attempting to load __name__ on a native method, see issue #4028.
As per the machine.UART documentation, this is used to set the length of
the RX buffer. The legacy read_buf_len argument is retained for backwards
compatibility, with rxbuf overriding it if provided.
Also change the order of printing of flow so it is after stop (so bits,
parity, stop are one after the other), and reduce code size by using
mp_print_str instead of mp_printf where possible.
See issue #1981.
CPython does not have an implementation of select.poll() on some
operating systems (Windows, OSX depending on version) so skip the
test in those cases instead of failing it.
This ensures that implicit variables are only converted to implicit
closed-over variables (nonlocals) at the very end of the function scope.
If variables are closed-over when first used (read from, as was done prior
to this commit) then this can be incorrect because the variable may be
assigned to later on in the function which means they are just a plain
local, not closed over.
Fixes issue #4272.
The way it was written previously the variable x was not an implicit
nonlocal, it was just a normal local (but the compiler has a bug which
incorrectly makes it a nonlocal).
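A minimal illustration of the scoping rule involved (hypothetical code, not the
test itself): because x is assigned later in inner(), it is a plain local
there, so reading it before the assignment must fail rather than read the
enclosing x.

def outer():
    x = 1
    def inner():
        try:
            print(x)  # x is a plain local here (assigned below), not closed over
        except NameError:
            print('NameError')
        x = 2
    inner()

outer()  # prints NameError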
Configurable via MICROPY_MODULE_GETATTR, disabled by default. Among other
things, __getattr__ for modules can help implement lazy loading / code
unloading at runtime.
Part of this test was trying to test some functionality of __getattribute__
but this method name was misspelt so it wasn't doing anything useful.
Fixing the typo in this name makes the test fail because MicroPython
doesn't support user defined __getattribute__ methods. So this part of the
test is removed. The remaining tests are modified slightly to make it
clearer what they are testing.
This test doesn't check the actual I/O behavior, just "static" invariants
like behavior on duplicate calls or calls when the I/O object is not
registered with the poller.
This makes these special methods have the same calling behaviour as other
methods in a class instance (mp_convert_member_lookup() is already called
by mp_obj_class_lookup()).
mp_make_raise_obj must be used to convert a possible exception type to an
instance object, otherwise the VM may raise a non-exception object.
An existing test is adjusted to test this case, with the original test
already moved to generator_throw.py.
NaN and inf (signed and unsigned) are also handled correctly by using
signbit (they were also handled correctly with "val<0", but that didn't
handle -0.0 correctly). A test case is added for this behaviour.
This commit adds the math.factorial function in two variants:
- squared difference, which is faster than the naive version, relatively
compact, and non-recursive;
- a mildly optimised recursive version, faster than the above one.
There are some more optimisations that could be done, but they tend to take
more code, and more storage space. The recursive version seems like a
sensible compromise.
The new function is disabled by default, and uses the non-optimised version
by default if it is enabled. The options are MICROPY_PY_MATH_FACTORIAL
and MICROPY_OPT_MATH_FACTORIAL.
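For illustration only, rough Python sketches of the two approaches (these are
not the actual C implementations, and the exact pairing used by the "squared
difference" variant is an assumption):

def fact_sqdiff(n):
    # pair factors from both ends: (h-k)*(h+1+k) = (h*h+h) - (k*k+k),
    # so each pair product is the previous one minus an increasing even number
    result = n if n % 2 else 1
    h = n // 2
    pair = h * (h + 1)
    dec = 0
    for _ in range(h):
        result *= pair
        dec += 2
        pair -= dec
    return result

def fact_recursive(n):
    # binary splitting of the product 2*3*...*n
    def prod(lo, hi):
        if hi - lo < 2:
            return lo if lo == hi else lo * hi
        mid = (lo + hi) // 2
        return prod(lo, mid) * prod(mid + 1, hi)
    return 1 if n < 2 else prod(2, n)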
This commit implements PEP479 which disallows raising StopIteration inside
a generator to signal that it should be finished. Instead, the generator
should simply return when it is complete.
See https://www.python.org/dev/peps/pep-0479/ for details.
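For example, under PEP 479:

def gen():
    yield 1
    raise StopIteration  # with PEP 479 this becomes a RuntimeError

g = gen()
print(next(g))  # 1
try:
    next(g)
except RuntimeError:
    print('RuntimeError')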
Prior to this commit a function compiled with the native decorator
@micropython.native would not work correctly when accessing global
variables, because the globals dict was not being set upon function entry.
This commit fixes this problem by, upon function entry, setting as the
current globals dict the globals dict context the function was defined
within, as per normal Python semantics, and as bytecode does. Upon
function exit the original globals dict is restored.
In order to restore the globals dict when an exception is raised the native
function must guard its internals with an nlr_push/nlr_pop pair. Because
this push/pop is relatively expensive, in both C stack usage for the
nlr_buf_t and CPU execution time, the implementation here optimises things
as much as possible. First, the compiler keeps track of whether a function
even needs to access global variables. Using this information the native
emitter then generates three different kinds of code:
1. no globals used, no exception handlers: no nlr handling code and no
setting of the globals dict.
2. globals used, no exception handlers: an nlr_buf_t is allocated on the
C stack but it is not used if the globals dict is unchanged, saving
execution time because nlr_push/nlr_pop don't need to run.
3. function has exception handlers, may use globals: an nlr_buf_t is
allocated and nlr_push/nlr_pop are always called.
In the end, native functions that don't access globals and don't have
exception handlers will run more efficiently than those that do.
Fixes issue #1573.
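A minimal example of the pattern that previously misbehaved (hypothetical
names):

import micropython

count = 0

@micropython.native
def bump():
    global count
    count += 1  # needs the defining module's globals dict to be active
    return count

print(bump(), bump())  # 1 2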
If bytearray is constructed from str, a second argument of encoding is
required (in CPython), and third arg of Unicode error handling is allowed,
e.g.:
bytearray("str", "utf-8", "strict")
This is similar to bytes:
bytes("str", "utf-8", "strict")
This patch just allows the 2nd/3rd arguments to be passed to bytearray, but
doesn't try to validate them, to avoid impacting code size. (This is also
similar to how the bytes constructor is handled, though it does a bit more
validation, e.g. checking that in the case of a str arg an encoding argument
is passed.)
The native emitter keeps the current exception in a slot in its C stack
(instead of on its Python value stack), so when it catches an exception it
must explicitly clear that slot so the same exception is not reraised later
on.
Back in 8047340d75 basic support was added in
the VM to handle return statements within a finally block. But it didn't
cover all cases, in particular when some finally's were active and others
inactive when the "return" was executed.
This patch adds further support for return-within-finally by correctly
managing the currently_in_except_block flag, and should fix all cases. The
main point is that finally handlers remain on the exception stack even if
they are active (currently being executed), and the unwind return code
should only execute those finally's which are inactive.
New tests are added for the cases which now pass.
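A hedged example of the kind of case now covered: a return executed while one
finally is active (currently executing) and an outer one is not.

def f():
    try:
        try:
            pass
        finally:
            return 'x'  # runs while the inner finally is active
    finally:
        print('outer')  # inactive at that point, but must still run

print(f())  # prints: outer, then x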
PEP479 (see https://www.python.org/dev/peps/pep-0479/) prohibited raising
StopIteration from within a generator (it is turned into a RuntimeError).
This behaviour was introduced in Python 3.5 and in 3.7 was made compulsory.
Until uPy implements PEP479, this patch adds .py.exp files for the relevant
tests so they can be run under Python 3.7.
In Python 3.7 the behaviour of repr() of an exception with one argument
changed: it no longer prints a trailing comma in the argument list. See
https://bugs.python.org/issue30399
This patch modifies tests that rely on this behaviour to not rely on it.
And the python34.py test is updated to include a test for this behaviour
with a .exp file.
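For reference, the behaviour difference being accommodated is:

print(repr(Exception(1)))  # Python 3.6: Exception(1,)   Python 3.7+: Exception(1)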
Input files like basics/string_format.py and float/string_format.py have
the same basename so using that name for writing the output (.exp and .out
files) when both tests fail, results in the output of the first one being
overwritten.
Avoid this by using unique names for the output, replacing path characters
with underscores.
With the recent change b488a4a848, a
generating function now has the same layout in memory as a normal bytecode
function, and so can reuse the latter's attribute accessor code to
implement __name__.
This feature is controlled at compile time by MICROPY_PY_URE_SUB, disabled
by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by
MICROPY_PY_URE_MATCH_SPAN_START_END, disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by MICROPY_PY_URE_MATCH_GROUPS,
disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
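A usage sketch of the three features added by these commits, using the
standard re-style names that ure mirrors (expected output shown per CPython's
re module):

import ure as re

m = re.match(r'(\d+)-(\d+)', '12-34xyz')
print(m.groups())                        # ('12', '34')
print(m.span(1), m.start(2), m.end(2))   # (0, 2) 3 5
print(re.sub(r'\d+', 'N', '12-34xyz'))   # N-Nxyz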
Before this patch the context manager's __aexit__() method would not be
executed if a return/break/continue statement was used to exit an async
with block. async with now has the same semantics as normal with.
The fix here applies purely to the compiler, and does not modify the
runtime at all. It might (eventually) be better to define new bytecode(s)
to handle async with (and maybe other async constructs) in a cleaner, more
efficient way.
One minor drawback with addressing this issue purely in the compiler is
that it wasn't possible to get 100% CPython semantics. The thing that is
different here to CPython is that the __aexit__ method is not looked up in
the context manager until it is needed, which is after the body of the
async with statement has executed. So if a context manager doesn't have
__aexit__ then CPython raises an exception before the async with is
executed, whereas uPy will raise it after it is executed. Note that
__aenter__ is looked up at the beginning in uPy because it needs to be
called straightaway, so if the object isn't a context manager at all then
it'll still raise an exception at the same location as CPython. The only
difference is if the context manager has the __aenter__ method but not the
__aexit__ method, then in that case uPy has different behaviour. But this
is a very minor, and acceptable, difference.
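A small example of the semantics fixed here; the coroutine is driven by hand
so no event loop is needed:

class CM:
    async def __aenter__(self):
        print('enter')
        return self

    async def __aexit__(self, exc_type, exc, tb):
        print('exit')
        return False

async def main():
    for _ in range(1):
        async with CM():
            break  # previously __aexit__ was skipped here

try:
    main().send(None)
except StopIteration:
    pass
# prints: enter, exit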
This behaviour of a NULL write C method on a stream that uses the write
adaptor objects is no longer supported. It was only ever used by the
coverage build for testing the fail path of mp_get_stream_raise().
For i2c.py: the accelerometer now uses the new I2C driver, so the legacy i2c
object needs to be explicitly initialised to get the test working.
For pyb1.py: the legacy pyb.hid() call will crash if the USB_HID object is
not initialised.
This patch is a code optimisation, trading text bytes for speed. On
pyboard it's an increase of 0.06% in code size for a gain (in pystone
performance) of roughly 6.5%.
The patch optimises load/store/delete of attributes in user defined classes
by not looking up special accessors (@property, __get__, __delete__,
__set__, __setattr__ and __getattr__) if they are guaranteed not to exist in
the class.
Currently, if you do my_obj.foo() then the runtime has to do a few checks
to see if foo is a property or has __get__, and if so delegate the call.
And for stores, something like my_obj.foo = 1 has to first check if foo is
a property or has __set__ defined on it.
Doing all those checks each and every time the attribute is accessed has a
performance penalty. This patch eliminates all those checks for cases when
it's guaranteed that the checks will always fail, ie no attributes are
properties nor have any special accessor methods defined on them.
To make this guarantee it checks all attributes of a user-defined class
when it is first created. If any of the attributes of the user class are
properties or have special accessors, or any of the base classes of the
user class have them, then it sets a flag in the class to indicate that
special accessors must be checked for. Then in the load/store/delete code
it checks this flag to see if it can take the shortcut and optimise the
lookup.
It's an optimisation that's pretty widely applicable because it improves
lookup performance for all methods of user defined classes, and stores of
attributes, at least for those that don't have special accessors. And it
allows descriptors to be enabled with minimal additional runtime overhead
when they are not used by a particular user class.
There is one restriction on dynamic class creation that has been introduced
by this patch: a user-defined class cannot go from zero special accessors
to one special accessor (or more) after that class has been subclassed. If
the script attempts this an AttributeError is raised (see addition to
tests/misc/non_compliant.py for an example of this case).
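A sketch of that restriction, based on the description above (it mirrors the
case added to tests/misc/non_compliant.py):

class A:
    pass

class B(A):
    pass

try:
    # A has already been subclassed, so it can't gain its first special accessor
    A.x = property(lambda self: 1)
except AttributeError:
    print('AttributeError')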
The cost in code space bytes for the optimisation in this patch is:
unix x64: +528
unix nanbox: +508
stm32: +192
cc3200: +200
esp8266: +332
esp32: +244
Performance tests that were done:
- on unix x86-64, pystone improved by about 5%
- on pyboard, pystone improved by about 6.5%, from 1683 up to 1794
- on pyboard, bm_chaos (from CPython benchmark suite) improved by about 5%
- on esp32, pystone improved by about 30% (but there are caching effects)
- on esp32, bm_chaos improved by about 11%
This conditional import was only used to get the tests working on the unix
coverage build, which has now switched to use VFS by default so the uos
module alone has the required functionality.
Printing of uPy floats can differ by the floating-point precision on
different architectures (eg 64-bit vs 32-bit x86), so it's not possible to
use printing of floats in some parts of this test. Instead we can just
check for equivalence with what is known to be the correct answer.
Commit e269cabe3e added a check that the
first argument to the to_bytes() method is an integer, and now uPy
follows CPython behaviour and raises a TypeError for this test.
Note: CPython checks the argument types before checking the number of
arguments, but uPy does it the other way around, so they give different
exception messages for this test, but still the same type, a TypeError.
In adcall.py the pyb module may not be imported, so use ADCAll directly.
In dac.py the DAC object now prints more info, so update .exp file.
In spi.py the SPI should be deinitialised upon exit, so the test can run a
second time correctly.
If MICROPY_USE_INTERNAL_ERRNO is disabled, MP_EINVAL is not guaranteed
to have the value 22, so we cannot depend on OSError(22,).
Instead, to support any given port's errno values, without relying
on uerrno, we just check that args[0] is positive.
This can be used to select the output buffer behaviour of the DAC. The
default values are chosen to retain backwards compatibility with existing
behaviour.
Thanks to @peterhinch for the initial idea to add this feature.
Reading into a bytearray will truncate values to 0xff so the assertions
checking read_timed() would previously always succeed.
Thanks to @peterhinch for finding this problem and providing the solution.
Keeping all the stress related tests in one place makes it easier to
stress-test a given port, and to also not run such tests on ports that
can't handle them.
The string "Q+?" is special in that it hashes to zero with the djb2
algorithm (among other strings), and a zero hash should be incremented to a
hash of 1.
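A rough Python model of the qstr hash in question (djb2, xor variant) with the
zero-to-one adjustment; the mask width used here is an assumption, the real
value depends on the port's configuration:

def qstr_hash(s, mask=0xFFFF):
    h = 5381
    for b in s.encode():
        h = ((h * 33) ^ b) & 0xFFFFFFFF
    h &= mask
    return h if h != 0 else 1  # zero means "hash not computed", so bump to 1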
The case of a return statement in the try suite of a try-except statement
was previously only tested by builtin_compile.py, and only then in the part
of this test which checked for the existence of the compile builtin. So
this patch adds an explicit unit test for this case.
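The construct being tested is simply:

def f():
    try:
        return 'result'
    except Exception:
        pass

print(f())  # result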
The VM expects that, if mp_resume() returns MP_VM_RETURN_EXCEPTION, then
the returned value is an exception instance (eg to add a traceback to it).
It's possible that a value passed to a generator's throw() is not an
exception so must be explicitly checked for if the thrown value is not
intercepted by the generator.
Thanks to @jepler for finding the bug.
Prior to this patch the code would crash if a key in a ** dict was anything
other than a str or qstr. This is because mp_setup_code_state() assumes
that keys in kwargs are qstrs (for efficiency).
Thanks to @jepler for finding the bug.
This is just to test that the function exists and returns some kind of
valid value. Although this file is for testing ms/us functions, put the
ticks_cpu() test here so as not to add a new test file.
This test for calling gc_realloc() while the GC is locked can be done in
pure Python, so better to do it that way since it can then be tested on
more ports.
Prior to this patch, some architectures (eg unix x86) could render floats
with "negative" digits, like ")". For example, '%.23e' % 1e-80 would come
out as "1.0000000000000000/)/(,*0e-80". This patch fixes the known cases.
Prior to this patch, some architectures (eg unix x86) could render floats
with a ":" character in them, eg 1e+39 would come out as ":e+38" (":" is
just after "9" in ASCII so this is like 10e+38). This patch fixes some of
these cases.
Prior to this patch the %f formatting of some FP values could be off by up
to 1, eg '%.0f' % 123 would return "122" (unix x64). Depending on the FP
precision (single vs double) certain numbers would format correctly, but
others would not. This patch should fix all cases of rounding for %f.
This patch concerns the handling of an NLR-raised StopIteration, raised
during a call to mp_resume() which is handling the yield from opcode.
Previously, commit 6738c1dded introduced code
to handle this case, along with a test. It seems that it was lucky that
the test worked because the code did not correctly handle the stack pointer
(sp).
Furthermore, commit 79d996a57b improved the
way mp_resume() propagated certain exceptions: it changed raising an NLR
value to returning MP_VM_RETURN_EXCEPTION. This change meant that the
test introduced in gen_yield_from_ducktype.py was no longer hitting the
code introduced in 6738c1dded.
The patch here does two things:
1. Fixes the handling of sp in the VM for the case that yield from is
interrupted by a StopIteration raised via NLR.
2. Introduces a new test to check this handling of sp and re-covers the
code in the VM.
Float parsing (both single and double precision) may have a relative error
of order the floating point precision, so adjust tests to take this into
account by not printing all of the digits of the answer.
These new tests cover cases that can't be reached from Python and get
coverage of py/mpz.c to 100%.
These "unreachable from Python" pieces of code could be removed but they
form an integral part of the mpz C API and may be useful for non-Python
usage of mpz.
There is a finite list of ascending primes used for the size of a hash
table, and this test tests that the code can handle a dict larger than the
maximum value in that list of primes. Adding this test gets py/map.c to
100% coverage.
Otherwise passing -1 as maxlen will lead to a zero allocation and
subsequent unbound buffer overflow in deque.append() because i_put is
allowed to grow without bound.
Prior to this patch uPy (on a 32-bit arch) would have severe issues when
calling bytes(-1): such a call would call vstr_init_len(vstr, -1) which
would then +1 on the len and call vstr_init(vstr, 0), which would then
round this up and allocate a small amount of memory for the vstr. The
bytes constructor would then attempt to zero out all this memory, thinking
it had allocated 2^32-1 bytes.
This patch changes the way REPL autocomplete finds matches. It now probes
the target object for all qstrs via mp_load_method_maybe to look for a
match with the given input string. Similar to how the builtin dir()
function works, this new algorithm now finds all methods and instances of
user-defined classes including attributes of their parent classes. This
helps a lot at the REPL prompt for user-discovery and to autocomplete names
even for classes that are derived.
The downside is that this new algorithm is slower than the previous one,
and in particular will be slower the more qstrs there are in the system.
But because REPL autocomplete is primarily used in an interactive way it is
not that important to make it fast, as long as it is "fast enough" compared
to human reaction.
On a slow microcontroller (CPU running at 16MHz) the autocomplete time for
a list of 35 names in the outer namespace (pressing tab at a bare prompt)
takes about 160ms with this algorithm, compared to about 40ms for the
previous implementation (this time includes the actual printing of the
names as well). This time of 160ms is very reasonable especially given the
new functionality of listing all the names.
This patch also decreases code size by:
bare-arm: +0
minimal x86: -128
unix x64: -128
unix nanbox: -224
stm32: -88
cc3200: -80
esp8266: -92
esp32: -84
This patch improves the builtin dir() function by probing the target object
with all possible qstrs via mp_load_method_maybe. This is very simple (in
terms of implementation), doesn't require recursion, and allows all methods
of user-defined classes to be listed (without duplicates) even if they have
multiple inheritance with a common parent. The downside is that it can be
slow because it has to iterate through all the qstrs in the system, but
the "dir()" function is anyway mostly used for testing frameworks and user
introspection of types, so speed is not considered a priority.
In addition to providing a more complete implementation of dir(), this
patch is simpler than the previous implementation and saves some code
space:
bare-arm: -80
minimal x86: -80
unix x64: -56
unix nanbox: -48
stm32: -80
cc3200: -80
esp8266: -104
esp32: -64
This feature is not often used so is guarded by the config option
MICROPY_PY_BUILTINS_RANGE_BINOP which is disabled by default. With this
option disabled MicroPython will always return false when comparing two
range objects for equality (unless they are exactly the same object
instance). This does not match CPython so if (in)equality between range
objects is needed then this option should be enabled.
Enabling this option costs between 100 and 200 bytes of code space
depending on the machine architecture.
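Based on the behaviour described above, with the option disabled:

r = range(3)
print(r == range(3))  # False on MicroPython (True on CPython)
print(r == r)         # True: same object instance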
Instead of putting just 'CRASH' in the .py.out file, this patch makes it so
any output from uPy that led to the crash is stored in the .py.out file, as
well as the 'CRASH' message at the end.
Prior to this patch, a float literal that was close to subnormal would
have a loss of precision when parsed. The worst case was something like
float('10000000000000000000e-326') which returned 0.0.
Note that the check for elem!=NULL is removed for the
MP_MAP_LOOKUP_ADD_IF_NOT_FOUND case because mp_map_lookup will always
return non-NULL for such a case.
CPython doesn't allow SEEK_CUR with non-zero offset for files in text mode,
and uPy inherited this behaviour for both text and binary files. It makes
sense to provide full support for SEEK_CUR of binary-mode files in uPy, and
doing this in a minimal way also means allowing SEEK_CUR with non-zero
offsets on text-mode files. That seems to be a fair compromise.
This implements the .pend_throw(exc) method, which sets up an exception to be
triggered on the next call to the generator's .__next__() or .send() method.
This is unlike .throw(), which immediately starts to execute the generator
to process the exception. This effectively adds Future-like capabilities
to the generator protocol (the exception will be raised in the future).
The need for such a method arose from implementing the uasyncio wait_for()
function efficiently (its behavior is clearly "Future" like, and would
normally require introducing an expensive Future wrapper around all native
coroutines, like upstream asyncio does).
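A usage sketch of .pend_throw(), based on the description above:

def task():
    try:
        while True:
            yield
    except ValueError:
        print('cancelled')

g = task()
next(g)                     # run to the first yield
g.pend_throw(ValueError())  # exception is only pended, nothing raised yet
try:
    next(g)                 # resuming now raises ValueError inside task()
except StopIteration:
    pass
# prints: cancelled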
py/objgenerator: pend_throw: Return previous pended value.
This effectively allows an additional value (not necessarily an exception)
to be stored in a coroutine while it's not being executed. uasyncio has
exactly this use case: to mark a coro waiting in the I/O queue (and thus
not executed in the normal scheduling queue), for the purpose of
implementing the wait_for() function (cancellation of such a waiting coro
by a timeout).
The whole idea of --list-tests is that we prepare a list of tests to run
later, and don't currently have a connection to the target board. Similarly
for --write-exp: only the "python3" binary is required for this operation,
not "micropython".
The idea that --list-tests would be enough to produce a list of tests for
tinytest-codegen didn't work, because normal run-tests processing relies
heavily on dynamic discovery of target capabilities, and test filtering
happens as a result of that.
So, approach the issue from a different end: allow arbitrary filtering
criteria to be specified as run-tests arguments. This way, specific filters
will still be hardcoded, but at least on a particular target's side,
instead of constantly patching tinytest-codegen and/or run-tests.
This lists the tests to be executed, subject to all other filters requested.
This option would be useful e.g. for scripts like tools/tinytest-codegen.py,
which currently contains hardcoded filters for a particular target and
can't work for multiple targets.
"Builtin" tinytest-based testsuite as employed by qemu-arm (and now
generalized by me to be reusable for other targets) performs simplified
detection of skipped tests, it treats as such tests which raised SystemExit
(instead of checking got "SKIP" output). Consequently, each "SKIP" must
be accompanied by SystemExit (and conversely, SystemExit should not be
used if test is not skipped, which so far seems to be true).
These tests involve allocation-free function calls, and in strict stackless
mode it's not possible to make a function call with the heap locked
(because the function activation record, aka frame, is allocated on the heap).
In strict stackless mode, it's not possible to make a function call with
the heap locked (because the function activation record, aka frame, is
allocated on the heap). So, if the only purpose of a function is to
introduce a local variable scope, move the heap lock/unlock calls inside
the function.