This ensures that implicit variables are only converted to implicit
closed-over variables (nonlocals) at the very end of the function scope.
If variables are closed-over when first used (read from, as was done prior
to this commit) then the conversion can be incorrect, because the variable
may be assigned to later on in the function, which makes it a plain local,
not a closed-over one.
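For illustration, a sketch of the kind of code affected (the names here are
hypothetical, not taken from the issue):

    def f():
        x = 1
        def g():
            print(x)   # reading x first must not capture the outer x...
            x = 2      # ...because this later assignment makes x a local of g
        return g

    f()()  # expected: UnboundLocalError, rather than printing 1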
Fixes issue #4272.
This commit implements PEP479 which disallows raising StopIteration inside
a generator to signal that it should be finished. Instead, the generator
should simply return when it is complete.
See https://www.python.org/dev/peps/pep-0479/ for details.
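A minimal illustration of the new behaviour:

    def gen_ok():
        yield 1
        return               # the correct way to signal completion

    def gen_bad():
        yield 1
        raise StopIteration  # under PEP 479 this surfaces as a RuntimeError

    list(gen_ok())   # [1]
    list(gen_bad())  # RuntimeError: generator raised StopIteration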
Prior to this commit a function compiled with the native decorator
@micropython.native would not work correctly when accessing global
variables, because the globals dict was not being set upon function entry.
This commit fixes this problem by, upon function entry, setting as the
current globals dict the globals dict context the function was defined
within, as per normal Python semantics, and as bytecode does. Upon
function exit the original globals dict is restored.
In order to restore the globals dict when an exception is raised the native
function must guard its internals with an nlr_push/nlr_pop pair. Because
this push/pop is relatively expensive, in both C stack usage for the
nlr_buf_t and CPU execution time, the implementation here optimises things
as much as possible. First, the compiler keeps track of whether a function
even needs to access global variables. Using this information the native
emitter then generates three different kinds of code:
1. no globals used, no exception handlers: no nlr handling code and no
setting of the globals dict.
2. globals used, no exception handlers: an nlr_buf_t is allocated on the
C stack but it is not used if the globals dict is unchanged, saving
execution time because nlr_push/nlr_pop don't need to run.
3. function has exception handlers, may use globals: an nlr_buf_t is
allocated and nlr_push/nlr_pop are always called.
In the end, native functions that don't access globals and don't have
exception handlers will run more efficiently than those that do.
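As an illustration of case 2 above, a hedged sketch of the kind of function
this commit fixes (the names are made up):

    import micropython

    X = 42

    @micropython.native
    def read_global():
        # Before this commit the globals dict was not set on function entry,
        # so this lookup could resolve against whatever globals dict happened
        # to be current at call time rather than this module's.
        return X

    print(read_global())  # expected: 42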
Fixes issue #1573.
Back in 8047340d75 basic support was added in
the VM to handle return statements within a finally block. But it didn't
cover all cases, in particular when some finally's were active and others
inactive when the "return" was executed.
This patch adds further support for return-within-finally by correctly
managing the currently_in_except_block flag, and should fix all cases. The
main point is that finally handlers remain on the exception stack even if
they are active (currently being executed), and the unwind return code
should only execute those finally's which are inactive.
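An illustrative case (not taken verbatim from the test suite): the inner
finally is active when the return runs, while the outer one is still
inactive and must be executed by the unwind-return code:

    def f():
        try:
            try:
                pass
            finally:
                return 1            # runs while this finally is active
        finally:
            print("outer finally")  # inactive at that point, must still run

    print(f())  # expected output: "outer finally" then 1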
New tests are added for the cases which now pass.
Input files like basics/string_format.py and float/string_format.py have
the same basename, so using that name for writing the output (.exp and .out
files) when both tests fail results in the output of the first one being
overwritten.
Avoid this by using unique names for the output, replacing path characters
with underscores.
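A sketch of the renaming (the exact set of characters replaced is an
implementation detail):

    test_file = "basics/string_format.py"
    out_base = test_file.replace("/", "_")
    # -> "basics_string_format.py"; the .exp and .out files use this name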
With the recent change b488a4a848, a
generating function now has the same layout in memory as a normal bytecode
function, and so can reuse the latter's attribute accessor code to
implement __name__.
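A quick check of the behaviour this enables:

    def gen():
        yield

    print(gen.__name__)  # 'gen'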
Before this patch the context manager's __aexit__() method would not be
executed if a return/break/continue statement was used to exit an async
with block. async with now has the same semantics as normal with.
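An illustrative sketch of the case being fixed (class and function names are
hypothetical); the coroutine is driven manually so no event loop is needed:

    class ACM:
        async def __aenter__(self):
            print("enter")
            return self
        async def __aexit__(self, exc_type, exc, tb):
            print("exit")

    async def f():
        async with ACM():
            return 1  # before this patch __aexit__ was skipped on this return

    try:
        f().send(None)
    except StopIteration:
        pass
    # expected output: "enter" then "exit"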
The fix here applies purely to the compiler, and does not modify the
runtime at all. It might (eventually) be better to define new bytecode(s)
to handle async with (and maybe other async constructs) in a cleaner, more
efficient way.
One minor drawback with addressing this issue purely in the compiler is
that it wasn't possible to get 100% CPython semantics. The thing that is
different here to CPython is that the __aexit__ method is not looked up in
the context manager until it is needed, which is after the body of the
async with statement has executed. So if a context manager doesn't have
__aexit__ then CPython raises an exception before the async with is
executed, whereas uPy will raise it after it is executed. Note that
__aenter__ is looked up at the beginning in uPy because it needs to be
called straightaway, so if the object isn't a context manager at all then
an exception is still raised at the same location as in CPython. The only
difference arises if the context manager has the __aenter__ method but not
the __aexit__ method, in which case uPy behaves differently. But this is a
very minor, and acceptable, difference.
Printing of uPy floats can differ by the floating-point precision on
different architectures (eg 64-bit vs 32-bit x86), so it's not possible to
use printing of floats in some parts of this test. Instead we can just
check for equivalence with what is known to be the correct answer.
Keeping all the stress related tests in one place makes it easier to
stress-test a given port, and to also not run such tests on ports that
can't handle them.
When requested via 'run-tests -j', more than one test will be run
at a time. On my system (i5-3320m with 4 threads / 2 cores), this
reduces elapsed time by over 50% when testing ports/unix/micropython.
Elapsed time, seconds, best of 3 runs with each -j value:
before patchset: 18.1
-j1: 18.1
-j2: 11.3 (-37%)
-j4: 8.7 (-52%)
-j6: 8.4 (-54%)
In all cases the final output is identical:
651 tests performed (18932 individual testcases)
651 tests passed
23 tests skipped: buffered_writer...
though the individual pass/fail messages can be different/interleaved.
Instead of putting just 'CRASH' in the .py.out file, this patch makes it so
any output from uPy that led to the crash is stored in the .py.out file, as
well as the 'CRASH' message at the end.
This implements the .pend_throw(exc) method, which sets up an exception to be
triggered on the next call to generator's .__next__() or .send() method.
This is unlike .throw(), which immediately starts to execute the generator
to process the exception. This effectively adds Future-like capabilities
to generator protocol (exception will be raised in the future).
The need for such a method arose from implementing the uasyncio wait_for()
function efficiently (its behavior is clearly Future-like, and would normally
require introducing an expensive Future wrapper around all native coroutines,
as upstream asyncio does).
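A hedged sketch of the difference from .throw() (the generator code is
illustrative):

    def task():
        try:
            while True:
                yield
        except ValueError:
            yield "cancelled"

    g = task()
    next(g)                     # start the generator
    g.pend_throw(ValueError())  # only schedules the exception; nothing runs yet
    print(next(g))              # resuming raises ValueError inside task(),
                                # which is caught, so this prints "cancelled"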
py/objgenerator: pend_throw: Return previous pended value.
This effectively allows storing an additional value (not necessarily an
exception) in a coroutine while it's not being executed. uasyncio has
exactly this use case: marking a coro as waiting in the I/O queue (and thus
not executed in the normal scheduling queue), for the purpose of
implementing the wait_for() function (cancellation of such a waiting coro
by a timeout).
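A minimal sketch of that use case; the sentinel and helper names below are
illustrative, not taken from uasyncio itself:

    class TimeoutCancel(Exception):
        pass

    _WAITING_IO = "waiting on I/O"

    def mark_waiting(coro):
        # Park a marker in the paused coro; the previously pended value
        # (normally None) is returned by pend_throw().
        return coro.pend_throw(_WAITING_IO)

    def cancel_by_timeout(coro):
        # Replace the marker with a real exception so the next resume raises it.
        coro.pend_throw(TimeoutCancel())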
The whole idea of --list-tests is that we prepare a list of tests to run
later, and don't currently have a connection to the target board. Similarly
for --write-exp: only the "python3" binary is required for this operation,
not "micropython".
The idea that --list-tests would be enough to produce a list of tests for
tinytest-codegen didn't work, because normal run-tests processing relies
heavily on dynamic discovery of target capabilities, and test filtering
happens as a result of that.
So, approach the issue from the other end: allow arbitrary filtering
criteria to be specified as run-tests arguments. This way, specific filters
will still be hardcoded, but at least on a particular target's side,
instead of constantly patching tinytest-codegen and/or run-tests.
Lists tests to be executed, subject to all other filters requested. This
option would be useful e.g. for scripts like tools/tinytest-codegen.py,
which currently contains hardcoded filters for a particular
target and can't work for multiple targets.
This patch improves parsing of floating point numbers by converting all the
digits (integer and fractional) together into a number 1 or greater, and
then applying the correct power of 10 at the very end. In particular the
multiple "multiply by 0.1" operations to build a fraction are now combined
together and applied at the same time as the exponent, at the very end.
This helps to retain precision during parsing of floats, and also includes
a check that the number doesn't overflow during the parsing. One benefit
is that a float will have the same value no matter where the decimal point
is located, eg 1.23 == 123e-2.
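A simplified Python sketch of this strategy (the actual change is in the C
float parser; sign handling and the overflow check are omitted here):

    def parse_float(s):
        mantissa = 0      # all digits accumulated as an integer
        exp10 = 0         # power of 10 to apply at the very end
        seen_point = False
        i = 0
        while i < len(s) and (s[i].isdigit() or s[i] == "."):
            if s[i] == ".":
                seen_point = True
            else:
                mantissa = mantissa * 10 + int(s[i])
                if seen_point:
                    exp10 -= 1  # track the decimal point instead of multiplying by 0.1
            i += 1
        if i < len(s) and s[i] in "eE":
            exp10 += int(s[i + 1:])
        # apply the whole power of 10 once, at the very end
        return mantissa * 10.0 ** exp10

    print(parse_float("1.23") == parse_float("123e-2"))  # True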