Damien George ca40eb0fda extmod/uasyncio: Delay calling Loop.call_exception_handler by 1 loop.
When a task raises an exception which is uncaught, and no other task
awaits on that task, then an error message is printed (or a user function
is called) via a call to Loop.call_exception_handler.  In CPython this call
is made when the Task object is freed (eg via reference counting) because
it's at that point that it is known that the exception that was raised will
never be handled.

MicroPython does not have reference counting and the current behaviour is
to deal with uncaught exceptions as early as possible, ie as soon as they
terminate the task.  But this can be undesirable because in certain cases
a task can start and raise an exception immediately (before any await is
executed in that task's coro) and before any other task gets a chance to
await on it to catch the exception.

This commit changes the behaviour so that tasks which end due to an
uncaught exception are scheduled one more time for execution, and if they
are not awaited on by the next scheduling loop, then the exception handler
is called (eg the exception is printed out).
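
For example, with this change a pattern like the following (the task and
coro names here are just illustrative) can reliably catch the exception in
the awaiter, because the failed task is kept alive for one more round of
the scheduler:

    import uasyncio as asyncio

    async def worker():
        # Raises before reaching any await, so the task ends on its very
        # first scheduling slot.
        raise ValueError("early failure")

    async def main():
        t = asyncio.create_task(worker())
        # Yield once: worker runs and raises here, before anyone has
        # awaited on it.
        await asyncio.sleep(0)
        try:
            # The exception is now delivered here instead of being passed
            # straight to Loop.call_exception_handler.
            await t
        except ValueError as e:
            print("caught:", e)

    asyncio.run(main())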

Signed-off-by: Damien George <damien@micropython.org>

This directory contains tests for various functionality areas of MicroPython.
To run all stable tests, run the "run-tests" script in this directory.

Tests of capabilities not supported on all platforms should be written
to check for the capability being present. If it is not, the test
should merely output 'SKIP' followed by the line terminator, and call
sys.exit() to raise SystemExit, instead of attempting to test the
missing capability. The testing framework (run-tests in this
directory, test_main.c in qemu_arm) recognizes this as a skipped test.
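
For instance, a test needing a module that may not be compiled into every
port could guard itself like this (uctypes is only an example here):

    import sys

    try:
        import uctypes
    except ImportError:
        print("SKIP")
        sys.exit()

    # ... the actual test of uctypes functionality follows here ...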

There are a few features for which this mechanism cannot be used to
condition a test. The run-tests script uses small scripts in the
feature_check directory to check whether each such feature is present,
and skips the relevant tests if not.
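
A feature-check script is typically just a few lines that exercise the
feature in question; as a rough sketch (the real scripts in feature_check/
may differ):

    # Report whether the interpreter was built with complex number support.
    try:
        complex(0, 1)
        print("complex")
    except NameError:
        print("no complex")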

Tests are generally verified by running the test both in MicroPython and
in CPython and comparing the outputs. If the outputs differ, the test fails
and they are saved in a .out file (MicroPython's output) and a .exp file
(CPython's output) respectively.
For tests that cannot be run in CPython, for example because they use
the machine module, a .exp file can be provided next to the test's .py
file. A convenient way to generate one is to run the test, let it fail
(because CPython cannot run it), and then copy the resulting .out file to a
.exp file (but not before checking it manually!).
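
For example, a test along these lines (file name and pin number are purely
illustrative) can only run on MicroPython, so it would be accompanied by a
hand-checked machine_pin_basic.py.exp containing its expected output:

    # machine_pin_basic.py -- uses the machine module, so CPython cannot run it.
    import machine

    p = machine.Pin(2, machine.Pin.OUT)  # pin number is board-specific
    p.value(1)
    print("pin set")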

When creating new tests, anything that relies on float support should go in the
float/ subdirectory.  Anything that relies on import x, where x is not a built-in
module, should go in the import/ subdirectory.