The compiler is not picky right now, but these are actually all syntax
errors:
- await is only valid in an async function
- async functions that use yield are actually async generators (a construct
not supported by the compiler right now)
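For illustration, a minimal sketch of the constructs in question (the
invalid forms are commented out so that the snippet itself parses):

    async def ok(aw):
        await aw              # valid: await inside an async function

    # def not_async(aw):
    #     await aw            # invalid: await outside an async function

    # async def agen():
    #     yield 1             # async generator: not supported by the compiler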
Changes in this commit:
- Manifest includes now use the directory path where possible (no longer
necessary to include the manifest.py file explicitly).
- Add manifest.py for all drivers and components that are referenced by
port/board manifests.
- Replace all uses of freeze() with package()/module(), except for port and
board modules.
- Use opt=3 everywhere, for consistency and to reduce code size.
- Use require() instead of include() for all micropython-lib references.
- Remove support for optional board-level manifest.py in mimxrt port, to
make it behave the same as other ports (the board must set
FROZEN_MANIFEST to a custom manifest.py, which can optionally include the
default, port-level manifest).
- Also reinstates modules that were accidentally removed from the esp8266
512k build in fbe9417b90.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Signed-off-by: Damien George <damien@micropython.org>
This is useful in situations where the ThreadSafeFlag is reused and needs
to be cleared of any previous, unwanted event.
For example, clear the flag at the start of an operation, trigger the
operation (eg an I2C write), then (a)wait for an external event to set the
flag (eg a pin IRQ). Further events may trigger the flag again but these
are unwanted and should be cleared before the next cycle starts.
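As a minimal sketch of that pattern (the IRQ wiring and the I2C trigger
helper here are hypothetical):

    import uasyncio as asyncio

    flag = asyncio.ThreadSafeFlag()

    def on_pin_irq(pin):          # hypothetical pin IRQ handler
        flag.set()

    async def do_cycle(start_i2c_write):
        flag.clear()              # discard any stale, unwanted event
        start_i2c_write()         # trigger the operation (hypothetical helper)
        await flag.wait()         # block until the pin IRQ sets the flag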
The main aim of this change is to reduce the number of heap allocations
when writing data to a stream. This is done in two ways:
1. Eliminate appending of data when .write() is called multiple times
before calling .drain(). With this commit, the data is written out
immediately if the underlying stream is not blocked, so there is no
accumulation of the data in a temporary buffer.
2. Eliminate copying of non-bytes objects passed to .write(). Prior to
this commit, passing a bytearray or memoryview to .write() would always
result in a copy of it being made and turned into a bytes object. That
won't happen now if the underlying stream is not blocked.
Also, this change makes .write() more closely implement the CPython
documented semantics: "The method attempts to write the data to the
underlying socket immediately. If that fails, the data is queued in an
internal write buffer until it can be sent."
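A minimal sketch of the intended usage pattern (the writer here is
whatever open_connection()/start_server() provides):

    async def send_all(writer, chunks):
        for chunk in chunks:      # bytes, bytearray or memoryview
            writer.write(chunk)   # written out immediately if the stream is
                                  # not blocked, otherwise queued internally
        await writer.drain()      # wait for any queued data to be sent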
This fixes the cases where the task being waited on finishes just before or
just after the wait_for itself is cancelled.
Fixes issue #8717.
Signed-off-by: Damien George <damien@micropython.org>
These are internal names and can be safely renamed without affecting user
code. push_sorted() and push_head() are merged into a single push()
method, which is already how the C version is implemented. pop_head() is
simply renamed to pop().
The changes are:
- q.push_sorted(task, t) -> q.push(task, t)
- q.push_head(task) -> q.push(task)
- q.pop_head() -> q.pop()
The shorter names and removal of push_head() leads to a code size reduction
of between 40 and 64 bytes on bare-metal targets.
Signed-off-by: Damien George <damien@micropython.org>
This fixes a bug where the gather is cancelled externally and then one of
its sub-tasks (that the gather was waiting on) finishes right between the
cancellation being queued and being executed.
Signed-off-by: Damien George <damien@micropython.org>
The following fixes are made:
- cancelling a gather now cancels all sub-tasks of the gather (previously
it would only cancel the first); see the sketch after this list
- if any sub-task of a gather raises an exception then the gather finishes
(previously it would only finish if the first sub-task raised)
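A minimal sketch of the first fix (the names here are only for
illustration):

    import uasyncio as asyncio

    async def worker(name):
        try:
            await asyncio.sleep(10)
        except asyncio.CancelledError:
            print(name, "cancelled")   # both "a" and "b" now get here
            raise

    async def main():
        t = asyncio.create_task(asyncio.gather(worker("a"), worker("b")))
        await asyncio.sleep(0.1)
        t.cancel()                     # cancels the gather and all sub-tasks
        try:
            await t
        except asyncio.CancelledError:
            pass

    asyncio.run(main())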
Fixes issues #5798, #7807, #7901.
Signed-off-by: Damien George <damien@micropython.org>
This implements a form of CPython's "add_done_callback()", but at this
stage it is a hidden feature and only intended to be used internally.
Signed-off-by: Damien George <damien@micropython.org>
Currently when using uasyncio.start_server() the socket configuration is
done inside a uasyncio.create_task() background function. If the address
and port are already in use, however, this throws an OSError which cannot
be cleanly caught behind the create_task().
This commit moves the getaddrinfo and socket binding to the start_server()
function, and only creates the task if that succeeds. This means that any
OSError from the initial socket configuration is propagated directly up the
call stack, compatible with CPython behaviour.
See #7444.
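A minimal sketch of what this enables (the handler and address are
placeholders):

    import uasyncio as asyncio

    async def handler(reader, writer):
        writer.close()

    async def main():
        try:
            server = await asyncio.start_server(handler, "0.0.0.0", 8080)
        except OSError as e:
            print("bind failed:", e)   # eg address already in use
            return
        await asyncio.sleep(60)        # serve for a while
        server.close()

    asyncio.run(main())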
Signed-off-by: Damien George <damien@micropython.org>
This commit fixes a problem with a race between cancellation of task A and
completion of task B, when A waits on B. If task B completes just before
task A is cancelled then the cancellation of A does not work. Instead,
the CancelledError meant to cancel A gets passed through to B (that's
expected behaviour) but B handles it as a "Task exception wasn't retrieved"
scenario, printing out such a message (this is because finished tasks point
their "coro" attribute to themselves to indicate they are done, and
implement the throw() method, but that method inadvertently catches the
CancelledError). The correct behaviour is for B to bounce that
CancelledError back out.
This bug is mainly seen when wait_for() is used, and in that context the
symptoms are:
- occurs when using wait_for(T, S), if the task T being waited on finishes
at exactly the same time as the wait-for timeout S expires (see the sketch
after this list)
- task T will have run to completion
- the "Task exception wasn't retrieved message" is printed with
"<class 'CancelledError'>" as the error (ie no traceback)
- the wait_for(T, S) call never returns (it's never put back on the
uasyncio run queue) and all tasks waiting on this are blocked forever
from running
- uasyncio otherwise continues to function and other tasks continue to be
scheduled as normal
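A hypothetical repro of the first symptom, with T and S arranged to expire
together (timing dependent, for illustration only):

    import uasyncio as asyncio

    async def t():
        await asyncio.sleep(1)

    async def main():
        try:
            await asyncio.wait_for(t(), 1)   # task and timeout finish together
        except asyncio.TimeoutError:
            pass
        print("wait_for returned")           # with the bug, never reached

    asyncio.run(main())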
The fix here reworks the "waiting" attribute of Task to be called "state"
and uses it to indicate whether a task is: running and not awaited on,
running and awaited on, finished and not awaited on, or finished and
awaited on. This means the task does not need to point "coro" to itself to
indicate finished, and also allows removal of the throw() method.
A benefit of this is that "Task exception wasn't retrieved" messages can go
back to being able to print the name of the coroutine function.
Fixes issue #7386.
Signed-off-by: Damien George <damien@micropython.org>
With docs and a multi-test using TCP server/client.
This method is a MicroPython extension, although there is discussion of
adding it to CPython: https://bugs.python.org/issue41305
Signed-off-by: Mike Teachman <mike.teachman@gmail.com>
This fix prevents server.wait_closed() from raising an AttributeError when
trying to access server.task. This can happen if it is called immediately
after start_server().
This is a MicroPython extension that allows code running in an IRQ (hard
or soft) or scheduler context to sequence asyncio code.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This commit switches the roles of the helper task from a cancellation task
to a runner task, to get the correct semantics for cancellation of
wait_for.
Some uasyncio tests are now disabled for the native emitter due to issues
with native code generation of generators and yield-from.
Fixes issue #5797.
Signed-off-by: Damien George <damien@micropython.org>
This is added because task.coro==None is no longer the way to detect if a
task is finished. Providing a (CPython compatible) function for this
allows the implementation to be abstracted away.
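A minimal sketch, assuming the CPython-compatible function referred to
here is Task.done():

    import uasyncio as asyncio

    async def worker():
        await asyncio.sleep(0)

    async def main():
        t = asyncio.create_task(worker())
        print(t.done())    # False: not finished yet
        await t
        print(t.done())    # True: finished, without inspecting t.coro

    asyncio.run(main())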
Signed-off-by: Damien George <damien@micropython.org>
When a task raises an exception which is uncaught, and no other task
awaits on that task, then an error message is printed (or a user function
called) via a call to Loop.call_exception_handler. In CPython this call is
made when the Task object is freed (eg via reference counting) because it's
at that point that it is known that the exception that was raised will
never be handled.
MicroPython does not have reference counting and the current behaviour is
to deal with uncaught exceptions as early as possible, ie as soon as they
terminate the task. But this can be undesirable because in certain cases
a task can start and raise an exception immediately (before any await is
executed in that task's coro) and before any other task gets a chance to
await on it to catch the exception.
This commit changes the behaviour so that tasks which end due to an
uncaught exception are scheduled one more time for execution, and if they
are not awaited on by the next scheduling loop, then the exception handler
is called (eg the exception is printed out).
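A minimal sketch of the scenario this change addresses (the timing is
shown for illustration only):

    import uasyncio as asyncio

    async def fails_immediately():
        raise ValueError("boom")   # raised before any await in this coro

    async def main():
        t = asyncio.create_task(fails_immediately())
        await asyncio.sleep(0)     # the task runs and raises here
        try:
            await t                # the exception can still be retrieved here
        except ValueError as e:    # instead of being reported as uncaught
            print("caught:", e)

    asyncio.run(main())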
Signed-off-by: Damien George <damien@micropython.org>
Otherwise a task that continuously awaits on a large negative sleep can
monopolise the scheduler (because its wake time is always less than
everything else in the pairing heap).
Signed-off-by: Damien George <damien@micropython.org>
It raises EOFError instead of an IncompleteReadError (which is what
CPython raises). But the latter is derived from EOFError, so code that is
compatible with both MicroPython and CPython can be written by catching
EOFError (eg see the included test).
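A minimal sketch of the portable pattern (the reader is whatever
open_connection() provides):

    async def read_header(reader):
        try:
            return await reader.readexactly(8)
        except EOFError:    # catches MicroPython's EOFError and CPython's
            return None     # IncompleteReadError, which is a subclass of it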
Fixes issue #6156.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds Loop.new_event_loop() which is used to reset the singleton
event loop. This functionality is put here instead of in Loop.close() to
make it possible to write code that is compatible with CPython.
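A minimal sketch of resetting the singleton loop between runs, assuming
the usual uasyncio entry points:

    import uasyncio as asyncio

    async def main():
        await asyncio.sleep(0)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())

    loop.new_event_loop()              # reset the singleton event loop
    loop.run_until_complete(main())    # run again from a clean state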
This commit adds support for global exception handling in uasyncio
according to the CPython error handling:
https://docs.python.org/3/library/asyncio-eventloop.html#error-handling-api
This allows a program to receive exceptions from detached tasks and log
them to an appropriate location, instead of them being printed to the REPL.
The implementation preallocates a context dictionary so that no RAM
allocation should be needed when an exception occurs.
The approach here is compatible with CPython except that in CPython the
exception handler is called once the task that threw an uncaught exception
is freed, whereas in MicroPython the exception handler is called
immediately when the exception is thrown.
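A minimal sketch of installing a global handler, following the CPython API
linked above:

    import uasyncio as asyncio

    def handle_exc(loop, context):
        # context is a dict following the CPython API; "exception" holds the
        # uncaught exception when one is available
        print("uncaught:", context.get("exception"))

    async def fails():
        raise ValueError("boom")

    async def main():
        asyncio.get_event_loop().set_exception_handler(handle_exc)
        asyncio.create_task(fails())   # detached task: its exception goes to
        await asyncio.sleep(0.1)       # the handler instead of the REPL

    asyncio.run(main())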
Implements Task and TaskQueue classes in C, using a pairing-heap data
structure. Using this reduces RAM use of each Task, and improves overall
performance of the uasyncio scheduler.
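A rough pure-Python sketch of a pairing heap keyed on wake time, just to
illustrate the data structure (this is not the actual C code):

    class Node:
        def __init__(self, key):
            self.key = key         # eg the task's wake time
            self.child = None      # first child
            self.sibling = None    # next sibling in the parent's child list

    def meld(a, b):
        # Merge two heaps: the root with the smaller key adopts the other.
        if a is None:
            return b
        if b is None:
            return a
        if b.key < a.key:
            a, b = b, a
        b.sibling = a.child
        a.child = b
        return a

    def push(heap, node):
        return meld(heap, node)    # O(1): push is just a meld

    def pop_min(heap):
        # Detach the root (the minimum) and meld its children back together.
        result, merged, c = heap, None, heap.child
        while c is not None:
            nxt, c.sibling = c.sibling, None
            merged = meld(merged, c)
            c = nxt
        return result, merged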
This commit adds a completely new implementation of the uasyncio module.
The aim of this version (compared to the original one in micropython-lib)
is to be more compatible with CPython's asyncio module, so that one can
more easily write code that runs under both MicroPython and CPython (and
reuse CPython asyncio libraries, follow CPython asyncio tutorials, etc).
Async code is not easy to write and any knowledge users already have from
CPython asyncio should transfer to uasyncio without effort, and vice versa.
The implementation here attempts to provide good compatibility with
CPython's asyncio while still being "micro" enough to run where MicroPython
runs. This follows the general philosophy of MicroPython itself, to make it
feel like Python.
The main change is to use a Task object for each coroutine. This allows
more flexibility to queue tasks in various places, eg the main run loop,
tasks waiting on events, locks or other tasks. It no longer requires
pre-allocating a fixed queue size for the main run loop.
A pairing heap is used to queue Tasks.
It's currently implemented in pure Python, separated into components with
lazy importing for optional components. In the future parts of this
implementation can be moved to C to improve speed and reduce memory usage.
But the aim is to maintain a pure-Python version as a reference version.
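A minimal sketch of code that runs unchanged under CPython's asyncio and
this module (on MicroPython, use "import uasyncio as asyncio"):

    import asyncio

    async def blink(name, period):
        for _ in range(3):
            print(name)
            await asyncio.sleep(period)

    async def main():
        t = asyncio.create_task(blink("a", 0.1))
        await blink("b", 0.15)
        await t

    asyncio.run(main())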