Following other vfs_fat tests, so that the test works on ports like stm32
that only support a 512-byte block size.
Signed-off-by: Damien George <damien@micropython.org>
When iterating over os.ilistdir(), the special directories '.' and '..'
are filtered from the results. But the code inadvertently also filtered
any file/directory which happened to match '..*'. This change fixes the
filter.
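For illustration, the intended filter only skips the exact names '.' and
'..'; a sketch of the check in Python (the real code is in the C VFS
layer):
    def keep(name):
        # Skip only the exact entries "." and "..", not names that merely
        # begin with two dots (e.g. "..data").
        return name not in (".", "..")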
Fixes issue #11032.
Signed-off-by: Jeremy Rand <jeremy@rand-family.com>
During the initial handshake or subsequent renegotiation, the protocol
might need to read in order to write (or conversely to write in order
to read). It might be blocked from doing so by the state of the
underlying socket (i.e. there is no data to read, or there is no space
to write).
The library indicates this condition by returning one of the errors
`MBEDTLS_ERR_SSL_WANT_READ` or `MBEDTLS_ERR_SSL_WANT_WRITE`. When that
happens, we need to enforce that the next poll operation only considers
the direction that the library indicated.
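The change itself is in the C bindings, but the same pattern can be
sketched with CPython's non-blocking ssl sockets, purely as an analogy
(SSLWantReadError/SSLWantWriteError play the role of the mbedtls error
codes above):
    import select
    import ssl
    def send_all(sslsock, data):
        # sslsock: a non-blocking ssl.SSLSocket; data: bytes to send.
        while data:
            try:
                data = data[sslsock.send(data):]
            except ssl.SSLWantReadError:
                # The TLS layer must read (e.g. a handshake record) before
                # it can write: poll only for readability.
                select.select([sslsock], [], [])
            except ssl.SSLWantWriteError:
                # No room in the socket's send buffer: poll only for
                # writability.
                select.select([], [sslsock], [])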
In addition, mbedtls does its own read buffering that we need to take
into account while polling, and we need to save the last error between
read()/write() and ioctl().
Prevent handle leaks when file objects aren't closed explicitly, and fix
some MICROPY_CPYTHON_COMPAT issues: the option wasn't properly respected
because #ifdef was used (so the code was always enabled), and closing files
multiple times should be avoided unconditionally.
This is a latent issue that wasn't caught by CI because there was no
configuration that had both stackless and uasyncio enabled.
The previous check to skip with stackless builds only worked when the
bytecode emitter was used by default. Force the check to use the bytecode
emitter.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
When iterating over a filesystem/folder with os.ilistdir(), an open file
(directory) handle is used internally. Currently this file handle is only
closed once the iterator is completely drained, e.g. once all entries have
been looped over or converted into a list.
If a program creates an ilistdir iterator but does not loop over it, or
starts to loop over it but breaks out of the loop, then the handle never
gets closed. In that state, when the iterator object is cleaned up by the
garbage collector, the open handle can cause corruption of the filesystem.
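A minimal sketch of the problematic pattern (the path and file name are
hypothetical):
    import os
    for entry in os.ilistdir("/data"):
        if entry[0] == "config.txt":  # entry is (name, type, inode[, size])
            break  # iterator not drained: the directory handle stays open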
Fixes issues #6568 and #8506.
Rather than drawing the entire boundary to catch missing pixels, just
detect the cases where boundary pixels are skipped during node calculation
and pre-emptively draw them then.
This adds 72 bytes on PYBV11, but makes filled poly() 20% faster.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Add method for drawing polygons.
For non-filled polygons, uses the existing line-drawing code to render
arbitrary polygons using the given coords list, at the given x,y position,
in the given colour.
For filled polygons, arbitrary closed polygons are rendered using a fast
point-in-polygon algorithm to determine where the edges of the polygon lie
on each pixel row.
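A usage sketch (buffer size, coordinates and colours here are arbitrary):
    import framebuf, array
    buf = bytearray(64 * 64 // 8)
    fbuf = framebuf.FrameBuffer(buf, 64, 64, framebuf.MONO_HLSB)
    # A triangle given as flat (x, y) pairs, drawn at an x,y offset.
    coords = array.array("h", [0, 0, 30, 5, 10, 25])
    fbuf.poly(5, 2, coords, 1)         # outline only
    fbuf.poly(5, 34, coords, 1, True)  # filled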
Tests and documentation updates are also included.
Signed-off-by: Mat Booth <mat.booth@gmail.com>
This is useful in situations where the ThreadSafeFlag is reused and needs
to be cleared of any previous, unwanted event.
For example, clear the flag at the start of an operation, trigger the
operation (eg an I2C write), then (a)wait for an external event to set the
flag (eg a pin IRQ). Further events may trigger the flag again but these
are unwanted and should be cleared before the next cycle starts.
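A sketch of that pattern, where start_i2c_write() is a placeholder for the
operation and flag.set() is called from a pin IRQ handler:
    import uasyncio
    flag = uasyncio.ThreadSafeFlag()
    async def cycle():
        flag.clear()          # discard any stale event from a previous cycle
        start_i2c_write()     # placeholder: trigger the operation
        await flag.wait()     # wait for the IRQ handler to call flag.set()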
On ports with more than one filesystem the type will be wrong: for
example, if using LFS with FAT also enabled, the type will be FAT. So it's
not possible to use these classes to identify the type of a file object.
Furthermore, constructing an io.FileIO currently crashes on FAT, and
make_new isn't supported on LFS.
And the io.TextIOWrapper class does not match CPython at all.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This changes the btree implementation to use the buffer protocol for
reading keys/values in all methods. `str` and `bytes` objects are not the
only bytes-like objects that could be used.
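For example, with this change any object supporting the buffer protocol
can be used as a key or value (the database file name is hypothetical):
    import btree
    f = open("mydb", "w+b")
    db = btree.open(f)
    db[bytearray(b"key1")] = memoryview(b"value1")  # bytes-like, not just bytes/str
    print(db[b"key1"])
    db.close()
    f.close()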
Documentation and tests are also updated.
Addresses issue #8748.
Signed-off-by: David Lechner <david@pybricks.com>
This fixes the cases where the task being waited on finishes just before or
just after the wait_for itself is cancelled.
Fixes issue #8717.
Signed-off-by: Damien George <damien@micropython.org>
This is because the test modifies the (now) bytearray object; if it were a
bytes object it's not guaranteed that it can be modified, nor that this
constant object isn't used elsewhere.
Signed-off-by: Damien George <damien@micropython.org>
Non-real-time systems like Windows, Linux and macOS do not have reliable
timing, so increase the sleep intervals to make these tests more likely to
pass.
Signed-off-by: Damien George <damien@micropython.org>
This fixes a bug where the gather is cancelled externally and then one of
its sub-tasks (that the gather was waiting on) finishes right between the
cancellation being queued and being executed.
Signed-off-by: Damien George <damien@micropython.org>
This double-raise test could fail when task[0] raises and stops the gather
before task[1] raises; task[1] is then left to raise later on and spoil the
test.
Signed-off-by: Damien George <damien@micropython.org>
The following fixes are made:
- cancelling a gather now cancels all sub-tasks of the gather (previously
it would only cancel the first)
- if any sub-task of a gather raises an exception then the gather finishes
(previously it would only finish if the first sub-task raised)
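For instance, after this change cancelling a gather cancels both of its
sub-tasks (a minimal sketch with placeholder workers):
    import uasyncio
    async def worker(t):
        await uasyncio.sleep(t)
    async def main():
        g = uasyncio.create_task(uasyncio.gather(worker(1), worker(2)))
        await uasyncio.sleep(0)
        g.cancel()  # now cancels both workers, not just the first
        try:
            await g
        except uasyncio.CancelledError:
            pass
    uasyncio.run(main())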
Fixes issues #5798, #7807, #7901.
Signed-off-by: Damien George <damien@micropython.org>
This allows encoding things (e.g. a Basic-Auth header for a request)
without having to slice the trailing \n from the string, which allocates
additional memory.
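For example, building such a header (credential values are illustrative):
    import binascii
    creds = b"user:password"
    token = binascii.b2a_base64(creds, newline=False)  # no trailing b"\n"
    headers = {b"Authorization": b"Basic " + token}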
Co-authored-by: David Lechner <david@lechnology.com>
This achieves a substantial performance improvement when rendering glyphs
to color displays, with the benefit increasing in proportion to the number
of pixels in the glyph.
Prevents the finaliser from being missed if there's a dangling reference
on the stack to one of the blocks for the files (whose finalisation this
test checks).
See github.com/micropython/micropython/pull/7659#issuecomment-899479793
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This commit fixes a problem with a race between cancellation of task A and
completion of task B, when A waits on B. If task B completes just before
task A is cancelled then the cancellation of A does not work. Instead,
the CancelledError meant to cancel A gets passed through to B (that's
expected behaviour) but B handles it as a "Task exception wasn't retrieved"
scenario, printing out such a message (this is because finished tasks point
their "coro" attribute to themselves to indicate they are done, and
implement the throw() method, but that method inadvertently catches the
CancelledError). The correct behaviour is for B to bounce that
CancelledError back out.
This bug is mainly seen when wait_for() is used, and in that context the
symptoms are:
- occurs when using wait_for(T, S), if the task T being waited on finishes
at exactly the same time as the wait-for timeout S expires
- task T will have run to completion
- the "Task exception wasn't retrieved message" is printed with
"<class 'CancelledError'>" as the error (ie no traceback)
- the wait_for(T, S) call never returns (it's never put back on the
uasyncio run queue) and all tasks waiting on this are blocked forever
from running
- uasyncio otherwise continues to function and other tasks continue to be
scheduled as normal
The fix here reworks the "waiting" attribute of Task to be called "state"
and uses it to indicate whether a task is: running and not awaited on,
running and awaited on, finished and not awaited on, or finished and
awaited on. This means the task does not need to point "coro" to itself to
indicate finished, and also allows removal of the throw() method.
A benefit of this is that "Task exception wasn't retrieved" messages can go
back to being able to print the name of the coroutine function.
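A minimal sketch of the wait_for() scenario described above (the race is
timing-dependent, so this only occasionally reproduced it):
    import uasyncio
    async def job():
        await uasyncio.sleep_ms(100)
        return "done"
    async def main():
        # Before this fix, if job() completed on the same tick that the
        # 100ms timeout fired, wait_for_ms() never returned and a
        # "Task exception wasn't retrieved" message was printed.
        print(await uasyncio.wait_for_ms(job(), 100))
    uasyncio.run(main())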
Fixes issue #7386.
Signed-off-by: Damien George <damien@micropython.org>
If digest() is called then the hash object is put into a "final" state,
and calling update() or digest() again will raise a ValueError (instead of
silently producing the wrong result).
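For example:
    import hashlib
    h = hashlib.sha256(b"hello")
    print(h.digest())  # puts the object in its "final" state
    h.update(b"x")     # now raises ValueError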
See issue #4119.
Signed-off-by: Damien George <damien@micropython.org>
uctypes.FLOAT32 has a special value representation, so
uctypes_struct_scalar_size() should be used instead of GET_SCALAR_SIZE().
Signed-off-by: Damien George <damien@micropython.org>