Initially some of these were found building the unix coverage variant on
macOS because that build uses clang and has -Wdouble-promotion enabled, and
clang performs stricter promotion checks than gcc. Additionally the
codebase has been compiled with clang and msvc (the latter with warning
level 3), and with MICROPY_FLOAT_IMPL_FLOAT to find the rest of the
conversions.
Fixes are implemented either as explicit casts, or by using the correct
type, or by using one of the utility functions to handle floating point
casting; these have been moved from nativeglue.c to the public API.
Now that error string compression is supported it's more important to have
consistent error string formatting (eg all lowercase English words,
consistent contractions). This commit cleans up some of the strings to
make them more consistent.
This commit adds Loop.new_event_loop() which is used to reset the singleton
event loop. This functionality is put here instead of in Loop.close() to
make it possible to write code that is compatible with CPython.
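For example, a pattern like the following should then run on both
MicroPython and CPython (apart from the module name, which is illustrative
here):

    import uasyncio as asyncio

    async def main():
        await asyncio.sleep(0)

    asyncio.run(main())
    asyncio.new_event_loop()   # reset the singleton event loop
    asyncio.run(main())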
The latest version of BTstack fixes a bug so that it correctly configures
scan parameters if they are set right after activating the stack. This
means that BLE.gap_scan() will correctly set scanning to passive, and so
SCAN_RSP events are not passed through and we don't need to explicitly
filter them in our bindings.
This commit makes all functions and function wrappers in modubinascii.c
STATIC and conditional on the MICROPY_PY_UBINASCII setting, which will
exclude the file from qstr/compressed-string searching when ubinascii is
not enabled. The now-unused modubinascii.h header file is also removed.
The cc3200 port is updated accordingly to use this module in its entirety
instead of providing its own top-level definition of ubinascii.
These functions were originally non-STATIC because the cc3200 port had its
own ubinascii module which referenced them. The plan appeared to be that
the API might diverge (e.g. hardware CRC), but this should be done in the
same way as I2C/SPI, via a port-specific handler, rather than the port
having its own
definition of the module. Having a centralised module definition also
enforces consistency of the API among ports.
This commit adds support for global exception handling in uasyncio,
following CPython's error handling API:
https://docs.python.org/3/library/asyncio-eventloop.html#error-handling-api
This allows a program to receive exceptions from detached tasks and log
them to an appropriate location, instead of them being printed to the REPL.
The implementation preallocates a context dictionary so in case of an
exception there shouldn't be any RAM allocation.
The approach here is compatible with CPython except that in CPython the
exception handler is called once the task that threw an uncaught exception
is freed, whereas in MicroPython the exception handler is called
immediately when the exception is thrown.
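In rough terms, usage looks like this (handler and task names are
illustrative):

    import uasyncio as asyncio

    def custom_handler(loop, context):
        # context is a dict; per the CPython API it contains at least
        # "message" and "exception" entries
        print("uncaught exception:", repr(context["exception"]))

    async def failing_task():
        raise ValueError("oops")

    async def main():
        loop = asyncio.get_event_loop()
        loop.set_exception_handler(custom_handler)
        asyncio.create_task(failing_task())
        await asyncio.sleep(0.1)

    asyncio.run(main())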
These were found by building the unix coverage variant on macOS (so the
clang compiler). Mostly these fix implicit casts of float/double to
mp_float_t (which is one of those two types), plus one mp_int_t to size_t
fix for good measure.
https://www.python.org/dev/peps/pep-0475/
This implements something similar to PEP 475 on the unix port, and for the
VfsPosix class.
There are a few differences from the CPython implementation:
- Since we call mp_handle_pending() between any EINTRs, additional
functions could be called if MICROPY_ENABLE_SCHEDULER is enabled, not
just signal handlers.
- CPython only handles signals on the main thread, so other threads will
raise InterruptedError instead of retrying. On MicroPython,
mp_handle_pending() will currently raise exceptions on any thread.
A new macro MP_HAL_RETRY_SYSCALL is introduced to reduce duplicated code
and ensure that all instances behave the same. This will also allow other
ports that use POSIX-like system calls (and use, eg, VfsPosix) to provide
their own implementation if needed.
Implements Task and TaskQueue classes in C, using a pairing-heap data
structure. Using this reduces RAM use of each Task, and improves overall
performance of the uasyncio scheduler.
This commit adds a completely new implementation of the uasyncio module.
The aim of this version (compared to the original one in micropython-lib)
is to be more compatible with CPython's asyncio module, so that one can
more easily write code that runs under both MicroPython and CPython (and
reuse CPython asyncio libraries, follow CPython asyncio tutorials, etc).
Async code is not easy to write and any knowledge users already have from
CPython asyncio should transfer to uasyncio without effort, and vice versa.
The implementation here attempts to provide good compatibility with
CPython's asyncio while still being "micro" enough to run where MicroPython
runs. This follows the general philosophy of MicroPython itself, to make it
feel like Python.
The main change is to use a Task object for each coroutine. This allows
more flexibility to queue tasks in various places, eg the main run loop,
tasks waiting on events, locks or other tasks. It no longer requires
pre-allocating a fixed queue size for the main run loop.
A pairing heap is used to queue Tasks.
It's currently implemented in pure Python, separated into components, with
lazy importing of the optional ones. In the future parts of this
implementation can be moved to C to improve speed and reduce memory usage.
But the aim is to maintain a pure-Python version as a reference version.
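As a rough illustration of the intended style (identical on CPython apart
from the module name):

    import uasyncio as asyncio

    async def blink(period):
        while True:
            # toggle an LED, print something, etc.
            await asyncio.sleep(period)

    async def main():
        task = asyncio.create_task(blink(0.1))   # wraps the coroutine in a Task
        await asyncio.sleep(1)
        task.cancel()

    asyncio.run(main())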
Also support MP_STREAM_GET_FILENO ioctl. The stdio flush change was done
previously for the unix port in 3e0b46b9af.
These changes make this POSIX file implementation equivalent to the unix
file implementation.
Fixes UDP non-blocking recv so it returns EAGAIN instead of ETIMEDOUT.
Timeout waiting for incoming data is also improved by replacing 100ms delay
with poll_sockets(), as is done in other parts of this module.
Fixes issue #5759.
This commit changes the BLE _IRQ_SCAN_RESULT data from:
addr_type, addr, connectable, rssi, adv_data
to:
addr_type, addr, adv_type, rssi, adv_data
This allows _IRQ_SCAN_RESULT to handle all scan result types (not just
connectable and non-connectable passive scans), and to distinguish between
them using adv_type which is an integer taking values 0x00-0x04 per the BT
specification.
This is a breaking change to the API, albeit a very minor one: the existing
connectable value was a boolean and True now becomes 0x00, False becomes
0x02.
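For example, an IRQ handler might now look roughly like this (the
_IRQ_SCAN_RESULT value and handler name are illustrative):

    from micropython import const

    _IRQ_SCAN_RESULT = const(1 << 4)   # illustrative; use the documented value

    # adv_type values per the BT specification:
    #   0x00 ADV_IND, 0x01 ADV_DIRECT_IND, 0x02 ADV_SCAN_IND,
    #   0x03 ADV_NONCONN_IND, 0x04 SCAN_RSP
    def bt_irq(event, data):
        if event == _IRQ_SCAN_RESULT:
            addr_type, addr, adv_type, rssi, adv_data = data
            connectable = adv_type in (0x00, 0x01)
            print(bytes(addr), adv_type, connectable, rssi)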
Documentation is updated and a test added.
Fixes #5738.
This commit ensures that the BLE stack is active before allowing operations
that may otherwise crash if it's not active. It also clarifies the state
better (adding the "stopping" state) and renames mp_bluetooth_is_enabled to
the more self-explanatory mp_bluetooth_is_active.
This makes a cleaner separation between the driver, HCI UART and BT stack.
Also updated the naming to be more consistent (mp_bluetooth_hci_*).
Work done in collaboration with Jim Mussared aka @jimmo.
Move extmod/modbluetooth_nimble.* to extmod/nimble. And move common
Makefile lines to extmod/nimble/nimble.mk (which was previously only used
by stm32). This allows (upcoming) btstack to follow a similar structure.
Work done in collaboration with Jim Mussared aka @jimmo.
This provides a more consistent C-level API to raise exceptions, ie moving
away from nlr_raise towards mp_raise_XXX. It also reduces code size by a
small amount on some ports.
Most types are in rodata/ROM, and mp_obj_base_t.type is a constant pointer,
so enforce this const-ness throughout the code base. If a type ever needs
to be modified (eg a user type) then a simple cast can be used.
The struct member "dest" should never be less than "destStart", so their
difference is never negative. Cast as such to make the comparison
explicitly unsigned, ensuring the compiler produces the correct comparison
instruction, and avoiding any compiler warnings.
Move webrepl support code from ports/esp8266/modules into extmod/webrepl
(to be alongside extmod/modwebrepl.c), and use frozen manifests to include
it in the build on esp8266 and esp32.
A small modification is made to webrepl.py to make it work on non-ESP
ports, i.e. don't call dupterm_notify if not available.
The size of the event ringbuf was previously fixed to a compile-time config
value, but it's sometimes necessary to increase this for applications that
have large characteristic buffers to read, or many events at once.
With this commit the size can be set via BLE.config(rxbuf=512), for
example. This also resizes the internal event data buffer which sets the
maximum size of incoming data passed to the event handler.
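For example:

    from ubluetooth import BLE

    ble = BLE()
    ble.active(True)
    ble.config(rxbuf=512)   # enlarge the event ringbuf and max event data size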
This allows the user to explicitly select the behaviour of the write to the
remote peripheral. This is needed for peripherals that have
characteristics with WRITE_NO_RESPONSE set (instead of normal WRITE). The
function's signature is now:
BLE.gattc_write(conn_handle, value_handle, data, mode=0)
mode=0 means write without response, while mode=1 means write with
response. The latter was the original behaviour, so this commit is a change
in behaviour of this method, and one should specify 1 as the 4th argument
to get back the old behaviour.
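For example (assuming an existing connection conn_handle and a
characteristic at value_handle):

    # default mode=0: write without response
    ble.gattc_write(conn_handle, value_handle, b"\x01")

    # mode=1: write with response (the previous behaviour)
    ble.gattc_write(conn_handle, value_handle, b"\x01", 1)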
In the future there could be more modes supported, such as long writes.
The default protection for the BLE ringbuf is to use
MICROPY_BEGIN_ATOMIC_SECTION, which disables all interrupts. On stm32 it
only needs to disable the lowest priority IRQ, pendsv, because that's the
IRQ level at which the BLE stack is driven.
This removes the limit on data coming in from a BLE.gattc_read() request,
or a notify with payload (coming in to a central). In both cases the data
coming in to the BLE callback is now limited only by the available data in
the ringbuf, whereas before it was capped at (default hard coded) 20 bytes.
Instead of enqueue_irq() inspecting the ringbuf to decide whether to
schedule the IRQ callback (if ringbuf is empty), maintain a flag that knows
if the callback is on the schedule queue or not. This saves about 150
bytes of code (for stm32 builds), and simplifies all uses of enqueue_irq()
and schedule_ringbuf().
The address, adv payload and uuid fields of the event are pre-allocated by
modbluetooth, and reused in the IRQ handler. Simplify this and move all
storage into the `mp_obj_bluetooth_ble_t` instance.
This now allows users to hold on to a reference to these instances without
crashes, although they may be overwritten by future events. If they want
to hold onto the values longer term they need to copy them.
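For example, inside an IRQ handler (names illustrative), data that must
outlive the event should be copied:

    def bt_irq(event, data):
        if event == _IRQ_SCAN_RESULT:   # constant as provided by ubluetooth
            addr_type, addr, adv_type, rssi, adv_data = data
            # addr and adv_data reference storage owned by the BLE instance and
            # may be overwritten by a future event, so copy to keep them.
            last = (addr_type, bytes(addr), adv_type, rssi, bytes(adv_data))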
Remove existing scan result events from the ringbuf if the ringbuf is full
and we're trying to enqueue any other event. This is needed so that events
such as SCAN_COMPLETE are always put on the ringbuf.
This commit removes the Makefile-level MICROPY_FATFS config and moves the
MICROPY_VFS_FAT config to the Makefile level to replace it. It also moves
the include of the oofatfs source files in the build from each port to a
central place in extmod/extmod.mk.
For a port to enable VFS FAT support it should now set MICROPY_VFS_FAT=1
at the level of the Makefile. This will include the relevant oofatfs files
in the build and set MICROPY_VFS_FAT=1 at the C (preprocessor) level.
POSIX poll should always return POLLERR and POLLHUP in revents, regardless
of whether they were requested in the input events flags.
See issues #4290 and #5172.
- Adds an explicit way to set the size of a value's internal buffer,
  replacing `ble.gatts_write(handle, bytes(size))` (although that
  still works).
- Adds an "append" mode for values, which means that remote writes
  will append to the buffer.
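Usage is presumably along these lines (the method name gatts_set_buffer is
an assumption based on the current ubluetooth API; handle setup elided):

    # value_handle obtained from gatts_register_services() (elided here)
    ble.gatts_set_buffer(value_handle, 100)         # 100-byte internal buffer
    ble.gatts_set_buffer(value_handle, 100, True)   # append mode: writes accumulate
    data = ble.gatts_read(value_handle)             # read back the accumulated value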
This commit adds helper functions to call readblocks/writeblocks with a
fourth argument, the byte offset within a block.
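At the Python level this corresponds to block devices optionally taking a
byte offset, roughly like this minimal RAM-backed sketch (assuming the
offset is passed as a fourth argument to readblocks/writeblocks):

    class RAMBlockDev:
        def __init__(self, block_size, num_blocks):
            self.block_size = block_size
            self.data = bytearray(block_size * num_blocks)

        def readblocks(self, block_num, buf, offset=0):
            # offset is the (new, optional) byte offset within the first block
            addr = block_num * self.block_size + offset
            buf[:] = self.data[addr:addr + len(buf)]

        def writeblocks(self, block_num, buf, offset=0):
            addr = block_num * self.block_size + offset
            self.data[addr:addr + len(buf)] = buf

        def ioctl(self, op, arg):
            if op == 4:   # block count
                return len(self.data) // self.block_size
            if op == 5:   # block size
                return self.block_size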
Although the mp_vfs_blockdev_t struct has grown here by 2 machine words, in
all current uses of this struct within this repository it still fits within
the same number of GC blocks.
For consistency with "umachine". Now that weak links are enabled
by default for built-in modules, this should be a no-op, but allows
extension of the bluetooth module by user code.
Also move registration of ubluetooth to objmodule, rather than having each
port register it separately.
NimBLE doesn't actually copy this data; it requires it to stay live. Only
dereference it when we register a new set of services.
Fixes #5226.
This will allow incrementally adding services in the future, so
rename `reset` to `append` to make it clearer.
Internally change the representation of UUIDs to little-endian uint8* to
simplify this.
This allows UUIDs to be easily used in BLE payloads (such as advertising).
Ref: #5186
This avoids a confusing ENOMEM raised from gap_advertise if there is
currently an active connection. This refers to the static connection
buffer pre-allocated by NimBLE (nothing to do with MicroPython heap
memory).
This is to more accurately match the BLE spec, where intervals are
configured in units of channel hop time (625us). When it was
specified in ms, not all "valid" intervals were able to be
specified.
Now that we're also allowing configuration of scan interval, this
commit updates advertising to match.
This adds two additional optional kwargs to `gap_scan()`:
- `interval_us`: How long between scans.
- `window_us`: How long to scan for during a scan.
The default with NimBLE is an 11.25ms window with a 1.28s interval.
Changing these parameters is important for detecting low-frequency
advertisements (e.g. beacons).
Note: these params are in microseconds, not milliseconds, in order to allow
the 625us granularity offered by the spec.
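For example, to spend most of the time scanning (useful for picking up
beacons), something like the following can be used (values illustrative):

    from ubluetooth import BLE

    ble = BLE()
    ble.active(True)
    # duration_ms, interval_us, window_us: scan for 10s, with a 30ms window
    # every 40ms (both get rounded to the spec's 625us units)
    ble.gap_scan(10000, 40000, 30000)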
Other ports (e.g. ESP32) provide a complete NimBLE implementation (i.e. we
don't need to use the code in extmod/nimble). This change extracts out the
bits that we don't need to use in those ports:
- malloc/free/realloc for NimBLE memory
- the pendsv poll handler
- depowering the cywbt
Also cleans up the root pointer management.
With this patch alignment is done relative to the start of the buffer that
is being unpacked, not the raw pointer value, as per CPython.
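For example, with a native format on a typical 32-bit little-endian port
(behaviour assumed to match CPython):

    import struct

    # "BI": a uint8 followed by a uint32 that is aligned to a 4-byte boundary
    # relative to the start of the buffer (offset 4), not to its RAM address.
    buf = b"\x01\x00\x00\x00\x02\x00\x00\x00"
    print(struct.calcsize("BI"))      # 8
    print(struct.unpack("BI", buf))   # (1, 2)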
Fixes issue #3314.
As per the README.md of the upstream source at
https://github.com/B-Con/crypto-algorithms, this source code was released
into the public domain, so make that explicit in the copyright line in the
header.
Enabled via MICROPY_PY_URE_DEBUG, disabled by default (but enabled on unix
coverage build). This is a rarely used feature that costs a lot of code
(500-800 bytes flash). Debugging of regular expressions can be done
offline with other tools.
The helper function exec_user_callback executes within the context of an
lwIP C callback, and the user (Python) callback to be scheduled may want to
perform further TCP/IP actions, so the latter should be scheduled to run
outside the lwIP context (otherwise it's effectively a "hard IRQ" and such
callbacks have lots of restrictions).
If tcp_write returns ERR_MEM then it's not a fatal error but instead means
the caller should retry the write later on (and this is what lwIP's netconn
API does).
This fixes problems where a TCP send would raise OSError(ENOMEM) in
situations where the TCP/IP stack is under heavy load. See eg issues #1897
and #1971.
Setting MICROPY_PY_USSL and MICROPY_SSL_MBEDTLS at the Makefile-level will
now build mbedTLS from source and include it in the build, with the ussl
module using this TLS library. Extra settings like MBEDTLS_CONFIG_FILE may
need to be provided by a given port.
If a port wants to use its own mbedTLS library then it should not set
MICROPY_SSL_MBEDTLS at the Makefile-level but rather set it at the C level,
and provide the library as part of the build in its own way (see eg esp32
port).
In d5f0c87bb9 this call to tcp_poll() was
added to put a timeout on closing TCP sockets. But after calling
tcp_close() the PCB may be freed and therefore invalid, so tcp_poll() can
not be used at that point. As a fix this commit calls tcp_poll() before
closing the TCP PCB. If the PCB is subsequently closed and freed by
tcp_close() or tcp_abort() then the PCB will not be on any active list and
the callback will not be executed, which is the desired behaviour (the
_lwip_tcp_close_poll() callback only needs to be called if the PCB remains
active for longer than the timeout).
Commit 2848a613ac introduced a bug where
lwip_socket_free_incoming() accessed pcb.tcp->state after the PCB was
closed. The state may have changed due to that close call, or the PCB may
be freed and therefore invalid. This commit fixes that by calling
lwip_socket_free_incoming() before the PCB is closed.
For example: i2c.writevto(addr, (buf1, buf2)). This allows writing data
composed of separate buffers efficiently (with respect to memory), such as
a command followed by a large amount of data.
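A sketch of the idea (the I2C construction and address are port-specific
and illustrative):

    from machine import I2C

    i2c = I2C(0)                       # construction is port-specific
    cmd = bytearray(b"\x40")           # small command/control byte
    fbuf = bytearray(1024)             # large payload, e.g. a framebuffer
    i2c.writevto(0x3c, (cmd, fbuf))    # sent as a single I2C transaction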
It consists of:
1. A "do_handshake" param (default True) to wrap_socket(). If it's False,
the handshake won't be performed by wrap_socket(), as would normally be done
in a blocking way. Instead, the SSL socket can be set to non-blocking mode,
and the handshake will be performed before the first read/write request (by
just returning EAGAIN to these requests, while reading/writing/processing
the handshake over the connection instead). Unfortunately, axTLS doesn't
really support a non-blocking handshake correctly, so while the framework
for this is implemented on MicroPython's module side, it won't work reliably
with axTLS.
2. Implementation of the .setblocking() method. It must be called on the
SSL socket for blocking vs non-blocking operation to be handled correctly
(for example, it's not enough to wrap a non-blocking socket with
wrap_socket() - the resulting SSL socket won't itself be non-blocking).
Note that .setblocking() propagates the call to the underlying socket
object, as expected.
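A rough usage sketch for the non-blocking case (host and error handling
omitted):

    import usocket, ussl

    ai = usocket.getaddrinfo("example.com", 443)[0][-1]
    s = usocket.socket()
    s.connect(ai)
    ss = ussl.wrap_socket(s, do_handshake=False)  # don't do the handshake here
    ss.setblocking(False)                         # propagated to the underlying socket
    # reads/writes now signal EAGAIN until the handshake has been processed
    # incrementally by those same calls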
For this, add a wrap_socket(do_handshake=False) param. CPython doesn't have
such a param on the module-level function, and at SSLContext.wrap_socket()
it has a do_handshake_on_connect param, but that is uselessly long.
Beyond that, make write() handle not just MBEDTLS_ERR_SSL_WANT_WRITE, but
also MBEDTLS_ERR_SSL_WANT_READ, as during the handshake a write call may
actually be preempted by the need to read the next handshake message from
the peer. Likewise for read(). And even after the initial negotiation,
situations like that may happen, e.g. with renegotiation. Both
MBEDTLS_ERR_SSL_WANT_READ and MBEDTLS_ERR_SSL_WANT_WRITE are however mapped
to the same None return code. The idea is that if the same read()/write()
method is called repeatedly, progress will be made step by step anyway.
The caveat is if the user wants to add the underlying socket to
uselect.poll(). To be reliable, in this case, the socket should be polled
for both POLL_IN and POLL_OUT, as we don't know the actual expected
direction. But that's actually problematic. Consider for example that
write() ends with MBEDTLS_ERR_SSL_WANT_READ, which gets converted to None.
We put the underlying socket on poll using POLL_IN|POLL_OUT, but that
probably returns immediately with POLL_OUT, as the underlying socket is
writable. We call the same ussl write() again, which again results in
MBEDTLS_ERR_SSL_WANT_READ, etc. We thus go into a busy-loop.
So, the handling in this patch is temporary and needs fixing. But the exact
way to fix it is not clear. One way is to provide an explicit function for
the handshake (CPython has do_handshake()), and let *that* return distinct
codes like WANT_READ/WANT_WRITE. But as mentioned above, past the initial
handshake such a situation may happen again, at least with renegotiation.
So apparently the only robust solution is to return "out of band" special
sentinels like WANT_READ/WANT_WRITE from read()/write() directly. CPython
throws exceptions for these, but those are too expensive to adopt for an
efficiency-conscious implementation like MicroPython.
In CPython the random module is seeded differently on each import, and so
this new macro option MICROPY_PY_URANDOM_SEED_INIT_FUNC allows implementing
such behaviour.
Since commit da938a83b5 the tcp_arg() that is
set for the new connection is the new connection itself, and the parent
listening socket is found in the pcb->connected entry.