The goal of this commit is to allow using ble.gatts_notify() at any time,
even if the stack is not ready to send the notification right now. It also
addresses the same issue for ble.gatts_indicate() and ble.gattc_write()
(without response). In addition this commit fixes the case where the
buffer passed to write-with-response wasn't copied, meaning it could be
modified by the caller, affecting the in-progress write.
The changes are:
- gatts_notify/indicate will now run in the background if the ACL buffer is
currently full, meaning that notify/indicate can be called at any time.
- gattc_write(mode=0) (no response) will now allow for one outstanding
write.
- gattc_write(mode=1) (with response) will now copy the buffer so that it
can't be modified by the caller while the write is in progress.
All four paths also now track the buffer while the operation is in
progress, which prevents the GC from freeing the buffer while it's still
needed.
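A minimal usage sketch of what this enables (conn_handle and value_handle
are placeholders assumed to come from earlier connect and
service-registration events):
```
import bluetooth

ble = bluetooth.BLE()
ble.active(True)

def send_reading(conn_handle, value_handle, reading):
    ble.gatts_write(value_handle, reading)
    # With this change the call below no longer fails if the ACL buffers
    # are currently full; the notification is completed in the background.
    ble.gatts_notify(conn_handle, value_handle)
```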
With only `sp_func_proto_paren = remove` set there are some cases where
uncrustify misses removing a space between the function name and the
opening '('. This sets all of the related options to `force` as well.
The ring buffer previously used a single unsigned byte field to save the
length, meaning that it would overflow for large characteristic value
responses.
With this commit it now uses a 16-bit length instead, and has code to
explicitly truncate at UINT16_MAX (although this should be impossible to
achieve in practice).
This commit makes sure that all discovery complete and read/write status
events set the status to zero on success.
The status value will be implementation-dependent on non-success cases.
On btstack there's no status associated with the read result, it comes
through as a separate event. This allows you to detect read failures or
timeouts.
There doesn't appear to be any use for only triggering on specific events,
so it's just easier to number them sequentially. This makes them smaller
values so they take up only 1 byte in the ringbuf, only 1 byte for the
opcode in the bytecode, and makes room for more events.
Also add a couple of new event types that need to be implemented (to avoid
re-numbering later).
And rename _COMPLETE and _STATUS to _DONE for consistency.
In the future the "trigger" keyword argument can be reinstated by requiring
the user to compute the bitmask, eg:
ble.irq(handler, 1 << _IRQ_SCAN_RESULT | 1 << _IRQ_SCAN_DONE)
Before this change, any NimBLE error that does not appear in the
ble_hs_err_to_errno_table maps to return code 0, meaning success. If we
miss adding an error code to the table we end up returning success in case
of failure.
Instead, handle the zero case explicitly and default to MP_EIO. This
allows removing the now-redundant MP_EIO entries from the mapping.
This commit allows the user to set/get the GAP device name used by service
0x1800, characteristic 0x2a00. The usage is:
BLE.config(gap_name="myname")
print(BLE.config("gap_name"))
As part of this change the compile-time setting
MICROPY_PY_BLUETOOTH_DEFAULT_NAME is renamed to
MICROPY_PY_BLUETOOTH_DEFAULT_GAP_NAME to emphasise its link to GAP and this
new "gap_name" config value. And the default value of this for the NimBLE
bindings is changed from "PYBD" to "MPY NIMBLE" to be more generic.
ujson should only consume whitespace that appears before the JSON value. This becomes apparent when you are using the MP stream protocol to read directly from input buffers.
When you attempt to read(1) on a UART (and possibly other protocols) you have to wait for either the byte or the timeout.
Fixes:
- Waiting for a timeout after you have completed reading a correct and complete JSON off the input.
- Raising an OSError after reading a correct and complete JSON off the input.
- Eating more data than semantically owned off the input buffer.
- Blocking to start parsing JSON until the entire JSON body has been loaded into a potentially large, contiguous Python object.
Code you would write before:
```
line = board_busio_uart_port.read_line()
json_dict = json.loads(line)
```
or reaching for fixed buffers and swapping them around in Python.
Code that did not work before that does now:
```
json_dict = json.load(board_busio_uart_port)
```
- This removes the need for intermediate copies of data when reading JSON from micropython stream protocol inputs.
- It also increases total application speed by parsing JSON concurrently with receiving on boards that read from UART via DMA.
- It simplifies code that users write while improving their apps.
If the new name starts with '/', cur_dir is no longer prepended, so that
the current working directory is respected. The test cases for rename are
extended to cover this functionality.
This change scans for '.', '..' and multiple '/' and normalizes the new
path name. If the resulting path does not exist, an error is raised.
Non-existing interim path elements are ignored if they are removed during
normalization.
This fixes the bug, that stat(filename) would not consider the current
working directory. So if e.g. the cwd is "lib", then stat("main.py") would
return the info for "/main.py" instead of "/lib/main.py".
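A hedged sketch of the behaviour being fixed:
```
import uos

uos.chdir("/lib")
# Before this fix the info for "/main.py" was returned; now the current
# working directory is taken into account and "/lib/main.py" is used.
print(uos.stat("main.py"))
```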
For ports that have a system malloc which is not garbage collected (eg
unix, esp32), the stream object for the DB must be retained separately to
prevent it from being reclaimed by the MicroPython GC (because the
berkeley-db library uses malloc to allocate the DB structure which stores
the only reference to the stream).
Although in some cases the user code will explicitly retain a reference to
the underlying stream because it needs to call close() on it, this is not
always the case, eg in cases where the DB is intended to live forever.
Fixes issue #5940.
But only when bluetooth is enabled, i.e. if building the dev or coverage
variants, and we have libusb available.
Update travis to match, i.e. specify the variant when doing
`make submodules`.
This commit adds full support to the unix port for Bluetooth using the
common extmod/modbluetooth Python bindings. This uses the libusb HCI
transport, which supports many common USB BT adaptors.
This change is made for two reasons:
1. A 3rd-party library (eg berkeley-db-1.xx, axtls) may use the system
provided errno for certain errors, and yet MicroPython stream objects
that it calls will be using the internal mp_stream_errno. So if the
library returns an error it is not known whether the corresponding errno
code is stored in the system errno or mp_stream_errno. Using the system
errno in all cases (eg in the mp_stream_posix_XXX wrappers) fixes this
ambiguity.
2. For systems that have threading the system-provided errno should always
be used because the errno value is thread-local.
For systems that do not have an errno, the new lib/embed/__errno.c file is
provided.
Note: the uncrustify configuration is explicitly set to 'add' instead of
'force' in order not to alter the comments which use extra spaces after //
as a means of indenting text for clarity.
Initially some of these were found building the unix coverage variant on
MacOS because that build uses clang and has -Wdouble-promotion enabled, and
clang performs more vigorous promotion checks than gcc. Additionally the
codebase has been compiled with clang and msvc (the latter with warning
level 3), and with MICROPY_FLOAT_IMPL_FLOAT to find the rest of the
conversions.
Fixes are implemented either as explicit casts, or by using the correct
type, or by using one of the utility functions to handle floating point
casting; these have been moved from nativeglue.c to the public API.
Now that error string compression is supported it's more important to have
consistent error string formatting (eg all lowercase English words,
consistent contractions). This commit cleans up some of the strings to
make them more consistent.
This commit adds Loop.new_event_loop() which is used to reset the singleton
event loop. This functionality is put here instead of in Loop.close() to
make it possible to write code that is compatible with CPython.
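A small sketch of the intended usage, assuming the function is also
reachable at module level as in CPython:
```
import uasyncio as asyncio

async def main():
    await asyncio.sleep(1)

asyncio.run(main())

# Reset the singleton event loop so that run() can be used again,
# rather than closing the loop (which CPython-compatible code can't rely on).
asyncio.new_event_loop()
asyncio.run(main())
```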
The latest version of BTstack has a bug fixed so that it correctly
configures scan parameters if they are set right after activating the
stack. This means that BLE.gap_scan() will correctly set the scanning to
passive and so SCAN_RSP events are not passed through, so we don't need to
explicitly filter them in our bindings.
This commit makes all functions and function wrappers in modubinascii.c
STATIC and conditional on the MICROPY_PY_UBINASCII setting, which will
exclude the file from qstr/compressed-string searching when ubinascii is
not enabled. The now-unused modubinascii.h header file is also removed.
The cc3200 port is updated accordingly to use this module in its entirety
instead of providing its own top-level definition of ubinascii.
This was originally like this because the cc3200 port has its own ubinascii
module which referenced these methods. The plan appeared to be that the
API might diverge (e.g. hardware crc), but this should be done similar to
I2C/SPI via a port-specific handler, rather than the port having its own
definition of the module. Having a centralised module definition also
enforces consistency of the API among ports.
This commit adds support for global exception handling in uasyncio
according to the CPython error handling:
https://docs.python.org/3/library/asyncio-eventloop.html#error-handling-api
This allows a program to receive exceptions from detached tasks and log
them to an appropriate location, instead of them being printed to the REPL.
The implementation preallocates a context dictionary so in case of an
exception there shouldn't be any RAM allocation.
The approach here is compatible with CPython except that in CPython the
exception handler is called once the task that threw an uncaught exception
is freed, whereas in MicroPython the exception handler is called
immediately when the exception is thrown.
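A hedged example of the CPython-style API this enables (the exact contents
of the context dict follow the CPython error-handling docs linked above):
```
import uasyncio as asyncio

def handle_exception(loop, context):
    # context is the pre-allocated dict, with keys such as "message",
    # "exception" and "future" describing the failed task.
    print("Unhandled exception in task:", context["exception"])

async def crasher():
    raise ValueError("oops")

async def main():
    loop = asyncio.get_event_loop()
    loop.set_exception_handler(handle_exception)
    asyncio.create_task(crasher())  # exception goes to the handler, not the REPL
    await asyncio.sleep(1)

asyncio.run(main())
```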
These were found by building the unix coverage variant on macOS (so with
the clang compiler). Mostly these fix implicit casts of float/double to
mp_float_t (which is one of those two types), plus one mp_int_t to size_t
fix for good measure.
https://www.python.org/dev/peps/pep-0475/
This implements something similar to PEP 475 on the unix port, and for the
VfsPosix class.
There are a few differences from the CPython implementation:
- Since we call mp_handle_pending() after any EINTR, additional
  functions could be called if MICROPY_ENABLE_SCHEDULER is enabled, not
  just signal handlers.
- CPython only handles signal on the main thread, so other threads will
raise InterruptedError instead of retrying. On MicroPython,
mp_handle_pending() will currently raise exceptions on any thread.
A new macro MP_HAL_RETRY_SYSCALL is introduced to reduce duplicated code
and ensure that all instances behave the same. This will also allow other
ports that use POSIX-like system calls (and use, eg, VfsPosix) to provide
their own implementation if needed.
Implements Task and TaskQueue classes in C, using a pairing-heap data
structure. Using this reduces RAM use of each Task, and improves overall
performance of the uasyncio scheduler.
This commit adds a completely new implementation of the uasyncio module.
The aim of this version (compared to the original one in micropython-lib)
is to be more compatible with CPython's asyncio module, so that one can
more easily write code that runs under both MicroPython and CPython (and
reuse CPython asyncio libraries, follow CPython asyncio tutorials, etc).
Async code is not easy to write and any knowledge users already have from
CPython asyncio should transfer to uasyncio without effort, and vice versa.
The implementation here attempts to provide good compatibility with
CPython's asyncio while still being "micro" enough to run where MicroPython
runs. This follows the general philosophy of MicroPython itself, to make it
feel like Python.
The main change is to use a Task object for each coroutine. This allows
more flexibility to queue tasks in various places, eg the main run loop,
tasks waiting on events, locks or other tasks. It no longer requires
pre-allocating a fixed queue size for the main run loop.
A pairing heap is used to queue Tasks.
It's currently implemented in pure Python, separated into components with
lazy importing for optional components. In the future parts of this
implementation can be moved to C to improve speed and reduce memory usage.
But the aim is to maintain a pure-Python version as a reference version.
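A short example of the kind of code that now runs unmodified under both
MicroPython's uasyncio and CPython's asyncio (module alias aside):
```
import uasyncio as asyncio  # "import asyncio" on CPython

async def blink(name, period):
    for _ in range(3):
        print(name, "tick")
        await asyncio.sleep(period)

async def main():
    # Each coroutine is wrapped in a Task object and queued on the run loop.
    t1 = asyncio.create_task(blink("a", 0.1))
    t2 = asyncio.create_task(blink("b", 0.2))
    await t1
    await t2

asyncio.run(main())
```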
Also support MP_STREAM_GET_FILENO ioctl. The stdio flush change was done
previously for the unix port in 3e0b46b9af.
These changes make this POSIX file implementation equivalent to the unix
file implementation.
Fixes UDP non-blocking recv so it returns EAGAIN instead of ETIMEDOUT.
Timeout waiting for incoming data is also improved by replacing 100ms delay
with poll_sockets(), as is done in other parts of this module.
Fixes issue #5759.
This commit changes the BLE _IRQ_SCAN_RESULT data from:
addr_type, addr, connectable, rssi, adv_data
to:
addr_type, addr, adv_type, rssi, adv_data
This allows _IRQ_SCAN_RESULT to handle all scan result types (not just
connectable and non-connectable passive scans), and to distinguish between
them using adv_type which is an integer taking values 0x00-0x04 per the BT
specification.
This is a breaking change to the API, albeit a very minor one: the existing
connectable value was a boolean and True now becomes 0x00, False becomes
0x02.
Documentation is updated and a test added.
Fixes #5738.
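A hedged sketch of an IRQ handler using the new tuple layout (the numeric
value of _IRQ_SCAN_RESULT below is a placeholder; use the one from your
bindings):
```
from micropython import const

_IRQ_SCAN_RESULT = const(5)  # placeholder value

def bt_irq(event, data):
    if event == _IRQ_SCAN_RESULT:
        addr_type, addr, adv_type, rssi, adv_data = data
        # adv_type per the BT spec: 0x00 ADV_IND, 0x01 ADV_DIRECT_IND,
        # 0x02 ADV_SCAN_IND, 0x03 ADV_NONCONN_IND, 0x04 SCAN_RSP.
        if adv_type in (0x00, 0x01):
            print("connectable device", bytes(addr), rssi)
```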
This commit ensures that the BLE stack is active before allowing operations
that may otherwise crash if it's not active. It also clarifies the state
better (adding the "stopping" state) and renames mp_bluetooth_is_enabled to
the more self-explanatory mp_bluetooth_is_active.
This makes a cleaner separation between the driver, HCI UART and BT stack.
Also updated the naming to be more consistent (mp_bluetooth_hci_*).
Work done in collaboration with Jim Mussared aka @jimmo.
Move extmod/modbluetooth_nimble.* to extmod/nimble. And move common
Makefile lines to extmod/nimble/nimble.mk (which was previously only used
by stm32). This allows (upcoming) btstack to follow a similar structure.
Work done in collaboration with Jim Mussared aka @jimmo.
This provides a more consistent C-level API to raise exceptions, ie moving
away from nlr_raise towards mp_raise_XXX. It also reduces code size by a
small amount on some ports.
Most types are in rodata/ROM, and mp_obj_base_t.type is a constant pointer,
so enforce this const-ness throughout the code base. If a type ever needs
to be modified (eg a user type) then a simple cast can be used.
The struct member "dest" should never be less than "destStart", so their
difference is never negative. Cast as such to make the comparison
explicitly unsigned, ensuring the compiler produces the correct comparison
instruction, and avoiding any compiler warnings.
Move webrepl support code from ports/esp8266/modules into extmod/webrepl
(to be alongside extmod/modwebrepl.c), and use frozen manifests to include
it in the build on esp8266 and esp32.
A small modification is made to webrepl.py to make it work on non-ESP
ports, i.e. don't call dupterm_notify if not available.
The size of the event ringbuf was previously fixed to a compile-time config
value, but it's sometimes necessary to increase this for applications that
have large characteristic buffers to read, or many events at once.
With this commit the size can be set via BLE.config(rxbuf=512), for
example. This also resizes the internal event data buffer which sets the
maximum size of incoming data passed to the event handler.
Protocols are nice, but there is no way for C code to verify whether
a type's "protocol" structure actually implements some particular
protocol. As a result, you can pass an object that implements the
"vfs" protocol to one that expects the "stream" protocol, and the
opposite of awesomeness ensues.
This patch adds an OPTIONAL (but enabled by default) protocol identifier
as the first member of any protocol structure. This identifier is
simply a unique QSTR chosen by the protocol designer and used by each
protocol implementer. When checking for protocol support, instead of
just checking whether the object's type has a non-NULL protocol field,
use `mp_proto_get` which implements the protocol check when possible.
The existing protocols are now named:
protocol_framebuf
protocol_i2c
protocol_pin
protocol_stream
protocol_spi
protocol_vfs
(most of these are unused in CP and are just inherited from MP; vfs and
stream are definitely used though)
I did not find any crashing examples, but here's one to give a flavor of what
is improved, using `micropython_coverage`. Before the change,
the vfs "ioctl" protocol is invoked, and the result is not intelligible
as json (but it could have resulted in a hard fault, potentially):
>>> import uos, ujson
>>> u = uos.VfsPosix('/tmp')
>>> ujson.load(u)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: syntax error in JSON
After the change, the vfs object is correctly detected as not supporting
the stream protocol:
>>> ujson.load(u)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: stream operation not supported
This allows the user to explicitly select the behaviour of the write to the
remote peripheral. This is needed for peripherals that have
characteristics with WRITE_NO_RESPONSE set (instead of normal WRITE). The
function's signature is now:
BLE.gattc_write(conn_handle, value_handle, data, mode=0)
mode=0 means write without response, while mode=1 means write with
response. The latter was the original behaviour so this commit is a change
in behaviour of this method, and one should specify 1 as the 4th argument
to get back the old behaviour.
In the future there could be more modes supported, such as long writes.
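A short usage sketch of the two modes (conn_handle and value_handle are
placeholders):
```
# Write without response (mode=0, the new default).
ble.gattc_write(conn_handle, value_handle, b"\x01")

# Write with response (the previous behaviour); pass mode=1 explicitly.
ble.gattc_write(conn_handle, value_handle, b"\x01", 1)
```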
The default protection for the BLE ringbuf is to use
MICROPY_BEGIN_ATOMIC_SECTION, which disables all interrupts. On stm32 it
only needs to disable the lowest priority IRQ, pendsv, because that's the
IRQ level at which the BLE stack is driven.
This removes the limit on data coming in from a BLE.gattc_read() request,
or a notify with payload (coming in to a central). In both cases the data
coming in to the BLE callback is now limited only by the available data in
the ringbuf, whereas before it was capped at (default hard coded) 20 bytes.
Instead of enqueue_irq() inspecting the ringbuf to decide whether to
schedule the IRQ callback (if ringbuf is empty), maintain a flag that knows
if the callback is on the schedule queue or not. This saves about 150
bytes of code (for stm32 builds), and simplifies all uses of enqueue_irq()
and schedule_ringbuf().
The address, adv payload and uuid fields of the event are pre-allocated by
modbluetooth, and reused in the IRQ handler. Simplify this and move all
storage into the `mp_obj_bluetooth_ble_t` instance.
This now allows users to hold on to a reference to these instances without
crashes, although they may be overwritten by future events. If they want
to hold onto the values longer term they need to copy them.
Remove existing scan result events from the ringbuf if the ringbuf is full
and we're trying to enqueue any other event. This is needed so that events
such as SCAN_COMPLETE are always put on the ringbuf.
This commit removes the Makefile-level MICROPY_FATFS config and moves the
MICROPY_VFS_FAT config to the Makefile level to replace it. It also moves
the include of the oofatfs source files in the build from each port to a
central place in extmod/extmod.mk.
For a port to enable VFS FAT support it should now set MICROPY_VFS_FAT=1
at the level of the Makefile. This will include the relevant oofatfs files
in the build and set MICROPY_VFS_FAT=1 at the C (preprocessor) level.
POSIX poll should always return POLLERR and POLLHUP in revents, regardless
of whether they were requested in the input events flags.
See issues #4290 and #5172.
- Add an explicit way to set the size of a value's internal buffer,
  replacing `ble.gatts_write(handle, bytes(size))` (although that
  still works).
- Add an "append" mode for values, which means that remote writes
  will append to the buffer (see the sketch below).
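A sketch of how this might be used, assuming the new functionality is
exposed as a gatts_set_buffer() method on the BLE object:
```
# Reserve a 100-byte buffer for the characteristic value, in append mode,
# so that successive remote writes accumulate in the buffer.
ble.gatts_set_buffer(value_handle, 100, True)

# Read back the accumulated writes locally.
data = ble.gatts_read(value_handle)
```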
This commit adds helper functions to call readblocks/writeblocks with a
fourth argument, the byte offset within a block.
Although the mp_vfs_blockdev_t struct has grown here by 2 machine words, in
all current uses of this struct within this repository it still fits within
the same number of GC blocks.
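A hedged sketch of a block device implementing the extended interface that
these helpers call into (readblocks/writeblocks taking a third byte-offset
argument):
```
class RAMBlockDev:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.data = bytearray(block_size * num_blocks)

    def readblocks(self, block_num, buf, offset=0):
        addr = block_num * self.block_size + offset
        buf[:] = self.data[addr:addr + len(buf)]

    def writeblocks(self, block_num, buf, offset=0):
        addr = block_num * self.block_size + offset
        self.data[addr:addr + len(buf)] = buf

    def ioctl(self, op, arg):
        if op == 4:  # number of blocks
            return len(self.data) // self.block_size
        if op == 5:  # block size in bytes
            return self.block_size
```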
For consistency with "umachine". Now that weak links are enabled
by default for built-in modules, this should be a no-op, but allows
extension of the bluetooth module by user code.
Also move registration of ubluetooth to objmodule rather than having it be
port-specific.
NimBLE doesn't actually copy this data; it requires it to stay live.
Only dereference it when we register a new set of services.
Fixes #5226
This will allow incrementally adding services in the future, so
rename `reset` to `append` to make it clearer.
Internally change the representation of UUIDs to LE uint8* to simplify this.
This allows UUIDs to be easily used in BLE payloads (such as advertising).
Ref: #5186
This avoids a confusing ENOMEM raised from gap_advertise if there is
currently an active connection. This refers to the static connection
buffer pre-allocated by Nimble (nothing to do with MicroPython heap
memory).
This is to more accurately match the BLE spec, where intervals are
configured in units of channel hop time (625us). When it was
specified in ms, not all "valid" intervals were able to be
specified.
Now that we're also allowing configuration of scan interval, this
commit updates advertising to match.
This adds two additional optional kwargs to `gap_scan()`:
- `interval_us`: How long between scans.
- `window_us`: How long to scan for during a scan.
The default with NimBLE is an 11.25ms window with a 1.28s interval.
Changing these parameters is important for detecting low-frequency
advertisements (e.g. beacons).
Note: these params are in microseconds, not milliseconds in order
to allow the 625us granularity offered by the spec.
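For example (a sketch; ble is an active BLE instance and the arguments
follow the gap_scan(duration_ms, interval_us, window_us) ordering):
```
# Scan for 10 seconds, waking the radio every 30000us for a 30000us window,
# i.e. effectively continuous scanning, to catch infrequent beacons.
ble.gap_scan(10000, 30000, 30000)
```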
Other ports (e.g. ESP32) provide a complete NimBLE implementation (i.e.
they don't need to use the code in extmod/nimble). This change extracts
out the bits that aren't needed on those ports:
- malloc/free/realloc for Nimble memory.
- pendsv poll handler
- depowering the cywbt
Also cleans up the root pointer management.
With this patch alignment is done relative to the start of the buffer that
is being unpacked, not the raw pointer value, as per CPython.
Fixes issue #3314.
As per the README.md of the upstream source at
https://github.com/B-Con/crypto-algorithms, this source code was released
into the public domain, so make that explicit in the copyright line in the
header.
Enabled via MICROPY_PY_URE_DEBUG, disabled by default (but enabled on unix
coverage build). This is a rarely used feature that costs a lot of code
(500-800 bytes flash). Debugging of regular expressions can be done
offline with other tools.
The helper function exec_user_callback executes within the context of an
lwIP C callback, and the user (Python) callback to be scheduled may want to
perform further TCP/IP actions, so the latter should be scheduled to run
outside the lwIP context (otherwise it's effectively a "hard IRQ" and such
callbacks have lots of restrictions).
If tcp_write returns ERR_MEM then it's not a fatal error but instead means
the caller should retry the write later on (and this is what lwIP's netconn
API does).
This fixes problems where a TCP send would raise OSError(ENOMEM) in
situations where the TCP/IP stack is under heavy load. See eg issues #1897
and #1971.
Setting MICROPY_PY_USSL and MICROPY_SSL_MBEDTLS at the Makefile-level will
now build mbedTLS from source and include it in the build, with the ussl
module using this TLS library. Extra settings like MBEDTLS_CONFIG_FILE may
need to be provided by a given port.
If a port wants to use its own mbedTLS library then it should not set
MICROPY_SSL_MBEDTLS at the Makefile-level but rather set it at the C level,
and provide the library as part of the build in its own way (see eg esp32
port).
In d5f0c87bb9 this call to tcp_poll() was
added to put a timeout on closing TCP sockets. But after calling
tcp_close() the PCB may be freed and therefore invalid, so tcp_poll() can
not be used at that point. As a fix this commit calls tcp_poll() before
closing the TCP PCB. If the PCB is subsequently closed and freed by
tcp_close() or tcp_abort() then the PCB will not be on any active list and
the callback will not be executed, which is the desired behaviour (the
_lwip_tcp_close_poll() callback only needs to be called if the PCB remains
active for longer than the timeout).
Commit 2848a613ac introduced a bug where
lwip_socket_free_incoming() accessed pcb.tcp->state after the PCB was
closed. The state may have changed due to that close call, or the PCB may
be freed and therefore invalid. This commit fixes that by calling
lwip_socket_free_incoming() before the PCB is closed.
For example: i2c.writevto(addr, (buf1, buf2)). This allows data composed
of separate buffers, such as a command followed by a large amount of data,
to be written efficiently (with respect to memory).
It consists of:
1. A "do_handshake" param (default True) to wrap_socket(). If it's False,
the handshake won't be performed by wrap_socket(), as would normally be
done in a blocking way. Instead, the SSL socket can be set to non-blocking
mode, and the handshake will be performed before the first read/write
request (by just returning EAGAIN to these requests, while instead
reading/writing/processing the handshake over the connection).
Unfortunately, axTLS doesn't really support non-blocking handshake
correctly, so while the framework for this is implemented on MicroPython's
module side, in the case of axTLS it won't work reliably.
2. Implementation of the .setblocking() method. It must be called on the
SSL socket for blocking vs non-blocking operation to be handled correctly
(for example, it's not enough to wrap a non-blocking socket with
wrap_socket() - the resulting SSL socket won't itself be non-blocking).
Note that .setblocking() propagates the call to the underlying socket
object, as expected.
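A hedged sketch of the intended usage pattern (the host name is a
placeholder; as noted above, with axTLS this won't be fully reliable):
```
import usocket, ussl

addr = usocket.getaddrinfo("example.com", 443)[0][-1]
s = usocket.socket()
s.connect(addr)

# Don't block in wrap_socket(); the handshake is driven by later reads/writes.
ss = ussl.wrap_socket(s, do_handshake=False)
ss.setblocking(False)

buf = ss.read(128)
if buf is None:
    # EAGAIN-like: the handshake (or application data) isn't ready yet;
    # poll/retry later instead of blocking.
    pass
```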
For this, add a wrap_socket(do_handshake=False) param. CPython doesn't
have such a param on its module-level wrap_socket() function; at
SSLContext.wrap_socket() it has a do_handshake_on_connect param, but that
name is uselessly long.
Beyond that, make write() handle not just MBEDTLS_ERR_SSL_WANT_WRITE, but
also MBEDTLS_ERR_SSL_WANT_READ, as during the handshake a write call may
actually be preempted by the need to read the next handshake message from
the peer. Likewise for read(). And even after the initial negotiation,
situations like that may happen, e.g. with renegotiation. Both
MBEDTLS_ERR_SSL_WANT_READ and MBEDTLS_ERR_SSL_WANT_WRITE are however mapped
to the same None return code. The idea is that if the same read()/write()
method is called repeatedly, progress will be made step by step anyway.
The caveat is if the user wants to add the underlying socket to
uselect.poll(). To be reliable in this case, the socket should be polled
for both POLL_IN and POLL_OUT, as we don't know the actual expected
direction. But that's actually problematic. Consider for example that
write() ends with MBEDTLS_ERR_SSL_WANT_READ, which gets converted to None.
We put the underlying socket on poll using POLL_IN|POLL_OUT, but that
probably returns immediately with POLL_OUT, as the underlying socket is
writable. We call the same ussl write() again, which again results in
MBEDTLS_ERR_SSL_WANT_READ, etc. We thus go into a busy-loop.
So, the handling in this patch is temporary and needs fixing. But the
exact way to fix it is not clear. One way is to provide an explicit
function for the handshake (CPython has do_handshake()), and let *that*
return distinct codes like WANT_READ/WANT_WRITE. But as mentioned above,
past the initial handshake such a situation may happen again with at least
renegotiation. So apparently the only robust solution is to return "out of
band" special sentinels like WANT_READ/WANT_WRITE from read()/write()
directly. CPython throws exceptions for these, but those are too expensive
to adopt for an efficiency-conscious implementation like MicroPython.
In CPython the random module is seeded differently on each import, and so
the new macro option MICROPY_PY_URANDOM_SEED_INIT_FUNC allows such
behaviour to be implemented.
Since commit da938a83b5 the tcp_arg() that is
set for the new connection is the new connection itself, and the parent
listening socket is found in the pcb->connected entry.
Use uos.dupterm for REPL configuration of the main USB_VCP(0) stream on
dupterm slot 1, if USB is enabled. This means dupterm can also be used to
disable the boot REPL port if desired, via uos.dupterm(None, 1).
For efficiency this adds a simple hook to the global uos.dupterm code to
work with streams that are known to be native streams.
Some users of this module may require the LwIP stack to run at an elevated
priority, to protect against concurrency issues with processing done by the
underlying network interface. Since LwIP doesn't provide such protection
it must be done here (the other option is to run LwIP in a separate thread,
and use thread protection mechanisms, but that is a more heavyweight
solution).
The bug in polling for readability was: if alloc==0 and tcp.item==NULL
then the code would incorrectly check tcp.array[iget], which is an invalid
dereference when alloc==0. This patch refactors the code to use a helper
function lwip_socket_incoming_array() to return the correct pointer for the
incoming connection array.
Fixes issue #4511.
This feature is controlled at compile time by MICROPY_PY_URE_SUB, disabled
by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by
MICROPY_PY_URE_MATCH_SPAN_START_END, disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by MICROPY_PY_URE_MATCH_GROUPS,
disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
As mentioned in #4450, `websocket` was experimental with a single intended
user, `webrepl`. Therefore, we'll make this change without a weak
link `websocket` -> `uwebsocket`.
Previously crypto-algorithms impl was included even if MICROPY_SSL_MBEDTLS
was in effect, thus we relied on the compiler/linker to cut out the unused
functions.
Python defines warnings as belonging to categories, where a category is a
warning type (descending from the exception type). This is useful as it,
e.g., allows warnings to be disabled selectively and user-defined warning
types to be provided. So, implement this in MicroPython, except that
categories are represented just with strings. However, enough hooks are
left to implement categories differently per-port (e.g. as types), without
the need to patch each and every usage.
This header is deprecated as of mbedtls 2.8.0, as shipped with Ubuntu
18.04. Including it leads to a #warning, which is promoted to an error
with the uPy compile options.
Note that the current version of mbedtls is 2.14 at the time of writing.