This PR refines the _bleio API. It was originally motivated by
the addition of a new CircuitPython service that enables reading
and modifying files on the device. Moving the BLE lifecycle outside
of the VM motivated a number of changes to remove heap allocations
in some APIs.
It also motivated unifying connection initiation into the Adapter class
rather than the Central and Peripheral classes, which have been removed.
Adapter now handles the GAP portion of BLE including advertising, which
has moved but is largely unchanged, and scanning, which has been enhanced
to return an iterator of filtered results.
Once a connection is created (either by us (aka Central) or a remote
device (aka Peripheral)) it is represented by a new Connection class.
This class knows the current connection state and can discover and
instantiate remote Services along with their Characteristics and
Descriptors.
Relates to #586
If kw_args is NULL then memcpy() gets a NULL source argument.
This is undefined behavior under the C standard, even if 0 bytes
are being copied.
This problem was found using clang 7's scan-build static analyzer.
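One way to avoid this is to guard the copy so memcpy() never sees a NULL
source, even for a zero-length copy. An illustrative sketch (the helper
and its arguments are not the actual code):

    #include <stddef.h>
    #include <string.h>

    // Illustrative: tolerate a NULL source when the length is zero,
    // avoiding the undefined memcpy() call that scan-build flagged.
    static void copy_kw_args(void *dst, const void *src, size_t n) {
        if (n != 0 && src != NULL) {
            memcpy(dst, src, n);
        }
    }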
Left shift of negative numbers is undefined in the "C" standard. Multiplying
by 128 has the intended effect (in the absence of integer overflow, anyway),
can be implemented using the same shift instruction, but does not invoke
undefined behavior.
This problem was found using clang 7's scan-build static analyzer.
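Schematically the change is of this form (the helper is illustrative):

    #include <stdint.h>

    // Illustrative: scale a possibly-negative value by 128.
    static int32_t scale_by_128(int32_t x) {
        // return x << 7;   // undefined behavior when x is negative
        return x * 128;     // same intended result; still compiles to a shift
    }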
The remaining assignment was added in upstream micropython; the
deleted assignment was added in circuitpython as part of the long-lived
object area feature. During the merge, the redundant assignment
was not removed.
(since collected is a local variable and no pointers to it escape,
it doesn't seem that the placement of the assignment before or after
GC_ENTER() matters)
This diagnostic was found by clang 7's scan-build static analyzer.
Left shift of negative numbers is undefined in the "C" standard. Multiplying
by 256 has the intended effect (in the absence of integer overflow, anyway),
can be implemented using the same shift instruction, but does not invoke
undefined behavior.
This problem was found using clang 7's scan-build static analyzer.
While finding sources of clicks and buzzes in nrf i2sout, I identified
this site as one which could be long running. Reproducer code was to
play a 22.05kHz sample and repeatedly print `os.listdir('')`
Testing performed: That the shipped .mpy files on a PyPortal (CP 4.x)
still work (play audio) with this branch, instead of erroring because
`WaveFile` can't be found in `audioio`.
Flash usage grew by 28 bytes. (I expected 24; there must be some other
effect on size or alignment that I didn't predict.)
In #2013, @danh says:
My choice of where to put the semicolon is deliberate,
so that we can say
RUN_BACKGROUND_TASKS;
not have a redundant semicolon, and not confuse C code formatting.
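A minimal sketch of that convention, with an illustrative expansion (the
real macro's body differs):

    // Defined without a trailing semicolon so the call site supplies it
    // and the macro reads like an ordinary C statement.
    #define RUN_BACKGROUND_TASKS (run_background_tasks())

    extern void run_background_tasks(void);   // illustrative

    void wait_a_while(void) {
        for (int i = 0; i < 1000; i++) {
            RUN_BACKGROUND_TASKS;   // semicolon here, not inside the macro
        }
    }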
.. such as namedtuple and attrtuple objects. This is the same
predicate used elsewhere in the file to check for adequate compatibility
between the types.
This was discovered because `time.time()` crashed on the nrf port.
Closes: #2052
It is possible for this routine to expand some inputs, and in fact
it does for certain strings in the proposed Korean translation of
CircuitPython (#1858). I did not determine what the maximum
expansion is -- it's probably modest, like len()/7+2 bytes or
something -- so I tried to just make enc[] an adequate
over-allocation, and then ensured that all the strings in the
proposed ko.po now worked. The worst actual expansion seems to be a
string that goes from 65 UTF-8-encoded bytes to 68 compressed bytes
(+4.6%). Only a few out of all strings are reported as
non-compressed.
When nrf pwm audio is introduced, it will be called `audiopwmio`. To
enable code sharing with the existing (dac-based) `audioio`, factor
the sample and mixer types to `audiocore`.
INCOMPATIBLE CHANGE: Now, `Mixer`, `RawSample` and `WaveFile` must
be imported from `audiocore`, not `audioio`.
This also improves Palette so it stores the original RGB888 colors.
Lastly, it adds I2CDisplay as a display bus to talk over I2C. Particularly
useful for the SSD1306.
Fixes #1828. Fixes #1956.
If a native displayio object is accessed before its super().__init__()
has been called, then a placeholder is given that will cause a crash if
accessed. This is tricky to get right, so we detect this case and raise
a NotImplementedError instead of crashing.
Fixes #1881
For both small and long integers, raise an exception if calling
struct.pack, adding an element to an array.array, or formatting an int
with int.to_bytes would overflow the requested size.
* Update pybadge pins and flash for rev D
* TileGrid now validates the type of the pixel_shader.
* Display actually handles incoming subclass objects.
* MicroPython will inspect native parents to see if special
accessors are used.
This fixes a crash on boards with built-in displays which statically
allocate the display bus. When the pointer is provided to never
free, it tries to allocate on the non-existent heap and crashes.
This feature is controlled at compile time by MICROPY_PY_URE_SUB, disabled
by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by
MICROPY_PY_URE_MATCH_SPAN_START_END, disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by MICROPY_PY_URE_MATCH_GROUPS,
disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This changes a number of things in displayio:
* Introduces BuiltinFont and Glyph so the built-in font can be used by libraries. For boards with
a font it is available as board.TERMINAL_FONT. Fixes #1172
* Remove _load_row from Bitmap in favor of bitmap[] access. Index can be x/y tuple or overall index. Fixes #1191
* Add width and height properties to Bitmap.
* Add insert and [] access to Group. Fixes #1518
* Add index param to pop on Group.
* Terminal no longer takes unicode character info. It takes a BuiltinFont instead.
* Fix Terminal's handling of [###D vt100 commands used when up arrowing into repl history.
* Add x and y positions to Group, as well as scale.
* Add bitmap accessor for BuiltinFont
This creates a common safe mode mechanic that ports can share.
As a result, the nRF52 now has safe mode support as well.
The common safe mode adds a 700ms delay at startup; a reset during
that window will cause a reset into safe mode. The window is indicated
by a yellow status pixel and by flashing the single LED three times.
A couple NeoPixel fixes are included for the nRF52 as well.
Fixes #1034. Fixes #990. Fixes #615.
The backtrace cannot be given because it relies on the validity
of the qstr data structures on the heap which may have been
corrupted.
In fact, it still can crash hard when the bytecode itself is
overwritten. To fix, we'd need a way to skip gathering the
backtrace completely.
This also increases the default stack size on M4s so it can
accommodate the stack needed by ASF4's nvm API.
This adds support for the OSError attributes: errno, strerror, filename and filename2.
CPython only sets errno if 2 arguments have been passed in. This has not been implemented here.
CPython OSError.args is capped at 2 items for backward compatibility reasons. This has not been
implemented here.
MICROPY_CPYTHON_COMPAT has to be enabled to get these attributes.
mp_common_errno_to_str() has been extended to check mp_errno_to_str() as well. This is done to ease
reuse for the strerror argument.
Add traceback chain to sys.exc_info()[2].
No actual frame info is added, but just enough to recreate the printed
exception traceback.
Used by the unittest module which collects errors and failures and prints
them at the end.
This gives access to the function underlying the bound method.
Used in the converted CPython stdlib logging.Formatter class to handle
overriding a default converter method bound to a class variable.
The method becomes bound when accessed from an instance of that class.
I didn't investigate why CircuitPython turns it into a bound method.
This reverts commit 869024dd6e.
Ctrl-C stopped producing KeyboardInterrupt with this change on CircuitPython.
The Unix and stm32 ports handle Ctrl-C differently, with a handler, which is
probably why they were not affected.
Fixes#1092
This saves code space in builds which use link-time optimization.
The optimization drops the untranslated strings and replaces them
with a compressed_string_t struct. It can then be decompressed to
a C string.
Builds without LTO work as well but include both untranslated
strings and compressed strings.
This work could be expanded to include QSTRs and loaded strings if
a compress method is added in C. It's tracked in #531.
Imports generate a lot of garbage, so cleaning it up immediately
reduces the likelihood that longer-lived data structures end up in
the middle of the heap.
Fixes#856
The existing mp_get_stream_raise() helper does explicit checks that the
input object is a real pointer object, has a non-NULL stream protocol, and
has the desired stream C method (read/write/ioctl). In most cases it is
not necessary to do these checks because it is guaranteed that the input
object has the stream protocol and desired C methods. For example, native
objects that use the stream wrappers (eg mp_stream_readinto_obj) in their
locals dict always have the stream protocol (or else they shouldn't have
these wrappers in their locals dict).
This patch introduces an efficient mp_get_stream() which doesn't do any
checks and just extracts the stream protocol struct. This should be used
in all cases where the argument object is known to be a stream. The
existing mp_get_stream_raise() should be used primarily to verify that an
object does have the correct stream protocol methods.
All uses of mp_get_stream_raise() in py/stream.c have been converted to use
mp_get_stream() because the argument is guaranteed to be a proper stream
object.
This patch improves efficiency of stream operations and reduces code size.
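Roughly, the unchecked accessor just follows the object's type to its
protocol struct, along these lines (a sketch assuming the usual py/obj.h
and py/stream.h types, not necessarily the exact code):

    // No type or NULL checks: extract the stream protocol directly.
    // Callers must already know the object is a stream.
    static inline const mp_stream_p_t *mp_get_stream(mp_const_obj_t self) {
        return (const mp_stream_p_t *)((const mp_obj_base_t *)MP_OBJ_TO_PTR(self))->type->protocol;
    }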
This patch changes dupterm to call the native C stream methods on the
connected stream objects, instead of calling the Python readinto/write
methods. This is much more efficient for native stream objects like UART
and webrepl and doesn't require allocating a special dupterm array.
This change is a minor breaking change from the user's perspective because
dupterm no longer accepts pure user stream objects to duplicate on. But
with the recent addition of uio.IOBase it is possible to still create such
classes just by inheriting from uio.IOBase, for example:
    import uio, uos

    class MyStream(uio.IOBase):
        def write(self, buf):
            # existing write implementation
            return len(buf)

        def readinto(self, buf):
            # existing readinto implementation
            return None

    uos.dupterm(MyStream())
Via the config value MICROPY_PY_UHASHLIB_SHA256. Defaults to enabled to
keep backwards compatibility.
Also add default value for the sha1 class, to at least document its
existence.
A user class derived from IOBase and implementing readinto/write/ioctl can
now be used anywhere a native stream object is accepted.
The mapping from C to Python is:
    stream_p->read  --> readinto(buf)
    stream_p->write --> write(buf)
    stream_p->ioctl --> ioctl(request, arg)
Among other things it allows the user to:
- create an object which can be passed as the file argument to print:
print(..., file=myobj), and then print will pass all the data to the
object via the objects write method (same as CPython)
- pass a user object to uio.BufferedWriter to buffer the writes (same as
CPython)
- use select.select on a user object
- register user objects with select.poll, in particular so user objects can
be used with uasyncio
- create user files that can be returned from user filesystems, and import
can import scripts from these user files
For example:

    import io

    class MyOut(io.IOBase):
        def write(self, buf):
            print('write', repr(buf))
            return len(buf)

    print('hello', file=MyOut())
The feature is enabled via MICROPY_PY_IO_IOBASE which is disabled by
default.
This patch adds the gc_sweep_all() function which does a garbage collection
without tracing any root pointers, so frees all the memory, and most
importantly runs any remaining finalisers.
This helps primarily for soft reset: it will close any open files, any open
sockets, and help to get the system back to a clean state upon soft reset.
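For illustration, a soft-reset path might use it roughly like this (the
surrounding function names are made up for the sketch):

    #include "py/gc.h"   // declares gc_sweep_all()

    extern void deinit_peripherals(void);   // illustrative
    extern void reinit_heap_and_vm(void);   // illustrative

    // Illustrative soft-reset sequence: with no roots traced, every block
    // is swept, so files and sockets with finalisers get closed first.
    void soft_reset(void) {
        deinit_peripherals();
        gc_sweep_all();   // frees everything and runs any remaining finalisers
        reinit_heap_and_vm();
    }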
This patch is a code optimisation, trading text bytes for speed. On
pyboard it's an increase of 0.06% in code size for a gain (in pystone
performance) of roughly 6.5%.
The patch optimises load/store/delete of attributes in user defined classes
by not looking up special accessors (@property, __get__, __delete__,
__set__, __setattr__ and __getattr__) if they are guaranteed not to exist in
the class.
Currently, if you do my_obj.foo() then the runtime has to do a few checks
to see if foo is a property or has __get__, and if so delegate the call.
And for stores things like my_obj.foo = 1 has to first check if foo is a
property or has __set__ defined on it.
Doing all those checks each and every time the attribute is accessed has a
performance penalty. This patch eliminates all those checks for cases when
it's guaranteed that the checks will always fail, ie no attributes are
properties nor have any special accessor methods defined on them.
To make this guarantee it checks all attributes of a user-defined class
when it is first created. If any of the attributes of the user class are
properties or have special accessors, or any of the base classes of the
user class have them, then it sets a flag in the class to indicate that
special accessors must be checked for. Then in the load/store/delete code
it checks this flag to see if it can take the shortcut and optimise the
lookup.
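In pseudo-C, the load path then looks something like this (the flag and
helper names are illustrative, not the real identifiers):

    // Sketch of the fast/slow split described above.
    mp_obj_t instance_load_attr(mp_obj_instance_t *self, qstr attr) {
        if (!(self->base.type->flags & TYPE_FLAG_HAS_SPECIAL_ACCESSORS)) {
            // Fast path: the class and its bases were checked at creation
            // time and have no properties or special accessor methods, so
            // a plain dict lookup is enough.
            return lookup_in_instance_and_class_dicts(self, attr);
        }
        // Slow path: full checks for properties and descriptor methods.
        return lookup_with_special_accessors(self, attr);
    }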
It's an optimisation that's pretty widely applicable because it improves
lookup performance for all methods of user defined classes, and stores of
attributes, at least for those that don't have special accessors. And it
allows descriptors to be enabled with minimal additional runtime overhead
when they are not used by a particular user class.
There is one restriction on dynamic class creation that has been introduced
by this patch: a user-defined class cannot go from zero special accessors
to one special accessor (or more) after that class has been subclassed. If
the script attempts this an AttributeError is raised (see addition to
tests/misc/non_compliant.py for an example of this case).
The cost in code space bytes for the optimisation in this patch is:
    unix x64:    +528
    unix nanbox: +508
    stm32:       +192
    cc3200:      +200
    esp8266:     +332
    esp32:       +244
Performance tests that were done:
- on unix x86-64, pystone improved by about 5%
- on pyboard, pystone improved by about 6.5%, from 1683 up to 1794
- on pyboard, bm_chaos (from CPython benchmark suite) improved by about 5%
- on esp32, pystone improved by about 30% (but there are caching effects)
- on esp32, bm_chaos improved by about 11%
This VFS component allows a host POSIX filesystem to be mounted within the uPy
VFS sub-system. All traditional POSIX file access then goes through the VFS,
making it possible to sandbox a uPy process to a certain sub-dir of the host
system, as well as to mount other filesystem types alongside the host
filesystem.
For a long time now, mp_obj_type_t no longer refers explicitly to
mp_stream_p_t but rather to an abstract "const void *protocol". So there's
no longer any need to define mp_stream_p_t in obj.h and it can go with all
its associated definitions in stream.h. Pretty much all users of this type
will already include the stream header.
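For reference, the protocol struct being moved is approximately the
familiar trio of C methods plus a text flag:

    typedef struct _mp_stream_p_t {
        // On error these return MP_STREAM_ERROR and fill in *errcode.
        mp_uint_t (*read)(mp_obj_t obj, void *buf, mp_uint_t size, int *errcode);
        mp_uint_t (*write)(mp_obj_t obj, const void *buf, mp_uint_t size, int *errcode);
        mp_uint_t (*ioctl)(mp_obj_t obj, mp_uint_t request, uintptr_t arg, int *errcode);
        mp_uint_t is_text : 1;   // default is bytes; set for a text stream
    } mp_stream_p_t;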
We can provide a basic version of mp_errno_to_str even if the uerrno
module won't be provided. Rather than looking errno names up in the
uerrno module's globals dict, we'll just rely on a simple mapping in the
function itself.
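A minimal sketch of that idea (the particular entries and the helper
shape are illustrative):

    #include <errno.h>
    #include <stddef.h>

    // Illustrative: map a handful of common errno values straight to
    // their names instead of searching uerrno's globals dict.
    static const char *common_errno_name(int err) {
        switch (err) {
            case EPERM:  return "EPERM";
            case ENOENT: return "ENOENT";
            case EIO:    return "EIO";
            case EINVAL: return "EINVAL";
            default:     return NULL;   // caller falls back to a numeric message
        }
    }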
The code_state.old_globals variable is there to save the globals state so
should be used for this purpose, to avoid the need for additional local
variables on the C stack.
Without this, if GC threshold is hit and there is not enough memory left to
satisfy the request, gc_collect() will run a second time and the search for
memory will happen again and will fail again.
Thanks to @adritium for pointing out this issue, see #3786.
Under ubsan, when evaluating hash(-0.) the following diagnostic occurs:
../../py/objfloat.c:102:15: runtime error: negation of
-9223372036854775808 cannot be represented in type 'mp_int_t' (aka
'long'); cast to an unsigned type to negate this value to itself
So do just that, to tell the compiler that we want to perform this
operation using modulo arithmetic rules.
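Schematically, with stand-in typedefs for the real ones:

    #include <stdint.h>

    typedef int64_t  mp_int_t;    // stand-in for the port's definition
    typedef uint64_t mp_uint_t;   // stand-in for the port's definition

    static mp_int_t negate_hash(mp_int_t h) {
        // return -h;                      // UB when h is the most negative value
        return (mp_int_t)(-(mp_uint_t)h);  // modulo arithmetic: that value maps to itself
    }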
Before this, ubsan would detect a problem when executing
hash(006699999999999999999999999999999999999999999999999999999999999999999999)
../../py/mpz.c:1539:20: runtime error: left shift of 1067371580458 by
32 places cannot be represented in type 'mp_int_t' (aka 'long')
When the overflow does occur it now happens as defined by the rules of
unsigned arithmetic.
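The change is essentially to perform the shift on the unsigned type,
along these lines (the types shown are stand-ins):

    #include <stdint.h>

    static int64_t shift_digit(int64_t dig, unsigned places) {
        // return dig << places;                    // UB if the result overflows
        return (int64_t)((uint64_t)dig << places);  // wraps per unsigned rules
    }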
When computing e.g. hash(0.4e3) with ubsan enabled, a diagnostic like the
following would occur:
../../py/objfloat.c:91:30: runtime error: shift exponent 44 is too
large for 32-bit type 'int'
By casting constant "1" to the right type the intended value is preserved.
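Schematically, the fix widens the constant before shifting (the type is
a stand-in for the real one):

    #include <stdint.h>

    static int64_t pow2(unsigned shift) {
        // return 1 << shift;          // UB once shift reaches the width of int
        return (int64_t)1 << shift;    // shift is performed in the wider type
    }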