Some downstream projects may use tags in their repositories for more than
just designating MicroPython releases. In those cases, the
makeversionhdr.py script would end up using a different tag than intended.
So tell `git describe` to only match tags that look like a MicroPython
version tag, such as `v1.12` or `v2.0`.
This already begins obscuring things, because now there are two sets of
shared-module functions for manipulating the same structure, e.g.,
common_hal_canio_remote_transmission_request_get_id and
common_hal_canio_message_get_id
Calling the bytes constructor on a bytes object returns the original bytes
object. This saves allocating a new instance, and matches CPython.
Signed-off-by: Iyassou Shimels <s.iyassou@gmail.com>
New contributor @mdroberts1243 encountered an interesting problem in
which the argument they had named "column_underscore_and_page_addressing"
simply couldn't be used; I discovered that internally this had been
transformed into "column_underscore∧page_addressing", because QSTR
makes _ENTITY_ stand for the same thing as &ENTITY; does in HTML.
This might be nice for some things, but we don't want it here!
I was unable to find a sensible way to "escape" and prevent this entity
coding, so instead I ripped out support for the _and_ and _or_ escapes.
Tested & working:
* Send standard packets
* Receive standard packets (1 FIFO, no filter)
Interoperation between SAM E54 Xplained running this tree and
MicroPython running on STM32F405 Feather with an external
transceiver was also tested.
Many other aspects of a full implementation are not yet present,
such as error detection and recovery.
Discord user Folknology encountered a problem building with Python 3.6.9,
`TypeError: ord() expected a character, but string of length 0 found`.
I was able to reproduce the problem using Python 3.5*, and discovered that
the meaning of the regular expression `"|."` had changed in 3.7. Before,
```
>>> [m.group(0) for m in re.finditer("|.", "hello")]
['', '', '', '', '', '']
```
After:
```
>>> [m.group(0) for m in re.finditer("|.", "hello")]
['', 'h', '', 'e', '', 'l', '', 'l', '', 'o', '']
```
Check if `words` is empty and if so use `"."` as the regular expression
instead. This gives the same result on both versions:
```
['h', 'e', 'l', 'l', 'o']
```
and fixes the generation of the huffman dictionary.
Folknology verified that this fix worked for them.
* I could easily install 3.5 but not 3.6. 3.5 reproduced the same problem.
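A minimal sketch of the fix described above (the function and argument
names are hypothetical, not the actual makeqstrdata.py code):
```python
import re

def split_into_words(text, words):
    # Alternation of known words plus "any character"; when the word list
    # is empty, fall back to "." so Python 3.5/3.6 and 3.7+ agree.
    if words:
        pattern = "|".join(re.escape(w) for w in words) + "|."
    else:
        pattern = "."
    return [m.group(0) for m in re.finditer(pattern, text)]

print(split_into_words("hello", []))      # ['h', 'e', 'l', 'l', 'o']
print(split_into_words("hello", ["ll"]))  # ['h', 'e', 'll', 'o']
```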
This construct (which I added without sufficient testing,
apparently) is only supported in Python 3.7 and newer. Make it
optional so that this script works on other Python versions. This
means that if you have a system with non-UTF-8 encoding you will
need to use Python 3.7.
In particular, this caused a problem building CircuitPython in
GitHub's ubuntu-18.04 virtual environment when Python 3.7 is not
explicitly installed. Cookiecutter-generated libraries call for Python
3.6:
```
- name: Set up Python 3.6
uses: actions/setup-python@v1
with:
python-version: 3.6
```
Since CircuitPython's own build calls for 3.8, this problem was not
detected.
This problem was also encountered by discord user mdroberts1243.
The failure I encountered was here:
https://github.com/jepler/Jepler_CircuitPython_udecimal/runs/1138045020?check_suite_focus=true
... while my step of "clone and build circuitpython unix port" is
unusual, I think the same problem would have affected "build assets"
if that step had been reached.
For time-based functions that work with absolute time there is the need for
an Epoch, to set the zero-point at which the absolute time starts counting.
Such functions include time.time() and filesystem stat return values. And
different ports may use a different Epoch.
To make it clearer what functions use the Epoch (whatever it may be), and
make the ports more consistent with their use of the Epoch, this commit
renames all Epoch related functions to include the word "epoch" in their
name (and remove references to "2000").
Along with this rename, the following things have changed:
- mp_hal_time_ns() is now specified to return the number of nanoseconds
since the Epoch, rather than since 1970 (but since this is an internal
function it doesn't change anything for the user).
- littlefs timestamps on the esp8266 have been fixed (they were previously
off by 30 years in nanoseconds).
Otherwise, there is no functional change made by this commit.
Signed-off-by: Damien George <damien@micropython.org>
Most users and the CI system are running in configurations where Python
configures stdout and stderr in UTF-8 mode. However, Windows is different,
setting values like CP1252. This led to a build failure on Windows, because
makeqstrdata printed Unicode strings to its stdout, expecting them to be
encoded as UTF-8.
This script is writing (stdout) to a compiler input file and potentially
printing messages (stderr) to a log or console. Explicitly configure stdout to
use utf-8 to get consistent behavior on all platforms, and configure stderr so
that if any log/diagnostic messages are printed that cannot be displayed
correctly, they are still displayed instead of creating an error while trying
to print the diagnostic information.
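A sketch of the kind of reconfiguration meant here (assuming CPython's
reconfigure()/TextIOWrapper APIs; not necessarily the exact code in
makeqstrdata.py):
```python
import io
import sys

# stdout carries compiler input and must always be UTF-8; stderr carries
# diagnostics and should never raise while printing them.
if hasattr(sys.stdout, "reconfigure"):  # Python 3.7+
    sys.stdout.reconfigure(encoding="utf-8")
    sys.stderr.reconfigure(errors="backslashreplace")
else:  # older interpreters: wrap the underlying binary buffers instead
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
    sys.stderr = io.TextIOWrapper(sys.stderr.buffer, errors="backslashreplace")
```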
I considered setting the encodings both to ascii, but this would just be
occasionally inconvenient to developers like me who want to show diagnostic
info on stderr and in comments while working with the compression code.
Closes: #3408
While checking whether we can enable -Wimplicit-fallthrough, I encountered
a diagnostic in mp_binary_set_val_array_from_int which led to discovering
the following bug:
```
>>> struct.pack("xb", 3)
b'\x03\x03'
```
That is, the next value (3) was used as the value of a padding byte, while
standard Python always fills "x" bytes with zeros (CPython gives
b'\x00\x03' here). I initially thought
this had to do with the unintentional fallthrough, but it doesn't.
Instead, this code would relate to an array.array with a typecode of
padding ('x'), which is ALSO not desktop Python compliant:
```
>>> array.array('x', (1, 2, 3))
array('x', [1, 0, 0])
```
Possibly this is dead code that used to be shared between struct-setting
and array-setting, but it no longer is.
I also discovered that the argument list length for struct.pack
and struct.pack_into were not checked, and that the length of binary data
passed to array.array was not checked to be a multiple of the element
size.
I have corrected all of these to conform more closely to standard Python
and revised some tests where necessary. Some tests for micropython-specific
behavior that does not conform to standard Python and is not present
in CircuitPython were deleted outright.
Massive savings. Thanks so much @ciscorn for providing the initial
code for choosing the dictionary.
This adds a bit of time to the build, both to find the dictionary
but also because (for reasons I don't fully understand), the binary
search in the compress() function no longer worked and had to be
replaced with a linear search.
I think this is because the intended invariant is that for codebook
entries that encode to the same number of bits, the entries are ordered
in ascending value. However, I mis-placed the transition from "words"
to "byte/char values" so the codebook entries for words are in word-order
rather than their code order.
Because this price is only paid at build time, I didn't care to determine
exactly where the correct fix was.
I also commented out a line to produce the "estimated total memory size"
-- at least on the unix build with TRANSLATION=ja, this led to a build
time KeyError trying to compute the codebook size for all the strings.
I think this occurs because some single unicode code point ('ァ') is
no longer present as itself in the compressed strings, due to always
being replaced by a word.
As promised, this seems to save hundreds of bytes in the German translation
on the trinket m0.
Testing performed:
- built trinket_m0 in several languages
- built and ran unix port in several languages (en, de_DE, ja) and ran
simple error-producing codes like ./micropython -c '1/0'
Prior to this commit, pow(-2, float('nan')) would return (nan+nanj), or
raise an exception on targets that don't support complex numbers. This is
fixed to return simply nan, as CPython does.
Signed-off-by: Damien George <damien@micropython.org>
This is consistent with the other 'micro' modules and allows implementing
additional features in Python via e.g. micropython-lib's sys.
Note this is a breaking change (not backwards compatible) for ports which
do not enable weak links, as "import sys" must now be replaced with
"import usys".
Compress common unicode bigrams by making code points in the range
0x80 - 0xbf (inclusive) represent them. Then, they can be greedily
encoded and the substituted code points handled by the existing Huffman
compression. Normally code points in the range 0x80-0xbf are not used
in Unicode, so we stake our own claim. Using the more arguably correct
"Private Use Area" (PUA) would mean that for scripts that only use
code points under 256 we would use more memory for the "values" table.
bigram means "two letters", and is also sometimes called a "digram".
It's nothing to do with "big RAM". For our purposes, a bigram represents
two successive unicode code points, so for instance in our build on
trinket m0 for english the most frequent are:
['t ', 'e ', 'in', 'd ', ...].
The bigrams are selected based on frequency in the corpus, but the
selection is not necessarily optimal, for these reasons I can think of:
* Suppose the corpus was just "tea" repeated 100 times. The
top bigrams would be "te" and "ea". However, because the two
overlap, only one of them could ever actually be used. Thus, some
bigrams might actually waste space
* I _assume_ this has to be why e.g., bigram 0x86 "s " is more
frequent than bigram 0x85 " a" in English for Trinket M0, because
sequences like "can't add" would get the "t " digram and then
be unable to use the " a" digram.
* And generally, if a bigram is frequent then so are its constituents.
Say that "i" and "n" both encode to just 5 or 6 bits, then the huffman
code for "in" had better compress to 10 or fewer bits or it's a net
loss!
* I checked though! "i" is 5 bits, "n" is 6 bits (lucky guess),
but the bigram 0x83 is also just 6 bits, so this one is a win of
5 bits for every "in", minus overhead. Yay, this round goes to team
compression.
* On the other hand, the least frequent bigram 0x9d " n" is 10 bits
long and its constituent code points are 4+6 bits so there's no
savings, but there is the cost of the table entry.
* and somehow 0x9f 'an' is never used at all!
With or without accounting for overlaps, there is some optimum number
of bigrams. Adding one more bigram uses at least 2 bytes (for the
entry in the bigram table; 4 bytes if code points >255 are in the
source text) and also needs a slot in the Huffman dictionary, so
adding bigrams beyond the optimum number makes compression worse again.
If it's an improvement, the fact that it's not guaranteed optimal
doesn't seem to matter too much. It just leaves a little more fruit
for the next sweep to pick up. Perhaps try adding the most frequent
bigram not yet present, until it doesn't improve compression overall.
Right now, de_DE is again the "fullest" build on trinket_m0. (It's
reclaimed that spot from the ja translation somehow) This change saves
104 bytes there, increasing free space about 6.8%. In the larger
(but not critically full) pyportal build it saves 324 bytes.
The specific number of bigrams used (32) was chosen as it is the max
number that fit within the 0x80..0xbf range. Larger tables would
require the use of 16 bit code points in the de_DE build, losing savings
overall.
(Side note: The most frequent letters in English have been said
to be: ETA OIN SHRDLU; but we have UAC EIL MOPRST in our corpus)
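A rough sketch of the greedy substitution step described above (the table
contents and helper name are made up; the real table is chosen from the
corpus at build time):
```python
# Hypothetical mapping of selected bigrams to code points in 0x80..0xbf.
BIGRAMS = {"t ": 0x80, "e ": 0x81, "in": 0x82, "d ": 0x83}

def substitute_bigrams(text):
    """Greedily replace known two-character sequences with single code
    points; the result then goes through the existing Huffman stage."""
    out = []
    i = 0
    while i < len(text):
        code = BIGRAMS.get(text[i:i + 2])
        if code is not None:
            out.append(chr(code))
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

# Each matched bigram now occupies one symbol instead of two.
print([hex(ord(c)) for c in substitute_bigrams("find the end")])
```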
A crash like the following occurs in the unix port:
```
Program received signal SIGSEGV, Segmentation fault.
0x00005555555a2d7a in mp_obj_module_set_globals (self_in=0x55555562c860 <ulab_user_cmodule>, globals=0x55555562c840 <mp_module_ulab_globals>) at ../../py/objmodule.c:145
145 self->globals = globals;
(gdb) up
#1 0x00005555555b2781 in mp_builtin___import__ (n_args=5, args=0x7fffffffdbb0) at ../../py/builtinimport.c:496
496 mp_obj_module_set_globals(outer_module_obj,
(gdb)
#2 0x00005555555940c9 in mp_import_name (name=824, fromlist=0x555555621f10 <mp_const_none_obj>, level=0x1) at ../../py/runtime.c:1392
1392 return mp_builtin___import__(5, args);
```
I don't understand how it doesn't happen on the embedded ports, because
the module object should reside in ROM and the assignment of self->globals
should trigger a Hard Fault.
By checking VERIFY_PTR, we know that the pointed-to data is on the heap
so we can do things like mutate it.
Updating to Black v20.8b1 there are two changes that affect the code in
this repository:
- If there is a trailing comma in a list (eg [], () or function call) then
that list is now written out with one line per element. So remove such
trailing commas where the list should stay on one line.
- Spaces at the start of """ doc strings are removed.
Signed-off-by: Damien George <damien@micropython.org>
In #2689, hitting ctrl-c during the printing of an object with a lot of sub-objects could cause the screen to stop updating (without showing a KeyboardInterrupt). This makes the printing of such objects actually interruptible, and also correctly handles the KeyboardInterrupt:
```
>>> l = ["a" * 100] * 200
>>> l
['aaaaaaaaaaaaaaaaaaaaaa...aaaaaaaaaaa', Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyboardInterrupt:
>>>
```
As per CPython behaviour, compile(stmt, "file", "single") should create
code which prints to stdout (via __repl_print__) the results of any
expressions in stmt.
Signed-off-by: Damien George <damien@micropython.org>
The font is missing many characters and the build needs the space.
We can optimize font storage when we get a good font.
The serial output will work as usual.
This check as implemented is misleading, because it compares the
compressed size in bytes (including the length indication) with the source
string length in Unicode code points. For English this is approximately
fair, but for Japanese this is quite unfair and produces an excess of
"increased length" messages.
This message might have existed for one of two reasons:
* to alert to an improperly functioning huffman compression
* to call attention to a need for a "string is stored uncompressed" case
We know by now that the huffman compression is functioning as designed and
effective in general.
Just to be on the safe side, I did some back-of-the-envelope estimates.
I considered these three replacements for "the true source string size, in bytes":
+ decompressed_len_utf8 = len(decompressed.encode('utf-8'))
+ decompressed_len_utf16 = len(decompressed.encode('utf-16be'))
+ decompressed_len_bitsize = ((1+len(decompressed)) * math.ceil(math.log(1+len(values), 2)) + 7) // 8
The third counts how many bits each character requires (fewer than 128
characters in the source character set = 7, fewer than 256 = 8, fewer than 512
= 9, etc, adding a string-terminating value) and is in some way representative
of the best way we would be able to store "uncompressed strings". The Japanese
translation (largest as of writing) has just a few strings which increase by
this metric. However, the amount of loss due to expansion in those cases is
outweighed by the cost of adding 1 bit per string to indicate whether it's
compressed or not. For instance, in the BOARD=trinket_m0 TRANSLATION=ja build
the loss is 47 bytes over 300 strings. Adding 1 bit to each of 300 strings will
cost about 37 bytes, leaving just 5 Thumb instructions to implement the code to
check and decode "uncompressed" strings in order to break even.
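For a concrete feel for the third metric, a quick back-of-the-envelope
computation with made-up numbers (100 distinct characters in the source
character set):
```python
import math

decompressed = "Press any key to enter the REPL."
values = range(100)  # stand-in for the source character set; only len() matters

bits_per_char = math.ceil(math.log(1 + len(values), 2))      # 7 bits here
uncompressed_bits = (1 + len(decompressed)) * bits_per_char   # +1 for terminator
print((uncompressed_bits + 7) // 8)  # -> 29 bytes for this 32-character string
```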
This is a slight trade-off with code size, in places where a "_varg"
mp_raise variant is now used. The net savings on trinket_m0 is
just 32 bytes.
It also means that the translation will include the original English
text, and cannot be translated. These are usually names of Python
types such as int, set, or dict or special values such as "inf" or
"Nan".
On ports where normal heap memory can contain executable code (eg ARM-based
ports such as stm32), native code loaded from an .mpy file may be reclaimed
by the GC because there's no reference to the very start of the native
machine code block that is reachable from root pointers (only pointers to
internal parts of the machine code block are reachable, but that doesn't
help the GC find the memory).
This commit fixes this issue by maintaining an explicit list of root
pointers pointing to native code that is loaded from an .mpy file. This
is not needed for all ports so is selectable by the new configuration
option MICROPY_PERSISTENT_CODE_TRACK_RELOC_CODE. It's enabled by default
if a port does not specify any special functions to allocate or commit
executable memory.
A test is included to test that native code loaded from an .mpy file does
not get reclaimed by the GC.
Fixes #6045.
Signed-off-by: Damien George <damien@micropython.org>
This version
* moves source files to reflect module structure
* adds inline documentation suitable for extract_pyi
* incompatibly moves spectrogram to fft
* incompatibly removes "extras"
There are some remaining markup errors in the specific revision of
extmod/ulab but they do not prevent the doc building process from
completing.
MicroPython's original implementation of __aiter__ was correct for an
earlier (provisional) version of PEP492 (CPython 3.5), where __aiter__ was
an async-def function. But that changed in the final version of PEP492 (in
CPython 3.5.2) where the function was changed to a normal one. See
https://www.python.org/dev/peps/pep-0492/#why-aiter-does-not-return-an-awaitable
See also the note at the end of this subsection in the docs:
https://docs.python.org/3.5/reference/datamodel.html#asynchronous-iterators
And for completeness the BPO: https://bugs.python.org/issue27243
To be consistent with the Python spec as it stands today (and now that
PEP492 is final) this commit changes MicroPython's behaviour to match
CPython: __aiter__ should return an async-iterable object, but is not
itself awaitable.
The relevant tests are updated to match.
See #6267.
Coroutines don't have __next__; they also call themselves coroutines.
This does not change the fact that `async def` methods are generators,
but it does make them behave more like CPython.
Otherwise functions like memset might get optimised to call themselves (eg
with gcc 10). And provide CFLAGS_BUILTIN so these options can be changed
by a port if needed.
Fixes issue #6053.
Because the argument arrays may overlap, as shown by the new tests in this
commit.
Also remove the debugging comments for these macros, add a new comment
about overlapping regions, and separate the macros by blank lines to make
them easier to read.
Fixes issue #6244.
Signed-off-by: Damien George <damien@micropython.org>
This commit fixes lookups of class members to make it so that built-in
functions that are used as methods/functions of a class work correctly.
The mp_convert_member_lookup() function is pretty much completely changed
by this commit, but for the most part it's just reorganised and the
indenting changed. The functional changes are:
- staticmethod and classmethod checks moved to later in the if-logic,
because they are less common and so should be checked after the more
common cases.
- The explicit mp_obj_is_type(member, &mp_type_type) check is removed
because it's now subsumed by other, more general tests in this function.
- MP_TYPE_FLAG_BINDS_SELF and MP_TYPE_FLAG_BUILTIN_FUN type flags added to
make the checks in this function much simpler (now they just test this
bit in type->flags).
- An extra check is made for mp_obj_is_instance_type(type) to fix lookup of
built-in functions.
Fixes #1326 and #6198.
Signed-off-by: Damien George <damien@micropython.org>
Testing performed: That a card is successfully mounted on Pygamer with
the built-in SD card slot.
This module is enabled for most FULL_BUILD boards, but is disabled for
samd21 ("M0"), litex, and pca10100 for various reasons.
This allows complex binary operations to fail gracefully with unsupported
operation rather than raising an exception, so that special methods work
correctly.
Signed-off-by: Damien George <damien@micropython.org>
uint types in viper mode can now be used for all binary operators except
floor-divide and modulo.
Fixes issue #1847 and issue #6177.
Signed-off-by: Damien George <damien@micropython.org>
An OrderedDict can now be used for the locals when creating a type
explicitly via type(name, bases, locals).
Signed-off-by: Damien George <damien@micropython.org>
I noticed that this code was referring to samd-specific functionality,
and isn't enabled except in one samd board (pewpew10). Move it.
There is incomplete support for _pew in mimxrt10xx which then caused build
errors; adding a #if guard to check for _pew being enabled fixes it.
The _pew module is not likely to be important on mimxrt but I'll leave the
choice to remove it to someone else.
This addition to the grammar was introduced in Python 3.6. It allows
annotating the type of a variable, like:
x: int = 123
s: str
The implementation in this commit is quite simple and just ignores the
annotation (the int and str bits above). The reason to implement this is
to allow Python 3.6+ code that uses this feature to compile under
MicroPython without change, and for users to use type checkers.
In the future viper could use this syntax as a way to give types to
variables, which is currently done in a bit of an ad-hoc way, eg
x = int(123). And this syntax could potentially be used in the inline
assembler to define labels in a way that's easier to read.
The syntax matches CPython and the semantics are equivalent except that,
unlike CPython, MicroPython allows using := to assign to comprehension
iteration variables, because disallowing this would take a lot of code to
check for it.
The new compile-time option MICROPY_PY_ASSIGN_EXPR selects this feature and
is enabled by default, following MICROPY_PY_ASYNC_AWAIT.
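A small illustration (the first form behaves the same in CPython and
MicroPython; the last line is accepted by MicroPython but rejected at
compile time by CPython, per the exception described above):
```python
# Ordinary assignment expression: identical semantics to CPython.
if (n := 10) > 5:
    print(n)

# MicroPython additionally permits rebinding the comprehension iteration
# variable; CPython raises SyntaxError for this.
print([i := i + 1 for i in range(3)])
```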
Formatting for `* sizeof` was fixed in uncrustify v0.71, so we no longer
need the fixups for it. Also, there was one file where the updated
uncrustify caught a problem that the regex didn't pick up, which is updated
in this commit.
Signed-off-by: David Lechner <david@pybricks.com>
Long ago, prior to 0ef01d0a75, fixed and
ordered maps were the same setting with the "table_is_fixed_array" member
of mp_map_t. But these settings are actually independent, and it is
possible to have is_fixed=1, is_ordered=0 (although this can currently
only be done by tools/cc1). So update the comments to reflect this.
The resulting dict is now marked as read-only (is_fixed=1) to enforce the
fact that changes to this dict will not be reflected in the class instance.
This commit reduces code size by about 20 bytes, and should be more
efficient because it creates a direct copy of the dict rather than
reinserting all elements.
The behavior mirrors the instance object dict attribute, where a copy of the
local attributes is provided (unless the dict is read-only, in which case that dict
itself is returned, as an optimisation). MicroPython does not support
modifying this dict because the changes will not be reflected in the class.
The feature is only enabled if MICROPY_CPYTHON_COMPAT is set, the same as
the instance version.
There doesn't appear to be any use for only triggering on specific events,
so it's just easier to number them sequentially. This makes them smaller
values so they take up only 1 byte in the ringbuf, only 1 byte for the
opcode in the bytecode, and makes room for more events.
Also add a couple of new event types that need to be implemented (to avoid
re-numbering later).
And rename _COMPLETE and _STATUS to _DONE for consistency.
In the future the "trigger" keyword argument can be reinstated by requiring
the user to compute the bitmask, eg:
ble.irq(handler, 1 << _IRQ_SCAN_RESULT | 1 << _IRQ_SCAN_DONE)
* Fix flash writes that don't end on a sector boundary. Fixes #2944
* Fix enum incompatibility with IDF.
* Fix printf output so it goes out the debug UART.
* Increase stack size to 8k.
* Fix sleep of less than a tick so it doesn't crash.
Length was stored as a 16-bit number always. Most translations have
a max length far less. For example, US English translation lengths
always fit in just 8 bits. Probably all languages fit in 9 bits.
This also has the side effect of reducing the alignment of
compressed_string_t from 2 bytes to 1.
Testing performed: ran in German and English on PyRuler; printed messages
looked right.
Firmware size, en_US
Before: 3044 bytes free in flash
After: 3408 bytes free in flash
Firmware size, de_DE (with #2967 merged to restore translations)
Before: 1236 bytes free in flash
After: 1600 bytes free in flash
Older implementations deal with infinity/negative zero incorrectly. This
commit adds generic fixes that can be enabled by any port that needs them,
along with new tests cases.
Otherwise functions like memset might get optimised to call themselves (eg
with gcc 10). And provide CFLAGS_BUILTIN so these options can be changed
by a port if needed.
Fixes issue #6053.
This adds an exception to be raised when the WatchDogTimer times out.
Note that this currently causes a HardFault, and it's not clear why it's
not behaving properly.
Signed-off-by: Sean Cross <sean@xobs.io>
vectorio builds on m4 express feather
Concrete shapes are composed into a VectorShape which is put into a displayio Group for display.
VectorShape provides transpose and x/y positioning for shape implementations.
Included Shapes:
* Circle
- A radius; Circle is positioned at its axis in the VectorShape.
- You can freely modify the radius to grow and shrink the circle in-place.
* Polygon
- An ordered list of points.
- Between each successive point an edge is inferred. A final edge closing the shape is inferred between the last
point and the first point.
- You can modify the points in a Polygon. The points' coordinate system is relative to (0, 0) so if you'd like a
top-center justified 10x20 rectangle you can do points [(-5, 0), (5, 0), (5, 20), (-5, 20)] and your VectorShape
x and y properties will position the rectangle relative to its top center point
* Rectangle
- A width and a height.
This adds initial support for an AES module named aesio. This
implementation supports only a subset of AES modes, namely
ECB, CBC, and CTR modes.
Example usage:
```
>>> import aesio
>>>
>>> key = b'Sixteen byte key'
>>> cipher = aesio.AES(key, aesio.MODE_ECB)
>>> output = bytearray(16)
>>> cipher.encrypt_into(b'Circuit Python!!', output)
>>> output
bytearray(b'E\x14\x85\x18\x9a\x9c\r\x95>\xa7kV\xa2`\x8b\n')
>>>
```
This key is 16 bytes long, so it uses AES128. If your key is 24 or 32
bytes long, it will switch to AES192 or AES256 respectively.
This has been tested with many of the official NIST test vectors,
such as those used in `pycryptodome` at
39626a5b01/lib/Crypto/SelfTest/Cipher/test_vectors/AES
CTR has not been tested as NIST does not provide test vectors for it.
Signed-off-by: Sean Cross <sean@xobs.io>
Constant expressions like "2 ** 3" will now be folded, and the special form
"X = const(2 ** 3)" will now compile because the argument to the const is
now a constant.
Fixes issue #5865.
This allows user code that inherits from uio.IOBase to return an errno
error code from the user readinto/write function, by returning a negative
value. Eg returning -123 means an errno of 123. This is already how the
custom ioctl works.
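A small sketch of the convention (the class and method bodies are made up;
only the negative-return rule comes from the change itself):
```python
import uerrno
import uio

class NotReadyStream(uio.IOBase):
    def readinto(self, buf):
        # Returning a negative value reports that errno to the caller,
        # e.g. -uerrno.EAGAIN surfaces as OSError(EAGAIN), the same way
        # the custom ioctl protocol already works.
        return -uerrno.EAGAIN

    def write(self, buf):
        return -uerrno.EIO
```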
This change is made for two reasons:
1. A 3rd-party library (eg berkeley-db-1.xx, axtls) may use the system
provided errno for certain errors, and yet MicroPython stream objects
that it calls will be using the internal mp_stream_errno. So if the
library returns an error it is not known whether the corresponding errno
code is stored in the system errno or mp_stream_errno. Using the system
errno in all cases (eg in the mp_stream_posix_XXX wrappers) fixes this
ambiguity.
2. For systems that have threading the system-provided errno should always
be used because the errno value is thread-local.
For systems that do not have an errno, the new lib/embed/__errno.c file is
provided.
Note: the uncrustify configuration is explicitly set to 'add' instead of
'force' in order not to alter the comments which use extra spaces after //
as a means of indenting text for clarity.
Error string compression is not deterministic in certain cases: it depends
on the Python version (whether dicts are ordered by default or not) and
probably also the order files are passed to this script, leading to a
difference in which words are included in the top 128 most common.
The changes in this commit use OrderedDict to keep parsed lines in a known
order, and, when computing how many bytes are saved by a given word, it
uses the word itself to break ties (which would otherwise be "random").
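A sketch of the tie-breaking idea only (the scoring model below is a toy,
not the actual byte-savings computation in makeqstrdata.py):
```python
from collections import OrderedDict

# Hypothetical corpus statistics: word -> number of occurrences.
counts = OrderedDict([("argument", 10), ("function", 10), ("invalid", 7)])

def savings(word):
    # Toy model: every occurrence saves len(word) - 1 bytes once tokenised.
    return counts[word] * (len(word) - 1)

# Sort by bytes saved, then by the word itself, so equal scores no longer
# come out in dict-iteration (i.e. Python-version dependent) order.
top_words = sorted(counts, key=lambda w: (savings(w), w), reverse=True)[:128]
print(top_words)  # ['function', 'argument', 'invalid'] on every Python version
```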
For combinations of certain versions of glibc and gcc the definition of
fpclassify always takes float as argument instead of adapting itself to
float/double/long double as required by the C99 standard. At the time of
writing this happens for instance for glibc 2.27 with gcc 7.5.0 when
compiled with -Os and glibc 3.0.7 with gcc 9.3.0. When calling fpclassify
with double as argument, as in objint.c, this results in an implicit
narrowing conversion which is not really correct plus results in a warning
when compiled with -Wfloat-conversion. So fix this by spelling out the
logic manually.
Initially some of these were found building the unix coverage variant on
MacOS because that build uses clang and has -Wdouble-promotion enabled, and
clang performs more vigorous promotion checks than gcc. Additionally the
codebase has been compiled with clang and msvc (the latter with warning
level 3), and with MICROPY_FLOAT_IMPL_FLOAT to find the rest of the
conversions.
Fixes are implemented either as explicit casts, or by using the correct
type, or by using one of the utility functions to handle floating point
casting; these have been moved from nativeglue.c to the public API.
This gets all the purely internal references. Some uses of
protomatter/Protomatter/PROTOMATTER remain, as they are references
to symbols in the Protomatter C library itself.
I originally believed that there would be a wrapper library around it,
like with _pixelbuf; but this proves not to be the case, as there's
too little for the library to do.
This commit provides a typedef for mp_rom_error_text_t, and a macro define
for MP_COMPRESSED_ROM_TEXT, when MICROPY_ROM_TEXT_COMPRESSION is disabled.
This simplifies the configuration (it no longer has a special case for
MICROPY_ENABLE_DYNRUNTIME) and makes it work for other cases that don't use
compression (eg examples/embedding). This commit also ensures
MICROPY_ROM_TEXT_COMPRESSION is defined during qstr processing.
Now that error string compression is supported it's more important to have
consistent error string formatting (eg all lowercase English words,
consistent contractions). This commit cleans up some of the strings to
make them more consistent.
Because the atomic section starts after checking whether the scheduler
state is pending, it's possible it can become a different state by the time
the atomic section starts.
This is especially likely on ports where MICROPY_BEGIN_ATOMIC_SECTION is
implemented with a mutex (i.e. it might block), but the race exists
regardless, i.e. if a context switch occurs between those two lines.
TimeoutError was added back in 077812b2ab for
the cc3200 port. In f522849a4d the cc3200
port enabled use of it in the socket module aliased to socket.timeout. So
it was never added to the builtins. Then it was replaced by
OSError(ETIMEDOUT) in 047af9b10b.
The esp32 port enables this exception, since the very beginning of that
port, but it could never be accessed because it's not in builtins.
It's being removed: 1) to not encourage its use; 2) because there are a lot
of other OSError subclasses which are not defined at all, and having
TimeoutError is a bit inconsistent.
Note that ports can add anything to the builtins via MICROPY_PORT_BUILTINS.
And they can also define their own exceptions using the
MP_DEFINE_EXCEPTION() macro.
In this part of the code there is no way to get the ** operator, so no need
to check for it.
This commit also adds tests for this, and other related, invalid const
operations.
The decompression of error-strings is only done if the string is accessed
via printing or via er.args. Tests are added for this feature to ensure
the decompression works.
The idea here is that there's a moderate amount of ROM used up by exception
text. Obviously we try to keep the messages short, and the code can enable
terse errors, but it still adds up. Listed below is the total string data
size for various ports:
bare-arm 2860
minimal 2876
stm32 8926 (PYBV11)
cc3200 3751
esp32 5721
This commit implements compression of these strings. It takes advantage of
the fact that these strings are all 7-bit ascii and extracts the top 128
frequently used words from the messages and stores them packed (dropping
their null-terminator), then uses (0x80 | index) inside strings to refer to
these common words. Spaces are automatically added around words, saving
more bytes. This happens transparently in the build process, mirroring the
steps that are used to generate the QSTR data. The MP_COMPRESSED_ROM_TEXT
macro wraps any literal string that should be compressed, and it's
automatically decompressed in mp_decompress_rom_string.
There are many schemes that could be used for the compression, and some are
included in py/makecompresseddata.py for reference (space, Huffman, ngram,
common word). Results showed that the common-word compression gets better
results. This is before counting the increased cost of the Huffman
decoder. This might be slightly counter-intuitive, but this data is
extremely repetitive at a word-level, and the byte-level entropy coder
can't quite exploit that as efficiently. Ideally one would combine both
approaches, but for now the common-word approach is the one that is used.
For additional comparison, the size of the raw data compressed with gzip
and zlib is calculated, as a sort of proxy for a lower entropy bound. With
this scheme we come within 15% on stm32, and 30% on bare-arm (i.e. we use
x% more bytes than the data compressed with gzip -- not counting the code
overhead of a decoder, and how this would be hypothetically implemented).
The feature is disabled by default and can be enabled by setting
MICROPY_ROM_TEXT_COMPRESSION at the Makefile-level.
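A rough pure-Python sketch of the token scheme (the word table and helper
are hypothetical; the real packed table is generated at build time and
decoded by mp_decompress_rom_string in C):
```python
# Hypothetical table of the most common words (index 0..127 in the real thing).
WORDS = ["argument", "function", "invalid", "object"]

def decompress(data):
    """Expand (0x80 | index) tokens into the common words, re-adding the
    implied space between a word token and its neighbours."""
    out = ""
    for ch in data:
        if ord(ch) & 0x80:
            if out and not out.endswith(" "):
                out += " "
            out += WORDS[ord(ch) & 0x7F] + " "
        else:
            out += ch
    return out.rstrip()

print(decompress("\x82\x80"))  # -> "invalid argument" (stored in two bytes)
print(decompress("bad\x80"))   # -> "bad argument"
```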
Instead of compiler-level if-logic. This is necessary to know what error
strings are included in the build at the preprocessor stage, so that string
compression can be implemented.
These were found by building the unix coverage variant on macOS (so clang
compiler). Mostly these fix implicit casts of float/double to mp_float_t
(which is one of those two), plus one mp_int_t to size_t fix for good
measure.
Implements Task and TaskQueue classes in C, using a pairing-heap data
structure. Using this reduces RAM use of each Task, and improves overall
performance of the uasyncio scheduler.
To enable lazy loading of submodules (among other things), which is very
useful for MicroPython libraries that want to have optional subcomponents.
Disabled explicitly on minimal ports.
Formerly, if you wrote
SPI.frequency = 0
you would get the slightly erroneous error message
AttributeError: 'SPI' object has no attribute 'frequency'
In this case, a better message would read
AttributeError: 'SPI' object cannot assign attribute 'frequency'
This new message will both be used in the case where the attribute doesn't
exist at all (and the object has no dynamic attributes; most instances of
built in types behave this way), or if the attribute exists but is
read-only.
This commit adds micropython.heap_locked() which returns the current
lock-depth of the heap, and can be used by Python code to check if the heap
is locked or not. This new function is configured via
MICROPY_PY_MICROPYTHON_HEAP_LOCKED and is disabled by default.
This commit also changes the return value of micropython.heap_unlock() so
it returns the current lock-depth as well.
This eliminates the need for the sizeof regex fixup by rearranging things a
bit. All other bitfields already use the parentheses around expressions
with sizeof, so one case is fixed by following this convention.
VM_MAX_STATE_ON_STACK is the only remaining problem and it can be worked
around by changing the order of the operands.
The double-% was added in 11de8399fe (Jun
2014) when such errors were formatted with printf. But then
55830dd9bf (Dec 2018) changed
mp_obj_new_exception_msg() to not format the message, as discussed
in #3004. So such error strings are no longer formatted and a % is just
that.
This should reclaim *most* of the code space added to handle f-strings.
However, there may be some small code growth, as parse_string_literal
takes a new parameter (which will always be 0, so hopefully the optimizer
eliminates it).
This implements (most of) the PEP-498 spec for f-strings, with two
exceptions:
- raw f-strings (`fr` or `rf` prefixes) raise `NotImplementedError`
- one special corner case does not function as specified in the PEP
(more on that in a moment)
This is implemented in the core as a syntax translation, brute-forcing
all f-strings to run through `String.format`. For example, the statement
`x='world'; print(f'hello {x}')` gets translated *at a syntax level*
(injected into the lexer) to `x='world'; print('hello {}'.format(x))`.
While this may lead to weird column results in tracebacks, it seemed
like the fastest, most efficient, and *likely* most RAM-friendly option,
despite being implemented under the hood with a completely separate
`vstr_t`.
Since [string concatenation of adjacent literals is implemented in the
lexer](534b7c368d),
two side effects emerge:
- All strings with at least one f-string portion are concatenated into a
single literal which *must* be run through `String.format()` wholesale,
and:
- Concatenation of a raw string with interpolation characters with an
f-string will cause `IndexError`/`KeyError`, which is both different
from CPython *and* different from the corner case mentioned in the PEP
(which gave an example of the following:)
```python
x = 10
y = 'hi'
assert ('a' 'b' f'{x}' '{c}' f'str<{y:^4}>' 'd' 'e') == 'ab10{c}str< hi >de'
```
The above-linked commit detailed a pretty solid case for leaving string
concatenation in the lexer rather than putting it in the parser, and
undoing that decision would likely be disproportionately costly on
resources for the sake of a probably-low-impact corner case. An
alternative to become compliant with this corner case of the PEP would
be to revert to string concatenation in the parser *only when an
f-string is part of concatenation*, though I've done no investigation on
the difficulty or costs of doing this.
A decent set of tests is included. I've manually tested this on the
`unix` port on Linux and on a Feather M4 Express (`atmel-samd`) and
things seem sane.
Before this, such names would instead cause an assertion error inside
qstr_from_strn.
A simple reproducer is a Python source file containing the letter "a"
repeated 256 times.
This string is recognised by uncrustify, to disable formatting in the
region marked by these comments. This is necessary in the qstrdef*.h files
to prevent modification of the strings within the Q(...). In other places
it is used to prevent excessive reformatting that would make the code less
readable.
This only fixes the `import` portion. It doesn't actually change
reference behavior because modules within a package could already
be referenced through the parent package even though an error should
have been thrown.
And rename it to mp_obj_cast_to_native_base() to indicate this. This
allows users of this function to easily support native and native-subclass
objects in the same way (by just passing the object through this function).
Since commit 3aab54bf43 this piece of code is
no longer needed because the top-level function mp_obj_equal_not_equal()
now handles the case of user types, and will never call tuple's binary_op
function with MP_BINARY_OP_EQUAL and a non-tuple on the RHS.
Follow up to recent commit ad7213d3c3, the
name "varg2" is misleading, vlist describes better that the argument is a
va_list. This name also matches CircuitPython, which already has such
helper functions.
This provides a more consistent C-level API to raise exceptions, ie moving
away from nlr_raise towards mp_raise_XXX. It also reduces code size by a
small amount on some ports.
Both bool and namedtuple will check against other types for equality; int,
float and complex for bool, and tuple for namedtuple. So to make them work
after the recent commit 3aab54bf43 they would
need MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST set. But that makes all bool and
namedtuple equality checks less efficient because mp_obj_equal_not_equal()
could no longer short-cut x==x, and would need to try __ne__. To improve
this, this commit splits the MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST flags into 3
separate flags to give types more fine-grained control over how their
equality behaves. These new flags are then used to fix bool and namedtuple
equality.
Fixes issue #5615 and #5620.
This is a more logical place to clear the KeyboardInterrupt traceback,
right before it is set as a pending exception. The clearing is also
optimised from a function call to a simple store of NULL.
Functions like mp_keyboard_interrupt() may need to be called from an IRQ
handler and may need to be in a special memory section, so provide a
generic wrapping macro for a port to do this. The macro name is chosen to
be MICROPY_WRAP_<function name in uppercase> so that (in the future with
more wrappers) each function could potentially be handled separately.
This function is tightly coupled to the state and behaviour of the
scheduler, and is a core part of the runtime: to schedule a pending
exception. So move it there.
The previous behaviour corresponds to this argument being set to "true", in
which case the function will raise any pending exception. Setting it to "false" will
cancel any pending exception.
A 'return' statement on module/class level is not correct Python, but
nothing terribly bad happens when it's allowed. So remove the check unless
MICROPY_CPYTHON_COMPAT is on.
This is similar to MicroPython's treatment of 'import *' in functions
(except 'return' has unsurprising behavior if it's allowed).
This commit implements a more complete replication of CPython's behaviour
for equality and inequality testing of objects. This addresses the issues
discussed in #5382 and a few other inconsistencies. Improvements over the
old code include:
- Support for returning non-boolean results from comparisons (as used by
numpy and others).
- Support for non-reflexive equality tests.
- Preferential use of __ne__ methods and MP_BINARY_OP_NOT_EQUAL binary
operators for inequality tests, when available.
- Fallback to op2 == op1 or op2 != op1 when op1 does not implement the
(in)equality operators.
The scheme here makes use of a new flag, MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST,
in the flags word of mp_obj_type_t to indicate if various shortcuts can or
cannot be used when performing equality and inequality tests. Currently
four built-in classes have the flag set: float and complex are
non-reflexive (since nan != nan) while bytearray and frozenset instances
can equal other builtin class instances (bytes and set respectively). The
flag is also set for any new class defined by the user.
This commit also includes a more comprehensive set of tests for the
behaviour of (in)equality operators implemented in special methods.
This modifies the signature of mp_thread_set_state() to use
mp_state_thread_t* instead of void*. This matches the return type of
mp_thread_get_state(), which returns the same value.
`struct _mp_state_thread_t;` had to be moved before
`#include <mpthreadport.h>` since the stm32 port uses it in its
mpthreadport.h file.
The loop searches backwards for a target, but doesn't stop after finding
the first result, meaning that it'll always end up at the outermost
exception handler.
This previously made the native emitter incompatible with the bytecode
emitter, and mp_resume (and subsequently mp_obj_generator_resume) expects
the bytecode emitter behavior (i.e. throw==NULL).
Commit d96cfd13e3 introduced a regression in
testing for bool objects, that such objects were in some cases no longer
recognised as bools, eg when using mp_obj_is_type(o, &mp_type_bool), or
mp_obj_is_integer(o).
This commit fixes that problem by adding mp_obj_is_bool(o). Builds with
MICROPY_OBJ_IMMEDIATE_OBJS enabled check if the object is any of the const
True or False objects. Builds without it use the old method of ->type
checking, which compiles to smaller code (compared with the former
mentioned method).
Fixes #5538.
When threads and the GIL are enabled, then the qstr mutex is not needed.
The qstr_mutex field is never used in this case because of:
#if MICROPY_PY_THREAD && !MICROPY_PY_THREAD_GIL
#define QSTR_ENTER() mp_thread_mutex_lock(&MP_STATE_VM(qstr_mutex), 1)
#define QSTR_EXIT() mp_thread_mutex_unlock(&MP_STATE_VM(qstr_mutex))
#else
#define QSTR_ENTER()
#define QSTR_EXIT()
#endif
So, we can completely remove qstr_mutex everywhere when MICROPY_PY_THREAD
&& !MICROPY_PY_THREAD_GIL.
When threads and the GIL are enabled, then the GC mutex is not needed. The
gc_mutex field is never used in this case because of:
#if MICROPY_PY_THREAD && !MICROPY_PY_THREAD_GIL
#define GC_ENTER() mp_thread_mutex_lock(&MP_STATE_MEM(gc_mutex), 1)
#define GC_EXIT() mp_thread_mutex_unlock(&MP_STATE_MEM(gc_mutex))
#else
#define GC_ENTER()
#define GC_EXIT()
#endif
So, we can completely remove gc_mutex everywhere when MICROPY_PY_THREAD
&& !MICROPY_PY_THREAD_GIL.
Introduces a way to place CircuitPython code and data into
tightly coupled memory (TCM) which is accessible by the CPU in a
single cycle. It also frees up room in the corresponding cache for
intermittent data. Loading from external flash is slow!
The data cache is also now enabled.
Adds support for the iMX RT 1021 chip. Adds three new boards:
* iMX RT 1020 EVK
* iMX RT 1060 EVK
* Teensy 4.0
Related to #2492, #2472 and #2477. Fixes #2475.
Can be used where mp_obj_int_get_checked() will overflow solely due to the
sign-bit. This returns an mp_uint_t, so it also verifies the given
integer is not negative.
Currently implemented only for mpz configurations.
This function is called often and with immediate objects enabled it has
more cases, so optimise it for speed. With this optimisation the runtime
is now slightly faster with immediate objects enabled than with them
disabled.
This option (enabled by default for object representation A, B, C) makes
None/False/True objects immediate objects, ie they are no longer a concrete
object in ROM but are rather just values, eg None=0x6 for representation A.
Doing this saves a considerable amount of code size, due to these objects
being widely used:
bare-arm: -392 -0.591%
minimal x86: -252 -0.170% [incl +52(data)]
unix x64: -624 -0.125% [incl -128(data)]
unix nanbox: +0 +0.000%
stm32: -1940 -0.510% PYBV10
cc3200: -1216 -0.659%
esp8266: -404 -0.062% GENERIC
esp32: -732 -0.064% GENERIC[incl +48(data)]
nrf: -988 -0.675% pca10040
samd: -564 -0.556% ADAFRUIT_ITSYBITSY_M4_EXPRESS
Thanks go to @Jongy aka Yonatan Goldschmidt for the idea.
This commit adjusts the definition of qstr encoding in all object
representations by taking a single bit from the qstr space and using it to
distinguish between qstrs and a new kind of literal object: immediate
objects. In other words, the qstr space is divided in two pieces, one half
for qstrs and the other half for immediate objects.
There is still enough room for qstr values (29 bits in representation A on
a 32-bit architecture, and 19 bits in representation C) and the new
immediate objects can be used for things like None, False and True.
This moves the MICROPY_PORT_INIT_FUNC hook to the end of mp_init(), just
before MP_THREAD_GIL_ENTER(), so that everything (in particular the GIL
mutex) is initialised before the hook is called. MICROPY_PORT_DEINIT_FUNC
is also moved to be symmetric (but there is no functional change there).
If a port needs to perform initialisation earlier than
MICROPY_PORT_INIT_FUNC then it can do it before calling mp_init().
This commit adds backward-word, backward-kill-word, forward-word,
forward-kill-word sequences for the REPL, with bindings to Alt+F, Alt+B,
Alt+D and Alt+Backspace respectively. It is disabled by default and can be
enabled via MICROPY_REPL_EMACS_WORDS_MOVE.
Further enabling MICROPY_REPL_EMACS_EXTRA_WORDS_MOVE adds extra bindings
for these new sequences: Ctrl+Right, Ctrl+Left and Ctrl+W.
The features are enabled on unix micropython-coverage and micropython-dev.
Most types are in rodata/ROM, and mp_obj_base_t.type is a constant pointer,
so enforce this const-ness throughout the code base. If a type ever needs
to be modified (eg a user type) then a simple cast can be used.
In this unusual case, (len + 1) is zero, the allocation in vstr_init
succeeds (allocating 1 byte), and then the caller is likely to erroneously
access outside the allocated region, for instance with a memset().
This could be triggered with os.urandom(-1) after it was converted to use
mp_obj_new_bytes_of_zeros.
PacketBuffer facilitates packet oriented BLE protocols such as BLE
MIDI and the Apple Media Service.
This also adds PHY, MTU and connection event extension negotiation
to speed up data transfer when possible.
Instances of the slice class are passed to __getitem__() on objects when
the user indexes them with a slice. In practice the majority of the time
(other than passing it on untouched) is to work out what the slice means in
the context of an array dimension of a particular length. Since Python 2.3
there has been a method on the slice class, indices(), that takes a
dimension length and returns the real start, stop and step, accounting for
missing or negative values in the slice spec. This commit implements such
an indices() method on the slice class.
It is configurable at compile-time via MICROPY_PY_BUILTINS_SLICE_INDICES,
disabled by default, enabled on unix, stm32 and esp32 ports.
This commit also adds new tests for slice indices and for slicing unicode
strings.
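For reference, the CPython behaviour this mirrors: indices() takes the
sequence length and returns the normalised (start, stop, step):
```python
s = slice(None, -2, 2)
print(s.indices(10))                # (0, 8, 2) for a length-10 sequence
print(list(range(*s.indices(10))))  # [0, 2, 4, 6] -- the positions selected
```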
In CPython, EnvironmentError and IOError are now aliases of OSError so no
need to have them listed in the code. OverflowError inherits from
ArithmeticError because it's intended to be raised "when the result of an
arithmetic operation is too large to be represented" (per CPython docs),
and MicroPython aims to match the CPython exception hierarchy.
For the 3 ports that already make use of this feature (stm32, nrf and
teensy) this doesn't make any difference, it just allows to disable it from
now on.
For other ports that use pyexec, this decreases code size because the debug
printing code is dead (it can't be enabled) but the compiler can't deduce
that, so code is still emitted.
The qst value is always small enough to fit in 31-bits (even less) and
using a 32-bit shift rather than a 64-bit shift reduces code size by about
300 bytes.
A user-defined type that defines __iter__ doesn't need any memory to be
pre-allocated for its iterator (because it can't use such memory). So
optimise for this case by not allocating the iter-buf.
In commit 71a3d6ec3b mp_setup_code_state was
changed from a 5-arg function to a 4-arg function, and at that point 5-arg
calls in native code were no longer needed. See also commit
4f9842ad80.
Allows assigning attributes on class instances that implement their own
__setattr__. Both object.__setattr__ and super(A, b).__setattr__ will work
with this commit.
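A small illustration of what now works (a plain-Python sketch: any class
that defines its own __setattr__ and delegates storage to
object.__setattr__):
```python
class Tagged:
    def __setattr__(self, name, value):
        # Intercept the assignment, then delegate to the base implementation
        # so the attribute is actually stored on the instance.
        object.__setattr__(self, name, value.upper() if name == "tag" else value)

t = Tagged()
t.tag = "hello"
print(t.tag)  # -> HELLO
```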
This makes the loading of viper-code-with-relocations a bit neater and
easier to understand, by treating the rodata/bss like a special object to
be loaded into the constant table (which is how it behaves).
By having an order-only dependency on the directory itself, the directory
is sure to be created before the rule to create a .mo file is.
This fixes a low-freqency error on github actions such as
> msgfmt: error while opening "build/genhdr/en_US.mo" for writing: No such file or directory
These s16-s21 registers are used by gcc so need to be saved. Future
versions of gcc (beyond v9.1.0), or other compilers, may eventually need
additional registers saved/restored.
See issue #4844.
Recent versions of gcc perform optimisations which can lead to the
following code from the MP_NLR_JUMP_HEAD macro being omitted:
top->ret_val = val; \
MP_NLR_RESTORE_PYSTACK(top); \
*_top_ptr = top->prev; \
This is noticeable (at least) in the unix coverage on x86-64 built with gcc
9.1.0. This is because the nlr_jump function is marked as no-return, so
gcc deduces that the above code has no effect.
Adding MP_UNREACHABLE tells the compiler that the asm code may branch
elsewhere, and so it cannot optimise away the code.
We don't want to add a feature flag to .mpy files that indicate float
support because it will get complex and difficult to use. Instead the .mpy
is built using whatever precision it chooses (float or double) and the
native glue API will convert between this choice and what the host runtime
actually uses.
This commit adds a new tool called mpy_ld.py which is essentially a linker
that builds .mpy files directly from .o files. A new header file
(dynruntime.h) and makefile fragment (dynruntime.mk) are also included
which allow building .mpy files from C source code. Such .mpy files can
then be dynamically imported as though they were a normal Python module,
even though they are implemented in C.
Converting .o files directly (rather than pre-linked .elf files) allows the
resulting .mpy to be more efficient because it has more control over the
relocations; for example it can skip PLT indirection. Doing it this way
also allows supporting more architectures, such as Xtensa which has
specific needs for position-independent code and the GOT.
The tool supports targets of x86, x86-64, ARM Thumb and Xtensa (windowed
and non-windowed). BSS, text and rodata sections are supported, with
relocations to all internal sections and symbols, as well as relocations to
some external symbols (defined by dynruntime.h), and linking of qstrs.
Implements text, rodata and bss generalised relocations, as well as generic
qstr-object linking. This allows importing dynamic native modules on all
supported architectures in a unified way.
Protocols are nice, but there is no way for C code to verify whether
a type's "protocol" structure actually implements some particular
protocol. As a result, you can pass an object that implements the
"vfs" protocol to one that expects the "stream" protocol, and the
opposite of awesomeness ensues.
This patch adds an OPTIONAL (but enabled by default) protocol identifier
as the first member of any protocol structure. This identifier is
simply a unique QSTR chosen by the protocol designer and used by each
protocol implementer. When checking for protocol support, instead of
just checking whether the object's type has a non-NULL protocol field,
use `mp_proto_get` which implements the protocol check when possible.
The existing protocols are now named:
protocol_framebuf
protocol_i2c
protocol_pin
protocol_stream
protocol_spi
protocol_vfs
(most of these are unused in CP and are just inherited from MP; vfs and
stream are definitely used though)
I did not find any crashing examples, but here's one to give a flavor of what
is improved, using `micropython_coverage`. Before the change,
the vfs "ioctl" protocol is invoked, and the result is not intelligible
as json (but it could have resulted in a hard fault, potentially):
>>> import uos, ujson
>>> u = uos.VfsPosix('/tmp')
>>> ujson.load(u)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: syntax error in JSON
After the change, the vfs object is correctly detected as not supporting
the stream protocol:
```
>>> ujson.load(u)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: stream operation not supported
```
If a translation only has unicode code points 255 and below, the "values"
array can be 8 bits instead of 16 bits. This reclaims some code size,
e.g., in a local build, trinket_m0 / en_US reclaimed 112 bytes and de_DE
reclaimed 104 bytes. However, languages like zh_Latn_pinyin, which use
code points above 255, did not benefit.
By treating each unicode code-point as a single entity for huffman
compression, the overall compression rate can be somewhat improved
without changing the algorithm. On the decompression side, when
compressed values above 127 are encountered, they need to be
converted from a 16-bit Unicode code point into a UTF-8 byte
sequence.
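For illustration, here is a minimal Python sketch of that conversion (the decoder itself does this in C; the helper name is made up):
```python
# Hypothetical helper: expand a decompressed code point (up to 0xFFFF) into
# the UTF-8 byte sequence the rest of the system expects.
def codepoint_to_utf8(cp):
    if cp < 0x80:
        return bytes([cp])
    elif cp < 0x800:
        return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
    else:
        return bytes([0xE0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])

assert codepoint_to_utf8(ord('À')) == 'À'.encode('utf-8')
```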
This change reclaims approximately 1.5kB of flash storage with the
zh_Latn_pinyin translation. (292 -> 1768 bytes remaining in my build
of trinket_m0)
Other "more ASCII" translations benefit less, and in fact
zh_Latn_pinyin is no longer the most constrained translation!
(de_DE 1156 -> 1384 bytes free in flash, I didn't check others
before pushing for CI)
English is slightly pessimized, 2840 -> 2788 bytes, probably mostly
because the "values" array was changed from uint8_t to uint16_t,
which is not strictly required for an all-ASCII translation. This
could probably be avoided in this case, but as English is not the
most constrained translation it doesn't really matter.
Testing performed: built for feather nRF52840 express and trinket m0
in English and zh_Latn_pinyin; ran and verified the localized
messages such as
Àn xià rènhé jiàn jìnrù REPL. Shǐyòng CTRL-D chóngxīn jiāzài.
and
Press any key to enter the REPL. Use CTRL-D to reload.
were properly displayed.
When the ability for boards to turn on the `@micropython.native`, `viper`, and `asm_thumb` decorators was added, it was pointed out that it's somewhat awkward to write libraries and drivers that take advantage of this, since the decorators raise a `SyntaxError` if they aren't enabled. In the case of `viper` and `asm_thumb` this behavior makes sense, as they require writing code that isn't standard Python. Drivers could provide both a normal and a viper/thumb implementation and select between them like so:
```python
try:
    import _viper_impl as _impl
except SyntaxError:
    import _python_impl as _impl

def do_thing():
    return _impl.do_thing()
```
For `native`, however, this behavior and the pattern to work around it are less than ideal. Since `native` code should also be valid Python code (although not necessarily the other way around), using the pattern above means *duplicating* the Python implementation and adding `@micropython.native` to one copy. This is an unnecessary maintenance burden.
This commit *modifies* the behavior of the `@micropython.native` decorator. On boards with `CIRCUITPY_ENABLE_MPY_NATIVE` turned on it operates as usual. On boards with it turned off it does *nothing*: it doesn't raise a `SyntaxError` and doesn't apply optimizations. This means we can write our drivers/libraries once and take advantage of speedups on the boards where the optimizations are enabled.
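For example, a driver can now ship a single implementation with the decorator applied unconditionally (a sketch; `checksum` is a made-up example function):
```python
import micropython

@micropython.native  # no-op when CIRCUITPY_ENABLE_MPY_NATIVE is off, a speedup when it's on
def checksum(data):
    total = 0
    for b in data:
        total = (total + b) & 0xFF
    return total
```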
With the memcpy() call placed last it avoids the effects of register
clobbering. It's definitely effective in non-inlined functions, but even
here it still makes a small difference. For example, on stm32, this
saves an extra `ldr` instruction to load `o->vstr` after the memcpy()
returns.
The string length being longer than the allowed qstr length can happen in
many locations, for example in the parser with very long variable names.
Without an explicit check that the length is within range (as done in this
patch) the code would exhibit crashes and strange behaviour with truncated
strings.
This improves performance of running python code by 34%, based
on the "pystone" benchmark on metro m4 express at 5000 passes
(1127.65 -> 1521.6 passes/second).
In addition, by instrumenting the tick function and monitoring on an
oscilloscope, the time actually spent in run_background_tasks() on
the metro m4 decreases from average 43% to 0.5%. (however, there's
some additional overhead that is moved around and not accounted for
in that "0.5%" figure, each time supervisor_run_background_tasks_if_tick
is called but no tick has occurred)
On the CPB, it increases pystone from 633 to 769, a smaller percentage
increase of 21%. I did not measure the time actually spent in
run_background_tasks() on CPB.
Testing performed: on metro m4 and CPB, ran pystone adapted from python3.4
(change time.time to time.monotonic for sub-second resolution)
Besides running a 5000 pass test, I also ran a 50-pass test while
scoping how long an output pin was set. Average: 34.59ms or 1445/s on m4,
67.61ms or 739/s on CPB, both matching the other pystone result reasonably
well.
```python
import pystone
import board
import digitalio
import time

d = digitalio.DigitalInOut(board.D13)
d.direction = digitalio.Direction.OUTPUT
while True:
    d.value = 0
    time.sleep(.01)
    d.value = 1
    pystone.main(50)
```
This code is shared by most ports, except that not all the #ifdefs
inside the tick function were present in all ports. This mostly would
have broken gamepad tick support on non-samd ports.
The "ms32" and "ms64" variants of the tick functions are introduced
because there is no 64-bit atomic read. Disabling interrupts avoids
a low probability bug where milliseconds could be off by ~49.5 days
once every ~49.5 days (2^32 ms).
Avoiding disabling interrupts when only the low 32 bits are needed is a minor
optimization.
Testing performed: on metro m4 express, USB still works and
time.monotonic_ns() still counts up
To benefit from gcc's "once-only headers" implementation, the
"wrapper-#ifndef" must be the first non-comment part of the file,
according to the manual for various gcc/cpp versions.
This commit adds a sys.implementation.mpy entry when the system supports
importing .mpy files. This entry is a 16-bit integer which encodes two
bytes of information from the header of .mpy files that are supported by
the system being run: the second and third bytes, i.e. the .mpy version,
and the flags and native architecture. This allows code to determine the
supported .mpy format dynamically, and also lets the user find it out by
inspecting this value. It's further possible to dynamically detect if the system
supports importing .mpy files by `hasattr(sys.implementation, 'mpy')`.
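A sketch of how code might use this (the exact bit layout assumed here is: version in the low byte, flags/architecture in the next byte):
```python
import sys

if hasattr(sys.implementation, 'mpy'):
    sys_mpy = sys.implementation.mpy
    print('supported .mpy version:', sys_mpy & 0xFF)        # second header byte
    print('flags/native arch byte:', (sys_mpy >> 8) & 0xFF)  # third header byte
else:
    print('this build cannot import .mpy files')
```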
Replace the is_running field with a tri-state variable to indicate
running/not-running/pending-exception.
Update tests to cover the various cases.
This allows cancellation in uasyncio even if the coroutine hasn't been
executed yet. Fixes issue #5242.
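A sketch of the newly supported case, written against the current uasyncio API (which may postdate this change):
```python
import uasyncio as asyncio

async def worker():
    await asyncio.sleep(10)

async def main():
    t = asyncio.create_task(worker())
    t.cancel()              # the coroutine has not started running yet
    await asyncio.sleep(0)  # let the pending cancellation be delivered

asyncio.run(main())
```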
This wasn't necessary as the wrapped function already has a reference to
its globals. But it had a dual purpose of tracking whether the function
was currently running, so replace it with a bool.
runtime0.h is part of the MicroPython ABI so it's simpler if it's
independent of config options, like MICROPY_PY_REVERSE_SPECIAL_METHODS.
What's effectively done here is to move MP_BINARY_OP_DIVMOD and
MP_BINARY_OP_CONTAINS up in the enum, then remove the #if
MICROPY_PY_REVERSE_SPECIAL_METHODS conditional.
Without this change .mpy files would need to have a feature flag for
MICROPY_PY_REVERSE_SPECIAL_METHODS (when embedding native code that uses
this enum).
This commit has no effect when MICROPY_PY_REVERSE_SPECIAL_METHODS is
disabled. With this option enabled this commit reduces code size by about
60 bytes.
For consistency with "umachine". Now that weak links are enabled
by default for built-in modules, this should be a no-op, but allows
extension of the bluetooth module by user code.
Also move registration of ubluetooth to objmodule rather than
port-specific.
This commit implements automatic module weak links for all built-in
modules, by searching for "ufoo" in the built-in module list if "foo"
cannot be found. This means that all modules named "ufoo" are always
available as "foo". Also, a port can no longer add any other weak links,
which makes strict the definition of a weak link.
It saves some code size (about 100-200 bytes) on ports that previously had
lots of weak links.
Some changes from the previous behaviour:
- It doesn't intern the non-u module names (eg "foo" is not interned),
which saves code size, but will mean that "import foo" creates a new qstr
(namely "foo") in RAM (unless the importing module is frozen).
- help('modules') no longer lists non-u module names, only the u-variants;
this reduces duplication in the help listing.
Weak links are effectively the same as having a set of symbolic links on
the filesystem that is searched last. So an "import foo" will search
built-in modules first, then all paths in sys.path, then weak links last,
importing "ufoo" if it exists. Thus a file called "foo.py" somewhere in
sys.path will still have precedence over the weak link of "foo" to "ufoo".
See issues: #1740, #4449, #5229, #5241.
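As an illustration of that lookup order (assuming no "json" module exists on sys.path):
```python
import ujson          # the built-in module, always available under its u-name
import json           # resolved via the automatic weak link to "ujson"
print(json is ujson)  # True when the weak link was used
```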
This PR refines the _bleio API. It was originally motivated by
the addition of a new CircuitPython service that enables reading
and modifying files on the device. Moving the BLE lifecycle outside
of the VM motivated a number of changes to remove heap allocations
in some APIs.
It also motivated unifying connection initiation to the Adapter class
rather than the Central and Peripheral classes which have been removed.
Adapter now handles the GAP portion of BLE including advertising, which
has moved but is largely unchanged, and scanning, which has been enhanced
to return an iterator of filtered results.
Once a connection is created (either by us (aka Central) or a remote
device (aka Peripheral)) it is represented by a new Connection class.
This class knows the current connection state and can discover and
instantiate remote Services along with their Characteristics and
Descriptors.
Relates to #586
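A sketch of the unified Adapter-based scanning described above; argument and attribute names follow the current _bleio API and are assumptions here:
```python
import _bleio

adapter = _bleio.adapter
try:
    # start_scan returns an iterator of filtered ScanEntry results
    for entry in adapter.start_scan(timeout=5, minimum_rssi=-80):
        print(entry.address, entry.rssi)
finally:
    adapter.stop_scan()
```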
When loading a manifest file, e.g. by include(), it will chdir first to the
directory of that manifest. This means that all file operations within a
manifest are relative to that manifest's location.
As a consequence of this, additional environment variables are needed to
find absolute paths, so the following are added: $(MPY_LIB_DIR),
$(PORT_DIR), $(BOARD_DIR). And rename $(MPY) to $(MPY_DIR) to be
consistent.
Existing manifests are updated to match.
This introduces a new build variable FROZEN_MANIFEST which can be set to a
manifest listing (written in Python) that describes the set of files to be
frozen in to the firmware.
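A sketch of what such a manifest might contain (the paths are illustrative only):
```python
# Freeze the port's bundled Python modules.
freeze("$(PORT_DIR)/modules")
# Pull in another manifest; relative paths inside it resolve against its own directory.
include("$(BOARD_DIR)/manifest_extra.py")
# Freeze a single file out of micropython-lib.
freeze("$(MPY_LIB_DIR)/urequests", "urequests.py")
```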
Instead of encoding 4 zero bytes as placeholders for the simple_name and
source_file qstrs, and storing the qstrs after the bytecode, store the
qstrs at the location of these 4 bytes. This saves 4 bytes per bytecode
function stored in a .mpy file (for example lcd160cr.mpy drops by 232
bytes, 4x 58 functions). And resulting code size is slightly reduced on
ports that use this feature.
If kw_args is NULL then memcpy() gets a NULL source argument.
This is undefined behavior under the C standard, even if 0 bytes
are being copied.
This problem was found using clang 7's scan-build static analyzer.
Left shift of negative numbers is undefined in the "C" standard. Multiplying
by 128 has the intended effect (in the absence of integer overflow, anyway),
can be implemented using the same shift instruction, but does not invoke
undefined behavior.
This problem was found using clang 7's scan-build static analyzer.
The remaining assignment was added in upstream micropython; the
deleted assignment was added in circuitpython as part of the long-lived
object area feature. During the merge, the redundant assignment
was not removed.
(since collected is a local variable and no pointers to it escape,
it doesn't seem that placing the assignment before or after GC_ENTER()
can matter)
This diagnostic was found by clang 7's scan-build static analyzer.
Left shift of negative numbers is undefined in the "C" standard. Multiplying
by 256 has the intended effect (in the absence of integer overflow, anyway),
can be implemented using the same shift instruction, but does not invoke
undefined behavior.
This problem was found using clang 7's scan-build static analyzer.
In which case place the native function prelude in a bytes object, linked
from the const_table of that function. An architecture should define
N_PRELUDE_AS_BYTES_OBJ to 1 before including py/emitnative.c to emit
correct machine code, then enable MICROPY_EMIT_NATIVE_PRELUDE_AS_BYTES_OBJ
so the runtime can correctly handle the prelude being in a bytes object.
Such that args/return regs for the parent are different to args/return regs
for child calls. For an architecture to use this feature it should define
the REG_PARENT_xxx macros before including py/emitnative.c.
Prior to this commit, when unwinding through an active finally the stack
was not being correctly popped/folded, which resulted in the VM crashing
for complicated unwinding of nested finallys.
This should be fixed with this commit, and more tests for return/break/
continue within a finally have been added to exercise this.
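One example of the kind of construct the new tests exercise (a return inside nested finallys):
```python
def f():
    try:
        try:
            return 1
        finally:
            print("inner finally")
    finally:
        print("outer finally")

print(f())  # prints "inner finally", "outer finally", then 1
```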
As of 7d58a197cf, `NULL` no longer needs to be
listed here, because MP_QSTRnull took over its role. This entry was
preventing the use of MP_QSTR_NULL to mean the string "NULL" (although this
is not currently used).
A blacklist should not be needed because it should be possible to intern
all strings.
Fixes issue #5140.
This check follows CPython's behaviour, because 'import *' always populates
the globals with the imported names, not locals.
Since it's safe to do this (doesn't lead to a crash or undefined behaviour)
the check is only enabled for MICROPY_CPYTHON_COMPAT.
Fixes issue #5121.
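For reference, the construct that the check rejects (with MICROPY_CPYTHON_COMPAT enabled it now raises a SyntaxError, as CPython does):
```python
def f():
    from math import *  # SyntaxError: import * not at module level
```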
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information. This part of the prelude now begins with a
single variable length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section". After decoding this variable unsigned
integer it's possible to skip over one or both of these sections very
easily.
This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
From the beginning of this project the RAISE_VARARGS opcode was named and
implemented following CPython, where it has an argument (to the opcode)
counting how many args the raise takes:
raise # 0 args (re-raise previous exception)
raise exc # 1 arg
raise exc from exc2 # 2 args (chained raise)
In the bytecode this operation therefore takes 2 bytes, one for
RAISE_VARARGS and one for the number of args.
This patch splits this opcode into 3, where each is now a single byte.
This reduces bytecode size by 1 byte for each use of raise. Every byte
counts! It also has the benefit of reducing code size (on all ports except
nanbox).
To make progress towards MicroPython supporting Python 3.5, adding the
matmul operator is important because it's a really "low level" part of the
language, being a new token and modifications to the grammar.
It doesn't make sense to make it configurable because 1) it would make the
grammar and lexer complicated/messy; 2) no other operators are
configurable; 3) it's not a feature that can be "dynamically plugged in"
via an import.
And matmul can be useful as a general purpose user-defined operator, it
doesn't have to be just for numpy use.
Based on work done by Jim Mussared.
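A sketch of a user-defined use of the operator (the class is illustrative):
```python
class Vec:
    def __init__(self, xs):
        self.xs = xs

    def __matmul__(self, other):
        # dot product as an example of a user-defined @ operator
        return sum(a * b for a, b in zip(self.xs, other.xs))

print(Vec([1, 2, 3]) @ Vec([4, 5, 6]))  # 32
```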
While finding sources of clicks and buzzes in nrf i2sout, I identified
this site as one which could be long running. Reproducer code was to
play a 22.05kHz sample and repeatedly print `os.listdir('')`
Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of this
opcode would be between 0x01 and 0x0f and be correctly loaded/saved/frozen
simply as an undefined opcode).
This patch fixes this bug by correctly accounting for the additional byte.
With this patch alignment is done relative to the start of the buffer that
is being unpacked, not the raw pointer value, as per CPython.
Fixes issue #3314.
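A small illustration of the intended (CPython-matching) behaviour; with native alignment the padding before 'I' is computed from the offset within the buffer, not from the buffer's raw address:
```python
import struct

buf = bytes(range(8))
# 'x' consumes one byte, then 'I' is aligned to offset 4 within the buffer
print(struct.unpack_from('xI', buf, 0))
```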
This commit adds support for sys.settrace, allowing Python handlers to be
installed to trace execution of Python code. The interface follows CPython
as closely as possible. The feature is disabled by default and can be
enabled via MICROPY_PY_SYS_SETTRACE.
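A sketch of using it (requires a build with MICROPY_PY_SYS_SETTRACE enabled):
```python
import sys

def trace(frame, event, arg):
    print(event, frame.f_lineno)
    return trace

def f():
    x = 1
    return x + 1

sys.settrace(trace)
f()
sys.settrace(None)
```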
Prior to this patch the line number for a lambda would be "line 1" if the
body of the lambda contained only a simple expression (with no line number
stored in the parse node). Now the line number is always reported
correctly.
mp_compile no longer takes an emit_opt argument, rather this setting is now
provided by the global default_emit_opt variable.
Now, when -X emit=native is passed as a command-line option, the emitter
will be set for all compiled modules (including imports), not just the
top-level script.
In the future there could be a way to also set this variable from a script.
Fixes issue #4267.
With this patch exceptions that are re-raised have improved tracebacks
(less confusing, match CPython), and it makes re-raise slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Also general VM performance is not measurably affected.
Partially fixes issue #2928.
With this patch exception tracebacks that go through a finally are improved
(less confusing, match CPython), and it makes finally's slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Partially fixes issue #2928.