This changes many files to unify `board.h` across ports. It adds
`board_deinit` when CIRCUITPY_ALARM is set. `main.c` uses it to
deinit the board before deep sleeping (even when pretending to).
Deep sleep is now a two-step process for the port. First, the
port should prepare to deep sleep based on the given alarms. It
should set alarms for both deep and pretend sleep. In particular,
the pretend versions should be set immediately so that we don't
miss an alarm as we shut down. These alarms should also wake from
`port_idle_until_interrupt`, which is used when pretending to deep
sleep.
Second, when real deep sleeping, `alarm_enter_deep_sleep` is called.
The port should set any alarms it didn't set during prepare, based
on data it saved internally during prepare.
ESP32-S2 sleep is reorganized a bit to colocate more logic with
TimeAlarm. This will help it scale to more alarm types.
Fixes #3786
This allows calls to `allocate_memory()` while the VM is running; the memory is then allocated from the GC heap (unless there is a suitable hole among the supervisor allocations). When the VM exits and the GC heap is freed, the allocation is moved to the bottom of the former GC heap and transformed into a proper supervisor allocation. Existing movable allocations are also moved to defragment the supervisor heap and ensure that the next VM run gets as much memory as possible for the GC heap.
By itself this breaks terminalio because it violates the assumption that supervisor_display_move_memory() still has access to an undisturbed heap to copy the tilegrid from. It will work in many cases, but if you're unlucky you will get garbled terminal contents after exiting from the vm run that created the display. This will be fixed in the following commit, which is separate to simplify review.
`pow(a, b, c)` can compute `(a ** b) % c` efficiently (in time and memory).
This can be useful for extremely specific applications, like implementing
the RSA cryptosystem. For typical uses of CircuitPython, this is not an
important feature. A survey of the bundle and learn system didn't find
any uses.
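For illustration, the three-argument form gives the same answer as the
naive expression without ever materializing the huge intermediate value:
```
>>> pow(7, 128, 13)
3
>>> (7 ** 128) % 13
3
```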
Disable it on M0 builds so that we can fit in needed upgrades to the USB
stack.
Disable certain classes of diagnostics when building ulab. We should
submit patches upstream to (A) fix these errors and (B) upgrade their
CI so that the problems are caught before we want to integrate with
CircuitPython, but not right now.
I like to use local makefile overrides in the file GNUmakefile
(or, on case-sensitive systems, makefile) to set compilation choices.
However, writing
```
TRANSLATION := de_DE
include Makefile
```
did not work, because py.mk would override the TRANSLATION := specified
in an earlier part of the makefiles (though not one given on the command line).
By using ?= instead of := the local makefile override works, and when
TRANSLATION is not specified everything continues to work as before.
This ensures that only the translate("") alternative that will be used
is seen after preprocessing. Improves the quality of the Huffman encoding
and reduces binary size slightly.
Also makes one "enhanced" error message occur only with ERROR_REPORTING_DETAILED:
Instead of the word-for-word python3 error message
"Type object has no attribute '%q'", the message will be
"'type' object has no attribute '%q'". Also reduces binary size.
(that's rolled into this commit as it was right next to a change to
use the preprocessor for MICROPY_ERROR_REPORTING)
Note that the odd semicolon after "value_error:" in parsenum.c is necessary
due to a detail of the C grammar, in which a declaration cannot follow
a label directly.
This reclaims over 1kB of flash space by simplifying certain exception
messages. e.g., it will no longer display the requested/actual length
when a fixed list/tuple of N items is needed:
```
if (MICROPY_ERROR_REPORTING == MICROPY_ERROR_REPORTING_TERSE) {
    mp_raise_ValueError(translate("tuple/list has wrong length"));
} else {
    mp_raise_ValueError_varg(translate("requested length %d but object has length %d"),
        (int)len, (int)seq_len);
}
```
Other chip families including samd51 keep their current error reporting
capabilities.
* No weak link for modules. It only impacts _os and _time and is
already disabled for non-full builds.
* Turn off PA00 and PA01 because they are the crystal on the Metro
M0 Express.
* Change ejected default to false to move it to BSS. It is set on
USB connection anyway.
* Set sinc_filter to const. Doesn't help flash but keeps it out of
RAM.
This gets a further speedup of about 2.5s (12s -> 9.5s elapsed build time)
for stm32f405_feather
For what are probably historical reasons, the qstr process involves
preprocessing a large number of source files into a single "qstr.i.last"
file, then reading this and splitting it into one "qstr" file for each
original source ("*.c") file.
By eliminating the step of writing qstr.i.last as well as making the
regular-expression-matching part be parallelized, build speed is further
improved.
Because the step to build QSTR_DEFS_COLLECTED does not access
qstr.i.last, the path is replaced with "-" in the Makefile.
Rather than simply invoking gcc in preprocessor mode with a list of files, use
a Python script with the (python3) ThreadPoolExecutor to invoke the
preprocessor in parallel.
The amount of concurrency is the number of system CPUs, not the makefile "-j"
parallelism setting, because there is no simple, correct way for a Python
program to cooperate with make's idea of parallelism.
This reduces the build time of stm32f405 feather (a non-LTO build) from 16s to
12s on my 16-thread Ryzen machine.
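A rough sketch of the idea (illustrative names, not the actual build script):
```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

def preprocess(source):
    # -E stops gcc after preprocessing; the result comes back on stdout.
    return subprocess.check_output(["gcc", "-E", source])

def preprocess_all(sources):
    # Size the pool by CPU count rather than make's -j setting, since
    # make's jobserver is not easily shared with a Python process.
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(preprocess, sources))
```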
Some examples of improved compliance with CPython that currently
have divergent behavior in CircuitPython are listed below:
* yield from is not allowed in async methods
```
>>> async def f():
... yield from 'abc'
...
Traceback (most recent call last):
File "<stdin>", line 2, in f
SyntaxError: 'yield from' inside async function
```
* await only works on awaitable expressions
```
>>> async def f():
... await 'not awaitable'
...
>>> f().send(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in f
AttributeError: 'str' object has no attribute '__await__'
```
* only __await__()able expressions are awaitable
This one actually does not work in CircuitPython at all today.
It is how CPython works, though, and pretending `__await__` does not
exist will only bite users who write for both.
```
>>> class c:
... pass
...
>>> def f(self):
... yield
... yield
... return 'f to pay respects'
...
>>> c.__await__ = f # could just as easily have put it on the class but this shows how it's wired
>>> async def g():
... awaitable_thing = c()
... partial = await awaitable_thing
... return 'press ' + partial
...
>>> q = g()
>>> q.send(None)
>>> q.send(None)
>>> q.send(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration: press f to pay respects
```
This adds the `async def` and `await` verbs to valid CircuitPython syntax using the MicroPython implementation.
Consider:
```
>>> class Awaitable:
... def __iter__(self):
... for i in range(3):
... print('awaiting', i)
... yield
... return 42
...
>>> async def wait_for_it():
... a = Awaitable()
... result = await a
... return result
...
>>> task = wait_for_it()
>>> next(task)
awaiting 0
>>> next(task)
awaiting 1
>>> next(task)
awaiting 2
>>> next(task)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration: 42
>>>
```
and more excitingly:
```
>>> async def it_awaits_a_subtask():
... value = await wait_for_it()
... print('twice as good', value * 2)
...
>>> task = it_awaits_a_subtask()
>>> next(task)
awaiting 0
>>> next(task)
awaiting 1
>>> next(task)
awaiting 2
>>> next(task)
twice as good 84
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration:
```
Note that this is just syntax plumbing, not an all-encompassing implementation of an asynchronous task scheduler or asynchronous hardware APIs.
uasyncio might be a good module to bring in, or something else - but the standard Python syntax does not _strictly require_ deeper hardware
support.
MicroPython implements the `await` verb via the `__iter__` function rather than `__await__`. That's okay.
The syntax being present will enable users to write clean and expressive multi-step state machines that are written serially and interleaved
according to the rules provided by those users.
Given that this does not include an all-encompassing C scheduler, this is expected to be an advanced functionality until the community settles
on the future of deep hardware support for async/await in CircuitPython. Users will implement yield-based schedulers and tasks wrapping
synchronous hardware APIs with polling to avoid blocking, while their application business logic gets simple `await` statements.
This already begins obscuring things, because now there are two sets of
shared-module functions for manipulating the same structure, e.g.,
common_hal_canio_remote_transmission_request_get_id and
common_hal_canio_message_get_id
New contributor @mdroberts1243 encountered an interesting problem in
which the argument they had named "column_underscore_and_page_addressing"
simply couldn't be used; I discovered that internally this had been
transformed into "column_underscore∧page_addressing", because QSTR
makes _ENTITY_ stand for the same thing as &ENTITY; does in HTML.
This might be nice for some things, but we don't want it here!
I was unable to find a sensible way to "escape" and prevent this entity
coding, so instead I ripped out support for the _and_ and _or_ escapes.
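For the curious, the '∧' comes straight from Python's HTML entity table:
```
>>> from html.entities import name2codepoint
>>> chr(name2codepoint['and'])
'∧'
```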
Tested & working:
* Send standard packets
* Receive standard packets (1 FIFO, no filter)
Interoperation between SAM E54 Xplained running this tree and
MicroPython running on STM32F405 Feather with an external
transceiver was also tested.
Many other aspects of a full implementation are not yet present,
such as error detection and recovery.
Discord user Folknology encountered a problem building with Python 3.6.9,
`TypeError: ord() expected a character, but string of length 0 found`.
I was able to reproduce the problem using Python 3.5*, and discovered that
the meaning of the regular expression `"|."` had changed in 3.7. Before:
```
>>> [m.group(0) for m in re.finditer("|.", "hello")]
['', '', '', '', '', '']
```
After:
```
>>> [m.group(0) for m in re.finditer("|.", "hello")]
['', 'h', '', 'e', '', 'l', '', 'l', '', 'o', '']
```
Check if `words` is empty and if so use `"."` as the regular expression
instead. This gives the same result on both versions:
```
['h', 'e', 'l', 'l', 'o']
```
and fixes the generation of the huffman dictionary.
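A minimal sketch of the fix (names are illustrative), assuming `words`
holds the dictionary entries chosen by the compressor:
```python
import re

def tokenize(text, words):
    # With empty `words` the pattern would degenerate to "|.", whose
    # meaning changed between Python 3.6 and 3.7; fall back to ".".
    if words:
        pattern = "|".join(re.escape(w) for w in words) + "|."
    else:
        pattern = "."
    return [m.group(0) for m in re.finditer(pattern, text)]

print(tokenize("hello", []))  # ['h', 'e', 'l', 'l', 'o'] on all versions
```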
Folknology verified that this fix worked for them.
* I could easily install 3.5 but not 3.6; 3.5 reproduced the same problem.
This construct (which I added without sufficient testing,
apparently) is only supported in Python 3.7 and newer. Make it
optional so that this script works on older Python versions. This
means that if you have a system with a non-UTF-8 encoding you will
need to use Python 3.7 or newer.
In particular, this affects a problem building circuitpython in
github's ubuntu-18.04 virtual environment when Python 3.7 is not
explicitly installed. Cookie-cuttered libraries call for Python
3.6:
```
- name: Set up Python 3.6
uses: actions/setup-python@v1
with:
python-version: 3.6
```
Since CircuitPython's own build calls for 3.8, this problem was not
detected.
This problem was also encountered by discord user mdroberts1243.
The failure I encountered was here:
https://github.com/jepler/Jepler_CircuitPython_udecimal/runs/1138045020?check_suite_focus=true
... while my step of "clone and build circuitpython unix port" is
unusual, I think the same problem would have affected "build assets"
if that step had been reached.
Most users and the CI system are running in configurations where Python
configures stdout and stderr in UTF-8 mode. However, Windows is different,
setting values like CP1252. This led to a build failure on Windows, because
makeqstrdata printed Unicode strings to its stdout, expecting them to be
encoded as UTF-8.
This script is writing (stdout) to a compiler input file and potentially
printing messages (stderr) to a log or console. Explicitly configure stdout to
use utf-8 to get consistent behavior on all platforms, and configure stderr so
that if any log/diagnostic messages are printed that cannot be displayed
correctly, they are still displayed instead of creating an error while trying
to print the diagnostic information.
I considered setting the encodings both to ascii, but this would just be
occasionally inconvenient to developers like me who want to show diagnostic
info on stderr and in comments while working with the compression code.
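Concretely, the change amounts to something like this (the hasattr guard
reflects the Python-3.6 compatibility fix above):
```python
import sys

# reconfigure() exists on text streams only in Python 3.7+, so guard it.
if hasattr(sys.stdout, "reconfigure"):
    # The compiler input on stdout must be UTF-8 on every platform.
    sys.stdout.reconfigure(encoding="utf-8")
    # Diagnostics on stderr should degrade gracefully, not raise.
    sys.stderr.reconfigure(errors="backslashreplace")
```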
Closes: #3408
While checking whether we can enable -Wimplicit-fallthrough, I encountered
a diagnostic in mp_binary_set_val_array_from_int which led to discovering
the following bug:
```
>>> struct.pack("xb", 3)
b'\x03\x03'
```
That is, the next value (3) was used as the value of a padding byte, while
standard Python always fills "x" bytes with zeros. I initially thought
this had to do with the unintentional fallthrough, but it doesn't.
Instead, this code would relate to an array.array with a typecode of
padding ('x'), which is ALSO not desktop Python compliant:
```
>>> array.array('x', (1, 2, 3))
array('x', [1, 0, 0])
```
Possibly this is dead code that used to be shared between struct-setting
and array-setting, but it no longer is.
I also discovered that the argument list lengths for struct.pack
and struct.pack_into were not checked, and that the length of binary data
passed to array.array was not checked to be a multiple of the element
size.
I have corrected all of these to conform more closely to standard Python
and revised some tests where necessary. Some tests for micropython-specific
behavior that does not conform to standard Python and is not present
in CircuitPython were deleted outright.
Massive savings. Thanks so much @ciscorn for providing the initial
code for choosing the dictionary.
This adds a bit of time to the build, both to find the dictionary
but also because (for reasons I don't fully understand), the binary
search in the compress() function no longer worked and had to be
replaced with a linear search.
I think this is because the intended invariant is that for codebook
entries that encode to the same number of bits, the entries are ordered
in ascending value. However, I mis-placed the transition from "words"
to "byte/char values" so the codebook entries for words are in word-order
rather than their code order.
Because this price is only paid at build time, I didn't care to determine
exactly where the correct fix was.
I also commented out a line that produced the "estimated total memory size"
-- at least on the unix build with TRANSLATION=ja, this led to a build-time
KeyError trying to compute the codebook size for all the strings.
I think this occurs because some single unicode code point ('ァ') is
no longer present as itself in the compressed strings, due to always
being replaced by a word.
As promised, this seems to save hundreds of bytes in the German translation
on the trinket m0.
Testing performed:
- built trinket_m0 in several languages
- built and ran unix port in several languages (en, de_DE, ja) and ran
simple error-producing code like ./micropython -c '1/0'
Compress common unicode bigrams by making code points in the range
0x80 - 0xbf (inclusive) represent them. Then, they can be greedily
encoded and the substituted code points handled by the existing Huffman
compression. Normally code points in the range 0x80-0xbf are not used
in Unicode, so we stake our own claim. Using the more arguably correct
"Private Use Area" (PUA) would mean that for scripts that only use
code points under 256 we would use more memory for the "values" table.
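A toy sketch of the scheme (illustrative only, not the actual
makeqstrdata code):
```python
from collections import Counter

def choose_bigrams(corpus, n=32):
    # Count all adjacent pairs and keep the n most frequent.
    counts = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    return [pair for pair, _ in counts.most_common(n)]

def substitute(text, bigrams):
    # Greedily replace known pairs with code points from 0x80 up,
    # leaving everything else for the Huffman coder.
    out, i = [], 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in bigrams:
            out.append(chr(0x80 + bigrams.index(pair)))
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)
```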
"Bigram" means "two letters", and is also sometimes called a "digram".
It's nothing to do with "big RAM". For our purposes, a bigram represents
two successive unicode code points, so for instance in our build on
trinket m0 for english the most frequent are:
['t ', 'e ', 'in', 'd ', ...].
The bigrams are selected based on frequency in the corpus, but the
selection is not necessarily optimal, for these reasons I can think of:
* Suppose the corpus was just "tea" repeated 100 times. The
top bigrams would be "te" and "ea". However, because of
overlap, "ea" could never be used. Thus, some bigrams might actually
waste space
* I _assume_ this has to be why e.g., bigram 0x86 "s " is more
frequent than bigram 0x85 " a" in English for Trinket M0, because
sequences like "can't add" would get the "t " digram and then
be unable to use the " a" digram.
* And generally, if a bigram is frequent then so are its constituents.
Say that "i" and "n" both encode to just 5 or 6 bits, then the huffman
code for "in" had better compress to 10 or fewer bits or it's a net
loss!
* I checked though! "i" is 5 bits, "n" is 6 bits (lucky guess)
but the bigram 0x83 is also just 6 bits, so this one is a win of
5 bits for every "in" minus overhead. Yay, this round goes to team
compression.
* On the other hand, the least frequent bigram 0x9d " n" is 10 bits
long and its constituent code points are 4+6 bits so there's no
savings, but there is the cost of the table entry.
* and somehow 0x9f 'an' is never used at all!
With or without accounting for overlaps, there is some optimum number
of bigrams. Adding one more bigram uses at least 2 bytes (for the
entry in the bigram table; 4 bytes if code points >255 are in the
source text) and also needs a slot in the Huffman dictionary, so
adding bigrams beyond the optimum number makes compression worse again.
If it's an improvement, the fact that it's not guaranteed optimal
doesn't seem to matter too much. It just leaves a little more fruit
for the next sweep to pick up. Perhaps try adding the most frequent
bigram not yet present, until it doesn't improve compression overall.
Right now, de_DE is again the "fullest" build on trinket_m0. (It's
reclaimed that spot from the ja translation somehow) This change saves
104 bytes there, increasing free space about 6.8%. In the larger
(but not critically full) pyportal build it saves 324 bytes.
The specific number of bigrams used (32) was chosen as it is the max
number that fit within the 0x80..0xbf range. Larger tables would
require the use of 16 bit code points in the de_DE build, losing savings
overall.
(Side note: The most frequent letters in English have been said
to be: ETA OIN SHRDLU; but we have UAC EIL MOPRST in our corpus)
A crash like the following occurs in the unix port:
```
Program received signal SIGSEGV, Segmentation fault.
0x00005555555a2d7a in mp_obj_module_set_globals (self_in=0x55555562c860 <ulab_user_cmodule>, globals=0x55555562c840 <mp_module_ulab_globals>) at ../../py/objmodule.c:145
145 self->globals = globals;
(gdb) up
#1 0x00005555555b2781 in mp_builtin___import__ (n_args=5, args=0x7fffffffdbb0) at ../../py/builtinimport.c:496
496 mp_obj_module_set_globals(outer_module_obj,
(gdb)
#2 0x00005555555940c9 in mp_import_name (name=824, fromlist=0x555555621f10 <mp_const_none_obj>, level=0x1) at ../../py/runtime.c:1392
1392 return mp_builtin___import__(5, args);
```
I don't understand how it doesn't happen on the embedded ports, because
the module object should reside in ROM and the assignment of self->globals
should trigger a Hard Fault.
By checking VERIFY_PTR, we know that the pointed-to data is on the heap
so we can do things like mutate it.
In #2689, hitting ctrl-c during the printing of an object with a lot of sub-objects could cause the screen to stop updating (without showing a KeyboardInterrupt). This makes the printing of such objects actually interruptible, and also correctly handles the KeyboardInterrupt:
```
>>> l = ["a" * 100] * 200
>>> l
['aaaaaaaaaaaaaaaaaaaaaa...aaaaaaaaaaa', Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyboardInterrupt:
>>>
```
The font is missing many characters and the build needs the space.
We can optimize font storage when we get a good font.
The serial output will work as usual.
This check as implemented is misleading, because it compares the
compressed size in bytes (including the length indication) with the source
string length in Unicode code points. For English this is approximately
fair, but for Japanese this is quite unfair and produces an excess of
"increased length" messages.
This message might have existed for one of two reasons:
* to alert to an improperly functioning huffman compression
* to call attention to a need for a "string is stored uncompressed" case
We know by now that the huffman compression is functioning as designed and
effective in general.
Just to be on the safe side, I did some back-of-the-envelope estimates.
I considered these three replacements for "the true source string size, in bytes":
```
decompressed_len_utf8 = len(decompressed.encode('utf-8'))
decompressed_len_utf16 = len(decompressed.encode('utf-16be'))
decompressed_len_bitsize = ((1+len(decompressed)) * math.ceil(math.log(1+len(values), 2)) + 7) // 8
```
The third counts how many bits each character requires (fewer than 128
characters in the source character set = 7, fewer than 256 = 8, fewer than 512
= 9, etc, adding a string-terminating value) and is in some way representative
of the best way we would be able to store "uncompressed strings". The Japanese
translation (largest as of writing) has just a few strings which increase by
this metric. However, the amount of loss due to expansion in those cases is
outweighed by the cost of adding 1 bit per string to indicate whether it's
compressed or not. For instance, in the BOARD=trinket_m0 TRANSLATION=ja build
the loss is 47 bytes over 300 strings. Adding 1 bit to each of 300 strings will
cost about 37 bytes, leaving just 5 Thumb instructions to implement the code to
check and decode "uncompressed" strings in order to break even.
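As a worked example of the third metric: for a 10-code-point string drawn
from 100 distinct values, each code point needs math.ceil(math.log(101, 2)) = 7
bits, so the estimate is ((1+10)*7 + 7)//8 = 10 bytes.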
This is a slight trade-off with code size, in places where a "_varg"
mp_raise variant is now used. The net savings on trinket_m0 is
just 32 bytes.
It also means that the translation will include the original English
text and cannot be translated. These are usually names of Python
types such as int, set, or dict, or special values such as "inf" or
"NaN".
This version
* moves source files to reflect module structure
* adds inline documentation suitable for extract_pyi
* incompatibly moves spectrogram to fft
* incompatibly removes "extras"
There are some remaining markup errors in the specific revision of
extmod/ulab but they do not prevent the doc building process from
completing.
MicroPython's original implementation of __aiter__ was correct for an
earlier (provisional) version of PEP492 (CPython 3.5), where __aiter__ was
an async-def function. But that changed in the final version of PEP492 (in
CPython 3.5.2) where the function was changed to a normal one. See
https://www.python.org/dev/peps/pep-0492/#why-aiter-does-not-return-an-awaitable
See also the note at the end of this subsection in the docs:
https://docs.python.org/3.5/reference/datamodel.html#asynchronous-iterators
And for completeness the BPO: https://bugs.python.org/issue27243
To be consistent with the Python spec as it stands today (and now that
PEP492 is final) this commit changes MicroPython's behaviour to match
CPython: __aiter__ should return an async-iterable object, but is not
itself awaitable.
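For reference, a minimal sketch of the protocol as it stands in CPython
today:
```python
import asyncio

class Counter:
    def __init__(self, n):
        self.i, self.n = 0, n

    def __aiter__(self):           # a plain method now, NOT async-def
        return self

    async def __anext__(self):     # only __anext__ is awaitable
        if self.i >= self.n:
            raise StopAsyncIteration
        self.i += 1
        return self.i

async def main():
    async for value in Counter(3):
        print(value)               # 1, 2, 3

asyncio.run(main())
```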
The relevant tests are updated to match.
See #6267.