Unify USB-related makefile var and C def as CIRCUITPY_USB.
Always define it as 0 or 1, same as all other settings.
USB_AVAILABLE was conditionally defined in supervisor.mk,
but never actually used to #ifdef USB-related code.
Loosely related to #4546
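A minimal sketch of the resulting pattern (the guarded call site here is illustrative, not a specific change from this commit):
```
#if CIRCUITPY_USB
#include "supervisor/usb.h"
#endif

// USB work is compiled in only when CIRCUITPY_USB is 1, exactly like every
// other CIRCUITPY_* feature flag that is always defined to 0 or 1.
void run_background_tasks(void) {
    #if CIRCUITPY_USB
    usb_background();
    #endif
}
```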
The interrupt may have a higher priority than the serial output's
(USB) interrupt and may never make room. This makes prints from
interrupts (like the BLE event calls) best effort for what can be
queued up. The rest of the output will be dropped.
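A hedged, self-contained sketch of that best-effort policy (the buffer and function below are hypothetical stand-ins, not the actual serial output code):
```
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Hypothetical fixed-size TX buffer standing in for the real USB serial queue.
static uint8_t tx_buf[256];
static size_t tx_len;

// Queue only what fits and drop the rest, rather than waiting for the (possibly
// lower-priority) USB interrupt to drain the buffer.
size_t serial_write_best_effort(const uint8_t *data, size_t len) {
    size_t room = sizeof(tx_buf) - tx_len;
    size_t to_write = len < room ? len : room;   // truncate instead of blocking
    memcpy(tx_buf + tx_len, data, to_write);
    tx_len += to_write;
    return to_write;                             // anything beyond this was dropped
}
```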
This allows the compiler to merge strings: e.g. "update",
"difference_update" and "symmetric_difference_update"
will all point to the same memory.
Shaves ~1KB off the image size, and potentially allows
bigger savings if qstr attrs are initialized in qstr_init(),
and not stored in the image.
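A small illustration of the sharing (the pointer arithmetic just shows the resulting layout; in a real build the toolchain's string merging produces it):
```
// One copy of the longest literal...
static const char symmetric_difference_update[] = "symmetric_difference_update";
// ...and the shorter names can become pointers into its tail:
static const char *difference_update = symmetric_difference_update + 10;  // "difference_update"
static const char *update            = symmetric_difference_update + 21;  // "update"
```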
This manifested as incorrect error messages from mpy-cross, like
```
$ mpy-cross doesnotexist.py
OSError: [Errno 2] cno such file/director
```
The remaining bits in `b` must be shifted to the correct position before
entering the loop.
For most (all?) actual builds, compress_max_length_bits was 8 and the
problem went unnoticed.
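A hedged, generic illustration of the missing step (not the literal translate.c code):
```
#include <stdint.h>

// A decoder consumes `header_bits` bits of header, then reads one bit at a time,
// always from the MSB of its working byte. The leftover bits of the partially
// consumed byte must be left-aligned *before* the loop starts; when the header is
// exactly 8 bits there is nothing left over, so a missing shift goes unnoticed.
static int first_bit_after_header(const uint8_t *data, unsigned header_bits) {
    unsigned byte = header_bits / 8;
    unsigned used = header_bits % 8;
    uint8_t b = (uint8_t)(data[byte] << used);   // the up-front shift in question
    return (b & 0x80) != 0;                      // what the decode loop reads next
}
```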
This switches stage2 to C and uses Jinja to change the C code based
on flash settings from https://github.com/adafruit/nvm.toml. It
produces the fastest settings for the given set of external flashes.
Flash size is no longer hard coded so switching flashes with similar
capabilities but different sizes should *just work*.
This PR also places "ITCM" code in RAM to save the XIP cache for
code execution. Further optimization is possible. A blink code.py
still requires a number of flash fetches every blink.
Fixes #4041
Instead of counting words in make, which is slightly awful, notice that
possible_devices is local to external_flash.c, so we can declare the array
with an automatic bound, and then get the count as the element-count
(MP_ARRAY_SIZE) of the array.
Since EXTERNAL_FLASH_DEVICE_COUNT is no longer a global macro, switch
a few sites to using EXTERNAL_FLASH_DEVICES in `#if` checks instead.
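A minimal sketch of the pattern (MP_ARRAY_SIZE is the stock sizeof-ratio macro; the external_flash_device type and the EXTERNAL_FLASH_DEVICES list come from the existing headers and board config):
```
#include "py/misc.h"   // MP_ARRAY_SIZE(a) == sizeof(a) / sizeof((a)[0])

// The initializer supplies the bound, so no make-time word counting is needed;
// EXTERNAL_FLASH_DEVICES expands to the board's comma-separated device entries.
static const external_flash_device possible_devices[] = {EXTERNAL_FLASH_DEVICES};

// Wherever external_flash.c needs the count, the element count of the array works:
static size_t device_count(void) {
    return MP_ARRAY_SIZE(possible_devices);
}
```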
This adds some additional code in mkfs which doesn't seem necessary;
disabling it saves 172 bytes of flash.
Testing performed: Using a Feather M0 Adalogger, checked that
* an sdcard could still be mounted (using adafruit_sdcard)
* os.listdir() of "/" and "/sd" worked
* CIRCUITPY still mounted
This is a first go at it, done by naively replacing all array
operations with corresponding operations on the list. Note that
there are a lot of unnecessary type conversions here. Also, list_pop
has been copied, because it's declared STATIC in py/objlist.h
Since we want to expose the list of group's children to the user,
we should only have the original objects in it, without any other
additional data, and compute the native object as needed.
This saves about 60 bytes (Feather M4 went from 45040 -> 45100 bytes free)
66 bytes of data eliminated, but 6 bytes paid back to initialize the length
field.
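A hedged sketch of the "compute the native object as needed" idea (mp_obj_cast_to_native_base is the stock MicroPython helper; the wrapper function itself is illustrative):
```
#include "py/obj.h"
#include "shared-bindings/displayio/TileGrid.h"

// The children list keeps only the objects the user appended; when the native
// struct is needed (e.g. while rendering a layer), recover it on the spot.
static displayio_tilegrid_t *layer_as_tilegrid(mp_obj_t layer) {
    mp_obj_t native = mp_obj_cast_to_native_base(layer, &displayio_tilegrid_type);
    if (native == MP_OBJ_NULL) {
        return NULL;   // not a TileGrid (perhaps a nested Group)
    }
    return MP_OBJ_TO_PTR(native);
}
```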
The RP2040 is a new microcontroller from Raspberry Pi that features
two Cortex-M0+ cores and eight PIO state machines that are good for
crunching lots of data. It has 264k RAM and a built-in UF2
bootloader too.
Datasheet: https://pico.raspberrypi.org/files/rp2040_datasheet.pdf
Some ports need an extra operation to ensure that the main task is
awoken so that a queued background task will execute during an ongoing
light sleep.
This removes the need to enable supervisor ticks while I2SOut is operating.
Closes: #3952
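A hedged sketch of the shape of that extra operation (every name below is a hypothetical stand-in, not the actual supervisor API):
```
#include <stddef.h>

// Everything here is a hypothetical rendering of the idea.
typedef void (*background_task_fn)(void *arg);

typedef struct {
    background_task_fn fn;
    void *arg;
} background_task_t;

static background_task_t pending;     // stand-in for the real background queue

// Per-port hook: empty on ports whose light-sleep wait already returns when work
// is queued; on others it triggers the event the wait is blocked on.
static void port_wake_main_task(void) {
}

static void queue_background_task(background_task_fn fn, void *arg) {
    pending.fn = fn;
    pending.arg = arg;
    port_wake_main_task();            // ensure an ongoing light sleep notices the work
}
```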
This changes lots of files to unify `board.h` across ports. It adds
`board_deinit` when CIRCUITPY_ALARM is set. `main.c` uses it to
deinit the board before deep sleeping (even when pretending.)
Deep sleep is now a two step process for the port. First, the
port should prepare to deep sleep based on the given alarms. It
should set alarms for both deep and pretend sleep. In particular,
the pretend versions should be set immediately so that we don't
miss an alarm as we shut down. These alarms should also wake from
`port_idle_until_interrupt` which is used when pretending to deep
sleep.
Second, when real deep sleeping, `alarm_enter_deep_sleep` is called.
The port should set any alarms it didn't during prepare based on
data it saved internally during prepare.
ESP32-S2 sleep is a bit reorganized to locate more logic with
TimeAlarm. This will help it scale to more alarm types.
Fixes #3786
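A hedged outline of the two-step flow from the supervisor's point of view (only board_deinit, port_idle_until_interrupt and alarm_enter_deep_sleep are names from the text; the prepare entry point and glue are illustrative):
```
#include <stdbool.h>
#include <stddef.h>
#include "py/obj.h"

// Hypothetical prototypes for the glue; only the last three are symbols named above.
void alarm_prepare_for_sleep(const mp_obj_t *alarms, size_t n_alarms);
void board_deinit(void);
void port_idle_until_interrupt(void);
void alarm_enter_deep_sleep(void);

static void go_to_sleep(const mp_obj_t *alarms, size_t n_alarms, bool pretend) {
    // Step 1: the port sets deep-sleep alarms and, immediately, their pretend-sleep
    // counterparts so nothing is missed while shutting down.
    alarm_prepare_for_sleep(alarms, n_alarms);
    board_deinit();
    if (pretend) {
        port_idle_until_interrupt();   // the alarms set during prepare wake this wait
    } else {
        alarm_enter_deep_sleep();      // step 2: set any deferred alarms from state
                                       // saved during prepare, then really deep sleep
    }
}
```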
This allows calls to `allocate_memory()` while the VM is running, it will then allocate from the GC heap (unless there is a suitable hole among the supervisor allocations), and when the VM exits and the GC heap is freed, the allocation will be moved to the bottom of the former GC heap and transformed into a proper supervisor allocation. Existing movable allocations will also be moved to defragment the supervisor heap and ensure that the next VM run gets as much memory as possible for the GC heap.
By itself this breaks terminalio because it violates the assumption that supervisor_display_move_memory() still has access to an undisturbed heap to copy the tilegrid from. It will work in many cases, but if you're unlucky you will get garbled terminal contents after exiting from the vm run that created the display. This will be fixed in the following commit, which is separate to simplify review.
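A hedged usage sketch (the allocate_memory parameter list is assumed from the description above, not taken from supervisor/memory.h):
```
#include "supervisor/memory.h"

// Assumed parameters: length in bytes, allocate at the high end or not, and whether
// the supervisor may move the block later.
static supervisor_allocation *grab_buffer(uint32_t length) {
    // While the VM is running this may be satisfied from the GC heap (or from a
    // hole among the existing supervisor allocations).
    return allocate_memory(length, false, true);
}

// After the VM exits, a movable allocation may have been relocated into the
// supervisor heap, so always go through ->ptr instead of caching the raw pointer.
static void *buffer_data(supervisor_allocation *alloc) {
    return alloc->ptr;
}
```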
This makes a more useful display on the portrait magtag, allowing 21
characters across instead of just 18. There are 20 full rows of text,
instead of 21. The total number of characters increases slightly from 378
to 420.
For comparison, the Commodore VIC 20 had 23 rows of 22 characters for a
total of 506 characters. :-P
* No weak link for modules. It only impacts _os and _time and is
already disabled for non-full builds.
* Turn off PA00 and PA01 because they are the crystal on the Metro
M0 Express.
* Change ejected default to false to move it to BSS. It is set on
USB connection anyway.
* Set sinc_filter to const. Doesn't help flash but keeps it out of
RAM.
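A small illustration of the last two bullets (the filter values below are placeholders):
```
#include <stdbool.h>
#include <stdint.h>

// A false-initialized global lands in BSS: RAM only, no flash initializer.
// (It previously defaulted to true, which kept it in the initialized-data section.)
static bool ejected;

// A const table stays in flash (.rodata) instead of being copied into RAM at startup;
// the values here are placeholders, not the real filter coefficients.
static const int16_t sinc_filter[] = {0, 84, 160, 220, 255, 220, 160, 84, 0};
```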
This unifies the flash config to the settings used by the Boot ROM.
This makes the config unique per board which allows for changing
quad enable and status bit differences per flash device. It also
allows for timing differences due to the board layout.
This change also tweaks linker layout to leave more ram space for
the CircuitPython heap.
This requires recovering the pointer of the allocation, which could be done by adding up neighbor lengths, but the simpler way is to stop NULLing it out in the first place and instead mark an allocation as freed by the client by setting the lowest bit of the length (which is always zero in a valid length).
When allocations were freed in a different order from the reverse of how they were allocated (leaving holes), the heap would get into an inconsistent state, eventually resulting in crashes.
free_memory() relies on having allocations in order, but allocate_memory() did not guarantee that: It reused the first allocation with a NULL ptr without ensuring that it was between low_address and high_address. When it belongs to a hole in the allocated memory, such an allocation is not really free for reuse, because free_memory() still needs its length.
Instead, explicitly mark allocations available for reuse with a special (invalid) value in the length field. Only allocations that lie between low_address and high_address are marked that way.
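A hedged sketch of that bookkeeping (the struct layout and the "free for reuse" sentinel value are assumptions; only ptr and length are named above):
```
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    void *ptr;        // kept intact after free so neighbor walking still works
    uint32_t length;  // in bytes; valid lengths have a zero low bit, so it is spare
} supervisor_allocation;

// Hypothetical sentinel for "this slot may be reused"; only set once the hole sits
// between low_address and high_address, i.e. its length is no longer needed.
#define FREE_FOR_REUSE ((uint32_t)-1)

// "Freed by the holder" is flagged in the low bit instead of NULLing out ptr.
static inline void mark_freed_by_holder(supervisor_allocation *a) {
    a->length |= 1;
}

static inline bool holder_has_freed(const supervisor_allocation *a) {
    return (a->length & 1) != 0;
}
```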
I have a function where it should be impossible to reach the end, so I put in a safe-mode reset at the bottom:
```
int find_unused_slot(void) {
    // precondition: you already verified that a slot was available
    for (int i = 0; i < NUM_SLOTS; i++) {
        if (slot_free(i)) {
            return i;
        }
    }
    safe_mode_reset(MICROPY_FATAL_ERROR);
}
```
However, the compiler still gave a diagnostic, because safe_mode_reset was not declared NORETURN.
So I started by teaching the compiler that reset_into_safe_mode never returned. This leads at least one level deeper due to reset_cpu needing to be a NORETURN function. Each port is a little different in this area. I also marked reset_to_bootloader as NORETURN.
Additional notes:
* stm32's reset_to_bootloader was not implemented, but now does a bare reset. Most stm32s are not fitted with uf2 bootloaders anyway.
* ditto cxd56
* esp32s2 did not implement reset_cpu at all. I used esp_restart(). (not tested)
* litex did not implement reset_cpu at all. I used reboot_ctrl_write. But notably this is what reset_to_bootloader already did, so one or the other must be incorrect (not tested). reboot_ctrl_write cannot be declared NORETURN, as it returns unless the special value 0xac is written, so a new unreachable forever-loop is added.
* cxd56's reset is via a boardctl() call which can't generically be declared NORETURN, so a new unreachable "for(;;)" forever-loop is added.
* In several places, NVIC_SystemReset is redeclared with NORETURN applied. This is accepted just fine by gcc. I chose this as preferable to editing the multiple copies of CMSIS headers where it is normally declared.
* the stub safe_mode reset simply aborts. This is used in mpy-cross.
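A hedged sketch of the resulting annotations (NORETURN is MicroPython's wrapper around gcc's noreturn attribute; header paths and the safe-mode declaration are elided or assumed):
```
#include "py/mpconfig.h"   // provides NORETURN (gcc __attribute__((noreturn)))

// Annotating the whole chain silences "control reaches end of non-void function"
// at call sites like the one above. (reset_into_safe_mode takes the supervisor's
// safe_mode_t reason; its header is omitted here.)
void NORETURN reset_cpu(void);
void NORETURN reset_to_bootloader(void);

// Redeclaring the CMSIS reset with the attribute added is accepted by gcc, which
// avoids touching the vendored CMSIS headers:
void NORETURN NVIC_SystemReset(void);
```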
These changes remove the caveat from supervisor.runtime.serial_connected.
It appears that _tud_cdc_connected() only tracks explicit changes to the
"DTR" bit, which leads to disconnects not being registered.
Instead:
* when line state is changed explicitly, track the dtr value in
_serial_connected
* when the USB bus is suspended, set _serial_connected to False
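A hedged sketch of those two hooks (the TinyUSB callback names are the standard ones; the static flag stands in for _serial_connected):
```
#include <stdbool.h>
#include <stdint.h>
#include "tusb.h"

static bool _serial_connected;

// Explicit line-state change from the host: trust the DTR bit.
void tud_cdc_line_state_cb(uint8_t itf, bool dtr, bool rts) {
    (void)itf;
    (void)rts;
    _serial_connected = dtr;
}

// Bus suspended: the host is effectively gone, regardless of the last DTR value.
void tud_suspend_cb(bool remote_wakeup_en) {
    (void)remote_wakeup_en;
    _serial_connected = false;
}
```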
Testing performed (using sam e54 xplained): Run a program to show
the state of `serial_connected` on the LED:
```
import digitalio
import supervisor
import board
led = digitalio.DigitalInOut(board.LED)
while True:
    led.switch_to_output(not supervisor.runtime.serial_connected)
```
Try all the following:
* open, close serial terminal program
- LED status tracks whether terminal is open
* turn on/off data lines using the switchable charge-only cable
- LED turns off when switch is in "charger" position
- LED turns back on when switch is in Data position and terminal is
opened (but doesn't turn back on just because switch position is
changed)
Massive savings. Thanks so much @ciscorn for providing the initial
code for choosing the dictionary.
This adds a bit of time to the build, both to find the dictionary
but also because (for reasons I don't fully understand), the binary
search in the compress() function no longer worked and had to be
replaced with a linear search.
I think this is because the intended invariant is that for codebook
entries that encode to the same number of bits, the entries are ordered
in ascending value. However, I mis-placed the transition from "words"
to "byte/char values" so the codebook entries for words are in word-order
rather than their code order.
Because this price is only paid at build time, I didn't care to determine
exactly where the correct fix was.
I also commented out a line to produce the "estimated total memory size"
-- at least on the unix build with TRANSLATION=ja, this led to a build
time KeyError trying to compute the codebook size for all the strings.
I think this occurs because some single unicode code point ('ァ') is
no longer present as itself in the compressed strings, due to always
being replaced by a word.
As promised, this seems to save hundreds of bytes in the German translation
on the trinket m0.
Testing performed:
- built trinket_m0 in several languages
- built and ran unix port in several languages (en, de_DE, ja) and ran
simple error-producing code like ./micropython -c '1/0'
Two problems: The lead byte for 3-byte sequences was wrong, and one
mid-byte was not even filled in due to a missing "++"!
Apparently this was broken ever since the first "Compress as unicode,
not bytes" commit, but I believed I'd "tested" it by running on the
Pinyin translation.
This rendered at least the Korean and Japanese translations completely
illegible, affecting 5.0 and all later releases.
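For reference, what a correct 3-byte sequence looks like (this is just the general UTF-8 rule, not the repaired decoder verbatim):
```
#include <stdint.h>

// Encode a code point in U+0800..U+FFFF as three UTF-8 bytes.
static void utf8_encode_3(uint32_t ch, uint8_t out[3]) {
    out[0] = 0xE0 | (ch >> 12);         // lead byte: 1110xxxx
    out[1] = 0x80 | ((ch >> 6) & 0x3F); // first continuation ("mid") byte: 10xxxxxx
    out[2] = 0x80 | (ch & 0x3F);        // second continuation byte: 10xxxxxx
}
```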
Compress common unicode bigrams by making code points in the range
0x80 - 0xbf (inclusive) represent them. Then, they can be greedily
encoded and the substituted code points handled by the existing Huffman
compression. Normally code points in the range 0x80-0xbf are not used
in Unicode, so we stake our own claim. Using the more arguably correct
"Private Use Area" (PUA) would mean that for scripts that only use
code points under 256 we would use more memory for the "values" table.
bigram means "two letters", and is also sometimes called a "digram".
It's nothing to do with "big RAM". For our purposes, a bigram represents
two successive unicode code points, so for instance in our build on
trinket m0 for english the most frequent are:
['t ', 'e ', 'in', 'd ', ...].
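A hedged sketch of the greedy substitution (the real selection and substitution live in the Python build tooling; this C rendering with hypothetical helpers is only to make the mapping concrete):
```
#include <stddef.h>
#include <stdint.h>

static int find_bigram(uint32_t a, uint32_t b);   // hypothetical lookup, -1 if absent
static void emit(uint32_t codepoint);             // hypothetical: feed the Huffman coder

// Bigram i is replaced by the single code point 0x80 + i, which then goes through
// the existing Huffman coding like any other code point.
static void substitute_bigrams(const uint32_t *text, size_t len) {
    size_t i = 0;
    while (i < len) {
        int b = (i + 1 < len) ? find_bigram(text[i], text[i + 1]) : -1;
        if (b >= 0) {
            emit(0x80 + (uint32_t)b);   // two code points become one
            i += 2;
        } else {
            emit(text[i]);
            i += 1;
        }
    }
}
```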
The bigrams are selected based on frequency in the corpus, but the
selection is not necessarily optimal, for these reasons I can think of:
* Suppose the corpus was just "tea" repeated 100 times. The
top bigrams would be "te" and "ea". However, because they
overlap, one of them could never actually be used. Thus, some bigrams might actually
waste space
* I _assume_ this has to be why e.g., bigram 0x86 "s " is more
frequent than bigram 0x85 " a" in English for Trinket M0, because
sequences like "can't add" would get the "t " digram and then
be unable to use the " a" digram.
* And generally, if a bigram is frequent then so are its constituents.
Say that "i" and "n" both encode to just 5 or 6 bits, then the huffman
code for "in" had better compress to 10 or fewer bits or it's a net
loss!
* I checked though! "i" is 5 bits, "n" is 6 bits (lucky guess)
but the bigram 0x83 is also just 6 bits, so this one is a win of
5 bits for every "in" minus overhead. Yay, this round goes to team
compression.
* On the other hand, the least frequent bigram 0x9d " n" is 10 bits
long and its constituent code points are 4+6 bits so there's no
savings, but there is the cost of the table entry.
* and somehow 0x9f 'an' is never used at all!
With or without accounting for overlaps, there is some optimum number
of bigrams. Adding one more bigram uses at least 2 bytes (for the
entry in the bigram table; 4 bytes if code points >255 are in the
source text) and also needs a slot in the Huffman dictionary, so
adding bigrams beyond the optimum number makes compression worse again.
If it's an improvement, the fact that it's not guaranteed optimal
doesn't seem to matter too much. It just leaves a little more fruit
for the next sweep to pick up. Perhaps try adding the most frequent
bigram not yet present, until it doesn't improve compression overall.
Right now, de_DE is again the "fullest" build on trinket_m0. (It's
reclaimed that spot from the ja translation somehow) This change saves
104 bytes there, increasing free space about 6.8%. In the larger
(but not critically full) pyportal build it saves 324 bytes.
The specific number of bigrams used (32) was chosen as it is the max
number that fit within the 0x80..0xbf range. Larger tables would
require the use of 16 bit code points in the de_DE build, losing savings
overall.
(Side note: The most frequent letters in English have been said
to be: ETA OIN SHRDLU; but we have UAC EIL MOPRST in our corpus)
Otherwise, out-of-range writes would occur in tilegrid_set_tile, causing a safe mode reset.
```
Hardware watchpoint 6: -location *stack_alloc->ptr
Old value = 24652061
New value = 24641565
0x000444f2 in common_hal_displayio_tilegrid_set_tile (self=0x200002c8 <supervisor_terminal_text_grid>, x=1, y=1, tile_index=0 '\000')
at ../../shared-module/displayio/TileGrid.c:236
236 if (!self->partial_change) {
(gdb)
```
.. however, the number of endpoints is only set for SAMD (8).
Other ports need to set the value. Otherwise, the build will show
the message
```
Unable to check whether maximum number of endpoints is respected
```