This allows calls to `allocate_memory()` while the VM is running. Such an allocation is taken from the GC heap (unless there is a suitable hole among the supervisor allocations); when the VM exits and the GC heap is freed, the allocation is moved to the bottom of the former GC heap and transformed into a proper supervisor allocation. Existing movable allocations are also moved to defragment the supervisor heap and ensure that the next VM run gets as much memory as possible for the GC heap.
By itself this breaks terminalio, because it violates the assumption that supervisor_display_move_memory() still has access to an undisturbed heap to copy the tilegrid from. It will work in many cases, but if you're unlucky you will get garbled terminal contents after exiting the VM run that created the display. This is fixed in the following commit, which is kept separate to simplify review.
This makes a more useful display on the portrait magtag, allowing 21
characters across instead of just 18. There are 20 full rows of text,
instead of 21. The total number of characters increases slightly from 378
to 420.
For comparison, the Commodore VIC 20 had 23 rows of 22 characters for a
total of 506 characters. :-P
* No weak link for modules. It only impacts _os and _time and is
already disabled for non-full builds.
* Turn off PA00 and PA01 because they are the crystal on the Metro
M0 Express.
* Change ejected default to false to move it to BSS. It is set on
USB connection anyway.
* Set sinc_filter to const. Doesn't help flash but keeps it out of
RAM (see the sketch below).
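A tiny illustration of that last point (the array name and contents here are placeholders): a writable table is initialized data that gets copied from flash into RAM at startup, while a const table can be read in place from flash, so flash usage is roughly unchanged but the RAM copy goes away.
```
#include <stdint.h>

// Lands in .data: stored in flash AND copied into RAM by the startup code.
int16_t sinc_filter_writable[8] = {1, 3, 8, 14, 14, 8, 3, 1};

// Lands in .rodata: read in place from flash, no RAM copy needed.
const int16_t sinc_filter[8] = {1, 3, 8, 14, 14, 8, 3, 1};
```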
This unifies the flash config with the settings used by the Boot ROM.
The config is now unique per board, which allows the quad-enable and
status-bit differences of each flash device to be handled, as well as
timing differences due to the board layout.
This change also tweaks the linker layout to leave more RAM for the
CircuitPython heap.
This requires recovering the pointer of the allocation. That could be done by adding up neighbor lengths, but the simpler way is to stop NULLing the pointer out in the first place and instead mark an allocation as freed by the client by setting the lowest bit of its length (which is always zero in a valid length).
When allocations were freed in a different order from the reverse of how they were allocated (leaving holes), the heap would get into an inconsistent state, eventually resulting in crashes.
free_memory() relies on having allocations in order, but allocate_memory() did not guarantee that: it reused the first allocation with a NULL ptr without ensuring that it was between low_address and high_address. When such an allocation corresponds to a hole in the allocated memory, it is not really free for reuse, because free_memory() still needs its length.
Instead, explicitly mark allocations available for reuse with a special (invalid) value in the length field. Only allocations that lie between low_address and high_address are marked that way.
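A minimal sketch of the two tagging ideas described above (the struct layout and constant names are illustrative, not the exact supervisor code):
```
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

// Illustrative layout; the real supervisor_allocation may differ.
typedef struct {
    uint32_t *ptr;
    uint32_t length;   // in bytes, a multiple of 4 for live allocations
} supervisor_allocation;

// Hypothetical sentinel: a live allocation never has length 0.
#define ALLOC_REUSABLE 0u

// The client freed this allocation: keep ptr and length intact
// (free_memory() still needs them to account for the hole), and set
// the spare low bit of the length instead of NULLing the pointer.
static void mark_client_freed(supervisor_allocation *alloc) {
    alloc->length |= 1;
}

static bool is_client_freed(const supervisor_allocation *alloc) {
    return (alloc->length & 1) != 0;
}

// Only an entry whose memory no longer lies between low_address and
// high_address (i.e. not a hole between live allocations) may be
// handed out again; tag it with the invalid length value.
static void mark_reusable(supervisor_allocation *alloc) {
    alloc->ptr = NULL;
    alloc->length = ALLOC_REUSABLE;
}
```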
I have a function where it should be impossible to reach the end, so I put in a safe-mode reset at the bottom:
```
int find_unused_slot(void) {
    // precondition: you already verified that a slot was available
    for (int i = 0; i < NUM_SLOTS; i++) {
        if (slot_free(i)) {
            return i;
        }
    }
    safe_mode_reset(MICROPY_FATAL_ERROR);
}
```
However, the compiler still gave a diagnostic, because safe_mode_reset was not declared NORETURN.
So I started by teaching the compiler that reset_into_safe_mode never returned. This leads at least one level deeper due to reset_cpu needing to be a NORETURN function. Each port is a little different in this area. I also marked reset_to_bootloader as NORETURN.
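A minimal sketch of the kind of declarations involved (the NORETURN definition and the prototypes here are illustrative; the real ones live in the supervisor and port headers, and the safe_mode_t stand-in is hypothetical):
```
// Illustrative only; actual signatures in the tree may differ.
#define NORETURN __attribute__((noreturn))

typedef int safe_mode_t;  // stand-in for the real enum

NORETURN void reset_into_safe_mode(safe_mode_t reason);
NORETURN void reset_cpu(void);
NORETURN void reset_to_bootloader(void);

// Redeclaring a CMSIS function with the attribute added on top of its
// original declaration is accepted by gcc:
NORETURN void NVIC_SystemReset(void);
```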
Additional notes:
* stm32's reset_to_bootloader was not implemented, but now does a bare reset. Most stm32s are not fitted with uf2 bootloaders anyway.
* ditto cxd56
* esp32s2 did not implement reset_cpu at all. I used esp_restart(). (not tested)
* litex did not implement reset_cpu at all. I used reboot_ctrl_write. But notably this is what reset_to_bootloader already did, so one or the other must be incorrect (not tested). reboot_ctrl_write cannot be declared NORETURN, as it returns unless the special value 0xac is written, so a new unreachable forever-loop is added.
* cxd56's reset is via a boardctl() call which can't generically be declared NORETURN, so a new unreachable "for(;;)" forever-loop is added.
* In several places, NVIC_SystemReset is redeclared with NORETURN applied. This is accepted just fine by gcc. I chose this as preferable to editing the multiple copies of CMSIS headers where it is normally declared.
* the stub safe_mode reset simply aborts. This is used in mpy-cross.
These changes remove the caveat from supervisor.runtime.serial_connected.
It appears that _tud_cdc_connected() only tracks explicit changes to the
"DTR" bit, which leads to disconnects not being registered.
Instead (see the sketch after this list):
* when line state is changed explicitly, track the dtr value in
_serial_connected
* when the USB bus is suspended, set _serial_connected to False
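A minimal sketch of how those two hooks can look using TinyUSB's callbacks (the `_serial_connected` flag and its accessor are illustrative; the actual supervisor code may differ):
```
#include <stdbool.h>
#include <stdint.h>
#include "tusb.h"

static volatile bool _serial_connected;   // illustrative flag

// Called by TinyUSB when the host changes the CDC line state (DTR/RTS).
void tud_cdc_line_state_cb(uint8_t itf, bool dtr, bool rts) {
    (void)itf;
    (void)rts;
    _serial_connected = dtr;              // track DTR explicitly
}

// Called by TinyUSB when the USB bus is suspended.
void tud_suspend_cb(bool remote_wakeup_en) {
    (void)remote_wakeup_en;
    _serial_connected = false;            // suspend implies disconnect
}

bool supervisor_serial_connected(void) {  // hypothetical accessor
    return _serial_connected;
}
```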
Testing performed (using sam e54 xplained): Run a program to show
the state of `serial_connected` on the LED:
```
import digitalio
import supervisor
import board

led = digitalio.DigitalInOut(board.LED)
while True:
    led.switch_to_output(not supervisor.runtime.serial_connected)
```
Try all the following:
* open, close serial terminal program
- LED status tracks whether terminal is open
* turn on/off data lines using the switchable charge-only cable
- LED turns off when switch is in "charger" position
- LED turns back on when switch is in Data position and terminal is
opened (but doesn't turn back on just because switch position is
changed)
Massive savings. Thanks so much @ciscorn for providing the initial
code for choosing the dictionary.
This adds a bit of time to the build, both to find the dictionary
and also because (for reasons I don't fully understand) the binary
search in the compress() function no longer worked and had to be
replaced with a linear search.
I think this is because the intended invariant is that for codebook
entries that encode to the same number of bits, the entries are ordered
in ascending value. However, I mis-placed the transition from "words"
to "byte/char values" so the codebook entries for words are in word-order
rather than their code order.
Because this price is only paid at build time, I didn't care to determine
exactly where the correct fix was.
I also commented out the line that produces the "estimated total memory
size" -- at least on the unix build with TRANSLATION=ja, it led to a
build-time KeyError while computing the codebook size for all the strings.
I think this occurs because some single unicode code point ('ァ') is
no longer present as itself in the compressed strings, due to always
being replaced by a word.
As promised, this seems to save hundreds of bytes in the German translation
on the trinket m0.
Testing performed:
- built trinket_m0 in several languages
- built and ran unix port in several languages (en, de_DE, ja) and ran
simple error-producing code like ./micropython -c '1/0'
Two problems: The lead byte for 3-byte sequences was wrong, and one
mid-byte was not even filled in due to a missing "++"!
Apparently this was broken ever since the first "Compress as unicode,
not bytes" commit, even though I believed I'd "tested" it by running on
the Pinyin translation (which presumably never exercises the 3-byte path).
This rendered at least the Korean and Japanese translations completely
illegible, affecting 5.0 and all later releases.
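For reference, correct 3-byte output looks like this (a generic sketch of UTF-8 encoding, not the decompressor's exact code):
```
#include <stdint.h>

// Encode a code point in U+0800..U+FFFF as three UTF-8 bytes.
static void utf8_encode_3byte(uint32_t cp, uint8_t out[3]) {
    out[0] = 0xE0 | (cp >> 12);          // lead byte: 1110xxxx
    out[1] = 0x80 | ((cp >> 6) & 0x3F);  // first continuation byte
    out[2] = 0x80 | (cp & 0x3F);         // second continuation byte
}
```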
Compress common unicode bigrams by making code points in the range
0x80 - 0xbf (inclusive) represent them. Then, they can be greedily
encoded and the substituted code points handled by the existing Huffman
compression. Normally code points in the range 0x80-0xbf are not used
in Unicode, so we stake our own claim. Using the more arguably correct
"Private Use Area" (PUA) would mean that for scripts that only use
code points under 256 we would use more memory for the "values" table.
bigram means "two letters", and is also sometimes called a "digram".
It's nothing to do with "big RAM". For our purposes, a bigram represents
two successive unicode code points, so for instance in our build on
trinket m0 for english the most frequent are:
['t ', 'e ', 'in', 'd ', ...].
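The substitution itself happens at build time in the Python tooling, but as a sketch of the greedy pass (the table contents, names, and types here are illustrative):
```
#include <stdint.h>
#include <stddef.h>

// Hypothetical table of the 32 most frequent bigrams, as code-point pairs.
static const uint16_t bigrams[32][2] = {
    {'t', ' '}, {'e', ' '}, {'i', 'n'}, {'d', ' '}, /* ... */
};

// Greedily replace adjacent pairs with code points 0x80..0x9f, in place,
// and return the new length.  The result then goes to the Huffman coder.
static size_t substitute_bigrams(uint16_t *text, size_t len) {
    size_t out = 0;
    for (size_t i = 0; i < len; ) {
        int hit = -1;
        if (i + 1 < len) {
            for (int b = 0; b < 32; b++) {
                if (text[i] == bigrams[b][0] && text[i + 1] == bigrams[b][1]) {
                    hit = b;
                    break;
                }
            }
        }
        if (hit >= 0) {
            text[out++] = 0x80 + hit;   // substituted code point
            i += 2;
        } else {
            text[out++] = text[i++];    // copy through unchanged
        }
    }
    return out;
}
```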
The bigrams are selected based on frequency in the corpus, but the
selection is not necessarily optimal, for these reasons I can think of:
* Suppose the corpus was just "tea" repeated 100 times. The
top bigrams would be "te" and "ea". However, because the two
overlap, the greedy encoder always takes "te" first and "ea" could
never be used. Thus, some bigrams might actually waste space
* I _assume_ this has to be why e.g., bigram 0x86 "s " is more
frequent than bigram 0x85 " a" in English for Trinket M0, because
sequences like "can't add" would get the "t " digram and then
be unable to use the " a" digram.
* And generally, if a bigram is frequent then so are its constituents.
Say that "i" and "n" both encode to just 5 or 6 bits; then the Huffman
code for "in" had better compress to 10 or fewer bits or it's a net
loss!
* I checked though! "i" is 5 bits, "n" is 6 bits (lucky guess)
but the bigram 0x83 is also just 6 bits, so this one is a win of
5 bits for every "in" minus overhead. Yay, this round goes to team
compression.
* On the other hand, the least frequent bigram 0x9d " n" is 10 bits
long and its constituent code points are 4+6 bits so there's no
savings, but there is the cost of the table entry.
* and somehow 0x9f 'an' is never used at all!
With or without accounting for overlaps, there is some optimum number
of bigrams. Adding one more bigram uses at least 2 bytes (for the
entry in the bigram table; 4 bytes if code points >255 are in the
source text) and also needs a slot in the Huffman dictionary, so
adding bigrams beyond the optimum number makes compression worse again.
If it's an improvement, the fact that it's not guaranteed optimal
doesn't seem to matter too much. It just leaves a little more fruit
for the next sweep to pick up. Perhaps try adding the most frequent
bigram not yet present, until it doesn't improve compression overall.
Right now, de_DE is again the "fullest" build on trinket_m0. (It has
reclaimed that spot from the ja translation somehow.) This change saves
104 bytes there, increasing free space by about 6.8%. In the larger
(but not critically full) pyportal build it saves 324 bytes.
The specific number of bigrams used (32) was chosen as it is the max
number that fit within the 0x80..0xbf range. Larger tables would
require the use of 16 bit code points in the de_DE build, losing savings
overall.
(Side note: The most frequent letters in English have been said
to be: ETA OIN SHRDLU; but we have UAC EIL MOPRST in our corpus)
Otherwise, out-of-range writes would occur in tilegrid_set_tile, causing a safe mode reset.
```
Hardware watchpoint 6: -location *stack_alloc->ptr
Old value = 24652061
New value = 24641565
0x000444f2 in common_hal_displayio_tilegrid_set_tile (self=0x200002c8 <supervisor_terminal_text_grid>, x=1, y=1, tile_index=0 '\000')
at ../../shared-module/displayio/TileGrid.c:236
236 if (!self->partial_change) {
(gdb)
```
... however, the number of endpoints is only set for SAMD (8).
Other ports need to set the value; otherwise, the build will show
the message
```
Unable to check whether maximum number of endpoints is respected
```
The font is missing many characters and the build needs the space.
We can optimize font storage when we get a good font.
The serial output will work as usual.