Compress common unicode bigrams by making code points in the range
0x80 - 0xbf (inclusive) represent them. Then, they can be greedily
encoded and the substituted code points handled by the existing Huffman
compression. Normally code points in the range 0x80-0xbf do not appear
in our translations, so we stake our own claim. Using the arguably more
correct "Private Use Area" (PUA) would mean that, for scripts that only use
code points under 256, we would use more memory for the "values" table.
"Bigram" means "two letters", and is also sometimes called a "digram";
it has nothing to do with "big RAM". For our purposes, a bigram represents
two successive Unicode code points, so for instance in our build on
trinket_m0 for English the most frequent are:
['t ', 'e ', 'in', 'd ', ...].
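As a rough, hypothetical sketch (not the actual build-tool code), greedy substitution of such a bigram table might look like the following; the `bigrams` list and the 0x80 base code point are taken from the description above, everything else is illustrative:
```
# Hedged sketch of greedy bigram substitution; not the actual makeqstrdata code.
# Each matched pair is replaced by one code point starting at 0x80, and the
# result is handed to the existing Huffman compression as ordinary characters.
bigrams = ['t ', 'e ', 'in', 'd ']  # most frequent pairs from the corpus

def substitute_bigrams(s, bigrams, base=0x80):
    out = []
    i = 0
    while i < len(s):
        pair = s[i:i + 2]
        if pair in bigrams:
            out.append(chr(base + bigrams.index(pair)))  # two characters -> one code point
            i += 2
        else:
            out.append(s[i])
            i += 1
    return ''.join(out)

print(repr(substitute_bigrams("stored in flash", bigrams)))
```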
The bigrams are selected based on frequency in the corpus, but the
selection is not necessarily optimal, for these reasons I can think of:
* Suppose the corpus was just "tea" repeated 100 times. The
top bigrams would be "te" and "ea". However, because they overlap,
once "te" is greedily encoded, "ea" could never be used. Thus, some
bigrams might actually waste space.
* I _assume_ this has to be why, e.g., bigram 0x86 "s " is more
frequent than bigram 0x85 " a" in English for Trinket M0: sequences
like "can't add" get the "t " bigram and are then unable to use the
" a" bigram.
* And generally, if a bigram is frequent then so are its constituents.
Say that "i" and "n" both encode to just 5 or 6 bits; then the Huffman
code for "in" had better compress to 10 or fewer bits or it's a net
loss!
* I checked though! "i" is 5 bits, "n" is 6 bits (lucky guess),
but the bigram 0x83 is also just 6 bits, so this one is a win of
5 bits for every "in", minus overhead. Yay, this round goes to team
compression.
* On the other hand, the least frequent bigram 0x9d " n" is 10 bits
long and its constituent code points are 4+6 bits so there's no
savings, but there is the cost of the table entry.
* and somehow 0x9f 'an' is never used at all!
With or without accounting for overlaps, there is some optimum number
of bigrams. Adding one more bigram uses at least 2 bytes (for the
entry in the bigram table; 4 bytes if code points >255 are in the
source text) and also needs a slot in the Huffman dictionary, so
adding bigrams beyond the optimum number makes compression worse again.
If it's an improvement, the fact that it's not guaranteed optimal
doesn't seem to matter too much; it just leaves a little more fruit
for the next sweep to pick up. Perhaps try adding the most frequent
bigram not yet present, until it doesn't improve compression overall;
a sketch of that loop follows.
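A sketch of that incremental selection loop, under the assumption of two hypothetical helpers: `compressed_size()` (total compressed size of the corpus with a given bigram table) and `count_bigrams()` (bigrams of the corpus, most frequent first, given the existing table):
```
# Hedged sketch of "add the most frequent bigram until it stops helping".
# compressed_size() and count_bigrams() are hypothetical stand-ins for the
# real Huffman encoder and corpus statistics.
def choose_bigrams(corpus, compressed_size, count_bigrams, max_table=32):
    chosen = []
    best = compressed_size(corpus, chosen)
    while len(chosen) < max_table:
        candidates = [b for b in count_bigrams(corpus, chosen) if b not in chosen]
        if not candidates:
            break
        trial = chosen + [candidates[0]]   # most frequent bigram not yet present
        size = compressed_size(corpus, trial)
        if size >= best:
            break                          # one more bigram no longer improves things
        chosen, best = trial, size
    return chosen
```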
Right now, de_DE is again the "fullest" build on trinket_m0. (It has
reclaimed that spot from the ja translation somehow.) This change saves
104 bytes there, increasing free space by about 6.8%. In the larger
(but not critically full) pyportal build it saves 324 bytes.
The specific number of bigrams used (32) was chosen as it is the max
number that fit within the 0x80..0xbf range. Larger tables would
require the use of 16-bit code points in the de_DE build, losing savings
overall.
(Side note: The most frequent letters in English have been said
to be ETA OIN SHRDLU; but we have UAC EIL MOPRST in our corpus.)
A crash like the following occurs in the unix port:
```
Program received signal SIGSEGV, Segmentation fault.
0x00005555555a2d7a in mp_obj_module_set_globals (self_in=0x55555562c860 <ulab_user_cmodule>, globals=0x55555562c840 <mp_module_ulab_globals>) at ../../py/objmodule.c:145
145 self->globals = globals;
(gdb) up
#1 0x00005555555b2781 in mp_builtin___import__ (n_args=5, args=0x7fffffffdbb0) at ../../py/builtinimport.c:496
496 mp_obj_module_set_globals(outer_module_obj,
(gdb)
#2 0x00005555555940c9 in mp_import_name (name=824, fromlist=0x555555621f10 <mp_const_none_obj>, level=0x1) at ../../py/runtime.c:1392
1392 return mp_builtin___import__(5, args);
```
I don't understand how it doesn't happen on the embedded ports, because
the module object should reside in ROM and the assignment of self->globals
should trigger a Hard Fault.
By checking VERIFY_PTR first, we know that the pointed-to data is on
the heap, so we can do things like mutate it.
In #2689, hitting ctrl-C during the printing of an object with a lot of sub-objects could cause the screen to stop updating (without showing a KeyboardInterrupt). This makes the printing of such objects actually interruptible, and also correctly handles the KeyboardInterrupt:
```
>>> l = ["a" * 100] * 200
>>> l
['aaaaaaaaaaaaaaaaaaaaaa...aaaaaaaaaaa', Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyboardInterrupt:
>>>
```
The font is missing many characters and the build needs the space.
We can optimize font storage when we get a good font.
The serial output will work as usual.
This check as implemented is misleading, because it compares the
compressed size in bytes (including the length indication) with the source
string length in Unicode code points. For English this is approximately
fair, but for Japanese this is quite unfair and produces an excess of
"increased length" messages.
This message might have existed for one of two reasons:
* to alert to an improperly functioning Huffman compression
* to call attention to a need for a "string is stored uncompressed" case
We know by now that the Huffman compression is functioning as designed
and is effective in general.
Just to be on the safe side, I did some back-of-the-envelope estimates.
I considered these three replacements for "the true source string size, in bytes":
+ decompressed_len_utf8 = len(decompressed.encode('utf-8'))
+ decompressed_len_utf16 = len(decompressed.encode('utf-16be'))
+ decompressed_len_bitsize = ((1+len(decompressed)) * math.ceil(math.log(1+len(values), 2)) + 7) // 8
The third counts how many bits each character requires (fewer than 128
characters in the source character set = 7, fewer than 256 = 8, fewer than 512
= 9, etc, adding a string-terminating value) and is in some way representative
of the best way we would be able to store "uncompressed strings". The Japanese
translation (largest as of writing) has just a few strings which increase by
this metric. However, the amount of loss due to expansion in those cases is
outweighed by the cost of adding 1 bit per string to indicate whether it's
compressed or not. For instance, in the BOARD=trinket_m0 TRANSLATION=ja build
the loss is 47 bytes over 300 strings. Adding 1 bit to each of 300 strings would
cost about 37 bytes, leaving a budget of roughly 10 bytes, about 5 Thumb
instructions, for the code to check and decode "uncompressed" strings if we are
to break even.
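Putting numbers on that (a sketch only; `decompressed` and `values` are the same names used in the candidate expressions above, and the 47-byte/300-string figures are the ones quoted for the ja build):
```
import math

# Hedged sketch of the back-of-the-envelope estimate above.
def bitsize_len(decompressed, values):
    # bits needed per character for the source character set, plus a terminator
    bits_per_char = math.ceil(math.log(1 + len(values), 2))
    return ((1 + len(decompressed)) * bits_per_char + 7) // 8

# Break-even estimate for BOARD=trinket_m0 TRANSLATION=ja:
loss_bytes = 47                   # total expansion by the bitsize metric
flag_bytes = 300 // 8             # ~37 bytes: one "is this compressed?" bit per string
budget = loss_bytes - flag_bytes  # ~10 bytes, i.e. about 5 two-byte Thumb instructions
print(budget)
```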
This is a slight trade-off with code size, in places where a "_varg"
mp_raise variant is now used. The net savings on trinket_m0 is
just 32 bytes.
It also means that these strings appear in the translation as the
original English text and cannot be translated. They are usually names
of Python types such as int, set, or dict, or special values such as
"inf" or "NaN".
This version
* moves source files to reflect module structure
* adds inline documentation suitable for extract_pyi
* incompatibly moves spectrogram to fft
* incompatibly removes "extras"
There are some remaining markup errors in the specific revision of
extmod/ulab but they do not prevent the doc building process from
completing.
MicroPython's original implementation of __aiter__ was correct for an
earlier (provisional) version of PEP492 (CPython 3.5), where __aiter__ was
an async-def function. But that changed in the final version of PEP492 (in
CPython 3.5.2) where the function was changed to a normal one. See
https://www.python.org/dev/peps/pep-0492/#why-aiter-does-not-return-an-awaitable
See also the note at the end of this subsection in the docs:
https://docs.python.org/3.5/reference/datamodel.html#asynchronous-iterators
And for completeness the BPO: https://bugs.python.org/issue27243
To be consistent with the Python spec as it stands today (and now that
PEP492 is final) this commit changes MicroPython's behaviour to match
CPython: __aiter__ should return an async-iterable object, but is not
itself awaitable.
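A minimal illustration of the shape now expected (hypothetical class, runnable under CPython 3.7+ with asyncio):
```
import asyncio

# Illustrative only: __aiter__ is a plain method returning the async iterator;
# only __anext__ is a coroutine, per the final version of PEP 492.
class Counter:
    def __init__(self, n):
        self.i, self.n = 0, n

    def __aiter__(self):           # not "async def": returns the iterator directly
        return self

    async def __anext__(self):
        if self.i >= self.n:
            raise StopAsyncIteration
        self.i += 1
        return self.i

async def main():
    return [x async for x in Counter(3)]

print(asyncio.run(main()))         # [1, 2, 3]
```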
The relevant tests are updated to match.
See #6267.
Coroutines don't have `__next__`; they also identify themselves as coroutines.
This does not change the fact that `async def` methods are generators,
but it does make them behave more like CPython.
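For comparison, the CPython behaviour being matched:
```
>>> async def f():
...     return 1
...
>>> c = f()
>>> type(c).__name__
'coroutine'
>>> hasattr(c, '__next__')
False
>>> c.close()
```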
Otherwise functions like memset might get optimised to call themselves (e.g.
with gcc 10). Also provide CFLAGS_BUILTIN so these options can be changed
by a port if needed.
Fixes issue #6053.
Testing performed: a card is successfully mounted on PyGamer with
the built-in SD card slot.
This module is enabled for most FULL_BUILD boards, but is disabled for
samd21 ("M0"), litex, and pca10100 for various reasons.
I noticed that this code was referring to samd-specific functionality,
and isn't enabled except in one samd board (pewpew10). Move it.
There is incomplete support for _pew in mimxrt10xx, which then caused build
errors; adding a #if guard to check whether _pew is enabled fixes it.
The _pew module is not likely to be important on mimxrt but I'll leave the
choice to remove it to someone else.
* Fix flash writes that don't end on a sector boundary. Fixes #2944
* Fix enum incompatibility with IDF.
* Fix printf output so it goes out the debug UART.
* Increase stack size to 8k.
* Fix sleep of less than a tick so it doesn't crash.
Length was always stored as a 16-bit number, but most translations have
a much smaller maximum length. For example, US English translation lengths
always fit in just 8 bits; probably all languages fit in 9 bits.
This also has the side effect of reducing the alignment of
compressed_string_t from 2 bytes to 1.
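A hypothetical way to see how many bits a given translation's lengths actually need (a sketch, not the build-tool code; `translations` stands for the list of decompressed message strings for one language):
```
import math

# Hedged sketch: bits needed to store the length of the longest string.
def length_bits(translations):
    longest = max(len(s) for s in translations)
    return max(1, math.ceil(math.log2(longest + 1)))

# en_US: longest message fits in 8 bits; probably every language fits in 9.
```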
Testing performed: ran in German and English on PyRuler; printed messages
looked right.
Firmware size, en_US
Before: 3044 bytes free in flash
After: 3408 bytes free in flash
Firmware size, de_DE (with #2967 merged to restore translations)
Before: 1236 bytes free in flash
After: 1600 bytes free in flash
This adds an exception to be raised when the WatchDogTimer times out.
Note that this currently causes a HardFault, and it's not clear why it's
not behaving properly.
Signed-off-by: Sean Cross <sean@xobs.io>