Most users and the CI system are running in configurations where Python
configures stdout and stderr in UTF-8 mode. However, Windows is different,
defaulting to encodings like CP1252. This led to a build failure on Windows, because
makeqstrdata printed Unicode strings to its stdout, expecting them to be
encoded as UTF-8.
This script is writing (stdout) to a compiler input file and potentially
printing messages (stderr) to a log or console. Explicitly configure stdout to
use UTF-8 to get consistent behavior on all platforms, and configure stderr so
that if any log/diagnostic messages are printed that cannot be displayed
correctly, they are still displayed instead of creating an error while trying
to print the diagnostic information.
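A minimal sketch of the reconfiguration described above (the exact stderr
error handler is an assumption; something like 'backslashreplace' keeps
undisplayable diagnostics visible instead of raising):

    import sys

    # Compiler input on stdout must be byte-identical on every platform.
    sys.stdout.reconfigure(encoding='utf-8')
    # Diagnostics on stderr should degrade gracefully rather than raise
    # UnicodeEncodeError when the console encoding can't represent them.
    sys.stderr.reconfigure(errors='backslashreplace')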
I considered setting both encodings to ascii, but that would just be
occasionally inconvenient to developers like me who want to show diagnostic
info on stderr and in comments while working on the compression code.
Closes: #3408
Massive savings. Thanks so much to @ciscorn for providing the initial
code for choosing the dictionary.
This adds a bit of time to the build, both to find the dictionary and
because (for reasons I don't fully understand) the binary search in the
compress() function no longer worked and had to be replaced with a linear
search.
I think this is because the intended invariant is that for codebook
entries that encode to the same number of bits, the entries are ordered
in ascending value. However, I misplaced the transition from "words"
to "byte/char values" so the codebook entries for words are in word-order
rather than their code order.
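The workaround, sketched with illustrative names (not the actual compress()
internals): a straight scan is correct regardless of entry order, just slower.

    def find_code(codebook, symbol):
        # codebook: list of (symbol, code, bit_length) tuples in whatever
        # order the dictionary pass produced; scan instead of bisecting.
        for sym, code, length in codebook:
            if sym == symbol:
                return code, length
        raise KeyError(symbol)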
Because this price is only paid at build time, I didn't care to determine
exactly where the correct fix was.
I also commented out a line to produce the "estimated total memory size"
-- at least on the unix build with TRANSLATION=ja, this led to a build-time
KeyError trying to compute the codebook size for all the strings.
I think this occurs because some single unicode code point ('ァ') is
no longer present as itself in the compressed strings, due to always
being replaced by a word.
As promised, this seems to save hundreds of bytes in the German translation
on the trinket m0.
Testing performed:
- built trinket_m0 in several languages
- built and ran unix port in several languages (en, de_DE, ja) and ran
simple error-producing code like ./micropython -c '1/0'
Compress common unicode bigrams by making code points in the range
0x80 - 0xbf (inclusive) represent them. Then, they can be greedily
encoded and the substituted code points handled by the existing Huffman
compression. Normally code points in the range 0x80-0xbf are not used
in Unicode, so we stake our own claim. Using the arguably more correct
"Private Use Area" (PUA) would mean that for scripts that only use
code points under 256 we would use more memory for the "values" table.
"Bigram" means "two letters", and is also sometimes called a "digram".
It's nothing to do with "big RAM". For our purposes, a bigram represents
two successive Unicode code points, so for instance in our build on
trinket m0 for English the most frequent are:
['t ', 'e ', 'in', 'd ', ...].
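A sketch of the greedy substitution this describes (illustrative names, not
the actual makeqstrdata.py code):

    BIGRAMS = ['t ', 'e ', 'in', 'd ']  # chosen by corpus frequency, up to 32

    def substitute_bigrams(s):
        # Replace each known bigram with a single stand-in code point in
        # 0x80-0xbf; the Huffman pass then treats it as one symbol.
        out = []
        i = 0
        while i < len(s):
            pair = s[i:i + 2]
            if pair in BIGRAMS:
                out.append(chr(0x80 + BIGRAMS.index(pair)))
                i += 2
            else:
                out.append(s[i])
                i += 1
        return ''.join(out)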
The bigrams are selected based on frequency in the corpus, but the
selection is not necessarily optimal, for these reasons I can think of:
* Suppose the corpus was just "tea" repeated 100 times. The
top bigrams would be "te" and "ea". However, because the two
overlap, one of them could never be used. Thus, some bigrams
might actually waste space
* I _assume_ this has to be why, e.g., bigram 0x86 "s " is more
frequent than bigram 0x85 " a" in English for Trinket M0, because
sequences like "can't add" would get the "t " digram and then
be unable to use the " a" digram.
* And generally, if a bigram is frequent then so are its constituents.
Say that "i" and "n" both encode to just 5 or 6 bits, then the huffman
code for "in" had better compress to 10 or fewer bits or it's a net
loss!
* I checked though! "i" is 5 bits, "n" is 6 bits (lucky guess)
but the bigram 0x83 is also just 6 bits, so this one is a win of
5 bits for every "in", minus overhead. Yay, this round goes to team
compression.
* On the other hand, the least frequent bigram 0x9d " n" is 10 bits
long and its constituent code points are 4+6 bits so there's no
savings, but there is the cost of the table entry.
* And somehow 0x9f 'an' is never used at all!
With or without accounting for overlaps, there is some optimum number
of bigrams. Adding one more bigram uses at least 2 bytes (for the
entry in the bigram table; 4 bytes if code points >255 are in the
source text) and also needs a slot in the Huffman dictionary, so
adding bigrams beyond the optimum number makes compression worse again.
If it's an improvement, the fact that it's not guaranteed optimal
doesn't seem to matter too much. It just leaves a little more fruit
for the next sweep to pick up. Perhaps try adding the most frequent
bigram not yet present, until it doesn't improve compression overall.
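A sketch of that refinement, where compressed_size is a hypothetical helper
that runs the whole pipeline and measures the result:

    def pick_bigrams(candidates, compressed_size):
        # candidates: bigrams sorted by descending corpus frequency.
        chosen = []
        best = compressed_size(chosen)
        for bigram in candidates:
            trial = compressed_size(chosen + [bigram])
            if trial >= best:
                break  # one more bigram no longer pays for its table entry
            chosen.append(bigram)
            best = trial
        return chosen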
Right now, de_DE is again the "fullest" build on trinket_m0. (It's
reclaimed that spot from the ja translation somehow.) This change saves
104 bytes there, increasing free space by about 6.8%. In the larger
(but not critically full) pyportal build it saves 324 bytes.
The specific number of bigrams used (32) was chosen as it is the maximum
number that fits within the 0x80..0xbf range. Larger tables would
require the use of 16-bit code points in the de_DE build, losing savings
overall.
(Side note: The most frequent letters in English have been said
to be: ETA OIN SHRDLU; but we have UAC EIL MOPRST in our corpus)
This check as implemented is misleading, because it compares the
compressed size in bytes (including the length indication) with the source
string length in Unicode code points. For English this is approximately
fair, but for Japanese this is quite unfair and produces an excess of
"increased length" messages.
This message might have existed for one of two reasons:
* to alert to an improperly functioning Huffman compression
* to call attention to a need for a "string is stored uncompressed" case
We know by now that the Huffman compression is functioning as designed and
is effective in general.
Just to be on the safe side, I did some back-of-the-envelope estimates.
I considered these three replacements for "the true source string size, in bytes":
    decompressed_len_utf8 = len(decompressed.encode('utf-8'))
    decompressed_len_utf16 = len(decompressed.encode('utf-16be'))
    decompressed_len_bitsize = ((1 + len(decompressed)) * math.ceil(math.log(1 + len(values), 2)) + 7) // 8
The third counts how many bits each character requires (fewer than 128
characters in the source character set = 7, fewer than 256 = 8, fewer than 512
= 9, etc., adding a string-terminating value) and is in some way representative
of the best way we would be able to store "uncompressed strings". The Japanese
translation (largest as of writing) has just a few strings which increase by
this metric. However, the amount of loss due to expansion in those cases is
outweighed by the cost of adding 1 bit per string to indicate whether it's
compressed or not. For instance, in the BOARD=trinket_m0 TRANSLATION=ja build
the loss is 47 bytes over 300 strings. Adding 1 bit to each of 300 strings will
cost about 37 bytes, leaving just 5 Thumb instructions to implement the code to
check and decode "uncompressed" strings in order to break even.
The length was always stored as a 16-bit number. Most translations have
a much smaller maximum length. For example, US English translation lengths
always fit in just 8 bits; probably all languages fit in 9 bits.
This also has the side effect of reducing the alignment of
compressed_string_t from 2 bytes to 1.
Testing performed: ran in German and English on pyruler; printed messages
looked right.
Firmware size, en_US
Before: 3044 bytes free in flash
After: 3408 bytes free in flash
Firmware size, de_DE (with #2967 merged to restore translations)
Before: 1236 bytes free in flash
After: 1600 bytes free in flash
If a translation only has Unicode code points 255 and below, the "values"
array can be 8 bits instead of 16 bits. This reclaims some code size,
e.g., in a local build, trinket_m0 / en_US reclaimed 112 bytes and de_DE
reclaimed 104 bytes. However, languages like zh_Latn_pinyin, which use
code points above 255, did not benefit.
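The width choice itself is a one-liner; a sketch (the sample strings, array
name, and emission format are assumptions, not the script's actual output):

    all_strings = ['Press any key to enter the REPL.', 'Drücke eine Taste']
    values = sorted(set(''.join(all_strings)))
    value_type = 'uint16_t' if any(ord(c) > 255 for c in values) else 'uint8_t'
    print('const %s values[] = {%s};'
          % (value_type, ', '.join(str(ord(c)) for c in values)))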
By treating each Unicode code point as a single entity for Huffman
compression, the overall compression rate can be somewhat improved
without changing the algorithm. On the decompression side, when
compressed values above 127 are encountered, they need to be
converted from a 16-bit Unicode code point into a UTF-8 byte
sequence.
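The conversion is small; in Python for clarity (the decompressor itself does
this in C):

    def utf8_bytes(cp):
        # Encode one code point (up to 0xFFFF) as 1-3 UTF-8 bytes.
        if cp < 0x80:
            return bytes([cp])
        if cp < 0x800:
            return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
        return bytes([0xE0 | (cp >> 12),
                      0x80 | ((cp >> 6) & 0x3F),
                      0x80 | (cp & 0x3F)])

    assert utf8_bytes(ord('ǎ')) == 'ǎ'.encode('utf-8')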
Doing this reclaims approximately 1.5kB of flash storage with the
zh_Latn_pinyin translation. (292 -> 1768 bytes remaining in my build
of trinket_m0)
Other "more ASCII" translations benefit less, and in fact
zh_Latn_pinyin is no longer the most constrained translation!
(de_DE 1156 -> 1384 bytes free in flash, I didn't check others
before pushing for CI)
English is slightly pessimized, 2840 -> 2788 bytes, probably mostly
because the "values" array was changed from uint8_t to uint16_t,
which is not strictly required for an all-ASCII translation. This
could probably be avoided in this case, but as English is not the
most constrained translation it doesn't really matter.
Testing performed: built for feather nRF52840 express and trinket m0
in English and zh_Latn_pinyin; ran and verified the localized
messages such as
Àn xià rènhé jiàn jìnrù REPL. Shǐyòng CTRL-D chóngxīn jiāzài.
and
Press any key to enter the REPL. Use CTRL-D to reload.
were properly displayed.
It is possible for this routine to expand some inputs, and in fact
it does for certain strings in the proposed Korean translation of
CircuitPython (#1858). I did not determine what the maximum
expansion is -- it's probably modest, like len()/7+2 bytes or
something -- so I tried to just make enc[] an adequate
over-allocation, and then ensured that all the strings in the
proposed ko.po now worked. The worst actual expansion seems to be a
string that goes from 65 UTF-8-encoded bytes to 68 compressed bytes
(+4.6%). Only a few out of all strings are reported as
non-compressed.
This saves code space in builds which use link-time optimization.
The optimization drops the untranslated strings and replaces them
with a compressed_string_t struct. It can then be decompressed to
a C string.
Builds without LTO work as well but include both untranslated
strings and compressed strings.
This work could be expanded to include QSTRs and loaded strings if
a compress method is added to C. It's tracked in #531.
Update makeqstrdata.py to sort strings starting with "__" to the beginning
of the qstr list, so they get low qstr IDs, guaranteed to fit in 8 bits.
Then use this property to further compact the op_id => qstr mapping arrays.
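A sketch of the kind of sort key involved (illustrative, not the exact
script change):

    qstrs = ['abs', '__add__', 'print', '__init__']
    # Dunder names first, so they receive the lowest qstr IDs.
    qstrs.sort(key=lambda s: (not s.startswith('__'), s))
    # -> ['__add__', '__init__', 'abs', 'print']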
Fetch the current USB mode and return a string representation when
pyb.usb_mode() is called with no args. The possible string values are interned
as qstr's. None will be returned if an incorrect mode is set.
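A usage sketch (the mode string is an example; availability depends on the
board):

    import pyb

    pyb.usb_mode('VCP+MSC')   # setting the mode works as before
    print(pyb.usb_mode())     # -> 'VCP+MSC' (an interned qstr), or None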
This patch makes configurable, via MICROPY_QSTR_BYTES_IN_HASH, the
number of bytes used for a qstr hash. It was originally fixed at 2
bytes, and now defaults to 2 bytes. Setting it to 1 byte will save
ROM and RAM at a small expense of hash collisions.
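For reference, a sketch modeled on the script's djb2-style hash, masked to
the configured width:

    def compute_hash(qstr_bytes, bytes_in_hash):
        h = 5381
        for b in qstr_bytes:
            h = (h * 33) ^ b
        # Fewer bytes kept -> smaller tables but more hash collisions.
        return h & ((1 << (8 * bytes_in_hash)) - 1)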
This new config option sets the fixed number of bytes used to
store the length of each qstr. Previously this was hard-coded to 2,
but, as per issue #1056, this is considered overkill since no-one
needs identifiers longer than 255 bytes.
With this patch the number of bytes for the length is configurable, and
defaults to 1 byte. The configuration option filters through to the
makeqstrdata.py script.
Code size savings going from 2 to 1 byte:
- unix x64 down by 592 bytes
- stmhal down by 1148 bytes
- bare-arm down by 284 bytes
Also has RAM savings, and will be slightly more efficient in execution.
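A sketch of how the configured width plays out when emitting the length
field (the encoding format is illustrative):

    def encode_length(n, bytes_in_len=1):
        # With MICROPY_QSTR_BYTES_IN_LEN=1, identifiers over 255 bytes
        # must be rejected at build time.
        assert n < (1 << (8 * bytes_in_len)), 'qstr too long'
        return bytes((n >> (8 * i)) & 0xFF for i in range(bytes_in_len))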
The original parsing would error out on any C declarations that are not typedefs
or extern variables. This limits what can go in mpconfig.h and mpconfigport.h,
as they are included in qstr.h. For instance, even a function declaration would be
rejected, and including system headers is a complete no-go.
That seems too limiting for a global config header, so makeqstrdata now
ignores everything that does not match a qstr definition.