"""
This script processes the output from the C preprocessor and extracts all
qstrs. Each qstr is transformed into a qstr definition of the form 'Q(...)'.

This script works with Python 2.6, 2.7, 3.3 and 3.4.
"""

from __future__ import print_function

import io
import os
import re
import subprocess
import sys
import multiprocessing, multiprocessing.dummy


# Extract MP_QSTR_FOO macros.
_MODE_QSTR = "qstr"

# Extract MP_COMPRESSED_ROM_TEXT("") macros. (Which come from MP_ERROR_TEXT)
_MODE_COMPRESS = "compress"

# Extract MP_REGISTER_MODULE(...) macros.
_MODE_MODULE = "module"

# Extract MP_REGISTER_ROOT_POINTER(...) macros.
_MODE_ROOT_POINTER = "root_pointer"
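
# Illustrative examples of what each mode extracts (the macro names and
# strings below are hypothetical, shown only to make the output concrete):
#   qstr:          MP_QSTR_print                                  -> Q(print)
#   compress:      MP_COMPRESSED_ROM_TEXT("bad input")            -> matched text emitted verbatim
#   module:        MP_REGISTER_MODULE(MP_QSTR_sys, mp_module_sys); -> matched text emitted verbatim
#   root_pointer:  MP_REGISTER_ROOT_POINTER(mp_obj_t cur_exc);     -> matched text emitted verbatim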


def is_c_source(fname):
    return os.path.splitext(fname)[1] in [".c"]


def is_cxx_source(fname):
    return os.path.splitext(fname)[1] in [".cc", ".cp", ".cxx", ".cpp", ".CPP", ".c++", ".C"]


def preprocess():
    if any(src in args.dependencies for src in args.changed_sources):
        sources = args.sources
    elif any(args.changed_sources):
        sources = args.changed_sources
    else:
        sources = args.sources
    csources = []
    cxxsources = []
    for source in sources:
        if is_cxx_source(source):
            cxxsources.append(source)
        elif is_c_source(source):
            csources.append(source)
    try:
        os.makedirs(os.path.dirname(args.output[0]))
    except OSError:
        pass

    def pp(flags):
        def run(files):
            return subprocess.check_output(args.pp + flags + files)

        return run

    try:
        cpus = multiprocessing.cpu_count()
    except NotImplementedError:
        cpus = 1
    p = multiprocessing.dummy.Pool(cpus)
    with open(args.output[0], "wb") as out_file:
        for flags, sources in (
            (args.cflags, csources),
            (args.cxxflags, cxxsources),
        ):
            batch_size = (len(sources) + cpus - 1) // cpus
            chunks = [sources[i : i + batch_size] for i in range(0, len(sources), batch_size or 1)]
            for output in p.imap(pp(flags), chunks):
                out_file.write(output)
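
# Note on preprocess() above: multiprocessing.dummy.Pool is a thread pool, so
# the preprocessor command (args.pp) runs concurrently, roughly one batch of
# source files per CPU, and the combined preprocessor output is concatenated
# into the single file named by args.output[0].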


def write_out(fname, output):
    if output:
        for m, r in [("/", "__"), ("\\", "__"), (":", "@"), ("..", "@@")]:
            fname = fname.replace(m, r)
        with open(args.output_dir + "/" + fname + "." + args.mode, "w") as f:
            f.write("\n".join(output) + "\n")
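
# Illustrative example of the filename mangling in write_out() (the source
# path is hypothetical): a file named "../../py/obj.c" becomes
# "@@__@@__py__obj.c", so in qstr mode its matches land in
# "<output_dir>/@@__@@__py__obj.c.qstr".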


def process_file(f):
    re_line = re.compile(r"#[line]*\s\d+\s\"([^\"]+)\"")
    if args.mode == _MODE_QSTR:
        re_match = re.compile(r"MP_QSTR_[_a-zA-Z0-9]+")
    elif args.mode == _MODE_COMPRESS:
        re_match = re.compile(r'MP_COMPRESSED_ROM_TEXT\("([^"]*)"\)')
    elif args.mode == _MODE_MODULE:
        re_match = re.compile(r"MP_REGISTER_MODULE\(.*?,\s*.*?\);")
    elif args.mode == _MODE_ROOT_POINTER:
        re_match = re.compile(r"MP_REGISTER_ROOT_POINTER\(.*?\);")
    output = []
    last_fname = None
    for line in f:
        if line.isspace():
            continue
        # match gcc-like output (# n "file") and msvc-like output (#line n "file")
        if line.startswith(("# ", "#line")):
            m = re_line.match(line)
            assert m is not None
            fname = m.group(1)
            if not is_c_source(fname) and not is_cxx_source(fname):
                continue
            if fname != last_fname:
                write_out(last_fname, output)
                output = []
                last_fname = fname
            continue
        for match in re_match.findall(line):
            if args.mode == _MODE_QSTR:
                name = match.replace("MP_QSTR_", "")
                output.append("Q(" + name + ")")
            elif args.mode in (_MODE_COMPRESS, _MODE_MODULE, _MODE_ROOT_POINTER):
                output.append(match)

    if last_fname:
        write_out(last_fname, output)
    return ""


def cat_together():
    import glob
    import hashlib

    hasher = hashlib.md5()
    all_lines = []
    outf = open(args.output_dir + "/out", "wb")
    for fname in glob.glob(args.output_dir + "/*." + args.mode):
        with open(fname, "rb") as f:
            lines = f.readlines()
            all_lines += lines
    all_lines.sort()
    all_lines = b"\n".join(all_lines)
    outf.write(all_lines)
    outf.close()
    hasher.update(all_lines)
    new_hash = hasher.hexdigest()
    # print(new_hash)
    old_hash = None
    try:
        with open(args.output_file + ".hash") as f:
            old_hash = f.read()
    except IOError:
        pass
    mode_full = "QSTR"
    if args.mode == _MODE_COMPRESS:
        mode_full = "Compressed data"
    elif args.mode == _MODE_MODULE:
        mode_full = "Module registrations"
    elif args.mode == _MODE_ROOT_POINTER:
        mode_full = "Root pointer registrations"
    if old_hash != new_hash:
        print(mode_full, "updated")
        try:
            # rename below might fail if file exists
            os.remove(args.output_file)
        except:
            pass
        os.rename(args.output_dir + "/out", args.output_file)
        with open(args.output_file + ".hash", "w") as f:
            f.write(new_hash)
    else:
        print(mode_full, "not updated")
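
# Note on cat_together() above: the concatenated, sorted output is written to
# "<output_dir>/out" and only renamed over args.output_file when its MD5
# differs from the recorded "<output_file>.hash". An unchanged result thus
# leaves the final file untouched, presumably so downstream build steps that
# depend on it are not re-run needlessly.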


if __name__ == "__main__":
    if len(sys.argv) < 6:
        print("usage: %s command mode input_filename output_dir output_file" % sys.argv[0])
        sys.exit(2)

    class Args:
        pass

    args = Args()
    args.command = sys.argv[1]

    if args.command == "pp":
        named_args = {
            s: []
            for s in [
                "pp",
                "output",
                "cflags",
                "cxxflags",
                "sources",
                "changed_sources",
                "dependencies",
            ]
        }

        for arg in sys.argv[1:]:
            if arg in named_args:
                current_tok = arg
            else:
                named_args[current_tok].append(arg)

        if not named_args["pp"] or len(named_args["output"]) != 1:
            print("usage: %s %s ..." % (sys.argv[0], " ... ".join(named_args)))
            sys.exit(2)

        for k, v in named_args.items():
            setattr(args, k, v)

        preprocess()
        sys.exit(0)

    args.mode = sys.argv[2]
    args.input_filename = sys.argv[3]  # Unused for command=cat
    args.output_dir = sys.argv[4]
    args.output_file = None if len(sys.argv) == 5 else sys.argv[5]  # Unused for command=split

    if args.mode not in (_MODE_QSTR, _MODE_COMPRESS, _MODE_MODULE, _MODE_ROOT_POINTER):
        print("error: mode %s unrecognised" % sys.argv[2])
        sys.exit(2)

    try:
        os.makedirs(args.output_dir)
    except OSError:
        pass

    if args.command == "split":
        with io.open(args.input_filename, encoding="utf-8") as infile:
            process_file(infile)

    if args.command == "cat":
        cat_together()
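
# Illustrative invocations (all paths below are hypothetical; the build system
# passes its own locations):
#   python makeqstrdefs.py pp <cpp-cmd...> output build/qstr.i.last \
#       cflags <CFLAGS...> cxxflags <CXXFLAGS...> sources <files...> \
#       changed_sources <files...> dependencies <files...>
#   python makeqstrdefs.py split qstr build/qstr.i.last build/qstr _
#   python makeqstrdefs.py cat qstr _ build/qstr build/qstrdefs.collected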