This commit adds a new tool called mpy_ld.py which is essentially a linker
that builds .mpy files directly from .o files. A new header file
(dynruntime.h) and makefile fragment (dynruntime.mk) are also included
which allow building .mpy files from C source code. Such .mpy files can
then be dynamically imported as though they were a normal Python module,
even though they are implemented in C.
Converting .o files directly (rather than pre-linked .elf files) allows the
resulting .mpy to be more efficient because it has more control over the
relocations; for example it can skip PLT indirection. Doing it this way
also allows supporting more architectures, such as Xtensa which has
specific needs for position-independent code and the GOT.
The tool supports targets of x86, x86-64, ARM Thumb and Xtensa (windowed
and non-windowed). BSS, text and rodata sections are supported, with
relocations to all internal sections and symbols, as well as relocations to
some external symbols (defined by dynruntime.h), and linking of qstrs.
Usage:
mpy-tool.py -o merged.mpy --merge mod1.mpy mod2.mpy
The constituent .mpy files are executed sequentially when the merged file
is imported, and they all use the same global namespace.
While the new manifest.py style was introduced for freezing Python code
into the resulting binary, the old way - where files and modules within
ports/*/modules were baked into the resulting binary - was still
supported via `freeze('$(PORT_DIR)/modules')` within manifest.py.
However, behaviour changed for symlinked directories (i.e. modules), as
those links weren't followed anymore.
This commit restores the original behaviour by explicitly following
symlinks within a modules/ directory.
When loading a manifest file, e.g. by include(), it will chdir first to the
directory of that manifest. This means that all file operations within a
manifest are relative to that manifest's location.
As a consequence of this, additional environment variables are needed to
find absolute paths, so the following are added: $(MPY_LIB_DIR),
$(PORT_DIR), $(BOARD_DIR). Also, $(MPY) is renamed to $(MPY_DIR) for
consistency.
Existing manifests are updated to match.
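For example, a board manifest might now look like this (the exact modules
and the included manifest path are illustrative only):

    # paths are resolved relative to this manifest's location, or via the
    # absolute-path variables added by this commit
    freeze('$(PORT_DIR)/modules')
    freeze('$(BOARD_DIR)/modules', 'helper.py')
    include('$(MPY_LIB_DIR)/somelib/manifest.py')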
This introduces a new build variable FROZEN_MANIFEST which can be set to a
manifest listing (written in Python) that describes the set of files to be
frozen into the firmware.
Instead of encoding 4 zero bytes as placeholders for the simple_name and
source_file qstrs, and storing the qstrs after the bytecode, store the
qstrs at the location of these 4 bytes. This saves 4 bytes per bytecode
function stored in a .mpy file (for example lcd160cr.mpy drops by 232
bytes, 4x 58 functions). And resulting code size is slightly reduced on
ports that use this feature.
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information. This part of the prelude now begins with a
single variable-length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section". After decoding this variable unsigned
integer it's possible to skip over one or both of these sections very
easily.
This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
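As a rough illustration of the idea (the exact bit layout and field order
used by the real prelude encoding are not reproduced here), bit-wise
interleaving followed by a standard varuint encoding looks like this:

    # interleave the bits of several small numbers, LSB first, into one
    # unsigned integer, then emit it as a variable-length quantity
    def pack_interleaved(nums, rounds=8):
        value = 0
        for r in range(rounds):  # take bit r of each number per round
            for i, n in enumerate(nums):
                value |= ((n >> r) & 1) << (r * len(nums) + i)
        out = bytearray()
        while True:  # 7 bits per byte, high bit set if more bytes follow
            byte = value & 0x7F
            value >>= 7
            out.append(byte | (0x80 if value else 0))
            if not value:
                return bytes(out)

    # small values interleave into a short encoding
    print(pack_interleaved([2, 0, 1, 2, 0, 0]))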
Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of this
opcode would be between 0x01 and 0x0f and be correctly loaded/saved/frozen
simply as an undefined opcode).
This patch fixes this bug by correctly accounting for the additional byte.
- Split 'qemu-arm' from 'unix' for generating tests.
- Add frozen module to the qemu-arm test build.
- Add test that reproduces the requirement to half-word align native
function data.
Use "-f" to select filesystem mode, followed by the command to execute.
Optionally put ":" at the start of a filename to indicate that it's on the
remote device, if it would otherwise be ambiguous.
Examples:
$ pyboard.py -f ls
$ pyboard.py -f cat main.py
$ pyboard.py -f cp :main.py . # get from device
$ pyboard.py -f cp main.py : # put to device
$ pyboard.py -f rm main.py
Previously, when linking qstr objects in native code for ARM Thumb, the
index into the machine code was being incremented by 4, not 8. It should
be 8 to account for the size of the two machine instructions movw and movt.
This patch makes sure the index into the machine code is incremented by the
correct amount for all variations of qstr linking.
See issue #4829.
Fixes errors in the tool when 1) linking qstrs in native ARM-M code; 2)
freezing multiple files some of which use native code and some which don't.
Fixes issue #4829.
The user can now select their own package index by either passing the "-i"
command line option, or setting the upip.index_urls variable (before doing
an install).
The https://micropython.org/pi package index hosts packages from
micropython-lib and will be searched first when installing a package. If a
package is not found there then it will fall back to PyPI.
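A sketch of what this looks like from Python (the index URL and package
name are made up for the example):

    import upip
    # search a custom index first, then the defaults, before installing
    upip.index_urls = ["https://my.index/pi"] + upip.index_urls
    upip.install("micropython-logging")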
Prior to this patch, when a lot of data was output by a running script
pyboard.py would try to capture all of this output into the "data"
variable, which would gradually slow down pyboard.py to the point where it
would have large CPU and memory usage (on the host) and potentially lose
data.
This patch fixes this problem by not accumulating the data in the case that
the data is not needed, which is when "data_consumer" is used.
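A sketch of the streaming usage (the device path and script are
illustrative):

    import pyboard
    pyb = pyboard.Pyboard("/dev/ttyACM0")
    pyb.enter_raw_repl()
    # output is passed to data_consumer as it arrives instead of being
    # accumulated in memory
    pyb.exec_raw("for i in range(10000): print(i)",
                 data_consumer=lambda data: print(data.decode(), end=""))
    pyb.exit_raw_repl()
    pyb.close()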
The qstr window size is not log-2 encoded, it's just the actual number (but
in mpy-tool.py this didn't lead to an error because the size is just used
to truncate the window so it doesn't grow arbitrarily large in memory).
Addresses issue #4635.
This system makes it a lot easier to include external libraries as static,
native modules in MicroPython. Simply pass USER_C_MODULES (like
FROZEN_MPY_DIR) as a make parameter.
When encoded in the mpy file, if qstr <= QSTR_LAST_STATIC then store two
bytes: 0, static_qstr_id. Otherwise encode the qstr as usual (either with
string data or a reference into the qstr window).
Reduces mpy file size by about 5%.
Instead of emitting two bytes in the bytecode for where the linked qstr
should be written to, it is now replaced by the actual qstr data, or a
reference into the qstr window.
Reduces mpy file size by about 10%.
This is an implementation of a sliding qstr window used to reduce the
number of qstrs stored in a .mpy file. The window size is configured to 32
entries which takes a fixed 64 bytes (16-bits each) on the C stack when
loading/saving a .mpy file. It allows the most recent 32 qstrs to be
remembered so they don't need to be stored again in the .mpy file. The qstr
window
uses a simple least-recently-used mechanism to discard the least recently
used qstr when the window overflows (similar to dictionary compression).
This scheme only needs a single pass to save/load the .mpy file.
Reduces mpy file size by about 25% with a window size of 32.
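An illustrative Python sketch of the window with least-recently-used
eviction (not the exact mpy-tool.py code):

    class QStrWindow:
        def __init__(self, size):
            self.window = []
            self.size = size

        def push(self, val):
            # insert val as most recently used, dropping the oldest entry
            # if the window is full
            self.window = [val] + [w for w in self.window if w != val]
            self.window = self.window[:self.size]

        def access(self, idx):
            # referencing an existing entry makes it most recently used
            val = self.window.pop(idx)
            self.window.insert(0, val)
            return val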
POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a
JUMP. So this optimisation reduces code size, and RAM usage of bytecode by
two bytes for each try-except handler.
Under Python 3 (tested with 3.6.7) bytes() with a list of integers as an
argument returns a different result than under Python 2.7 (tested with
2.7.15rc1), which causes pydfu.py to fail when run under 2.7. Changing
bytes to bytearray makes pydfu work properly under both Python 2.7 and
Python 3.6.
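For illustration, the difference in behaviour is:

    # Python 3: bytes([1, 2, 3])     -> b'\x01\x02\x03'
    # Python 2: bytes([1, 2, 3])     -> '[1, 2, 3]'  (bytes is an alias of str)
    # both:     bytearray([1, 2, 3]) -> the packed three-byte buffer
    buf = bytearray([0x21, 0x37])  # behaves the same on 2.7 and 3.x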
If you happen to only have a really simple frozen file that doesn't contain
any new qstrs then the generated frozen_mpy.c file contains an empty
enumeration which causes a C compile time error.
Following an equivalent fix to py/bc.c. The reason the incorrect values
for the opcode constants were not previously causing a bug is because they
were never being used: these opcodes always have qstr arguments so the part
of the code that was comparing them would never be reached.
Thanks to @malinah for finding the problem and providing the initial patch.
A DFU device must be in the idle state before it can be programmed, and
this requires either clearing the status or aborting, depending on its
current state. Code is added to do this. And the USB transfer size is now
automatically detected so devices with a size less than 2048 bytes work
correctly.
Some Python linters don't like unconditional except clauses because they
catch SystemExit and KeyboardInterrupt, which usually is not the intended
behaviour.
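A typical fix is of this form (run() and handle_error() are placeholders):

    try:
        run()
    except Exception:   # rather than a bare "except:", so SystemExit and
        handle_error()  # KeyboardInterrupt still propagate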
There appears to be an issue on Windows with CPython >= 3.6 where
sys.stdout.flush() raises an exception:
OSError: [WinError 87] The parameter is incorrect
It works fine to just catch and ignore the error on the flush line. Tested
on Windows 10 x64 1803 (Build 17134.228), Python 3.6.4 amd64.
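The workaround is along these lines:

    import sys
    try:
        sys.stdout.flush()
    except OSError:
        # can fail on Windows with CPython >= 3.6; safe to ignore here
        pass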
The Python documentation recommends to pass the command as a string when
using Popen(..., shell=True). This is because "sh -c <string>" is used to
execute the command and additional arguments after the command string are
passed to the shell itself (not the executing command).
https://docs.python.org/3.5/library/subprocess.html#subprocess.Popen
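For example (the command shown is illustrative):

    import subprocess
    # with shell=True the whole command must be a single string; extra
    # arguments after it would be passed to the shell, not to the command
    proc = subprocess.Popen("ls -l /tmp", shell=True)
    proc.wait()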
The first dynamic qstr pool is double the size of the 'alloc' field of
the last const qstr pool. The built in const qstr pool
(mp_qstr_const_pool) has a hardcoded alloc size of 10, meaning that the
first dynamic pool is allocated space for 20 entries. The alloc size
must be less than or equal to the actual number of qstrs in the pool
(the 'len' field) to ensure that the first dynamically created qstr
triggers the creation of a new pool.
When modules are frozen a second const pool is created (generally
mp_qstr_frozen_const_pool) and linked to the built in pool. However,
this second const pool had its 'alloc' field set to the number of qstrs
in the pool. When freezing a large quantity of modules this can result
in thousands of qstrs being in the pool. This means that the first
dynamically created qstr results in a massive allocation. This commit
sets the alloc size of the frozen qstr pool to 10 or less (if the number
of qstrs in the pool is less than 10). The result of this is that the
allocation behaviour when a dynamic qstr is created is identical with and
without frozen code.
Note that there is the potential for a slight memory inefficiency if the
frozen modules have less than 10 qstrs, as the first few dynamic
allocations will have quite a large overhead, but the geometric growth
soon deals with this.
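Illustrative arithmetic (the frozen qstr count is made up):

    frozen_qstr_count = 3000
    old_alloc = frozen_qstr_count            # before: alloc == len
    new_alloc = min(10, frozen_qstr_count)   # after this commit
    # the first dynamic pool is sized at twice the last pool's alloc
    print(2 * old_alloc)   # 6000 entries allocated previously
    print(2 * new_alloc)   # 20 entries allocated now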
The ST DFU bootloader supports a transfer size up to 2048 bytes, so send
that much data on each download (to device) packet. This almost halves
total download time.
Instead of passing through more and more options from tinytest-codegen to
run-tests --list-tests, pipe output of run-tests --list-tests into
tinytest-codegen.
Gets passed to run-tests --list-tests to get actual list of tests to use.
If --target= is not given, the legacy set hardcoded in tinytest-codegen
itself is used.
Also, get rid of tinytest test groups - they aren't really used for
anything, and only complicate processing. Besides, one of the next
steps is to limit the number of tests per generated file to control
the binary size, which will also require a "flat" list of tests.
The way tinytest was used in the qemu-arm test target meant that it didn't
test much. MicroPython tests are based on matching the test output against
reference output, but qemu-arm's implementation didn't do that; it
effectively tested just that there was no exception during test execution.
The "upytesthelper" wrapper was introduced to fix this, and so the test
generator is now switched to generate test code for it.
Also, fix PEP8 and other codestyle issues.
This patch allows the following code to run without allocating on the heap:
super().foo(...)
Before this patch such a call would allocate a super object on the heap and
then load the foo method and call it right away. The super object is only
needed to perform the lookup of the method and not needed after that. This
patch makes an optimisation to allocate the super object on the C stack and
discard it right after use.
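For example, code of the following form no longer allocates on the heap
for the super lookup:

    class A:
        def foo(self, x):
            return x

    class B(A):
        def foo(self, x):
            # the temporary super object now lives on the C stack and is
            # discarded right after the method lookup
            return super().foo(x) + 1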
Changes in code size due to this patch are:
bare-arm: +128
minimal: +232
unix x64: +416
unix nanbox: +364
stmhal: +184
esp8266: +340
cc3200: +128
This allows a command to be executed and communicated with via its
stdin/stdout through pipes ("exec") or via a command-created pseudo-terminal
("execpty"), to emulate serial access. The immediate use case is controlling
a QEMU process which emulates a board's serial via the normal console, but
it could be used e.g. with helper binaries to access a real board over other
hardware protocols, etc.
An example of device specification for these cases is:
--device exec:../zephyr/qemu.sh
--device execpty:../zephyr/qemu2.sh
Where qemu.sh contains a long qemu startup command line, or calls another
command. There is special support in this patch for running the command
in a new terminal session, to support shell wrappers like that (without a
new terminal session, only the wrapper script would be terminated, but its
child processes would continue to run).
This patch introduces a small framework to track differences between
uPy and CPython. The framework consists of:
- A set of "tests" which test for an individual feature that differs between
uPy and CPy. Each test is like a normal uPy test in the test suite, but
has a special comment at the start with some meta-data: a category (eg
syntax, core language), a human-readable description of the difference, a
cause, and a workaround. Following the meta-data there is a short code
snippet which demonstrates the difference. See tests/cpydiff directory
for the initial set of tests.
- A program (this patch) which runs all the tests (on uPy and CPy) and
generates nicely-formatted .rst documenting the differences.
- Integration into the docs build so that everything is automatic, and the
differences appear in a way that is easy for users to read/reference (see
later commits).
The idea with using this new framework is:
- When a new difference is found it's easy to write a short test for it,
along with a description, and add it to the existing ones. It's also easy
for contributors to submit tests for differences they find.
- When something is no longer different the tool will give an error and
the difference can be removed (or promoted to a proper feature test).
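A sketch of what such a test file looks like (the meta-data values and the
code snippet are illustrative only; the fields follow the category,
description, cause and workaround layout described above):

    """
    categories: Syntax
    description: Short human-readable summary of the difference
    cause: Why uPy behaves differently here
    workaround: What the user can do instead
    """
    # a short snippet demonstrating the difference goes here
    x = 1
    print(x)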
Previous to this patch the qemu-arm tests were compiled with is_repl=true
meaning that the __repl_print__ function was called for all lines of code
in the outer scope. This is not the right behaviour for scripts that are
executed as though they were a file (eg tests).
With this fix the micropython/heapalloc_str.py test now works so it is
removed from the test blacklist.
With caching of map lookups in the bytecode, frozen bytecode can still
work but must be stored in RAM, not ROM. This patch allows mpy-tool.py to
generate code that works with this optimisation, but it's not recommended
to use it on embedded targets (because of lack of RAM).
Previous to this patch pyboard.py would open a new serial connection to
the target for each script that was run, and for any command that was run.
Apart from being inefficient, this meant that the board was soft-reset
between scripts/commands, which precludes scripts from accessing variables
set in a previous one.
This patch changes the behaviour of pyboard.py so that the connection to
the target is created only once, and it's not reset between scripts or any
command that is sent with the -c option.
To make its inclusion as frozen modules in multiple ports less magic.
Ports are just expected to symlink 2 files into their scripts/modules
subdirs.
Unix port updated to use this and in general follow the frozen modules
setup tested and tried on baremetal ports, where there's a predefined
"scripts" dir (overridable with the FROZEN_DIR make var), and a user just
drops Python files there.
This helps to test floating point code on Cortex-M hardware.
As part of this patch the link-time-optimisation was disabled because it
wasn't compatible with software FP support. In particular, the linker
could not find the __aeabi_f2d, __aeabi_d2f etc functions even though they
were provided by lib/libm/math.c.
Frozen modules are now stored with extensions and with '/' as path
separator. In other words, frozen module paths are stored as they are
in a normal filesystem.
When an mpy file is frozen it must know the values of certain
configuration variables. This patch provides an explicit check in the
generated C file that the configuration variables are what they are
supposed to be.
The config variable MICROPY_MODULE_FROZEN is now made of two separate
parts: MICROPY_MODULE_FROZEN_STR and MICROPY_MODULE_FROZEN_MPY. This
allows having none, either or both of frozen strings and frozen mpy
files (aka frozen bytecode).
This fix adds PIDs 9801 and 9802 to the pybcdc.inf file.
When in CDC only mode, it presents itself as a Communications
device rather than as a composite device. Presenting as a
composite device with only the CDC interface seems to confuse
Windows.
To test and make sure that the correct pybcdc.inf was being used,
I used USBDeview from http://www.nirsoft.net/utils/usb_devices_view.html
to uninstall any old pyboard drivers (Use Control-F and search
for pyboard). I found running USBDeview as administrator worked best.
Installing the driver in CDC+MSC mode first is recommended (since the
pybcdc.inf file is on the internal flash drive). Then when you switch
modes everything seems to work properly.
I used https://github.com/dhylands/upy-examples/blob/master/boot_switch.py
to easily switch the pyboard between the various USB modes for testing.
The adapter class "TelnetToSerial" is used to access the Telnet
connection using the same API as with the serial connection. The
function pyboard.run-test() has been removed to make the module
generic and because this small test is no longer needed.
When looking for chars to indicate raw repl is active, look for the full
string of chars to improve reliability of entering raw repl correctly.
Previous to this patch there was the possibility that raw repl was
entered in a dirty state, where not all input chars from previous
invocation were drained.
In raw REPL ">" indicates the prompt. We originally read this character
upon entering the raw REPL, and after reading the last bit of the
output. This patch changes the logic so the ">" is read only just
before trying to send the next command. To make this work (and as an
added feature) the input buffer is now flushed upon entering raw REPL.
The main reason for this change is so that pyboard.py recognises the EOF
when sys.exit() is called on the pyboard. I.e., if you run pyboard.py
with a script that calls sys.exit(), then pyboard.py will exit after
the sys.exit() is called.
upip is a simple and light-weight package manager for MicroPython modules,
offering a subset of pip functionality. upip is part of the micropython-lib
project: https://github.com/micropython/micropython-lib/tree/master/upip
This script bootstraps upip by downloading and unpacking it directly from
PyPI repository, with all other packages to be installed with upip itself.
Replaces the RUN_TEST=1 definition; now "make test" in the qemu-arm directory
will run tests/basics/ and check that they all succeed.
This patch also enables the test on Travis CI.
We don't have an explicit ChangeLog file, but don't really need one
because we use a good version control system. This script is useful if
you need a pretty-printed ChangeLog for some reason.
This makes pyboard.py much more useful for long running scripts. When
running a script via pyboard.py, it now waits until the script finishes,
with no timeout. CTRL-C can be used to break out of the waiting if
needed.
Improvements are:
2 ctrl-C's are now needed to truly kill a running script on the pyboard, so
the CDC interface is made to allow multiple ctrl-C's through at once (ie
sending b'\x03\x03' to the pyboard now counts as 2 ctrl-C's).
ctrl-C in friendly-repl can now stop multi-line input.
In raw-repl mode, use ctrl-D to indicate end of running script, and also
end of any error message. Thus, output of raw-repl is always at least 2
ctrl-D's and it's much easier to parse.
pyboard.py is now a bit faster, handles exceptions from pyboard better
(prints them and exits with exit code 1), prints out the pyboard output
while the script is running (instead of waiting till the end), and
allows the output of a previous script to be followed when run with no
arguments.
-t/--target is a pip option. Trying to use pip options for different meanings
in pip-micropython may lead to big confusion. That's why the original passed
any extra parameters using environment variables. "All options belong to pip."
Opening a file in Python in 'r' mode opens it for text reading, which
converts all new lines to \n. We could use 'rb' binary mode, but then we
don't have access to the string Template replacement functions. Thus,
force the output to have '\\r\\n' line endings.
Also fix regex to match hex digits.
The USB VID&PID are automatically extracted from usbd_desc_cdc_msc.c
and inserted into pybcdc_inf.template, ensuring that the same USB
IDs get used everywhere.
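A minimal sketch of the generation step (the substitution keys and ID
values are assumptions for illustration; the real script extracts them
from usbd_desc_cdc_msc.c):

    from string import Template

    text = open("pybcdc_inf.template", "r").read()   # text mode: '\n' endings
    out = Template(text).substitute(USB_VID="f055", USB_PID="9800")
    # write with '\r\n' line endings as Windows .inf files expect
    with open("pybcdc.inf", "w", newline="\r\n") as f:
        f.write(out)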