Add working example code to provide a starting point for users with files
that they can just copy, and include the modules in the coverage test to
verify the complete user C module build functionality. The cexample module
uses the code originally found in cmodules.rst, which has been updated to
reflect this and partially rewritten with more complete information.
Support building .cpp files and linking them into the micropython
executable in a way similar to how it is done for .c files. The main
incentive here is to enable user C modules to use C++ files (which are put
in SRC_MOD_CXX by py.mk) since the core itself does not utilize C++.
However, to verify build functionality a unix coverage test is added. The
esp32 port already has CXXFLAGS so just add the user modules' flags to it.
For the unix port use a copy of the CFLAGS but strip the ones which are not
usable for C++.
If a port provides MICROPY_PY_URANDOM_SEED_INIT_FUNC as a source of
randomness then this will be used when urandom.seed() is called without
an argument (or with None as the argument) to seed the PRNG.
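For example (a sketch; hardware seeding only works on ports that provide
such a seed function):

    import urandom
    urandom.seed()      # reseed from the port's seed source
    print(urandom.getrandbits(8))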
Other related changes in this commit:
- mod_urandom___init__ is changed to call seed() without arguments, instead
of explicitly passing in the result of MICROPY_PY_URANDOM_SEED_INIT_FUNC.
- mod_urandom___init__ will only ever seed the PRNG once (previously it
  could seed it again if the module was imported via, eg, random and then
  urandom).
- The Yasmarang state is moved to the BSS for builds where the state is
guaranteed to be initialised on import of the (u)random module.
Signed-off-by: Damien George <damien@micropython.org>
When threading is enabled without the GIL then there can be races between
the threads accessing the globals dict. Avoid this issue by making sure
all global variables are allocated before starting the threads.
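A minimal sketch of the pattern (illustrative, not the actual test code):

    import _thread

    result = None           # allocate the global entry before any thread starts

    def worker():
        global result
        result = 42         # rebinding an existing entry doesn't grow the dict

    _thread.start_new_thread(worker, ())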
Signed-off-by: Damien George <damien@micropython.org>
It requires mp_hal_time_ns() to be provided by a port. This function
allows very accurate absolute timestamps.
Enabled on unix, windows, stm32, esp8266 and esp32.
Signed-off-by: Damien George <damien@micropython.org>
The use of -S ensures that only the CPython standard library is accessible,
which makes tests run the same regardless of any site-packages that are
installed. It also improves start-up time of CPython, reducing the overall
time spent running the test suite.
tests/basics/containment.py is updated to work around an issue with old Python
versions not being able to str-format a dict-keys object, which becomes
apparent when -S is used.
Signed-off-by: Damien George <damien@micropython.org>
A read-only memoryview object is a better representation of the data, which
is owned by the ubluetooth module and may change between calls to the
user's irq callback function.
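User code that wants to keep the data beyond the callback should therefore
copy it, eg (a sketch, with the _IRQ_GATTC_NOTIFY constant as per the
ubluetooth docs):

    def bt_irq(event, data):
        if event == _IRQ_GATTC_NOTIFY:
            conn_handle, value_handle, notify_data = data
            saved = bytes(notify_data)  # copy; the memoryview may change later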
Signed-off-by: Damien George <damien@micropython.org>
Prior to this commit, uos.chdir('/') followed by uos.stat('noexist') would
incorrectly succeed even though the entry did not exist (some other functions
like listdir had similar issues). This is because, if the current
directory was the root and the path was relative, mp_vfs_lookup_path would
return success for bad paths.
Signed-off-by: Damien George <damien@micropython.org>
And enable this feature on the unix coverage variant. The .exp test file
is needed so the test can run on CPython versions prior to "@=" operator
support.
Signed-off-by: Damien George <damien@micropython.org>
Prior to this commit, pow(-2, float('nan')) would return (nan+nanj), or
raise an exception on targets that don't support complex numbers. This is
fixed to return simply nan, as CPython does.
Signed-off-by: Damien George <damien@micropython.org>
This is consistent with the other 'micro' modules and allows implementing
additional features in Python via e.g. micropython-lib's sys.
Note this is a breaking change (not backwards compatible) for ports which
do not enable weak links, as "import sys" must now be replaced with
"import usys".
Verifies mtime timestamps on files match the value returned by time.time().
Also update vfs_fat_ramdisk.py so it doesn't check FAT timestamp of the
root, because that may change across runs/ports.
Signed-off-by: Damien George <damien@micropython.org>
This commit fixes the cases when a TCP socket is in STATE_NEW,
STATE_LISTENING or STATE_CONNECTING and recv() is called on it. It now
raises ENOTCONN instead of a random error code due to it previously
indexing beyond the start of error_lookup_table[].
Signed-off-by: Damien George <damien@micropython.org>
Updating to Black v20.8b1 there are two changes that affect the code in
this repository:
- If there is a trailing comma in a list (eg [], () or function call) then
that list is now written out with one line per element. So remove such
trailing commas where the list should stay on one line.
- Spaces at the start of """ doc strings are removed.
Signed-off-by: Damien George <damien@micropython.org>
So they can be skipped if __rOP__'s are not supported on the target. Also
fix the typo in the complex_special_methods.py filename.
Signed-off-by: Damien George <damien@micropython.org>
A configurable result directory is advantageous because it enables
using a dedicated location, eventually outside of the source tree,
instead of forcing the output files into a fixed directory which might
also contain other files already. For that reason the default output
directory also has been changed to tests/results/.
Replace some usages of paths relative to the current working directory
with absolute paths relative to the tests directory.
Fixes and resulting changes:
- default values of MICROPYTHON and MPYCROSS are absolute paths and
always correct
- likewise, the correct full paths for tools and extmod directories
are appended to sys.path
- printing/cleaning failures works properly since it expects the .exp
and .out files in the tests directory which is also where they
are written to now, plus no more need for changing directories
This fixes #5872 and allows running custom tests which use run-tests
without having to cd to the tests directory first, and the test output
still is in the tests/ directory instead of the current working directory.
Discovery of tests and all skip-test logic based on paths relative to
the current working directory remains unchanged, which essentially means
that to run most of MicroPython's own tests, run-tests must still be run
from within its directory, so document that.
With sleep(0.2) being a multiple of sleep(0.1), the order of task 2 and 3
execution is not well defined, and depends on the precision of the system
clock and how fast the rest of the code runs. So change 0.2 to 0.18 to
make the test more reliable.
Also fix a typo of t3/t4, and cancel t4 at the end.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds support for modification time of files on littlefs v2
filesystems, using file attributes. For some background see issue #6114.
Features/properties of this implementation:
- Only supported on littlefs2 (not littlefs1).
- Uses littlefs2's general file attributes to store the timestamp.
- The timestamp is 64-bits and stores nanoseconds since 1970/1/1 (if the
range to the year 2554 is not enough then additional bits can be added to
this timestamp by adding another file attribute).
- mtime is enabled by default but can be disabled in the constructor, eg:
uos.mount(uos.VfsLfs2(bdev, mtime=False), '/flash')
- It's fully backwards compatible, existing littlefs2 filesystems will work
without reformatting and timestamps will be added transparently to
existing files (once they are opened for writing).
- Files without timestamps will open correctly, and stat will just return 0
for their timestamp.
- mtime can be disabled or enabled at each mount and timestamps will only
be updated if mtime is enabled (otherwise they will be untouched).
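The timestamp can then be read back via uos.stat(), eg (a sketch; the path
is just an example and index 8 of the stat tuple is the mtime):

    import uos
    print(uos.stat('/flash/main.py')[8])   # 0 if the file has no timestamp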
Signed-off-by: Damien George <damien@micropython.org>
Otherwise a task that continuously awaits on a large negative sleep can
monopolise the scheduler (because its wake time is always less than
everything else in the pairing heap).
Signed-off-by: Damien George <damien@micropython.org>
As per CPython behaviour, compile(stmt, "file", "single") should create
code which prints to stdout (via __repl_print__) the results of any
expressions in stmt.
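For example:

    code = compile("1 + 1", "file", "single")
    exec(code)    # prints: 2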
Signed-off-by: Damien George <damien@micropython.org>
mp_reader_new_file() is used to read in files for importing, either .py or
.mpy files, for the lexer and persistent code loader respectively. In both
cases the file should be opened in raw bytes mode: the lexer handles
unicode characters itself, and .mpy files contain 8-bit bytes by nature.
Before this commit importing was working correctly because, although the
file was opened in text mode, all native filesystem implementations (POSIX,
FAT, LFS) would access the file in raw bytes mode via mp_stream_rw()
calling mp_stream_p_t.read(). So it was only an issue for non-native
filesystems, such as those implemented in Python. For Python-based
filesystem implementations, a call to mp_stream_rw() would go via IOBase
and then to readinto() at the Python level, and readinto() is only defined
on files opened in raw bytes mode.
Signed-off-by: Damien George <damien@micropython.org>
On ports where normal heap memory can contain executable code (eg ARM-based
ports such as stm32), native code loaded from an .mpy file may be reclaimed
by the GC because there's no reference to the very start of the native
machine code block that is reachable from root pointers (only pointers to
internal parts of the machine code block are reachable, but that doesn't
help the GC find the memory).
This commit fixes this issue by maintaining an explicit list of root
pointers pointing to native code that is loaded from an .mpy file. This
is not needed for all ports so is selectable by the new configuration
option MICROPY_PERSISTENT_CODE_TRACK_RELOC_CODE. It's enabled by default
if a port does not specify any special functions to allocate or commit
executable memory.
A test is included to test that native code loaded from an .mpy file does
not get reclaimed by the GC.
Fixes #6045.
Signed-off-by: Damien George <damien@micropython.org>
All imports are now tested to see if the test should be skipped,
UserFile.read is removed, and UserFile.readinto is made more efficient.
Signed-off-by: Damien George <damien@micropython.org>
These tests are specific to MicroPython so have a better home in the
micropython/ test subdir, and putting them here allows them to be run by
all targets, not just those that have access to the local filesystem (eg
the unix port).
Signed-off-by: Damien George <damien@micropython.org>
It raises an EOFError instead of an IncompleteReadError (which is what
CPython does). But the latter is derived from EOFError so code compatible
with MicroPython and CPython can be written by catching EOFError (eg see
included test).
Fixes issue #6156.
Signed-off-by: Damien George <damien@micropython.org>
MicroPython's original implementation of __aiter__ was correct for an
earlier (provisional) version of PEP492 (CPython 3.5), where __aiter__ was
an async-def function. But that changed in the final version of PEP492 (in
CPython 3.5.2) where the function was changed to a normal one. See
https://www.python.org/dev/peps/pep-0492/#why-aiter-does-not-return-an-awaitable
See also the note at the end of this subsection in the docs:
https://docs.python.org/3.5/reference/datamodel.html#asynchronous-iterators
And for completeness the BPO: https://bugs.python.org/issue27243
To be consistent with the Python spec as it stands today (and now that
PEP492 is final) this commit changes MicroPython's behaviour to match
CPython: __aiter__ should return an async-iterable object, but is not
itself awaitable.
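An async iterable is therefore now written as, for example:

    class ARange:
        def __init__(self, n):
            self.i = 0
            self.n = n

        def __aiter__(self):        # plain method returning the async iterator
            return self

        async def __anext__(self):  # the async part lives here
            if self.i >= self.n:
                raise StopAsyncIteration
            self.i += 1
            return self.i - 1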
The relevant tests are updated to match.
See #6267.
Because the argument arrays may overlap, as shown by the new tests in this
commit.
Also remove the debugging comments for these macros, add a new comment
about overlapping regions, and separate the macros by blank lines to make
them easier to read.
Fixes issue #6244.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds human readable error messages when mbedtls or axtls raise
an exception. Currently often just an EIO error is raised so the user is
lost and can't tell whether it's a cert error, buffer overrun, connecting
to a non-ssl port, etc. The axtls and mbedtls error raising in the ussl
module is modified to raise:
OSError(-err_num, "error string")
For axtls a small error table of strings is added and used for the second
argument of the OSError. For mbedtls the code uses mbedtls' built-in
strerror function, and if there is an out of memory condition it just
produces OSError(-err_num). Producing the error string for mbedtls is
conditional on them being included in the mbedtls build, via
MBEDTLS_ERROR_C.
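The error can then be inspected in user code, eg (a sketch, with sock an
already-connected socket):

    import ussl
    try:
        ss = ussl.wrap_socket(sock)
    except OSError as er:
        print(er.args)   # eg (-err_num, "error string"), or just (-err_num,)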
This commit adds the IRQ_GATTS_INDICATE_DONE BLE event which will be raised
with the status of gatts_indicate (unlike notify, indications require
acknowledgement).
An example of its use is added to ble_temperature.py, and to the multitests
in ble_characteristic.py.
Implemented for btstack and nimble bindings, tested in both directions
between unix/btstack and pybd/nimble.
The goal of this commit is to allow using ble.gatts_notify() at any time,
even if the stack is not ready to send the notification right now. It also
addresses the same issue for ble.gatts_indicate() and ble.gattc_write()
(without response). In addition this commit fixes the case where the
buffer passed to write-with-response wasn't copied, meaning it could be
modified by the caller, affecting the in-progress write.
The changes are:
- gatts_notify/indicate will now run in the background if the ACL buffer is
currently full, meaning that notify/indicate can be called at any time.
- gattc_write(mode=0) (no response) will now allow for one outstanding
write.
- gattc_write(mode=1) (with response) will now copy the buffer so that it
can't be modified by the caller while the write is in progress.
All four paths also now track the buffer while the operation is in
progress, which prevents the GC from freeing the buffer while it's still
needed.
This commit fixes lookups of class members to make it so that built-in
functions that are used as methods/functions of a class work correctly.
The mp_convert_member_lookup() function is pretty much completely changed
by this commit, but for the most part it's just reorganised and the
indenting changed. The functional changes are:
- staticmethod and classmethod checks moved to later in the if-logic,
because they are less common and so should be checked after the more
common cases.
- The explicit mp_obj_is_type(member, &mp_type_type) check is removed
because it's now subsumed by other, more general tests in this function.
- MP_TYPE_FLAG_BINDS_SELF and MP_TYPE_FLAG_BUILTIN_FUN type flags added to
make the checks in this function much simpler (now they just test this
bit in type->flags).
- An extra check is made for mp_obj_is_instance_type(type) to fix lookup of
built-in functions.
Fixes #1326 and #6198.
Signed-off-by: Damien George <damien@micropython.org>
This allows complex binary operations to fail gracefully with unsupported
operation rather than raising an exception, so that special methods work
correctly.
Signed-off-by: Damien George <damien@micropython.org>
uint types in viper mode can now be used for all binary operators except
floor-divide and modulo.
Fixes issue #1847 and issue #6177.
Signed-off-by: Damien George <damien@micropython.org>
An OrderedDict can now be used for the locals when creating a type
explicitly via type(name, bases, locals).
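For example (on ports with OrderedDict support):

    from ucollections import OrderedDict
    ns = OrderedDict([("x", 1), ("y", 2)])
    A = type("A", (), ns)
    print(A.x, A.y)    # 1 2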
Signed-off-by: Damien George <damien@micropython.org>
The syntax matches CPython and the semantics are equivalent except that,
unlike CPython, MicroPython allows using := to assign to comprehension
iteration variables, because disallowing this would take a lot of code to
check for it.
The new compile-time option MICROPY_PY_ASSIGN_EXPR selects this feature and
is enabled by default, following MICROPY_PY_ASYNC_AWAIT.
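For instance, the following is now valid:

    values = [1, 2, 3, 4]
    if (n := len(values)) > 3:
        print(n)    # 4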
For example, to run the BLE multitests entirely with the unix port:
env MICROPY_MICROPYTHON=../ports/unix/micropython-dev ./run-multitests.py \
-i micropython,MICROPYBTUSB=01 \
-i micropython,MICROPYBTUSB=02:02 \
multi_bluetooth/ble_*.py
The behavior mirrors the instance object dict attribute where a copy of the
local attributes is provided (unless the dict is read-only, in which case
that dict itself is returned, as an optimisation). MicroPython does not support
modifying this dict because the changes will not be reflected in the class.
The feature is only enabled if MICROPY_CPYTHON_COMPAT is set, the same as
the instance version.
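For example:

    class A:
        x = 1

    print(A.__dict__["x"])    # 1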
Older implementations deal with infinity/negative zero incorrectly. This
commit adds generic fixes that can be enabled by any port that needs them,
along with new test cases.
This commit allows the user to set/get the GAP device name used by service
0x1800, characteristic 0x2a00. The usage is:
BLE.config(gap_name="myname")
print(BLE.config("gap_name"))
As part of this change the compile-time setting
MICROPY_PY_BLUETOOTH_DEFAULT_NAME is renamed to
MICROPY_PY_BLUETOOTH_DEFAULT_GAP_NAME to emphasise its link to GAP and this
new "gap_name" config value. And the default value of this for the NimBLE
bindings is changed from "PYBD" to "MPY NIMBLE" to be more generic.
This commit fixes the behaviour of socket.getaddrinfo on the ESP32 so it
raises an OSError when the name resolution fails instead of returning a []
or a resolution for 0.0.0.0.
Tests are added (generic and ESP32-specific) to verify behaviour consistent
with CPython, modulo the different types of exceptions per MicroPython
documentation.
If the new name starts with '/', cur_dir is not prepended any more, so that
the current working directory is respected. And extend the test cases for
rename to cover this functionality.
This change scans for '.', '..' and multiple '/' and normalizes the new
path name. If the resulting path does not exist, an error is raised.
Non-existing interim path elements are ignored if they are removed during
normalization.
This fixes the bug that stat(filename) would not consider the current
working directory. So if e.g. the cwd is "lib", then stat("main.py") would
return the info for "/main.py" instead of "/lib/main.py".
On arm64 with CPython:
>>> _thread.stack_size(32*1024)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: size not valid: 32768 bytes
So increase the CPython value in the test to 512k so it runs on more
systems (on modern Linux the default stack size is usually 8MB).
Constant expressions like "2 ** 3" will now be folded, and the special form
"X = const(2 ** 3)" will now compile because the argument to the const is
now a constant.
Fixes issue #5865.
This commit adds several small items to improve the support for OTA
updates on an esp32:
- a partition table for 4MB flash modules that has two OTA partitions ready
to go to do updates
- a GENERIC_OTA board that uses that partition table and that enables
automatic roll-back in the bootloader
- a new esp32.Partition.mark_app_valid_cancel_rollback() class-method to
signal that the boot is successful and should not be rolled back at the
next reset
- an automated test for doing an OTA update
- documentation updates
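The new class-method from the list above is used like this in an application
that has just booted from a new OTA image (a sketch):

    import esp32
    # confirm this boot as good so the bootloader won't roll back on reset
    esp32.Partition.mark_app_valid_cancel_rollback()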
For ports that have a system malloc which is not garbage collected (eg
unix, esp32), the stream object for the DB must be retained separately to
prevent it from being reclaimed by the MicroPython GC (because the
berkeley-db library uses malloc to allocate the DB structure which stores
the only reference to the stream).
Although in some cases the user code will explicitly retain a reference to
the underlying stream because it needs to call close() on it, this is not
always the case, eg in cases where the DB is intended to live forever.
Fixes issue #5940.
One can now use `-i micropython` and `-i cpython` to add instances using
the `MICROPYTHON` and `CPYTHON3` variables (which can be overridden by env
vars).
This commit consolidates a number of check_esp_err functions that check
whether an ESP-IDF return code is OK and raises an exception if not. The
exception raised is an OSError with the error code as the first argument
(negative if it's ESP-IDF specific) and the ESP-IDF error string as the
second argument.
This commit also fixes esp32.Partition.set_boot to use check_esp_err, and
uses that function for a unit test.
This commit adds an idf_heap_info(capabilities) method to the esp32 module
which returns info about the ESP-IDF heaps. It's useful to get a bit of a
picture of what's going on when code fails because ESP-IDF can't allocate
memory anymore. Includes documentation and a test.
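For example (a sketch assuming the esp32.HEAP_DATA capability constant):

    import esp32
    for region in esp32.idf_heap_info(esp32.HEAP_DATA):
        print(region)    # per-heap info, eg sizes and free space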
For combinations of certain versions of glibc and gcc the definition of
fpclassify always takes float as argument instead of adapting itself to
float/double/long double as required by the C99 standard. At the time of
writing this happens for instance for glibc 2.27 with gcc 7.5.0 when
compiled with -Os and glibc 3.0.7 with gcc 9.3.0. When calling fpclassify
with double as argument, as in objint.c, this results in an implicit
narrowing conversion which is not really correct plus results in a warning
when compiled with -Wfloat-conversion. So fix this by spelling out the
logic manually.
When the unix and windows ports use MICROPY_FLOAT_IMPL_FLOAT instead of
MICROPY_FLOAT_IMPL_DOUBLE, the test output has for example
complex(-0.15052, 0.34109) instead of the expected
complex(-0.15051, 0.34109).
Use one less decimal place when printing the output to fix this.
This commit adds Loop.new_event_loop() which is used to reset the singleton
event loop. This functionality is put here instead of in Loop.close() to
make it possible to write code that is compatible with CPython.
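For example (a sketch; the same code also runs under CPython's asyncio):

    import uasyncio as asyncio

    async def main():
        await asyncio.sleep(0)

    asyncio.run(main())
    asyncio.new_event_loop()    # reset the singleton event loop
    asyncio.run(main())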
In this part of the code there is no way to get the ** operator, so no need
to check for it.
This commit also adds tests for this, and other related, invalid const
operations.
The decompression of error-strings is only done if the string is accessed
via printing or via er.args. Tests are added for this feature to ensure
the decompression works.
This adds a couple of commands to the run-tests script to print the diffs
of failed tests and also to clean up the .exp and .out files after failed
tests. (And a spelling error is fixed while we are touching nearby code.)
Travis is also updated to use these new commands, including using it for
more builds.
Since automatically formatting tests with black, we have lost one line of
code coverage. This adds an explicit test to ensure we are testing the
case that is no longer covered implicitly.
This adds the Python files in the tests/ directory to be formatted with
./tools/codeformat.py. The basics/ subdirectory is excluded for now so we
aren't changing too much at once.
In a few places `# fmt: off`/`# fmt: on` was used where the code had
special formatting for readability or where the test was actually testing
the specific formatting.
Includes a test where the (non-uasyncio) client does a RST on the
connection, a simple TCP server/client test where both sides are using
uasyncio, and a test for TCP stream close then write.
Fixes UDP non-blocking recv so it returns EAGAIN instead of ETIMEDOUT.
Timeout waiting for incoming data is also improved by replacing 100ms delay
with poll_sockets(), as is done in other parts of this module.
Fixes issue #5759.
This commit adds micropython.heap_locked() which returns the current
lock-depth of the heap, and can be used by Python code to check if the heap
is locked or not. This new function is configured via
MICROPY_PY_MICROPYTHON_HEAP_LOCKED and is disabled by default.
This commit also changes the return value of micropython.heap_unlock() so
it returns the current lock-depth as well.
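For example (with MICROPY_PY_MICROPYTHON_HEAP_LOCKED enabled; done inside a
function so no allocation happens while the heap is locked):

    import micropython

    def check():
        micropython.heap_lock()
        depth = micropython.heap_locked()    # 1 while locked once
        micropython.heap_unlock()            # returns the new depth, here 0
        return depth

    print(check())    # 1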
This commit changes the BLE _IRQ_SCAN_RESULT data from:
addr_type, addr, connectable, rssi, adv_data
to:
addr_type, addr, adv_type, rssi, adv_data
This allows _IRQ_SCAN_RESULT to handle all scan result types (not just
connectable and non-connectable passive scans), and to distinguish between
them using adv_type which is an integer taking values 0x00-0x04 per the BT
specification.
This is a breaking change to the API, albeit a very minor one: the existing
connectable value was a boolean and True now becomes 0x00, False becomes
0x02.
Documentation is updated and a test added.
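A scan-result handler then looks something like (a sketch, with the
_IRQ_SCAN_RESULT constant as per the ubluetooth docs):

    def bt_irq(event, data):
        if event == _IRQ_SCAN_RESULT:
            addr_type, addr, adv_type, rssi, adv_data = data
            if adv_type == 0x00:    # connectable undirected advertising
                print(rssi, bytes(addr))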
Fixes #5738.
This commit adds a test runner and initial test scripts which run multiple
Python/MicroPython instances (eg executables, target boards) in parallel.
This is useful for testing, eg, network and Bluetooth functionality.
Each test file has a set of functions called instanceX(), where X ranges
from 0 up to the maximum number of instances that are needed, N-1. Then
run-multitests.py will execute this script on N separate instances (eg
micropython executables, or attached boards via pyboard.py) at the same
time, synchronising their start in the right order, possibly passing IP
address (or other address like bluetooth MAC) from the "server" instance to
the "client" instances so they can connect to each other. It then runs
them to completion, collects the output, and then tests against what
CPython gives (or what's in a provided .py.exp file).
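A minimal test script with two instances therefore looks like (an
illustrative sketch):

    def instance0():
        print("hello from the server instance")

    def instance1():
        print("hello from the client instance")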
The tests will be run using the standard unix executable for all instances
by default, eg:
$ ./run-multitests.py multi_net/*.py
Or they can be run with a board and unix executable via:
$ ./run-multitests.py --instance pyb:/dev/ttyACM0 --instance exec:micropython multi_net/*.py
Only the "==" operator was tested by the test suite in for such arguments.
Other comparison operators like "<" take a different path in the code so
need to be tested separately.
When this variable is set to a non-empty string it triggers the REPL after a
command/module/file finishes running.
The Python file is given without the file extension because the cmdline:
parser in run-tests splits on spaces, so the -c option can't be used since
`import os` can't be written without a space.
This commit implements a more complete replication of CPython's behaviour
for equality and inequality testing of objects. This addresses the issues
discussed in #5382 and a few other inconsistencies. Improvements over the
old code include:
- Support for returning non-boolean results from comparisons (as used by
numpy and others).
- Support for non-reflexive equality tests.
- Preferential use of __ne__ methods and MP_BINARY_OP_NOT_EQUAL binary
operators for inequality tests, when available.
- Fallback to op2 == op1 or op2 != op1 when op1 does not implement the
(in)equality operators.
The scheme here makes use of a new flag, MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST,
in the flags word of mp_obj_type_t to indicate if various shortcuts can or
cannot be used when performing equality and inequality tests. Currently
four built-in classes have the flag set: float and complex are
non-reflexive (since nan != nan) while bytearray and frozenset instances
can equal other builtin class instances (bytes and set respectively). The
flag is also set for any new class defined by the user.
This commit also includes a more comprehensive set of tests for the
behaviour of (in)equality operators implemented in special methods.
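For instance, the reflected fallback now behaves as in CPython:

    class B:
        def __eq__(self, other):
            return isinstance(other, int) and other == 42

    print(42 == B())    # True: int defers, so B.__eq__ is used
    print(42 != B())    # False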
This commit adds a generator test for throwing into a nested exception, and
one when using yield-from with a pending exception cleanup. Both these
tests currently fail on the native emitter, and are simplified versions of
native test failures from uasyncio in #5332.
This commit adds backward-word, backward-kill-word, forward-word,
forward-kill-word sequences for the REPL, with bindings to Alt+F, Alt+B,
Alt+D and Alt+Backspace respectively. It is disabled by default and can be
enabled via MICROPY_REPL_EMACS_WORDS_MOVE.
Further enabling MICROPY_REPL_EMACS_EXTRA_WORDS_MOVE adds extra bindings
for these new sequences: Ctrl+Right, Ctrl+Left and Ctrl+W.
The features are enabled on unix micropython-coverage and micropython-dev.
As the mktime documentation for CPython states: "The earliest date for
which it can generate a time is platform-dependent". In particular on
Windows this depends on the timezone so e.g. for UTC+2 the earliest is 2
hours past midnight January 1970. So change the reference to the earliest
possible, for UTC+14.
It is possible for `run_feature_check(pyb, args, base_path, 'float.py')` to
return `b'CRASH'`. This causes an unhandled exception in `int()`.
This commit fixes the problem by first testing for `b'CRASH'` before trying
to convert the return value to an integer.
Instances of the slice class are passed to __getitem__() on objects when
the user indexes them with a slice. In practice the majority of the time
(other than passing it on untouched) is to work out what the slice means in
the context of an array dimension of a particular length. Since Python 2.3
there has been a method on the slice class, indices(), that takes a
dimension length and returns the real start, stop and step, accounting for
missing or negative values in the slice spec. This commit implements such
an indices() method on the slice class.
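For example:

    print(slice(None, None, -1).indices(10))   # (9, -1, -1)
    print(slice(2, 20).indices(10))            # (2, 10, 1)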
It is configurable at compile-time via MICROPY_PY_BUILTINS_SLICE_INDICES,
disabled by default, enabled on unix, stm32 and esp32 ports.
This commit also adds new tests for slice indices and for slicing unicode
strings.
Allows assigning attributes on class instances that implement their own
__setattr__. Both object.__setattr__ and super(A, b).__setattr__ will work
with this commit.
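For example:

    class A:
        def __setattr__(self, name, value):
            print("setting", name)
            object.__setattr__(self, name, value)   # actually store it

    a = A()
    a.x = 1       # prints: setting x
    print(a.x)    # 1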
Because CPython 3.8.0 now produces different output:
- basics/parser.py: CPython does not allow '\\\n' as input.
- import/import_override: CPython imports _io.
This commit adds a sys.implementation.mpy entry when the system supports
importing .mpy files. This entry is a 16-bit integer which encodes two
bytes of information from the header of .mpy files that are supported by
the system being run: the second and third bytes, .mpy version, and flags
and native architecture. This allows determining the supported .mpy file
dynamically by code, and also for the user to find it out by inspecting
this value. It's further possible to dynamically detect if the system
supports importing .mpy files by `hasattr(sys.implementation, 'mpy')`.
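For example:

    import sys
    if hasattr(sys.implementation, 'mpy'):
        print("mpy header info: %04x" % sys.implementation.mpy)
    else:
        print(".mpy import not supported")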
Replace the is_running field with a tri-state variable to indicate
running/not-running/pending-exception.
Update tests to cover the various cases.
This allows cancellation in uasyncio even if the coroutine hasn't been
executed yet. Fixes #5242.
POSIX poll should always return POLLERR and POLLHUP in revents, regardless
of whether they were requested in the input events flags.
See issues #4290 and #5172.
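Ie error conditions are reported even when only POLLIN was requested, eg
(a sketch):

    import select, socket

    s = socket.socket()
    p = select.poll()
    p.register(s, select.POLLIN)            # POLLERR/POLLHUP not requested...
    for obj, ev in p.poll(100):
        if ev & (select.POLLERR | select.POLLHUP):
            print("error/hangup on", obj)   # ...but still reported in revents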
Instead of encoding 4 zero bytes as placeholders for the simple_name and
source_file qstrs, and storing the qstrs after the bytecode, store the
qstrs at the location of these 4 bytes. This saves 4 bytes per bytecode
function stored in a .mpy file (for example lcd160cr.mpy drops by 232
bytes, 4x 58 functions). And resulting code size is slightly reduced on
ports that use this feature.
Prior to this commit, when unwinding through an active finally the stack
was not being correctly popped/folded, which resulted in the VM crashing
for complicated unwinding of nested finallys.
This should be fixed with this commit, and more tests for return/break/
continue within a finally have been added to exercise this.
This check follows CPython's behaviour, because 'import *' always populates
the globals with the imported names, not locals.
Since it's safe to do this (doesn't lead to a crash or undefined behaviour)
the check is only enabled for MICROPY_CPYTHON_COMPAT.
Fixes issue #5121.
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information. This part of the prelude now begins with a
single variable-length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section". After decoding this variable unsigned
integer it's possible to skip over one or both of these sections very
easily.
This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
From the beginning of this project the RAISE_VARARGS opcode was named and
implemented following CPython, where it has an argument (to the opcode)
counting how many args the raise takes:
raise # 0 args (re-raise previous exception)
raise exc # 1 arg
raise exc from exc2 # 2 args (chained raise)
In the bytecode this operation therefore takes 2 bytes, one for
RAISE_VARARGS and one for the number of args.
This patch splits this opcode into 3, where each is now a single byte.
This reduces bytecode size by 1 byte for each use of raise. Every byte
counts! It also has the benefit of reducing code size (on all ports except
nanbox).
To make progress towards MicroPython supporting Python 3.5, adding the
matmul operator is important because it's a really "low level" part of the
language, being a new token and modifications to the grammar.
It doesn't make sense to make it configurable because 1) it would make the
grammar and lexer complicated/messy; 2) no other operators are
configurable; 3) it's not a feature that can be "dynamically plugged in"
via an import.
And matmul can be useful as a general purpose user-defined operator, it
doesn't have to be just for numpy use.
Based on work done by Jim Mussared.
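For example, as a user-defined operator:

    class Vec:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __matmul__(self, other):    # invoked by the binary @ operator
            return self.x * other.x + self.y * other.y

    print(Vec(1, 2) @ Vec(3, 4))    # 11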
Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of this
opcode would be between 0x01 and 0x0f and be correctly loaded/saved/frozen
simply as an undefined opcode).
This patch fixes this bug by correctly accounting for the additional byte.
With this patch alignment is done relative to the start of the buffer that
is being unpacked, not the raw pointer value, as per CPython.
Fixes issue #3314.
With this patch exceptions that are re-raised have improved tracebacks
(less confusing, match CPython), and it makes re-raise slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Also general VM performance is not measurably affected.
Partially fixes issue #2928.
With this patch exception tracebacks that go through a finally are improved
(less confusing, match CPython), and it makes finally's slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Partially fixes issue #2928.
- Split 'qemu-arm' from 'unix' for generating tests.
- Add frozen module to the qemu-arm test build.
- Add test that reproduces the requirement to half-word align native
function data.
Enabled via MICROPY_PY_URE_DEBUG, disabled by default (but enabled on unix
coverage build). This is a rarely used feature that costs a lot of code
(500-800 bytes flash). Debugging of regular expressions can be done
offline with other tools.
As per PEP 485, this function appeared in Python 3.5. Configured via
MICROPY_PY_MATH_ISCLOSE which is disabled by default, but enabled for the
ports which already have MICROPY_PY_MATH_SPECIAL_FUNCTIONS enabled.
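For example:

    import math
    print(math.isclose(1.0, 1.0 + 1e-10))          # True (default rel_tol is 1e-09)
    print(math.isclose(0.0, 1e-10, abs_tol=1e-9))  # True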
Prior to this patch the amount of free space in an array (including
bytearray) was not being maintained correctly for the case of slice
assignment which changed the size of the array. Under certain cases (as
encoded in the new test) it was possible that the array could grow beyond
its allocated memory block and corrupt the heap.
Fixes issue #4127.
JSON requires that keys of objects be strings. CPython will therefore
automatically quote simple types (NoneType, bool, int, float) when they are
used directly as keys in JSON output. To prevent subtle bugs and emit
compliant JSON, MicroPython should at least test for such keys so they
aren't silently let through. Then doing the actual quoting is a similar
cost to raising an exception, so that's what is implemented by this patch.
Fixes issue #4790.
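Ie the output now matches CPython for such keys, eg (a sketch):

    import ujson
    print(ujson.dumps({1: 2}))    # {"1": 2}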
misc_aes.py and misc_mandel.py are adapted from sources in this repository.
misc_pystone.py is the standard Python pystone test. misc_raytrace.py is
written from scratch.
This benchmarking test suite is intended to be run on any MicroPython
target. As such all tests are parameterised with N and M: N is the
approximate CPU frequency (in MHz) of the target and M is the approximate
amount of heap memory (in kbytes) available on the target. When running
the benchmark suite these parameters must be specified and then each test
is tuned to run on that target in a reasonable time (<1 second).
The test scripts are not standalone: they require adding some extra code at
the end to run the test with the appropriate parameters. This is done
automatically by the run-perfbench.py script, in such a way that imports
are minimised (so the tests can be run on targets without filesystem
support).
To interface with the benchmarking framework, each test provides a
bm_params dict and a bm_setup function, with the latter taking a set of
parameters (chosen based on N, M) and returning a pair of functions, one to
run the test and one to get the results.
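A test therefore ends up with roughly this shape (an illustrative sketch
only; see run-perfbench.py and the existing tests for the exact conventions):

    bm_params = {
        (50, 25): (1000,),        # (N, M) -> parameters, here an iteration count
        (1000, 1000): (100000,),
    }

    def bm_setup(params):
        (niter,) = params
        state = {"sum": 0}

        def run():
            s = 0
            for i in range(niter):
                s += i
            state["sum"] = s

        def get_result():
            return niter, state["sum"]   # (normalisation, value checked vs CPython)

        return run, get_result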
When running the test the number of microseconds taken by the test is
recorded. Then this is converted into a benchmark score by inverting it
(so higher number is faster) and normalising it with an appropriate factor
(based roughly on the amount of work done by the test, eg number of
iterations).
Test outputs are also compared against a "truth" value, computed by running
the test with CPython. This provides a basic way of making sure the test
actually ran correctly.
Each test is run multiple times and the results averaged and standard
deviation computed. This is output as a summary of the test.
To make comparisons of performance across different runs the
run-perfbench.py script also includes a diff mode that reads in the output
of two previous runs and computes the difference in performance. Reports
are given as a percentage change in performance with a combined standard
deviation to give an indication if the noise in the benchmarking is less
than the thing that is being measured.
Example invocations for PC, pyboard and esp8266 targets respectively:
$ ./run-perfbench.py 1000 1000
$ ./run-perfbench.py --pyboard 100 100
$ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25
Reuse the implementation for bytes since it works the same way regardless
of the underlying type. This method gets added for CPython compatibility
of bytearray, but to keep the code simple and small array.array now also
has a working decode method, which is non-standard but doesn't hurt.
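For example:

    print(bytearray(b'hello').decode())    # hello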
This allows figuring out the number of bytes in the memoryview object as
len(memview) * memview.itemsize.
The feature is enabled via MICROPY_PY_BUILTINS_MEMORYVIEW_ITEMSIZE and is
disabled by default.
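For example:

    import array
    m = memoryview(array.array('i', [1, 2, 3]))
    print(len(m) * m.itemsize)    # total size in bytes, eg 12 if 'i' is 4 bytes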
It consists of:
1. "do_handhake" param (default True) to wrap_socket(). If it's False,
handshake won't be performed by wrap_socket(), as it would be done in
blocking way normally. Instead, SSL socket can be set to non-blocking mode,
and handshake would be performed before the first read/write request (by
just returning EAGAIN to these requests, while instead reading/writing/
processing handshake over the connection). Unfortunately, axTLS doesn't
really support non-blocking handshake correctly. So, while framework for
this is implemented on MicroPython's module side, in case of axTLS, it
won't work reliably.
2. An implementation of the .setblocking() method. It must be called on the
SSL socket for blocking vs non-blocking operation to be handled correctly
(for example, it's not enough to wrap a non-blocking socket with a
wrap_socket() call - the resulting SSL socket won't itself be non-blocking).
Note that .setblocking() propagates the call to the underlying socket object,
as expected.
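Usage sketch (illustrative; addr is an already-resolved address and error
handling is omitted):

    import usocket, ussl

    s = usocket.socket()
    s.connect(addr)
    ss = ussl.wrap_socket(s, do_handshake=False)   # don't block on the handshake
    ss.setblocking(False)                          # propagates to the underlying socket
    # reads/writes now get EAGAIN until the handshake completes in the background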
When running Linux on WSL, Popen.kill() can raise a ProcessLookupError if
the process does not exist anymore, which can happen here since the
previous statement already tries to close the process by sending Ctrl-D to
the running repl. This doesn't seem to be a problem on other OSes, so just
swallow the exception silently since it indicates the process has been
closed already, which after all is what we want.