The remaining assignment was added in upstream micropython; the
deleted assignment was added in circuitpython as part of the long-lived
object area feature. During the merge, the redundant assignment
was not removed.
(Since collected is a local variable and no pointers to it escape, the
placement of the assignment before or after GC_ENTER() should not matter.)
This diagnostic was found by clang 7's scan-build static analyzer.
This fixes a crash on boards with built-in displays which statically
allocate the display bus. When the pointer is passed to never-free, it
tries to allocate on the non-existent heap and crashes.
This patch adds the gc_sweep_all() function, which does a garbage collection
without tracing any root pointers, thereby freeing all the memory and, most
importantly, running any remaining finalisers.
This helps primarily with soft reset: it will close any open files and
sockets, and help to get the system back to a clean state.
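As a rough sketch, the function can be thought of as a collection that skips
the mark phase entirely; the version below follows the existing helpers in
py/gc.c (gc_collect_end is the existing sweep/finish helper), but details may
differ from the actual patch:

    // Sketch: a collection with no mark phase, so every allocated block is
    // treated as unreachable and swept, running its finaliser if it has one.
    void gc_sweep_all(void) {
        GC_ENTER();
        MP_STATE_MEM(gc_lock_depth)++;        // block re-entrant collections
        MP_STATE_MEM(gc_stack_overflow) = 0;  // nothing was marked
        gc_collect_end();                     // sweep, run finalisers, then unlock
    }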
Without this, if the GC threshold is hit and there is not enough memory left
to satisfy the request, gc_collect() will run a second time and the search
for memory will happen again and fail again.
Thanks to @adritium for pointing out this issue, see #3786.
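A hedged sketch of the relevant logic in gc_alloc(), simplified from py/gc.c:

    #if MICROPY_GC_ALLOC_THRESHOLD
    if (!collected && MP_STATE_MEM(gc_alloc_amount) >= MP_STATE_MEM(gc_alloc_threshold)) {
        GC_EXIT();
        gc_collect();
        collected = 1;  // remember that a collection already ran, so the
                        // out-of-memory path below does not collect again
        GC_ENTER();
    }
    #endif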
This patch moves the start of the root pointer section in mp_state_ctx_t
so that it skips entries that are not pointers and don't need scanning.
Previously, the start of the root pointer section was at the very beginning
of the mp_state_ctx_t struct (which is the beginning of mp_state_thread_t).
This was because the original assembler version of the NLR code was
hard-coded to have the nlr_top pointer at the start of this state structure.
But now that the NLR code is partially written in C there is no longer this
restriction on the location of nlr_top (and a comment to this effect has
been removed in this patch).
So now the root pointer section starts part way through the
mp_state_thread_t structure, after the entries which are not root pointers.
This patch also moves the non-pointer entries for MICROPY_ENABLE_SCHEDULER
outside the root pointer section.
Moving non-pointer entries out of the root pointer section helps to make
the GC more precise and should help to prevent some cases of collectable
garbage being kept.
This patch also has a measurable improvement in performance of the
pystone.py benchmark: on unix x86-64 and stm32 there was an improvement of
roughly 0.6% (tested with both gcc 7.3 and gcc 8.1).
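A rough sketch of how the GC then scans only the root pointer section
(simplified; the exact field names used here to bound the section are
assumptions):

    // In gc_collect_start(): trace only the root pointer section of
    // mp_state_ctx, skipping the non-pointer fields before it.
    void **ptrs = (void **)(void *)&mp_state_ctx;
    size_t root_start = offsetof(mp_state_ctx_t, thread.dict_locals);
    size_t root_end = offsetof(mp_state_ctx_t, vm.qstr_last_chunk);
    gc_collect_root(ptrs + root_start / sizeof(void *),
        (root_end - root_start) / sizeof(void *));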
This macro is written out explicitly in the two locations where it is used,
and then the code is optimised, opening possibilities for further
optimisations and reducing code size:
unix: -48
minimal CROSS=1: -32
stm32: -32
This adapts the allocation process to start from either end of the heap
when searching for free space. The default behavior is identical to the
existing behavior where it starts with the lowest block and looks higher.
Now it can also look from the highest block and lower depending on the
long_lived parameter to gc_alloc. As the heap fills, the two sections may
overlap. When they overlap, a collect may be triggered in order to keep
the long lived section compact. However, free space is always eligible
for each type of allocation.
By starting from either end of the heap, we have the ability to separate
short lived objects from long lived ones. This separation reduces heap
fragmentation because long lived objects are easy to densely pack.
Most objects are short lived initially but may be made long lived when
they are referenced by a type or module. This involves copying the
memory and then letting the collect phase free the old portion.
QSTR pools and chunks are always long lived because they are never freed.
The reallocation, collection and free processes are largely unchanged. They
simply also maintain an index to the highest free block as well as the lowest.
These indices are used to speed up the allocation search until the next collect.
In practice, this change may slightly slow down import statements with the
benefit that memory is much less fragmented afterwards. For example, a test
import into a 20k heap that leaves ~6k free previously resulted in a largest
contiguous free space of ~400 bytes. After this change, the largest
contiguous free space is over 3400 bytes.
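As an illustrative sketch of the direction choice (all names below other than
long_lived are hypothetical stand-ins for the heap bookkeeping inside
gc_alloc):

    size_t found = SIZE_MAX;
    if (!long_lived) {
        // Short lived: search upward from the lowest block known to be free.
        for (size_t b = lowest_free_block; b + n_needed <= total_blocks; b++) {
            if (n_free_blocks_at(b) >= n_needed) {
                found = b;
                break;
            }
        }
    } else {
        // Long lived: search downward from the highest block known to be
        // free, so these allocations pack densely at the top of the heap.
        for (size_t b = highest_free_block + 1; b-- > 0;) {
            if (n_free_blocks_at(b) >= n_needed) {
                found = b;
                break;
            }
        }
    }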
This patch introduces the MICROPY_ENABLE_PYSTACK option (disabled by
default) which enables a "Python stack" that allows memory to be allocated
and freed in a scoped, or Last-In-First-Out (LIFO), way, similar to alloca().
A new memory allocation API is introduced along with this Py-stack. It
includes both "local" and "nonlocal" LIFO allocation. Local allocation is
intended to be equivalent to using alloca(), whereby the same function must
free the memory. Nonlocal allocation is where another function may free
the memory, so long as it's still LIFO.
Follow-up patches will convert all uses of alloca() and VLA to the new
scoped allocation API. The old behaviour (using alloca()) will still be
available, but when MICROPY_ENABLE_PYSTACK is enabled then alloca() is no
longer required or used.
The benefits of enabling this option are (or will be once subsequent
patches are made to convert alloca()/VLA):
- Toolchains without alloca() can use this feature to obtain correct and
efficient scoped memory allocation (compared to using the heap instead
of alloca(), which is slower).
- Even if alloca() is available, enabling the Py-stack gives slightly more
efficient use of stack space when calling nested Python functions, due to
the way that compilers implement alloca().
- Enabling the Py-stack with the stackless mode allows for even more
efficient stack usage, as well as retaining high performance (because the
heap is no longer used to build and destroy stackless code states).
- With Py-stack and stackless enabled, Python-calling-Python is no longer
recursive in the C mp_execute_bytecode function.
The micropython.pystack_use() function is included to measure usage of the
Python stack.
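A hedged sketch of the intended usage of the new allocation API; the header
and the names mp_local_alloc/mp_local_free are assumptions here and the
function itself is purely illustrative:

    #include "py/runtime.h"

    void example(size_t n) {
        // "local" allocation: must be freed by this same function, like alloca()
        uint8_t *buf = mp_local_alloc(n);
        // ... use buf as scratch space ...
        mp_local_free(buf);
    }

The nonlocal variant works the same way, except that the matching free may
happen in a different function, as long as frees still occur in LIFO order.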
Accessing them will now crash immediately, instead of still working for some
time until overwritten by some other data, which previously led to much less
deterministic crashes.
These checks are assumed to be true in all cases where gc_realloc is
called with a valid pointer, so no need to waste code space and time
checking them in a non-debug build.
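Roughly, the runtime checks become assertions that compile to nothing in a
non-debug build (a sketch using the existing macros in py/gc.c):

    assert(VERIFY_PTR(ptr));                 // ptr lies within the GC heap
    size_t block = BLOCK_FROM_PTR(ptr);
    assert(ATB_GET_KIND(block) == AT_HEAD);  // ptr points at the head of a block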
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything to use the general runtime support functions
Additional, specific headers (eg py/objlist.h) can be included if needed.
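For example, a typical extension source file only needs the top-level header
(the function below is purely illustrative):

    #include "py/runtime.h"  // pulls in mpstate.h, nlr.h, obj.h and runtime0.h

    STATIC mp_obj_t my_func(mp_obj_t arg) {
        // use the general runtime support functions from py/obj.h
        return mp_obj_new_int(mp_obj_get_int(arg) + 1);
    }
    STATIC MP_DEFINE_CONST_FUN_OBJ_1(my_func_obj, my_func);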
Fixes for stmhal USB mass storage, lwIP bindings and VFS regressions
This release provides an important fix for the USB mass storage device in
the stmhal port by implementing the SCSI SYNCHRONIZE_CACHE command, which
is now required by some operating systems. There are also fixes for the
lwIP bindings to improve non-blocking sockets and error codes. The VFS has
some regressions fixed, including the ability to statvfs the root.
All changes are listed below.
py core:
- modbuiltins: add core-provided version of input() function
- objstr: catch case of negative "maxsplit" arg to str.rsplit()
- persistentcode: allow to compile with complex numbers disabled
- objstr: allow to compile with obj-repr D, and unicode disabled
- modsys: allow to compile with obj-repr D and PY_ATTRTUPLE disabled
- provide mp_decode_uint_skip() to help reduce stack usage
- makeqstrdefs.py: make script run correctly with Python 2.6
- objstringio: if created from immutable object, follow copy on write policy
extmod:
- modlwip: connect: for non-blocking mode, return EINPROGRESS
- modlwip: fix error codes for duplicate calls to connect()
- modlwip: accept: fix error code for non-blocking mode
- vfs: allow to statvfs the root directory
- vfs: allow "buffering" and "encoding" args to VFS's open()
- modframebuf: fix signed/unsigned comparison pedantic warning
lib:
- libm: use isfinite instead of finitef, for C99 compatibility
- utils/interrupt_char: remove support for KBD_EXCEPTION disabled
tests:
- basics/string_rsplit: add tests for negative "maxsplit" argument
- float: convert "sys.exit()" to "raise SystemExit"
- float/builtin_float_minmax: PEP8 fixes
- basics: convert "sys.exit()" to "raise SystemExit"
- convert remaining "sys.exit()" to "raise SystemExit"
unix port:
- convert to use core-provided version of built-in __import__()
- Makefile: replace references to make with $(MAKE)
windows port:
- convert to use core-provided version of built-in __import__()
qemu-arm port:
- Makefile: adjust object-file lists to get correct dependencies
- enable micropython.mem_*() functions to allow more tests
stmhal port:
- boards: enable DAC for NUCLEO_F767ZI board
- add support for NUCLEO_F446RE board
- pass USB handler as parameter to allow more than one USB handler
- usb: use local USB handler variable in Start-of-Frame handler
- usb: make state for USB device private to top-level USB driver
- usbdev: for MSC implement SCSI SYNCHRONIZE_CACHE command
- convert from using stmhal's input() to core provided version
cc3200 port:
- convert from using stmhal's input() to core provided version
teensy port:
- convert from using stmhal's input() to core provided version
esp8266 port:
- Makefile: replace references to make with $(MAKE)
- Makefile: add clean-modules target
- convert from using stmhal's input() to core provided version
zephyr port:
- modusocket: getaddrinfo: Fix mp_obj_len() usage
- define MICROPY_PY_SYS_PLATFORM (to "zephyr")
- machine_pin: use native Zephyr types for Zephyr API calls
docs:
- machine.Pin: remove out_value() method
- machine.Pin: add on() and off() methods
- esp8266: consistently replace Pin.high/low methods with .on/off
- esp8266/quickref: polish Pin.on()/off() examples
- network: move confusingly-named cc3200 Server class to its reference
- uos: deconditionalize, remove minor port-specific details
- uos: move cc3200 port legacy VFS mounting functions to its ref doc
- machine: sort machine classes in logical order, not alphabetically
- network: first step to describe standard network class interface
examples:
- embedding: use core-provided KeyboardInterrupt object
The GC was deleting memory that was in use because its scan of the
stack missed the very top. Switching to _estack fixes this by relying
on the location from the linker.
Fixes #124
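A hedged sketch of the idea at the port level (the real port code also
captures registers before scanning; that detail is omitted here):

    extern uint32_t _estack;  // end of the stack region, from the linker script

    void gc_collect(void) {
        gc_collect_start();
        void *dummy;
        uintptr_t sp = (uintptr_t)&dummy;  // approximate current stack pointer
        // Scan everything from the current stack pointer up to _estack, so
        // the very top of the stack is no longer missed.
        gc_collect_root((void **)sp, ((uintptr_t)&_estack - sp) / sizeof(uintptr_t));
        gc_collect_end();
    }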
If a finaliser raises an exception then it must not propagate through the
GC sweep function. This patch protects against such a thing by running
finaliser code via the mp_call_function_1_protected call.
This patch also adds scheduler lock/unlock calls around the finaliser
execution to further protect against any possible reentrancy issues: the
memory manager is already locked when doing a collection, but we also don't
want to allow any scheduled code to run, KeyboardInterrupts to interrupt the
code, nor threads to switch.
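The relevant part of the sweep now looks roughly like this (simplified from
py/gc.c):

    #if MICROPY_ENABLE_FINALISER
    if (FTB_GET(block)) {
        mp_obj_base_t *obj = (mp_obj_base_t *)PTR_FROM_BLOCK(block);
        if (obj->type != NULL) {
            // the object has a type, so it may have a __del__ method
            mp_obj_t dest[2];
            mp_load_method_maybe(MP_OBJ_FROM_PTR(obj), MP_QSTR___del__, dest);
            if (dest[0] != MP_OBJ_NULL) {
                // run the finaliser in a protected environment so that an
                // exception cannot propagate out of the GC sweep
                #if MICROPY_ENABLE_SCHEDULER
                mp_sched_lock();
                #endif
                mp_call_function_1_protected(dest[0], dest[1]);
                #if MICROPY_ENABLE_SCHEDULER
                mp_sched_unlock();
                #endif
            }
        }
        FTB_CLEAR(block);  // clear the finaliser flag
    }
    #endif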
Docs are here: http://tannewt-micropython.readthedocs.io/en/microcontroller/
It differs from upstream's machine in the following ways:
* Python API is identical across ports due to code structure. (Lives in shared-bindings)
* Focuses on abstracting common functionality (AnalogIn) and not representing structure (ADC).
* Documentation lives with code making it easy to ensure they match.
* Pin is split into references (board.D13 and microcontroller.pin.PA17) and functionality (DigitalInOut).
* All nativeio classes claim underlying hardware resources when inited on construction, support Context Managers (aka with statements) and have deinit methods which release the claimed hardware.
* All constructors take pin references rather than peripheral ids. It's up to the implementation to find the hardware or throw an exception.
There can be stray pointers in memory blocks that are not properly zeroed
after allocation. This patch adds a new config option to always zero all
allocated memory (via gc_alloc and gc_realloc) and hence help to eliminate
stray pointers.
See issue #2195.
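A hedged sketch of the effect inside gc_alloc(); the option name
MICROPY_GC_CONSERVATIVE_CLEAR and the block macros follow py/gc.c
conventions but are assumptions in this sketch:

    #if MICROPY_GC_CONSERVATIVE_CLEAR
    // be conservative and zero out all of the newly allocated blocks
    memset((byte *)ret_ptr, 0, (end_block - start_block + 1) * BYTES_PER_BLOCK);
    #else
    // only zero out the additional bytes beyond the requested size
    memset((byte *)ret_ptr + n_bytes, 0,
        (end_block - start_block + 1) * BYTES_PER_BLOCK - n_bytes);
    #endif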
Currently, MicroPython runs a GC when it cannot allocate a block of memory,
which happens when the heap is exhausted. However, that policy can't work
well with "infinite" heaps, e.g. ones backed by virtual memory - there will
be a lot of swap thrashing long before the virtual memory is exhausted.
Instead, in such cases an "allocation threshold" policy is used: a GC is run
after some number of allocations has been made. Details vary; for example,
the number or total amount of allocations can be used, the threshold may be
self-adjusting based on the GC outcome, etc.
This change implements a simple variant of such a policy for MicroPython.
The amount of memory allocated so far is used for the threshold, to make it
useful for the typical finite-size, small heaps used with MicroPython ports.
And such a GC policy is indeed useful for these kinds of heaps too, as it
allows better control of fragmentation. For example, if the threshold is set
to half the heap size, then for an application which usually makes a large
number of small allocations, it will (try to) keep half of the heap memory
in a nice defragmented state for an occasional large allocation.
For an application which doesn't exhibit such behavior, there won't be any
visible effects, except for the GC running more frequently, which however
may affect performance. To address this, the GC threshold is configurable,
and is off by default. It is configured with a gc.threshold(amount_in_bytes)
call (and can be queried by calling it without an argument).
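A simplified sketch of the policy inside gc_alloc(); the bookkeeping field
names follow the usual MP_STATE_MEM naming but are assumptions here:

    #if MICROPY_GC_ALLOC_THRESHOLD
    // account for this allocation, then collect once the configured
    // threshold of allocated memory has been crossed
    MP_STATE_MEM(gc_alloc_amount) += n_blocks;
    if (MP_STATE_MEM(gc_alloc_amount) >= MP_STATE_MEM(gc_alloc_threshold)) {
        GC_EXIT();
        gc_collect();
        GC_ENTER();
    }
    #endif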
Previously, if there was a chain of allocated blocks ending with the last
block of the heap, it wasn't included in the count of 1/2-blocks or in the
max block size stats.
GC_EXIT() can cause a pending thread (waiting on the mutex) to be
scheduled right away. This other thread may trigger a garbage
collection. If the pointer to the newly-allocated block (allocated by
the original thread) is not computed before the switch (so it's just left
as a block number) then the block will be wrongly reclaimed.
This patch makes sure the pointer is computed before allowing any thread
switch to occur.
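A simplified sketch of the ordering fix in gc_alloc():

    // Compute the returned pointer from the block number *before* releasing
    // the GC mutex, so that a collection triggered by another thread can
    // find this pointer on the allocating thread's stack and will not
    // reclaim the block.
    void *ret_ptr = (void *)(MP_STATE_MEM(gc_pool_start) + start_block * BYTES_PER_BLOCK);

    GC_EXIT();  // only now may another thread be scheduled and possibly collect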
By using a single, global mutex, all memory-related functions (alloc,
free, realloc, collect, etc) are made thread safe. This means that only
one thread can be in such a function at any one time.
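Roughly, the locking is implemented as a pair of macros used around every
heap operation (a sketch following py/gc.c; the exact configuration guard
may differ):

    #if MICROPY_PY_THREAD
    #define GC_ENTER() mp_thread_mutex_lock(&MP_STATE_MEM(gc_mutex), 1)
    #define GC_EXIT()  mp_thread_mutex_unlock(&MP_STATE_MEM(gc_mutex))
    #else
    // without threads there is nothing to protect against
    #define GC_ENTER()
    #define GC_EXIT()
    #endif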