This adapts the allocation process to start from either end of the heap
when searching for free space. The default behavior is identical to the
existing behavior: it starts with the lowest block and searches upwards.
Now it can also start with the highest block and search downwards,
depending on the long_lived parameter to gc_alloc. As the heap fills, the
two sections may overlap. When they overlap, a collect may be triggered in
order to keep the long lived section compact. However, free space is always
eligible for either type of allocation.
By starting from either end of the heap we have the ability to separate
short lived objects from long lived ones. This separation reduces heap
fragmentation because long lived objects are easy to pack densely.
Most objects are short lived initially but may be made long lived when
they are referenced by a type or module. This involves copying the
memory and then letting the collect phase free the old portion.
QSTR pools and chunks are always long lived because they are never freed.
The reallocation, collection and free processes are largely unchanged. They
simply also maintain an index to the highest free block as well as the lowest.
These indices are used to speed up the allocation search until the next collect.
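As a rough sketch of the idea (plain C with invented names such as block_used
and find_free_run, not the actual gc.c code), the only difference between the
two cases is the direction of the search over the block map:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define NUM_BLOCKS 64
    static bool block_used[NUM_BLOCKS];

    // Return the index of the first block of a free run of n blocks, searching
    // upwards for short lived requests and downwards for long lived ones.
    // Returns -1 if no run is found.
    static int find_free_run(size_t n, bool long_lived) {
        if (!long_lived) {
            for (size_t start = 0; start + n <= NUM_BLOCKS; start++) {
                size_t len = 0;
                while (len < n && !block_used[start + len]) {
                    len++;
                }
                if (len == n) {
                    return (int)start;
                }
            }
        } else {
            for (size_t end = NUM_BLOCKS; end >= n; end--) {
                size_t len = 0;
                while (len < n && !block_used[end - 1 - len]) {
                    len++;
                }
                if (len == n) {
                    return (int)(end - n);
                }
            }
        }
        return -1;
    }

    int main(void) {
        // with an empty heap, short lived data packs at the low end and
        // long lived data packs at the high end
        printf("short lived run starts at block %d\n", find_free_run(2, false)); // 0
        printf("long lived run starts at block %d\n", find_free_run(2, true));   // 62
        return 0;
    }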
In practice, this change may slightly slow down import statements with the
benefit that memory is much less fragmented afterwards. For example, a test
import into a 20k heap that leaves ~6k free previously had a largest
contiguous free space of ~400 bytes. After this change, the largest
contiguous free space is over 3400 bytes.
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything to use the general runtime support functions
Additional, specific headers (e.g. py/objlist.h) can be included if needed.
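For example, a core source file would typically pull in headers like this
(my_make_list is just a hypothetical helper to show the headers in use):

    #include "py/runtime.h"   // top-level header; brings in mpstate.h, nlr.h, obj.h, runtime0.h
    #include "py/objlist.h"   // specific header, included only because it is needed here

    // hypothetical helper, just to show the headers in use
    mp_obj_t my_make_list(void) {
        mp_obj_t list = mp_obj_new_list(0, NULL);
        mp_obj_list_append(list, MP_OBJ_NEW_SMALL_INT(1));
        return list;
    }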
Fixes for stmhal USB mass storage, lwIP bindings and VFS regressions
This release provides an important fix for the USB mass storage device in
the stmhal port by implementing the SCSI SYNCHRONIZE_CACHE command, which
is now required by some operating systems. There are also fixes for the
lwIP bindings to improve non-blocking sockets and error codes. The VFS has
some regressions fixed including the ability to statvfs the root.
All changes are listed below.
py core:
- modbuiltins: add core-provided version of input() function
- objstr: catch case of negative "maxsplit" arg to str.rsplit()
- persistentcode: allow to compile with complex numbers disabled
- objstr: allow to compile with obj-repr D, and unicode disabled
- modsys: allow to compile with obj-repr D and PY_ATTRTUPLE disabled
- provide mp_decode_uint_skip() to help reduce stack usage
- makeqstrdefs.py: make script run correctly with Python 2.6
- objstringio: if created from immutable object, follow copy on write policy
extmod:
- modlwip: connect: for non-blocking mode, return EINPROGRESS
- modlwip: fix error codes for duplicate calls to connect()
- modlwip: accept: fix error code for non-blocking mode
- vfs: allow to statvfs the root directory
- vfs: allow "buffering" and "encoding" args to VFS's open()
- modframebuf: fix signed/unsigned comparison pedantic warning
lib:
- libm: use isfinite instead of finitef, for C99 compatibility
- utils/interrupt_char: remove support for KBD_EXCEPTION disabled
tests:
- basics/string_rsplit: add tests for negative "maxsplit" argument
- float: convert "sys.exit()" to "raise SystemExit"
- float/builtin_float_minmax: PEP8 fixes
- basics: convert "sys.exit()" to "raise SystemExit"
- convert remaining "sys.exit()" to "raise SystemExit"
unix port:
- convert to use core-provided version of built-in import()
- Makefile: replace references to make with $(MAKE)
windows port:
- convert to use core-provided version of built-in import()
qemu-arm port:
- Makefile: adjust object-file lists to get correct dependencies
- enable micropython.mem_*() functions to allow more tests
stmhal port:
- boards: enable DAC for NUCLEO_F767ZI board
- add support for NUCLEO_F446RE board
- pass USB handler as parameter to allow more than one USB handler
- usb: use local USB handler variable in Start-of-Frame handler
- usb: make state for USB device private to top-level USB driver
- usbdev: for MSC implement SCSI SYNCHRONIZE_CACHE command
- convert from using stmhal's input() to core provided version
cc3200 port:
- convert from using stmhal's input() to core provided version
teensy port:
- convert from using stmhal's input() to core provided version
esp8266 port:
- Makefile: replace references to make with $(MAKE)
- Makefile: add clean-modules target
- convert from using stmhal's input() to core provided version
zephyr port:
- modusocket: getaddrinfo: Fix mp_obj_len() usage
- define MICROPY_PY_SYS_PLATFORM (to "zephyr")
- machine_pin: use native Zephyr types for Zephyr API calls
docs:
- machine.Pin: remove out_value() method
- machine.Pin: add on() and off() methods
- esp8266: consistently replace Pin.high/low methods with .on/off
- esp8266/quickref: polish Pin.on()/off() examples
- network: move confusingly-named cc3200 Server class to its reference
- uos: deconditionalize, remove minor port-specific details
- uos: move cc3200 port legacy VFS mounting functions to its ref doc
- machine: sort machine classes in logical order, not alphabetically
- network: first step to describe standard network class interface
examples:
- embedding: use core-provided KeyboardInterrupt object
The GC was deleting memory that was in use because its scan of the
stack missed the very top. Switching to _estack fixes this by relying
on the stack-top location provided by the linker.
Fixes #124
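A hedged sketch of what a port-level gc_collect() scanning up to _estack can
look like; obtaining the stack pointer from the address of a local is a
simplification, and real ports also push the registers onto the stack before
scanning:

    #include <stdint.h>
    #include "py/gc.h"

    extern uint32_t _estack;   // end-of-stack symbol provided by the linker script

    void gc_collect(void) {
        gc_collect_start();
        // approximate the current stack pointer with the address of a local
        volatile uint32_t anchor = 0;
        uintptr_t sp = (uintptr_t)&anchor;
        // scan every word from the current stack pointer up to _estack
        gc_collect_root((void **)sp, ((uintptr_t)&_estack - sp) / sizeof(uintptr_t));
        gc_collect_end();
    }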
If a finaliser raises an exception then it must not propagate through the
GC sweep function. This patch protects against such a thing by running
finaliser code via the mp_call_function_1_protected call.
This patch also adds scheduler lock/unlock calls around the finaliser
execution to further protect against any possible reentrancy issues: the
memory manager is already locked when doing a collection, but we also don't
want to allow any scheduled code to run, KeyboardInterrupts to interrupt the
code, nor threads to switch.
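A minimal sketch of the approach, assuming the finaliser is exposed as a
__del__ method (run_finaliser_protected is an invented name and this is not
the exact code in gc.c):

    #include "py/runtime.h"

    // sketch: look up __del__ on the object and call it through the protected
    // wrapper so that any exception it raises is caught, not propagated
    static void run_finaliser_protected(mp_obj_t obj) {
        mp_obj_t dest[2];
        mp_load_method_maybe(obj, MP_QSTR___del__, dest);
        if (dest[0] != MP_OBJ_NULL) {
            #if MICROPY_ENABLE_SCHEDULER
            mp_sched_lock();    // block scheduled callbacks while the finaliser runs
            #endif
            mp_call_function_1_protected(dest[0], dest[1]);
            #if MICROPY_ENABLE_SCHEDULER
            mp_sched_unlock();
            #endif
        }
    }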
Docs are here: http://tannewt-micropython.readthedocs.io/en/microcontroller/
It differs from upstream's machine in the following ways:
* Python API is identical across ports due to code structure. (Lives in shared-bindings)
* Focuses on abstracting common functionality (AnalogIn) and not representing structure (ADC).
* Documentation lives with code making it easy to ensure they match.
* Pin is split into references (board.D13 and microcontroller.pin.PA17) and functionality (DigitalInOut).
* All nativeio classes claim underlying hardware resources when inited on construction, support Context Managers (aka with statements) and have deinit methods which release the claimed hardware.
* All constructors take pin references rather than peripheral ids. It's up to the implementation to find the hardware or throw an exception.
There can be stray pointers in memory blocks that are not properly zero'd
after allocation. This patch adds a new config option to always zero all
allocated memory (via gc_alloc and gc_realloc) and hence help to eliminate
stray pointers.
See issue #2195.
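A toy illustration of the option in standard C; CONFIG_ALWAYS_ZERO and
toy_alloc are invented names standing in for the real config macro and for
gc_alloc/gc_realloc:

    #include <stdlib.h>
    #include <string.h>

    #define CONFIG_ALWAYS_ZERO (1)   // stand-in for the new config option
    #define BYTES_PER_BLOCK    (16)

    static void *toy_alloc(size_t n_bytes) {
        // round the request up to a whole number of allocator blocks
        size_t n_blocks = (n_bytes + BYTES_PER_BLOCK - 1) / BYTES_PER_BLOCK;
        void *p = malloc(n_blocks * BYTES_PER_BLOCK);
        if (p == NULL) {
            return NULL;
        }
        #if CONFIG_ALWAYS_ZERO
        // clear the whole allocation, including the rounded-up tail, so no
        // stale pointer patterns survive in it
        memset(p, 0, n_blocks * BYTES_PER_BLOCK);
        #endif
        return p;
    }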
Currently, MicroPython runs GC when it could not allocate a block of memory,
which happens when the heap is exhausted. However, that policy can't work well
with "infinite" heaps, e.g. ones backed by virtual memory - there will be a
lot of swap thrashing long before the virtual memory is exhausted. Instead, in such
cases "allocation threshold" policy is used: a GC is run after some number of
allocations have been made. Details vary, for example, number or total amount
of allocations can be used, threshold may be self-adjusting based on GC
outcome, etc.
This change implements a simple variant of such a policy for MicroPython. The
amount of memory allocated so far is used as the threshold measure, which makes
it useful for the typical finite-size, and small, heaps used with MicroPython
ports. Such a GC policy is indeed useful for these kinds of heaps too, as it
allows fragmentation to be controlled better. For example, if the threshold is
set to half the size of the heap, then for an application which usually makes a
large number of small allocations, this will (try to) keep half of the heap
in a nice defragmented state for an occasional large allocation.
For an application which doesn't exhibit such behavior, there won't be any
visible effects, except for the GC running more frequently, which however may
affect performance. To address this, the GC threshold is configurable, and is
off by default so far. It is set with a gc.threshold(amount_in_bytes) call
(and can be queried by calling it without an argument).
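A simplified sketch of such a threshold policy in plain C (invented names; in
the real implementation the threshold is set via the gc.threshold() call
described above):

    #include <stddef.h>

    static size_t gc_threshold_bytes = 0;        // 0 means the policy is disabled
    static size_t bytes_since_last_collect = 0;

    static void toy_collect(void) {
        // stand-in for the real mark/sweep collector
    }

    // called from the allocator after every successful allocation
    static void note_allocation(size_t n_bytes) {
        if (gc_threshold_bytes == 0) {
            return;                              // threshold policy switched off
        }
        bytes_since_last_collect += n_bytes;
        if (bytes_since_last_collect >= gc_threshold_bytes) {
            // collect early, before the heap is actually exhausted
            toy_collect();
            bytes_since_last_collect = 0;
        }
    }

Setting the threshold to half the heap size corresponds to the example above
of keeping half of the heap defragmented for an occasional large allocation.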
Previously, if there was a chain of allocated blocks ending with the last
block of the heap, it wasn't included in the number of 1/2-blocks or the max
block size stats.
GC_EXIT() can cause a pending thread (waiting on the mutex) to be
scheduled right away. This other thread may trigger a garbage
collection. If the pointer to the newly-allocated block (allocated by
the original thread) is not computed before the switch (so it's just left
as a block number) then the block will be wrongly reclaimed.
This patch makes sure the pointer is computed before allowing any thread
switch to occur.
By using a single, global mutex, all memory-related functions (alloc,
free, realloc, collect, etc) are made thread safe. This means that only
one thread can be in such a function at any one time.
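A sketch of both points, using pthreads instead of MicroPython's own mutex
wrappers, with invented names (toy_heap, toy_alloc, find_free_block):

    #include <pthread.h>
    #include <stddef.h>
    #include <stdint.h>

    #define BYTES_PER_BLOCK 16

    static pthread_mutex_t gc_mutex = PTHREAD_MUTEX_INITIALIZER;
    static uint8_t toy_heap[64 * BYTES_PER_BLOCK];

    static size_t find_free_block(size_t n_blocks) {
        (void)n_blocks;
        return 0;   // stub: the real search walks the block map
    }

    static void *toy_alloc(size_t n_blocks) {
        pthread_mutex_lock(&gc_mutex);      // only one thread inside the allocator
        size_t block = find_free_block(n_blocks);
        // convert the block number to a pointer *before* releasing the mutex:
        // once the mutex is released another thread may trigger a collection,
        // and a bare block number (unlike a pointer held by this thread) would
        // not keep the new block from being reclaimed
        void *ret = &toy_heap[block * BYTES_PER_BLOCK];
        pthread_mutex_unlock(&gc_mutex);
        return ret;
    }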
The address printed was truncated anyway and in general confusing to outsiders.
The line which dumps it is still left in the source, commented out, for the
peculiar cases when it may be needed (e.g. when running under a debugger).
These are typical consumers of large chunks of memory, so it's useful to
see at least their number (how much memory they use isn't clearly shown, as
the data for these objects is allocated elsewhere).
Previously, mark operations weren't logged at all, while it's quite useful
to see the cascade of marks in case of over-marking (and in other cases too).
Previously, a sweep was logged for each block of an object in memory, but that
doesn't make much sense and just leads to longer output that is harder for a
human to parse. Instead, log the sweep only once per object. This is similar
to other memory manager operations: e.g. an object is allocated, then freed;
or an object is allocated, then either marked or swept (one log entry per
operation, with the same memory address in each case).
Ideally we'd use %zu for size_t args, but that's unlikely to be supported
by all runtimes, and we would then need to implement it in mp_printf.
So the simplest and most portable option is to use %u and cast the argument
to uint (= unsigned int).
Note: the reason for the change is that UINT_FMT can be %llu (a size suitable
for mp_uint_t), which is wider than size_t and so prints incorrect results.
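For example (plain printf here instead of mp_printf, and print_alloc_stats is
an invented helper):

    #include <stdio.h>
    #include <stddef.h>

    static void print_alloc_stats(size_t n_blocks, size_t n_bytes) {
        // cast size_t values to unsigned int and print them with %u: this is
        // portable, whereas %zu may be missing from small C runtimes and
        // UINT_FMT may be %llu, which is wider than size_t
        printf("%u blocks, %u bytes\n", (unsigned int)n_blocks, (unsigned int)n_bytes);
    }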
size_t is the correct type to use to count things related to the size of
the address space. Using size_t (instead of mp_uint_t) is important for
the efficiency of ports that configure mp_uint_t to be larger than the
machine word size.
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
Currently, the only place that clears the bit is in gc_collect.
So if a block with a finalizer is allocated, and subsequently freed, and
then the block is reallocated with no finalizer, the bit remains set.
This could also be fixed by having gc_alloc clear the bit, but I'm pretty
sure that free is called far less often than alloc, so doing it in free is
more efficient.
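A toy sketch of the fix with invented names; the real code keeps these flags
in block-table bitmaps rather than bool arrays:

    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_BLOCKS 64
    static bool block_used[NUM_BLOCKS];
    static bool block_has_finaliser[NUM_BLOCKS];

    static void toy_free(size_t block) {
        block_used[block] = false;
        // clear the finaliser flag here: without this, the stale flag would
        // only be cleared by the next collection, and a reallocation of the
        // block without a finalizer would inherit it in the meantime
        block_has_finaliser[block] = false;
    }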
Prior to this patch all interned strings lived in their own malloc'd
chunk. On average this wastes N/2 bytes per interned string, where N is
the number-of-bytes for a quanta of the memory allocator (16 bytes on 32
bit archs).
With this patch interned strings are concatenated into the same malloc'd
chunk when possible. Such chunks are enlarged in place when possible,
and shrunk to fit when a new chunk is needed.
RAM savings with this patch are highly varied, but should always show an
improvement (unless only 3 or 4 strings are interned). New version
typically uses about 70% of previous memory for the qstr data, and can
lead to savings of around 10% of total memory footprint of a running
script.
Costs about 120 bytes code size on Thumb2 archs (depends on how many
calls to gc_realloc are made).
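An illustrative sketch of the packing scheme in standard C with invented
names; unlike the real code it cannot enlarge a chunk in place, so it simply
starts a new chunk when the current one is full:

    #include <stdlib.h>
    #include <string.h>

    static char *qstr_chunk;        // current shared chunk for interned strings
    static size_t chunk_used;       // bytes of the chunk already handed out
    static size_t chunk_alloc;      // bytes allocated for the chunk

    // copy len bytes of s (plus a terminating NUL) into the shared chunk and
    // return the interned copy; interned strings are never freed
    static const char *intern_string(const char *s, size_t len) {
        size_t need = len + 1;
        if (qstr_chunk == NULL || chunk_used + need > chunk_alloc) {
            // start a fresh chunk; the real code first tries to enlarge the
            // current chunk in place, and shrinks the old one to fit before
            // abandoning it
            chunk_alloc = need > 1024 ? need : 1024;
            qstr_chunk = malloc(chunk_alloc);
            if (qstr_chunk == NULL) {
                return NULL;
            }
            chunk_used = 0;
        }
        char *dst = qstr_chunk + chunk_used;
        memcpy(dst, s, len);
        dst[len] = '\0';
        chunk_used += need;
        return dst;
    }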
The GC for unix/windows builds doesn't make use of the bss section anymore,
so we do not need the (sometimes complicated) build features and code related
to it.
This patch consolidates all global variables in py/ core into one place,
in a global structure. Root pointers are all located together to make
GC tracing easier and more efficient.
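A toy illustration of why grouping the root pointers helps (all names and
fields here are invented):

    #include <stddef.h>

    // all root pointers live together in one structure so the collector can
    // trace them with a single pass over a contiguous region, instead of
    // chasing many scattered globals
    typedef struct _toy_state_t {
        void *loaded_modules;
        void *pending_exception;
        void *keyboard_interrupt_obj;
    } toy_state_t;

    static toy_state_t toy_state;

    static void toy_collect_root(void **ptrs, size_t len) {
        (void)ptrs;
        (void)len;   // stub: the real routine marks each heap pointer it finds
    }

    static void toy_collect_globals(void) {
        toy_collect_root((void **)&toy_state, sizeof(toy_state) / sizeof(void *));
    }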
gc.enable/disable are now the same as CPython: they just control whether
automatic garbage collection is enabled or not. If disabled, you can
still allocate heap memory, and initiate a manual collection.