Currently, MicroPython runs a GC when it cannot allocate a block of memory,
which happens when the heap is exhausted. However, that policy can't work well
with "infinite" heaps, e.g. ones backed by virtual memory - there will be a
lot of swap thrashing long before the virtual memory is exhausted. Instead, in
such cases an "allocation threshold" policy is used: a GC is run after some
number of allocations have been made. Details vary; for example, the number or
the total size of allocations can be used, the threshold may be self-adjusting
based on GC outcome, etc.
This change implements a simple variant of such a policy for MicroPython. The
amount of memory allocated so far is used as the threshold measure, which also
makes it useful for the typical finite-size, and small, heaps used by MicroPython
ports. Such a GC policy is indeed useful for those kinds of heaps too, as it
allows fragmentation to be controlled better. For example, if the threshold is
set to half the heap size, then for an application which usually makes a large
number of small allocations, it will (try to) keep half of the heap in a nicely
defragmented state for an occasional large allocation.
For an application which doesn't exhibit such behaviour there won't be any
visible effects, except for the GC running more frequently, which however may
affect performance. To address this, the GC threshold is configurable, and it
is off by default for now. It is set with a gc.threshold(amount_in_bytes)
call (and can be queried by calling it without an argument).
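Below is a minimal, compilable sketch of how such a threshold check could hook
into an allocator. The identifiers (gc_alloc_amount, gc_alloc_threshold,
gc_account_alloc) and the byte-based accounting are illustrative, not the
actual MicroPython code, and the collector is stubbed out:

    /* Illustrative allocation-threshold hook; not the actual MicroPython code. */
    #include <stddef.h>
    #include <stdio.h>

    static size_t gc_alloc_amount;                  /* bytes allocated since the last GC */
    static size_t gc_alloc_threshold = (size_t)-1;  /* maximum value means "threshold off" */

    static void gc_collect(void) {                  /* stand-in for the real collector */
        printf("collect after %u bytes\n", (unsigned)gc_alloc_amount);
    }

    /* Called on each allocation, before searching the heap for free blocks. */
    static void gc_account_alloc(size_t nbytes) {
        gc_alloc_amount += nbytes;
        if (gc_alloc_amount >= gc_alloc_threshold) {
            gc_collect();
            gc_alloc_amount = 0;                    /* restart counting after a collection */
        }
    }

    int main(void) {
        gc_alloc_threshold = 1024;                  /* like gc.threshold(1024) at the Python level */
        for (int i = 0; i < 100; i++) {
            gc_account_alloc(64);                   /* many small allocations */
        }
        return 0;
    }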
Previously, if there was a chain of allocated blocks ending with the last
block of the heap, it wasn't included in the 1-block/2-block counts or in
the max-block-size stat.
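For illustration, here is a self-contained sketch of such a counting loop with
the fix applied; the allocation-table layout and names are simplified
stand-ins rather than the real gc_info() code:

    /* Simplified stand-in for the stats loop; the real code walks the GC allocation table. */
    #include <stdio.h>
    #include <stddef.h>

    enum { AT_FREE, AT_HEAD, AT_TAIL };
    /* the final object's blocks run right up to the last block of the heap */
    static const unsigned char atb[] = { AT_HEAD, AT_FREE, AT_HEAD, AT_TAIL, AT_TAIL };

    static size_t num_1block, num_2block, max_block;

    static void end_run(size_t len) {       /* record stats for a finished run of blocks */
        if (len == 1) num_1block++;
        else if (len == 2) num_2block++;
        if (len > max_block) max_block = len;
    }

    int main(void) {
        size_t len = 0;
        for (size_t i = 0; i < sizeof(atb); i++) {
            if (atb[i] == AT_HEAD) { end_run(len); len = 1; }
            else if (atb[i] == AT_TAIL) { len += 1; }
            else { end_run(len); len = 0; }          /* AT_FREE ends the run */
        }
        end_run(len);   /* the fix: also count a run ending with the last heap block */
        printf("1-blocks=%u 2-blocks=%u max=%u blocks\n",
               (unsigned)num_1block, (unsigned)num_2block, (unsigned)max_block);
        return 0;
    }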
GC_EXIT() can cause a pending thread (waiting on the mutex) to be
scheduled right away. This other thread may trigger a garbage
collection. If the pointer to the newly-allocated block (allocated by
the original thread) is not computed before the switch (so it's just left
as a block number) then the block will be wrongly reclaimed.
This patch makes sure the pointer is computed before allowing any thread
switch to occur.
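A hedged sketch of the ordering the fix enforces, with a plain pthread mutex
and stub helpers standing in for the real internals:

    /* Illustrative only: a pthread mutex and stub helpers stand in for the real GC internals. */
    #include <pthread.h>
    #include <stddef.h>
    #include <stdint.h>

    #define BYTES_PER_BLOCK 16
    static uint8_t heap[4096];
    static pthread_mutex_t gc_mutex = PTHREAD_MUTEX_INITIALIZER;
    #define GC_ENTER() pthread_mutex_lock(&gc_mutex)
    #define GC_EXIT()  pthread_mutex_unlock(&gc_mutex)

    static size_t take_free_blocks(size_t nblocks) {
        (void)nblocks;
        return 0;           /* stub: pretend block 0 was found and marked allocated */
    }

    void *gc_alloc_sketch(size_t nblocks) {
        GC_ENTER();
        size_t block = take_free_blocks(nblocks);

        /* Compute the return pointer BEFORE releasing the mutex: once GC_EXIT()
         * runs, a waiting thread can be scheduled and trigger a collection, and
         * a bare block number is not a root it can see. */
        void *ret = &heap[block * BYTES_PER_BLOCK];

        GC_EXIT();
        return ret;
    }

    int main(void) {
        return gc_alloc_sketch(2) == (void *)heap ? 0 : 1;
    }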
By using a single, global mutex, all memory-related functions (alloc,
free, realloc, collect, etc) are made thread safe. This means that only
one thread can be in such a function at any one time.
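Schematically, and with a pthread mutex standing in for the port-provided
mutex, the entry points look like this (bodies are stubs):

    /* Illustrative: one global mutex guards every memory-manager entry point. */
    #include <pthread.h>
    #include <stddef.h>

    static pthread_mutex_t gc_mutex = PTHREAD_MUTEX_INITIALIZER;
    #define GC_ENTER() pthread_mutex_lock(&gc_mutex)
    #define GC_EXIT()  pthread_mutex_unlock(&gc_mutex)

    void *gc_alloc(size_t nbytes) {
        GC_ENTER();                 /* only one thread at a time past this point */
        (void)nbytes;
        void *p = NULL;             /* ... search the heap for free blocks ... */
        GC_EXIT();
        return p;
    }

    void gc_free(void *p) {
        GC_ENTER();
        (void)p;                    /* ... mark the object's blocks free ... */
        GC_EXIT();
    }

    void gc_collect(void) {
        GC_ENTER();
        /* ... mark roots, sweep unreachable blocks ... */
        GC_EXIT();
    }

    int main(void) {
        gc_free(gc_alloc(16));
        gc_collect();
        return 0;
    }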
The address printed was truncated anyway and was in general confusing to an
outsider. The line which dumps it is still left in the source, commented out,
for the peculiar cases when it may be needed (e.g. when running under a debugger).
These are typical consumers of large chunks of memory, so it's useful to
see at least their count (how much memory they use isn't shown directly, as
the data for these objects is allocated elsewhere).
Previously, mark operations weren't logged at all, while it's quite useful
to see the cascade of marks in case of over-marking (and in other cases too).
Previously, a sweep was logged for each block of an object in memory, but that
doesn't make much sense and just leads to longer output which is harder for a
human to parse. Instead, log a sweep only once per object. This is similar to
other memory manager operations: e.g. an object is allocated, then freed; or
an object is allocated, then marked, and otherwise swept (one log entry per
operation, with the same memory address in each case).
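The sketch below shows what logging at object granularity during the sweep
might look like; the allocation-table states, addresses and names are invented
for the example:

    /* Illustrative allocation-table states and addresses; not the real code. */
    #include <stdio.h>
    #include <stddef.h>

    enum { AT_FREE, AT_HEAD, AT_TAIL, AT_MARK };   /* AT_MARK: a head block that was marked */
    static unsigned char atb[] = { AT_HEAD, AT_TAIL, AT_TAIL, AT_MARK, AT_TAIL, AT_FREE };

    static void gc_sweep_sketch(void) {
        int freeing = 0;
        for (size_t block = 0; block < sizeof(atb); block++) {
            switch (atb[block]) {
                case AT_HEAD:                       /* unmarked object: being swept */
                    printf("gc_sweep(%p)\n", (void *)(block * 16));  /* one line per object */
                    atb[block] = AT_FREE;
                    freeing = 1;
                    break;
                case AT_TAIL:                       /* rest of that object: freed silently */
                    if (freeing) atb[block] = AT_FREE;
                    break;
                case AT_MARK:                       /* reachable object: keep it, clear the mark */
                    atb[block] = AT_HEAD;
                    freeing = 0;
                    break;
                default:
                    freeing = 0;
                    break;
            }
        }
    }

    int main(void) {
        gc_sweep_sketch();
        return 0;
    }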
Ideally we'd use %zu for size_t args, but that's unlikely to be supported
by all runtimes, and we would then need to implement it in mp_printf.
So the simplest and most portable option is to use %u and cast the argument
to uint (= unsigned int).
Note: the reason for the change is that UINT_FMT can be %llu (a size suitable
for mp_uint_t), which is wider than size_t and prints incorrect results.
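For example, a standalone sketch with made-up numbers:

    /* Standalone illustration of the portable formatting choice. */
    #include <stdio.h>
    #include <stddef.h>

    int main(void) {
        size_t total = 12345;
        size_t used = 6789;
        /* cast each size_t argument to unsigned int so %u always matches,
         * instead of depending on %zu being available in the runtime */
        printf("GC: total: %u, used: %u, free: %u\n",
               (unsigned int)total, (unsigned int)used, (unsigned int)(total - used));
        return 0;
    }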
size_t is the correct type to use to count things related to the size of
the address space. Using size_t (instead of mp_uint_t) is important for
the efficiency of ports that configure mp_uint_t to be larger than the
machine word size.
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
Currently, the only place that clears the bit is in gc_collect.
So if a block with a finalizer is allocated, and subsequently
freed, and then the block is reallocated with no finalizer, the
bit remains set.
This could also be fixed by having gc_alloc clear the bit, but
I'm pretty sure free is called far less often than alloc, so doing
it in free is more efficient.
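A small sketch of the idea, using a simplified finaliser-table layout and
invented names:

    /* Simplified finaliser table: one bit per heap block; names are invented. */
    #include <stdint.h>
    #include <stddef.h>

    #define HEAP_BLOCKS 256
    static uint8_t ftb[HEAP_BLOCKS / 8];                 /* "has finaliser" bit per block */

    #define FTB_GET(b)   ((ftb[(b) / 8] >> ((b) & 7)) & 1)
    #define FTB_CLEAR(b) (ftb[(b) / 8] &= (uint8_t)~(1u << ((b) & 7)))

    static void gc_free_sketch(size_t block) {
        /* ... mark the head block and its tail blocks free in the allocation table ... */
        FTB_CLEAR(block);    /* the fix: don't leave a stale finaliser bit behind */
    }

    int main(void) {
        ftb[0] = 0xFF;              /* pretend block 3 used to hold an object with a finaliser */
        gc_free_sketch(3);
        return FTB_GET(3);          /* now 0: a later finaliser-less reallocation is safe */
    }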
Prior to this patch all interned strings lived in their own malloc'd
chunk. On average this wastes N/2 bytes per interned string, where N is
the number of bytes in a quantum of the memory allocator (16 bytes on 32-bit
archs).
With this patch interned strings are concatenated into the same malloc'd
chunk when possible. Such chunks are enlarged in place when possible,
and shrunk to fit when a new chunk is needed.
RAM savings with this patch are highly varied, but should always show an
improvement (unless only 3 or 4 strings are interned). The new version
typically uses about 70% of the previous memory for the qstr data, and can
lead to savings of around 10% of the total memory footprint of a running
script.
Costs about 120 bytes of code size on Thumb2 archs (depends on how many
calls to gc_realloc are made).
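A rough, self-contained sketch of the chunking idea; the chunk size, growth
policy and names are invented, and the real code works on the GC heap and
also enlarges the current chunk in place when it can:

    /* Invented chunk size, growth policy and names; not the real qstr pool code. */
    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    #define CHUNK_SIZE 128

    static char *chunk;          /* current chunk holding concatenated string data */
    static size_t chunk_used;
    static size_t chunk_alloc;

    static const char *intern(const char *s) {
        size_t n = strlen(s) + 1;
        if (chunk == NULL || chunk_used + n > chunk_alloc) {
            /* start a new chunk; earlier chunks are never moved, so pointers
             * into them stay valid */
            chunk_alloc = (n > CHUNK_SIZE) ? n : CHUNK_SIZE;
            chunk = malloc(chunk_alloc);
            if (chunk == NULL) return NULL;
            chunk_used = 0;
        }
        char *dest = chunk + chunk_used;
        memcpy(dest, s, n);
        chunk_used += n;
        return dest;             /* a qstr entry would record this pointer */
    }

    int main(void) {
        printf("%s %s\n", intern("hello"), intern("world"));
        return 0;
    }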
The GC for unix/windows builds doesn't make use of the bss section anymore,
so we do not need the (sometimes complicated) build features and code related to it.
This patch consolidates all global variables in the py/ core into one place,
in a global structure. Root pointers are all located together to make
GC tracing easier and more efficient.
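Schematically, and with invented field names rather than the real state
layout, the idea is:

    /* Field names are invented; not the real MicroPython state layout. */
    #include <stddef.h>

    typedef struct _state_vm_t {
        /* root pointers: kept adjacent so the GC can trace them as one range */
        void *dict_main;
        void *loaded_modules;
        void *cur_exception;

        /* non-pointer state follows the roots */
        size_t qstr_count;
    } state_vm_t;

    static state_vm_t state_vm;

    /* One call is enough to hand every root pointer to the collector. */
    static void trace_roots(void (*collect_range)(void **start, size_t n)) {
        collect_range(&state_vm.dict_main, 3 /* number of root pointers above */);
    }

    static void dummy_collect_range(void **start, size_t n) {
        (void)start; (void)n;
    }

    int main(void) {
        trace_roots(dummy_collect_range);
        return 0;
    }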
gc.enable/disable are now the same as CPython: they just control whether
automatic garbage collection is enabled or not. If disabled, you can
still allocate heap memory, and initiate a manual collection.
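A minimal sketch of how such a flag can gate only the automatic collection
(names and control flow are illustrative):

    /* Illustrative names and control flow; not the real gc_alloc. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    static bool gc_auto_collect_enabled = true;

    static void gc_collect(void) { printf("collected\n"); }

    static void *gc_alloc(size_t nbytes) {
        (void)nbytes;
        void *p = NULL;                     /* ... first attempt to find free blocks ... */
        if (p == NULL && gc_auto_collect_enabled) {
            gc_collect();                   /* automatic collection only when enabled */
            /* ... retry the allocation ... */
        }
        return p;
    }

    int main(void) {
        gc_auto_collect_enabled = false;    /* like gc.disable() */
        gc_alloc(64);                       /* allocation itself is still allowed */
        gc_collect();                       /* and a manual gc.collect() still works */
        return 0;
    }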
Heap allocation behaviour is now exactly as it was before the "faster gc
alloc" patch, but it's still nearly as fast. It is fixed by being
careful to always update the "last free block" pointer whenever the heap
changes (e.g. on free or realloc).
Tested on all tests by enabling EXTENSIVE_HEAP_PROFILING in py/gc.c:
the old and new allocators have exactly the same behaviour; the new one
is just much faster.
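The sketch below shows the general shape of such a "last free block" hint and
why it must be pulled back on free; it is a simplified one-block-per-allocation
model with invented names, not the real allocator:

    /* Simplified single-block-per-allocation model; names are invented. */
    #include <stddef.h>
    #include <stdbool.h>

    #define HEAP_BLOCKS 1024
    static bool block_used[HEAP_BLOCKS];
    static size_t gc_last_free_block;          /* hint: no free block before this index */

    static size_t alloc_block(void) {
        for (size_t b = gc_last_free_block; b < HEAP_BLOCKS; b++) {
            if (!block_used[b]) {
                block_used[b] = true;
                gc_last_free_block = b + 1;    /* the next search can skip past here */
                return b;
            }
        }
        return (size_t)-1;                     /* out of memory */
    }

    static void free_block(size_t b) {
        block_used[b] = false;
        if (b < gc_last_free_block) {
            gc_last_free_block = b;            /* keep the hint valid whenever the heap changes */
        }
    }

    int main(void) {
        size_t a = alloc_block();
        size_t b = alloc_block();
        free_block(a);
        size_t c = alloc_block();              /* reuses a's block instead of fragmenting */
        (void)b;
        return c == a ? 0 : 1;
    }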
The recent speed-up of GC allocation made the GC have a fragmented heap.
This patch restores the original fragmentation behaviour whilst still
retaining relatively fast allocation. This patch works because a single
block is always going to be allocated now and then, which advances the
gc_last_free_atb_index pointer often enough that the whole heap
doesn't need scanning.
Should address issue #836.
This simple patch gives a very significant speed up for memory allocation
with the GC.
Eg, on PYBv1.0:
tests/basics/dict_del.py: 3.55 seconds -> 1.19 seconds
tests/misc/rge_sm.py: 15.3 seconds -> 2.48 seconds
This was a nasty bug to track down. It only had consequences when the
heap size was just the right size to expose the rounding error in the
calculation of the finaliser table size. And, a script had to allocate
a small (1 or 2 cell) object at the very end of the heap. And, this
object must not have a finaliser. And, the initial state of the heap
must have been all bits set to 1. All these conspire on the pyboard,
but only if you run the script fresh (so unused memory is all 1's),
and if your script allocates a lot of small objects (e.g. 2-char strings
that are not interned).
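To illustrate just the arithmetic (the constants are made up), the
one-bit-per-block finaliser table size must round up, not down:

    /* Only the rounding matters; the block count is made up. */
    #include <stdio.h>

    #define BLOCKS_PER_FTB 8    /* one byte of the finaliser table covers 8 heap blocks */

    int main(void) {
        unsigned num_blocks = 1001;                                                /* not a multiple of 8 */
        unsigned truncated = num_blocks / BLOCKS_PER_FTB;                          /* 125: misses the last block */
        unsigned rounded_up = (num_blocks + BLOCKS_PER_FTB - 1) / BLOCKS_PER_FTB;  /* 126: covers every block */
        printf("truncated=%u rounded_up=%u\n", truncated, rounded_up);
        return 0;
    }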