The address printed was truncated anyway and in general confusing to an
outsider. A line which dumps it is still left in the source, commented
out, for peculiar cases when it may be needed (e.g. when running under a
debugger).
These are typical consumers of large chunks of memory, so it's useful to
see at least their number (how much memory they use isn't clearly shown,
as the data for these objects is allocated elsewhere).
Previously, mark operations weren't logged at all, while it's quite
useful to see the cascade of marks in case of over-marking (and in other
cases too).
Previously, a sweep was logged for each block of an object in memory,
but that doesn't make much sense and just leads to longer output that is
harder for a human to parse. Instead, log a sweep only once per object.
This is similar to other memory-manager operations: e.g. an object is
allocated, then freed; or an object is allocated, then marked, and
otherwise swept (one log entry per operation, with the same memory
address in each case).
Ideally we'd use %zu for size_t args, but that's unlikely to be
supported by all runtimes, and we would then need to implement it in
mp_printf. So the simplest and most portable option is to use %u and
cast the argument to uint (= unsigned int).
Note: the reason for the change is that UINT_FMT can be %llu (sized for
mp_uint_t), which is wider than size_t and prints incorrect results.
size_t is the correct type to use to count things related to the size of
the address space. Using size_t (instead of mp_uint_t) is important for
the efficiency of ports that configure mp_uint_t to be larger than the
machine word size.
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes the changes needed for the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
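A sketch of the kind of configuration this enables (the macro and type
choices here are purely illustrative, not MicroPython's actual
definitions):

    #include <stdint.h>

    // Illustrative only: the object handle no longer has to be pointer-sized.
    #if defined(MY_PORT_OBJ_IS_POINTER)
    typedef void *my_obj_t;       // classic pointer representation
    #else
    typedef uint64_t my_obj_t;    // eg a 64-bit tagged value on a 32-bit machine
    #endif

    // Code that assumed sizeof(obj) == sizeof(void *) now needs explicit
    // casts, and size_t for anything counting address-space quantities.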
Currently, the only place that clears the bit is in gc_collect. So if
a block with a finaliser is allocated, and subsequently freed, and then
the block is reallocated with no finaliser, the bit remains set.
This could also be fixed by having gc_alloc clear the bit, but I'm
pretty sure that free is called far less often than alloc, so doing it
in free is more efficient.
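A compilable sketch of the fix, with simplified stand-ins for the FTB
(finaliser table bit) macros in py/gc.c:

    #include <stddef.h>
    #include <stdint.h>

    // One finaliser bit per heap block (sizes here are illustrative).
    static uint8_t ftb[128];
    #define FTB_CLEAR(block) (ftb[(block) / 8] &= (uint8_t)~(1 << ((block) % 8)))

    // Clear the block's finaliser bit on free, so a later allocation of
    // the same block without a finaliser doesn't inherit a stale bit.
    void gc_free_sketch(size_t block) {
        FTB_CLEAR(block);
        // ... then mark the block (and its tails) as free in the alloc table ...
    }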
Prior to this patch, each interned string lived in its own malloc'd
chunk. On average this wastes N/2 bytes per interned string, where N is
the number of bytes in a quantum of the memory allocator (16 bytes on
32-bit archs).
With this patch, interned strings are concatenated into the same
malloc'd chunk when possible. Such chunks are enlarged in place when
possible, and shrunk to fit when a new chunk is needed.
RAM savings with this patch are highly varied, but should always show
an improvement (unless only 3 or 4 strings are interned). The new
version typically uses about 70% of the previous memory for the qstr
data, and can lead to savings of around 10% of the total memory
footprint of a running script.
This costs about 120 bytes of code size on Thumb2 archs (depending on
how many calls to gc_realloc are made).
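A self-contained sketch of the scheme (names and the chunk size are
illustrative; the real logic lives in py/qstr.c and uses the uPy
allocator rather than malloc):

    #include <stdlib.h>
    #include <string.h>

    static char *chunk;                  // current chunk for new interned strings
    static size_t chunk_used, chunk_alloc;

    // Carve each interned string out of a shared chunk instead of giving
    // it its own malloc'd block, avoiding per-string rounding waste.
    char *intern_sketch(const char *s, size_t len) {
        if (chunk == NULL || chunk_used + len + 1 > chunk_alloc) {
            // The real code first tries to enlarge the current chunk in
            // place (a realloc that must not move), and only on failure
            // shrinks it to fit and starts a new chunk.
            chunk_alloc = (len + 1 > 256) ? len + 1 : 256;
            chunk = malloc(chunk_alloc);
            chunk_used = 0;
        }
        char *dest = chunk + chunk_used;
        memcpy(dest, s, len);
        dest[len] = '\0';
        chunk_used += len + 1;
        return dest;
    }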
GC for unix/windows builds doesn't make use of the bss section anymore,
so we do not need the (sometimes complicated) build features and code related to it
This patch consolidates all global variables in the py/ core into one
place, in a global structure. Root pointers are all located together to
make GC tracing easier and more efficient.
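A sketch of the shape of the change (field names here are illustrative;
see py/mpstate.h for the real structure):

    // All py/ core globals gathered into one structure. Root pointers sit
    // contiguously so the GC can trace them as a single memory range.
    typedef struct _my_state_vm_t {
        // ROOT POINTER SECTION (scanned by the GC):
        struct _mp_obj_dict_t *loaded_modules_dict;
        struct _mp_obj_list_t *sys_path_obj;
        // ... more root pointers ...

        // Non-pointer state can follow the root pointer section:
        unsigned int gc_auto_collect_enabled;
    } my_state_vm_t;

    extern my_state_vm_t my_state_vm;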
gc.enable/disable are now the same as CPython: they just control whether
automatic garbage collection is enabled or not. If disabled, you can
still allocate heap memory, and initiate a manual collection.
Heap allocation behaviour is now exactly as it was before the "faster gc
alloc" patch, but it's still nearly as fast. It is fixed by being
careful to always update the "last free block" pointer whenever the heap
changes (eg on free or realloc).
Tested on all tests by enabling EXTENSIVE_HEAP_PROFILING in py/gc.c: the
old and new allocators have exactly the same behaviour, just the new one
is much faster.
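A compilable sketch of the invariant, simplified to single-block
allocations over a flat array of flags (the real code tracks
gc_last_free_atb_index over the allocation table):

    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_BLOCKS 1024
    static bool block_in_use[NUM_BLOCKS];
    static size_t last_free;   // invariant: no free block below this index

    // Allocation scans from the cached index instead of the heap start.
    size_t alloc_block_sketch(void) {
        for (size_t i = last_free; i < NUM_BLOCKS; i++) {
            if (!block_in_use[i]) {
                block_in_use[i] = true;
                last_free = i + 1;  // everything below is known to be in use
                return i;
            }
        }
        return (size_t)-1;  // out of memory; the real code collects here
    }

    // The fix: every operation that frees heap must pull the cached index
    // back, or allocation would skip over newly freed blocks.
    void free_block_sketch(size_t block) {
        block_in_use[block] = false;
        if (block < last_free) {
            last_free = block;
        }
    }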
The recent speed-up of GC allocation left the GC with a fragmented heap.
This patch restores the original fragmentation behaviour whilst still
retaining relatively fast allocation. It works because a single block is
always going to be allocated now and then, which advances the
gc_last_free_atb_index pointer often enough that the whole heap doesn't
need scanning.
Should address issue #836.
This simple patch gives a very significant speed-up for memory
allocation with the GC.
Eg, on PYBv1.0:
tests/basics/dict_del.py: 3.55 seconds -> 1.19 seconds
tests/misc/rge_sm.py: 15.3 seconds -> 2.48 seconds
This was a nasty bug to track down. It only had consequences when the
heap size was just the right size to expose the rounding error in the
calculation of the finaliser table size. And, a script had to allocate
a small (1 or 2 cell) object at the very end of the heap. And, this
object must not have a finaliser. And, the initial state of the heap
must have been all bits set to 1. All these conspire on the pyboard,
but only if you run the script fresh (so unused memory is all 1's),
and if your script allocates a lot of small objects (eg 2-char strings
that are not interned).
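The arithmetic at fault is round-up division; a hedged, standalone
illustration (constant and variable names simplified from py/gc.c):

    #include <stdio.h>
    #include <stddef.h>

    #define BLOCKS_PER_FTB 8   // one finaliser bit per block, 8 bits per byte

    int main(void) {
        size_t num_blocks = 1001;  // example heap size in blocks

        // Truncating division drops the partial final byte, so the last
        // few blocks get no finaliser bit and read whatever the memory
        // held before (all 1's on a fresh board):
        size_t ftb_bad = num_blocks / BLOCKS_PER_FTB;                         // 125
        // Rounding up covers every block:
        size_t ftb_good = (num_blocks + BLOCKS_PER_FTB - 1) / BLOCKS_PER_FTB; // 126

        printf("%u vs %u\n", (unsigned int)ftb_bad, (unsigned int)ftb_good);
        return 0;
    }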
Blanket-wide to all .c and .h files. Some files originating from ST are
difficult to deal with (license-wise), so they were left out.
Also merged modpyb.h, modos.h, modstm.h and modtime.h in stmhal/.
Also add some more debugging output to gc_dump_alloc_table().
Now that newly allocated heap is always zero'd, maybe we should just
make this a policy for the uPy API to keep it simple (ie any new
implementation of memory allocation must zero all allocations). This
follows the D language philosophy.
Before this patch, a previously used memory block which had pointers in
it may still retain those pointers if the new user of that block does
not use the entire block. Eg, if I want 5 blocks' worth of heap, I
actually get 8 (rounded up to the nearest 4). Then I never use the last
3, so they keep their old values, which may be pointers pointing into
the heap, hence preventing GC.
In rare (or maybe not so rare) cases, this leads to long, unintentional
"linked lists" within the GC'd heap, filling it up completely. It's
pretty rare because you have to reuse exactly the memory which is part
of such a "linked list", and reuse it in just the right way.
This should fix issue #522, and might have something to do with
issue #510.
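A minimal sketch of the fix (names simplified; in py/gc.c the zeroing
happens inside gc_alloc, so the rounded-up tail of an allocation can
never retain stale pointers):

    #include <string.h>
    #include <stddef.h>

    #define BYTES_PER_BLOCK 16   // allocation quantum (illustrative)

    // After finding n_blocks free blocks for a request of n_bytes, zero
    // the whole rounded-up region: the unused tail then cannot hold old
    // pointers that a conservative GC scan would mistake for live refs.
    void *gc_alloc_sketch(void *region, size_t n_bytes, size_t n_blocks) {
        (void)n_bytes;  // caller uses only n_bytes of the region
        memset(region, 0, n_blocks * BYTES_PER_BLOCK);
        return region;
    }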