This has benefits all round: code factoring for parse/compile/execute,
proper context save/restore for exec, the ability to specify globals/locals
for eval, and reduced ROM usage by >100 bytes on stmhal and unix.
Also, the call to mp_parse_compile_execute is tail call optimised for
the import code, so it doesn't increase stack memory usage.
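As a rough sketch of how the shared helper might be called (the parameter
list here is an assumption for illustration, not quoted from the header):

    // illustrative prototype; the real parameter list may differ
    mp_obj_t mp_parse_compile_execute(mp_lexer_t *lex,
                                      mp_parse_input_kind_t parse_input_kind,
                                      mp_obj_dict_t *globals,
                                      mp_obj_dict_t *locals);

    // eval can now pass explicit globals/locals dicts, and exec relies on
    // the helper to save and restore the calling context around execution
    mp_obj_t result = mp_parse_compile_execute(lex, MP_PARSE_EVAL_INPUT,
                                               globals_dict, locals_dict);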
In CPython IOError (and EnvironmentError) is deprecated and aliased to
OSError. All modules that used to raise IOError now raise OSError (or a
derived exception).
In Micro Python we never used IOError (except in one place, incorrectly) and
so don't need to keep it.
See http://legacy.python.org/dev/peps/pep-3151/ for background.
Viper can now do the following:

    def store(p:ptr8, c:int):
        p[0] = c

This does a store of c to the memory pointed to by p, using machine
instructions inlined in the code.
It seems most sensible to use size_t for measuring "number of bytes" in
malloc and vstr functions (since that's what size_t is for). We don't
use mp_uint_t because malloc and vstr are not Micro Python specific.
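For illustration, this leads to prototypes along these lines (function names
from the py core; treat the exact signatures as a sketch, since some variants
carry extra size arguments depending on configuration):

    // sizes are measured in bytes using size_t rather than mp_uint_t
    void *m_malloc(size_t num_bytes);
    void vstr_init(vstr_t *vstr, size_t alloc);
    void vstr_add_strn(vstr_t *vstr, const char *str, size_t len);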
mp_parse_node_free now frees the memory associated with non-interned
strings. The parser now calls mp_parse_node_free when discarding an
unused node (such as a doc string).
Also, the compiler now frees the parse tree explicitly just before it
exits (as opposed to relying on the caller to do this).
Addresses issue #708 as best we can.
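A hedged sketch of the resulting ownership convention (function names from
the py core, signatures as assumed here):

    mp_parse_error_kind_t parse_err;
    mp_parse_node_t pn = mp_parse(lex, MP_PARSE_FILE_INPUT, &parse_err);
    // the compiler now consumes and frees the parse tree itself, so the
    // caller no longer needs to call mp_parse_node_free(pn) afterwards
    mp_obj_t module_fun = mp_compile(pn, source_name, emit_opt, is_repl);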
Stack is full descending and must be 8-byte aligned. It must start off
pointing to just above the last byte of RAM.
Previously, the stack started out pointing to the last byte of RAM (eg 0x2001ffff)
and so was not 8-byte aligned. This caused a bug in combination with
alloca.
This patch also updates some debug printing code.
Addresses issue #872 (among many other undiscovered issues).
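A hedged sketch with illustrative numbers, for a chip with 128KiB of RAM at
0x20000000 (the symbol names here are not taken from the port):

    #define RAM_START  0x20000000u
    #define RAM_SIZE   (128 * 1024)
    // full descending stack: the initial SP is the address just above the
    // last byte of RAM, which is also naturally 8-byte aligned
    #define STACK_TOP  (RAM_START + RAM_SIZE)   // 0x20020000, not 0x2001ffff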
Heap RAM was being allocated in order to print dicts and to do some other
kinds of iteration. Now these iterations use 1 word of state on the stack.
Deleting elements from a dict was not allowing the value to be reclaimed
by the GC. This is now fixed.
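A hedged sketch of both points, with the mp_map_t field names and the
MP_OBJ_NULL/MP_OBJ_SENTINEL markers as assumed here:

    // iterating a dict's map needs only one word of state (the index) on
    // the C stack; no heap allocation is required
    static void iterate_map(const mp_map_t *map) {
        for (mp_uint_t i = 0; i < map->alloc; i++) {
            const mp_map_elem_t *elem = &map->table[i];
            if (elem->key != MP_OBJ_NULL && elem->key != MP_OBJ_SENTINEL) {
                // ... process elem->key and elem->value ...
            }
        }
    }

    // and on deletion, both slots of the element are overwritten so the GC
    // can reclaim the old value object (previously the value was left behind):
    //     elem->key = MP_OBJ_SENTINEL;   // mark the slot as deleted
    //     elem->value = MP_OBJ_NULL;     // drop the reference to the value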
sys.exit always raises SystemExit, so it doesn't need a special
implementation for each port. If the C-level exit() is really needed, use the
standard os._exit function.
Also initialise mp_sys_path and mp_sys_argv in teensy port.
Eventually, viper wants to be able to use raw pointers to strings and
arrays for efficient access. But for now, just load strings as Python
objects so they can be used as normal. This is in any case compatible with
the eventual intended viper behaviour.
Addresses issue #857.
The type representing a signed size doesn't have to be int, so use a special
value which defaults to SSIZE_MAX; but since SSIZE_MAX is not defined by the C
standard (only by POSIX), allow ports to set the value.
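A hedged sketch of the configuration hook (the macro name is an assumption):

    // default to POSIX's SSIZE_MAX, but let a port override it since the
    // C standard itself doesn't define SSIZE_MAX
    #ifndef MP_SSIZE_MAX
    #define MP_SSIZE_MAX SSIZE_MAX
    #endif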
Previously, mpz was restricted to using at most 15 bits in each digit,
where a digit was a uint16_t.
With this patch, mpz can use all 16 bits of the uint16_t (an improvement
to mpn_div was required). This gives small improvements in speed and
RAM usage. It also yields savings in ROM code size because all of the
digit-masking operations become no-ops.
Also, mpz can now use a uint32_t as the digit type, and hence use 32
bits per digit. This will give decent improvements in mpz speed on
64-bit machines.
Test for big integer division added.
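A hedged sketch of the digit configuration this enables (names assumed, not
quoted from mpz.h):

    typedef uint16_t mpz_dig_t;   // storage type for one digit
    #define MPZ_DIG_SIZE (16)     // bits actually used per digit; was 15

    // on 64-bit machines a wider digit could be configured instead:
    // typedef uint32_t mpz_dig_t;
    // #define MPZ_DIG_SIZE (32)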
Code-info size, block name, source name, n_state and n_exc_stack now use
variable length encoded uints. This saves 7-9 bytes per bytecode
function for most functions.
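A hedged sketch of a variable-length uint encoding of the usual kind (7 value
bits per byte, high bit set on every byte except the last); this is
illustrative, not quoted from the emitter:

    #include <stddef.h>
    #include <stdint.h>

    // returns the number of bytes written to out (at most 5 for 32-bit values)
    static size_t write_vuint(uint8_t *out, uint32_t val) {
        uint8_t groups[5];
        size_t n = 0;
        do {
            groups[n++] = val & 0x7f;   // least-significant 7 bits first
            val >>= 7;
        } while (val != 0);
        size_t len = n;
        while (n > 1) {
            *out++ = groups[--n] | 0x80;   // continuation bit on all but last
        }
        *out++ = groups[0];
        return len;
    }

Small values such as n_state then take 1 byte instead of a fixed-width word,
which is where the per-function saving comes from.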
This way, the native glue code is only compiled if native code is
enabled (which makes complete sense; thanks to Paul Sokolovsky for
the idea).
Should fix issue #834.
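For illustration, the change amounts to guarding the glue with the existing
config macro (name as assumed here):

    // runtime glue for calling native/viper/asm code; only built when a
    // native emitter is enabled in the port's mpconfig
    #if MICROPY_EMIT_NATIVE
    // ... native glue code ...
    #endif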
The heap allocation is now exactly as it was before the "faster gc
alloc" patch, but it's still nearly as fast. It is fixed by being
careful to always update the "last free block" pointer whenever the heap
changes (eg free or realloc).
Tested on all tests by enabling EXTENSIVE_HEAP_PROFILING in py/gc.c:
old and new allocator have exactly the same behaviour, just the new one
is much faster.
The recent speed-up of GC allocation left the GC with a fragmented heap.
This patch restores the original fragmentation behaviour whilst still
retaining relatively fast allocation. This patch works because there is
always going to be a single block allocated now and then, which advances
the gc_last_free_atb_index pointer often enough so that the whole heap
doesn't need scanning.
Should address issue #836.
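A hedged sketch of the mechanism behind both patches (the variable name comes
from the text above; the macros and exact arithmetic are assumptions):

    // allocation starts its scan at gc_last_free_atb_index instead of at the
    // start of the heap; anything that frees memory must pull the index back
    // so that an earlier free block is never skipped over
    void gc_free(void *ptr) {
        mp_uint_t block = BLOCK_FROM_PTR(ptr);
        if (block / BLOCKS_PER_ATB < gc_last_free_atb_index) {
            gc_last_free_atb_index = block / BLOCKS_PER_ATB;
        }
        // ... mark the head block and its tail blocks as free ...
    }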
For a file with 1 line (and an error on that line), the error used to be
reported as being on line number 0. It is now correctly reported as line
number 1.
But, when line numbers are disabled, it now prints line number 1 for any
line that has an error (instead of 0 as previously). This might end up
being confusing, but it would require extra RAM and/or hacky logic to make
it print something special in the case of no line numbers.
These functions are generally 1 machine instruction, and are used in
critical code, so it makes sense to have them inline.
Also leave these functions uninverted (ie 0 means enable, 1 means
disable) and provide macro constants if you really need to distinguish
the states. This makes for smaller code as well (combined with
inlining).
Applied to teensy port as well.
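A hedged sketch of the kind of inline wrappers meant here, for a Cortex-M
port using the CMSIS intrinsics (note the uninverted convention: a PRIMASK
value of 0 means IRQs enabled, 1 means disabled):

    #define IRQ_STATE_ENABLED  (0)
    #define IRQ_STATE_DISABLED (1)

    static inline mp_uint_t disable_irq(void) {
        mp_uint_t state = __get_PRIMASK();
        __disable_irq();
        return state;   // previous state, to be passed back to enable_irq
    }

    static inline void enable_irq(mp_uint_t state) {
        __set_PRIMASK(state);   // restore the saved state
    }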
Because (for Thumb) a function pointer has the LSB set, pointers to
dynamic functions in RAM (eg native, viper or asm functions) were not
being traced by the GC. This patch is a comprehensive fix for this.
Addresses issue #820.
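A hedged sketch of the underlying issue and the shape of the fix (VERIFY_PTR
stands for whatever heap-range check the GC performs):

    #include <stdint.h>

    static void trace_fun_ptr(void *fun_ptr) {
        // a Thumb function pointer has its least-significant bit set, so it
        // doesn't look like a heap pointer unless that bit is masked off
        void *code = (void *)((uintptr_t)fun_ptr & ~(uintptr_t)1);
        if (VERIFY_PTR(code)) {
            // ... mark the block containing the dynamically generated code ...
        }
    }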
This simple patch gives a very significant speed up for memory allocation
with the GC.
Eg, on PYBv1.0:
tests/basics/dict_del.py: 3.55 seconds -> 1.19 seconds
tests/misc/rge_sm.py: 15.3 seconds -> 2.48 seconds