Closed-over variables are now passed on the stack, instead of creating a
tuple and passing that. This way memory for the closed-over variables
can be allocated within the closure object itself. See issue #510 for
background.
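A minimal sketch of the resulting layout, with an invented struct (not
the actual MicroPython object definition):

```c
#include <stddef.h>
#include <stdint.h>

// Hypothetical layout, for illustration only: the closed-over variables
// live in a flexible array member inside the closure object itself, so
// making a closure is one allocation instead of closure plus tuple.
typedef intptr_t obj_t;

typedef struct {
    obj_t fun;       // the function object being wrapped
    size_t n_closed; // how many variables were closed over
    obj_t closed[];  // closed-over variables, stored in-line
} closure_t;

// The VM can now copy the variables straight from the value stack
// (where this patch passes them) into closure->closed[].
```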
On stmhal, computed gotos make the binary about 1k bigger, but make it
run faster, and we have the room, so why not. All tests pass on
pyboard using computed gotos.
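For reference, here is a minimal sketch of the computed-goto technique
itself, using the GCC/Clang labels-as-values extension; the opcodes are
invented, and the real VM's dispatch table is of course much larger:

```c
#include <stdio.h>

// Each opcode ends by jumping straight to the next opcode's label via
// the dispatch table, avoiding the central switch of the portable VM.
static int run(const unsigned char *ip) {
    static const void *dispatch[] = { &&op_inc, &&op_dec, &&op_ret };
    int acc = 0;
    goto *dispatch[*ip++];
op_inc: acc++; goto *dispatch[*ip++];
op_dec: acc--; goto *dispatch[*ip++];
op_ret: return acc;
}

int main(void) {
    const unsigned char code[] = { 0, 0, 1, 2 }; // inc, inc, dec, ret
    printf("%d\n", run(code)); // prints 1
    return 0;
}
```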
Things get tricky when using the nlr code to catch exceptions. We need to
ensure that the variables (stack layout) in the exception handler are
the same as in the code protected by the exception handler.
Prior to this patch there were a few bugs. 1) The constant
mp_const_MemoryError_obj was being preloaded to a specific location on
the stack at the start of the function. But this location on the stack
was being overwritten in the opcode loop (since the compiler didn't think
that variable would ever be referenced again), and so when an exception
occurred, the variable holding the address of MemoryError was corrupt.
2) The FOR_ITER opcode detection in the exception handler used sp, which
may or may not contain the right value coming out of the main opcode
loop.
With this patch there is a clear separation between the variables used in
the opcode loop and those used in the exception handler (this should fix
bug 2 above).
Furthermore, nlr_raise is no longer used in the opcode loop. Instead,
it jumps directly into the exception handler. This tells the C compiler
more about the possible code flow, and means that it should have the
same stack layout for the exception handler. This should fix bug 1
above. Indeed, the generated (ARM) assembler has been checked explicitly,
and with 'goto exception_handler', the problem with &MemoryError is
fixed.
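To see the hazard in isolation, here is a minimal standalone sketch
using plain setjmp/longjmp (the real nlr code differs, but the problem
is analogous): a local modified after the save point and read in the
handler is only reliable if its stack slot stays consistent, which
'volatile' forces here and which a direct 'goto' guarantees naturally.

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf nlr_buf;

static void raise_error(void) {
    longjmp(nlr_buf, 1); // analogous to nlr_raise: non-local jump to handler
}

static int run(void) {
    volatile int sp_like = 0; // stands in for the VM's sp variable
    if (setjmp(nlr_buf) == 0) {
        sp_like = 42;
        raise_error();
    } else {
        // handler: without 'volatile', sp_like could legally be stale here,
        // just as sp was unreliable in the VM's exception handler
    }
    return sp_like;
}

int main(void) {
    printf("%d\n", run()); // prints 42
    return 0;
}
```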
This may now fix problems with rge-sm, and probably many other subtle
bugs yet to show themselves. Incidentally, rge-sm now passes on
pyboard (with a reduced range of integration)!
Main lesson: nlr is tricky. Don't use nlr_push unless you know what you
are doing! Luckily, it's not used in many places. Using nlr_raise/jump
is fine.
Attempt to address issue #386. unique_code_id's have been removed and
replaced with a pointer to the "raw code" information. This pointer is
stored in the actual byte code (aligned, so the GC can trace it), so
that raw code (ie byte code, native code and inline assembler) is kept
only for as long as it is needed. In memory it's now like a tree: the
outer module's byte code points directly to its children's raw code. So
when the outer code gets freed, if there are no remaining functions that
need the raw code, then the children's code gets freed as well.
This is pretty much like CPython does it, except that CPython stores
indexes in the byte code rather than machine pointers. These indices
index the per-function constant table in order to find the relevant
code.
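A rough sketch of the storage trick, with invented emitter names (the
real emitter differs): pad to machine-word alignment and write the
pointer bytes straight into the bytecode stream, so a GC that scans the
word-aligned buffer sees the pointer and keeps the child raw code alive.

```c
#include <stdint.h>
#include <string.h>

struct raw_code; // opaque: byte code, native code or inline assembler

// Pad the emit offset so the pointer lands word-aligned (the bytecode
// buffer itself is assumed to come from the GC heap, word-aligned).
static size_t emit_align(uint8_t *code, size_t offset) {
    while (offset % sizeof(struct raw_code *) != 0) {
        code[offset++] = 0;
    }
    return offset;
}

// Store the raw-code pointer in-line; an aligned word in a GC-scanned
// buffer is enough for the GC to trace the child code.
static size_t emit_raw_code_ptr(uint8_t *code, size_t offset,
                                const struct raw_code *rc) {
    offset = emit_align(code, offset);
    memcpy(&code[offset], &rc, sizeof(rc));
    return offset + sizeof(rc);
}
```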
This is necessary to catch all cases where locals are referenced before
assignment. We still keep the _0, _1, _2 versions of LOAD_FAST to help
reduce the byte code size in RAM.
Addresses issue #457.
This simplifies the compiler a little, since now it can do a single pass
over a function declaration to determine default arguments. I would have
done this originally, but CPython 3.3 somehow had the default keyword
args compiled before the default positional args (even though they appear
in the other order in the text of the script), and I thought it was
important to have the same order of execution when evaluating default
arguments. CPython 3.4 has changed the order to the more obvious one,
so we can also change.
There was a thinko that either send_value or throw_value would be specified,
but there were cases with both. Note that send_value is pushed onto the
generator's stack - but that's probably only good, because if we throw an
exception into a gen, it should never use send_value, so that's just an
extra "assert".
Adding this bytecode allows the removal of 4 others related to
function/method calls with * and ** support. Will also help with
bytecodes that make functions/closures with default positional and
keyword args.
Mostly just a global search and replace, except rt_is_true, which
becomes mp_obj_is_true.
Still would like to tidy up some of the names, but this will do for now.
Required to reraise the correct exception in an except block, regardless of
whether more try blocks with active exceptions occur in the same except
block.
P.S. This "automagic reraise" appears to be quite a wasteful feature of
Python - we need to save the pending exception just in case it *might* be
reraised. Instead, the programmer could explicitly capture the exception
in a variable using "except ... as var", and reraise that. So, consider
disabling argless raise support as an optimization.
The compiler allocates 7 entries on the stack for a with statement
(following CPython, though this can probably be reduced). This is enough for
the method load and call in SETUP_WITH.
Partly (very partly!) addresses issue #386. Most importantly, at the
REPL command line, each invocation no longer leads to increased memory
usage (unless you define a function/lambda).
This redundant triple is one of the ugliest parts of Python, which they
chickened out of fixing in Python 3. We really should consider passing just
a single exception instance (without breaking Python-level APIs of course),
but until we do, let's follow the CPython layout.
Rationale: setting up the stack (state for locals and exceptions) is
really part of the "code", it's the prelude of the function. For
example, native code adjusts the stack pointer on entry to the function.
Native code doesn't need to know n_state for any other reason. So
putting the state size in the bytecode prelude is sensible.
It reduced ROM usage on STM by about 30 bytes :) And makes it easier to
pass information about the bytecode between functions.
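A toy illustration of the idea; the byte-level encoding below is
invented, not the actual prelude format:

```c
#include <stdint.h>

// Invented 2-byte encoding for illustration: the state size sits at the
// very start of the bytecode, so whoever enters the function can size
// its stack frame from the code itself rather than being told n_state
// separately.
static const uint8_t *read_prelude(const uint8_t *code, uint32_t *n_state) {
    *n_state = code[0] | ((uint32_t)code[1] << 8); // little-endian u16
    return code + 2; // execution starts at the first real opcode
}
```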
For this, we needed to implement the DELETE_NAME bytecode (because a var
bound in an except clause is automatically deleted at its end).
http://docs.python.org/3/reference/compound_stmts.html#except :
"When an exception has been assigned using as target, it is cleared at
the end of the except clause."
Each built-in exception is now a type, with base type BaseException.
C exceptions are created by passing a pointer to the exception type from
which to make an instance. When raising an exception from the VM, an
instance is created automatically if an exception type is raised (as
opposed to an exception instance).
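In sketch form, the auto-instantiation rule looks like the following
(all names are illustrative stand-ins, not the actual runtime API):

```c
#include <stdbool.h>

typedef void *obj_t;

// Stand-in predicate/constructor, assumed provided by the runtime.
extern bool exc_is_type(obj_t o);           // true for e.g. ValueError itself
extern obj_t exc_make_instance(obj_t type); // ValueError -> ValueError()

// When RAISE is given a bare exception type, instantiate it so the VM
// always propagates (and matches against) an exception instance.
static obj_t normalize_raised(obj_t exc) {
    return exc_is_type(exc) ? exc_make_instance(exc) : exc;
}
```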
Exception matching (RT_BINARY_OP_EXCEPTION_MATCH) is now proper.
Handling of parse error changed to match new exceptions.
mp_const_type renamed to mp_type_type for consistency.
TODO: Decide if we really need a separate bytecode for creating functions
with default arguments - we would need the same for closures, and then there
are keyword arguments too. Having all combinations is a small exponential
explosion; likely we need just 2 cases - the simplest (no defaults, no kw),
and the full one - defaults & kw.
This properly implements return from try/finally block(s).
TODO: Consider if we need to do any value stack unwinding for the
RETURN_VALUE case. Intuitively, this is a "success" return, so the value
stack should be in good shape, and unwinding shouldn't be required.
We still have the FAST_[0,1,2] byte codes, but they now just access the
fastn array (previously they had dedicated local variables). It's now
simpler, a bit faster, and uses a bit less stack space (on STM at least,
which is most important).
The only reason now to keep FAST_[0,1,2] byte codes is for compressed
byte code size.
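In sketch form (opcode values and helper invented for illustration), the
dispatch reduces to fixed-index loads from fastn:

```c
#include <stdint.h>

typedef intptr_t obj_t;

enum { OP_LOAD_FAST_0, OP_LOAD_FAST_1, OP_LOAD_FAST_2, OP_LOAD_FAST_N };

// FAST_[0,1,2] are one-byte shorthands for fastn[0..2]; the general
// form reads its index as an operand byte from the bytecode stream.
static obj_t load_fast(uint8_t op, const uint8_t **ip, const obj_t *fastn) {
    switch (op) {
        case OP_LOAD_FAST_0: return fastn[0];
        case OP_LOAD_FAST_1: return fastn[1];
        case OP_LOAD_FAST_2: return fastn[2];
        default:             return fastn[*(*ip)++];
    }
}
```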
The LOAD_METHOD bug was: emitbc did not correctly calculate the amount of
stack usage for a LOAD_METHOD operation.
The small int bug was: int was being used to pass small ints, when it
should have been machine_int_t.
Change state layout in VM so the stack starts at state[0] and grows
upwards. Locals are at the top end of the state and are numbered downwards.
This cleans up a lot of the interface connecting the VM to C: now all
functions that take an array of Micro Python objects are in order (ie no
longer in reverse).
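Schematically (the struct and helper below are invented for
illustration, not the actual VM definitions):

```c
#include <stddef.h>
#include <stdint.h>

typedef intptr_t obj_t;

typedef struct {
    size_t n_state;
    obj_t state[]; // state[0] = bottom of value stack, growing upwards
} frame_t;

// Locals sit at the top end of state and are numbered downwards:
// local 0 is the very last slot, local 1 just below it, and so on.
static obj_t *frame_local(frame_t *f, size_t i) {
    return &f->state[f->n_state - 1 - i];
}
```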
Also clean up the C API for keyword arguments (call_n and call_n_kw are
replaced with a single call method that takes keyword arguments). And
make_new now takes keyword arguments.
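The shape of the unified call, sketched with an illustrative name and an
assumed args-array convention (n_args positional values followed by the
keyword pairs):

```c
#include <stddef.h>

typedef void *obj_t;

// One entry point instead of call_n/call_n_kw: args[] holds n_args
// positional values followed by n_kw (key, value) pairs, so a plain
// positional call is just the n_kw == 0 case.
obj_t call(obj_t fun, size_t n_args, size_t n_kw, const obj_t *args);
```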
emitnative.c has not yet been changed to comply with the new order of
stack layout.