This commit fixes lookups of class members to make it so that built-in
functions that are used as methods/functions of a class work correctly.
The mp_convert_member_lookup() function is pretty much completely changed
by this commit, but for the most part it's just reorganised and the
indenting changed. The functional changes are:
- staticmethod and classmethod checks moved to later in the if-logic,
because they are less common and so should be checked after the more
common cases.
- The explicit mp_obj_is_type(member, &mp_type_type) check is removed
because it's now subsumed by other, more general tests in this function.
- MP_TYPE_FLAG_BINDS_SELF and MP_TYPE_FLAG_BUILTIN_FUN type flags added to
make the checks in this function much simpler: they now just test these
bits in type->flags (see the sketch after this list).
- An extra check is made for mp_obj_is_instance_type(type) to fix lookup of
built-in functions.
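A hedged sketch of the simplified flag test (illustrative only, not the
exact code in mp_convert_member_lookup; the bound-method wrapping shown is
an assumption):

    const mp_obj_type_t *m_type = mp_obj_get_type(member);
    if (m_type->flags & MP_TYPE_FLAG_BINDS_SELF) {
        // the member binds self when looked up on an instance, so wrap
        // (member, self) in a bound-method object
        member = mp_obj_new_bound_meth(member, self);
    }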
Fixes #1326 and #6198.
Signed-off-by: Damien George <damien@micropython.org>
This provides a more consistent C-level API to raise exceptions, ie moving
away from nlr_raise towards mp_raise_XXX. It also reduces code size by a
small amount on some ports.
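For example, a sketch of the intended style change (the exact set of
mp_raise_XXX helpers available depends on the port and version):

    // before: raise via nlr directly
    nlr_raise(mp_obj_new_exception_msg(&mp_type_ValueError, "value out of range"));

    // after: use the mp_raise_XXX convenience helpers
    mp_raise_ValueError("value out of range");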
Replace the is_running field with a tri-state variable to indicate
running/not-running/pending-exception.
Update tests to cover the various cases.
This allows cancellation in uasyncio even if the coroutine hasn't been
executed yet. Fixes #5242
The stored reference to the globals wasn't necessary, as the wrapped
function already has a reference to its globals. But it had a dual purpose
of tracking whether the function was currently running, so it is replaced
with a bool.
Where needed by an architecture, the native function prelude is placed in a
bytes object, linked
from the const_table of that function. An architecture should define
N_PRELUDE_AS_BYTES_OBJ to 1 before including py/emitnative.c to emit
correct machine code, then enable MICROPY_EMIT_NATIVE_PRELUDE_AS_BYTES_OBJ
so the runtime can correctly handle the prelude being in a bytes object.
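A sketch of how an architecture/port might opt in (the placement of these
defines is illustrative):

    // in the architecture-specific emitter wrapper
    #define N_PRELUDE_AS_BYTES_OBJ (1)
    #include "py/emitnative.c"

    // in the port's mpconfigport.h
    #define MICROPY_EMIT_NATIVE_PRELUDE_AS_BYTES_OBJ (1)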
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
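An illustrative sketch of the general technique (hypothetical helper, not
the exact scheme used by the prelude):

    #include <stddef.h>
    #include <stdint.h>

    // Pack several small unsigned values into a variable-length byte
    // sequence by interleaving their bits, one bit from each value per
    // output byte, with the top bit of each byte as a continuation flag.
    // Assumes n_vals <= 7 so the interleaved bits fit in one byte.
    static size_t pack_interleaved(const uint32_t *vals, size_t n_vals, uint8_t *out) {
        size_t len = 0;
        unsigned bit = 0;
        uint32_t more;
        do {
            uint8_t b = 0;
            more = 0;
            for (size_t i = 0; i < n_vals; ++i) {
                b |= ((vals[i] >> bit) & 1) << i; // next bit of each value
                more |= vals[i] >> bit >> 1;      // bits still to encode?
            }
            if (more) {
                b |= 0x80; // another byte follows
            }
            out[len++] = b;
            bit++;
        } while (more);
        return len;
    }

Small values terminate after one or two output bytes, which is where the
saving over six separately-encoded fields comes from.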
These macros could in principle be (inline) functions so it makes sense to
have them lower case, to match the other C API functions.
The remaining macros that are upper case are:
- MP_OBJ_TO_PTR, MP_OBJ_FROM_PTR
- MP_OBJ_NEW_SMALL_INT, MP_OBJ_SMALL_INT_VALUE
- MP_OBJ_NEW_QSTR, MP_OBJ_QSTR_VALUE
- MP_OBJ_FUN_MAKE_SIG
- MP_DECLARE_CONST_xxx
- MP_DEFINE_CONST_xxx
These must remain macros because they are used when defining const data (at
least, MP_OBJ_NEW_SMALL_INT is, so it makes sense to have
MP_OBJ_SMALL_INT_VALUE as a macro too).
For those macros that have been made lower case, compatibility macros are
provided for the old names so that users do not need to change their code
immediately.
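For example (a sketch; the exact compatibility definitions may differ):

    // MP_OBJ_NEW_SMALL_INT stays a macro because it can appear in a
    // constant initialiser, where a function call is not allowed:
    static const mp_obj_t forty_two = MP_OBJ_NEW_SMALL_INT(42);

    // compatibility macro mapping an old upper-case name onto the new
    // lower-case function
    #define MP_OBJ_IS_TYPE(o, t) mp_obj_is_type((o), (t))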
This commit adds first class support for yield and yield-from in the native
emitter, including send and throw support, and yields enclosed in exception
handlers (which requires pulling down the NLR stack before yielding, then
rebuilding it when resuming).
This has been fully tested and is working on unix x86 and x86-64, and
stm32. Also basic tests have been done with the esp8266 port. Performance
of existing native code is unchanged.
The exception object is now stored at the start of the state, instead of at
the end (at index n_state - 1). It was originally (way back in v1.0) put at
the end of the state because the VM didn't have a pointer to the start.
But now that the VM takes a mp_code_state_t pointer it does have a pointer
to the start of the state, so the exception object can be put there.
This commit saves about 30 bytes of code on all architectures, and, more
importantly, reduces C-stack usage by a couple of words (8 bytes on Thumb2
and 16 bytes on x86-64) for every (non-generator) call of a bytecode
function because fun_bc_call no longer needs to remember the n_state
variable.
This commit implements PEP479 which disallows raising StopIteration inside
a generator to signal that it should be finished. Instead, the generator
should simply return when it is complete.
See https://www.python.org/dev/peps/pep-0479/ for details.
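A minimal sketch of the resulting semantics at the C level (illustrative
only; the variable names are assumptions, not the actual code):

    // ret_kind/ret_val: result of executing the generator's bytecode
    if (ret_kind == MP_VM_RETURN_EXCEPTION
        && mp_obj_is_subclass_fast(MP_OBJ_FROM_PTR(mp_obj_get_type(ret_val)),
            MP_OBJ_FROM_PTR(&mp_type_StopIteration))) {
        // PEP 479: a StopIteration escaping the generator body becomes a
        // RuntimeError for the caller
        ret_val = mp_obj_new_exception_msg(&mp_type_RuntimeError,
            "generator raised StopIteration");
    }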
With the recent change b488a4a848, a
generating function now has the same layout in memory as a normal bytecode
function, and so can reuse the latter's attribute accessor code to
implement __name__.
For generating functions there is no need to wrap the bytecode function in
a generator wrapper instance. Instead the type of the bytecode function
can be changed to mp_type_gen_wrap. This reduces code size and saves a
block of GC heap RAM for each generator.
The code_state.old_globals variable exists to save the globals state, so it
should be used for this purpose, avoiding the need for additional local
variables on the C stack.
This implements the .pend_throw(exc) method, which sets up an exception to
be triggered on the next call to the generator's .__next__() or .send()
method. This is unlike .throw(), which immediately starts to execute the
generator to process the exception. This effectively adds Future-like
capabilities to the generator protocol (the exception will be raised in the
future).
The need for such a method arose while implementing the uasyncio wait_for()
function efficiently (its behavior is clearly Future-like, and would
normally require introducing an expensive Future wrapper around all native
coroutines, like upstream asyncio does).
py/objgenerator: pend_throw: Return previous pended value.
This effectively allows storing an additional value (not necessarily an
exception) in a coroutine while it's not being executed. uasyncio has
exactly this use case: marking a coro waiting in the I/O queue (and thus
not executed in the normal scheduling queue), for the purpose of
implementing the wait_for() function (cancellation of such a waiting coro
by a timeout).
This commit essentially reverts aa9dbb1b03
where this if-condition was added. It seems that even when that commit
was made the code was never reached by any tests, nor reachable by
analysis (see below). The same is true with the code as it currently
stands: no test triggers this if-condition, nor any uasyncio examples.
Analysing the flow of the program also shows that it's not reachable:
==START==
-> to trigger this if condition mp_execute_bytecode() must return
   MP_VM_RETURN_YIELD with *sp==MP_OBJ_STOP_ITERATION
-> mp_execute_bytecode() can only return MP_VM_RETURN_YIELD from the
   MP_BC_YIELD_VALUE bytecode, which can happen in 2 ways:
   -> 1) from a "yield <x>" in bytecode, but <x> must always be a proper
      object, never MP_OBJ_STOP_ITERATION; ==END1==
   -> 2) via yield from, via mp_resume() which must return
      MP_VM_RETURN_YIELD with ret_value==MP_OBJ_STOP_ITERATION, which
      can happen in 3 ways:
      -> 1) it delegates to mp_obj_gen_resume(); go back to ==START==
      -> 2) it returns MP_VM_RETURN_YIELD directly but with a guard that
         ret_val!=MP_OBJ_STOP_ITERATION; ==END2==
      -> 3) it returns MP_VM_RETURN_YIELD with ret_val set from
         mp_call_method_n_kw(), but mp_call_method_n_kw() must return a
         proper object, never MP_OBJ_STOP_ITERATION; ==END3==
The above shows there is no way to trigger the if-condition and it can be
removed.
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything needed to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything needed to use the general runtime support functions
Additional, specific headers (eg py/objlist.h) can be included if needed.
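For example:

    #include "py/runtime.h"   // general runtime support (pulls in obj.h, etc)
    #include "py/objlist.h"   // plus a specific object header when needed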
Taking the address of a local variable leads to increased stack usage, so
the mp_decode_uint_skip() function is added to reduce the need for taking
addresses. The changes in this patch reduce stack usage of a Python call
by 8 bytes on ARM Thumb, by 16 bytes on non-windowing Xtensa archs, and by
16 bytes on x86-64. Code size is also slightly reduced on most archs by
around 32 bytes.
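A sketch of the difference for a caller that only needs to step over an
encoded value (treat the signatures as illustrative):

    // before: decoding requires passing &ip, forcing ip onto the C stack
    mp_uint_t val = mp_decode_uint(&ip);
    (void)val; // value not actually needed here

    // after: skip over the encoded uint without taking the address of ip
    ip = mp_decode_uint_skip(ip);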
Instead of caching data that is constant (code_info, const_table and
n_state), store just a pointer to the underlying function object from which
this data can be derived.
This helps reduce stack usage for the case when the mp_code_state_t
structure is stored on the stack, as well as heap usage when it's stored
on the heap.
The downside is that the VM becomes a little more complex because it now
needs to derive the data from the underlying function object. But this
doesn't impact the performance by much (if at all) because most of the
decoding of data is done outside the main opcode loop. Measurements using
pystone show that little to no performance is lost.
This patch also fixes a nasty bug whereby the bytecode can be reclaimed by
the GC during execution. With this patch there is always a pointer to the
function object held by the VM during execution, since it's stored in the
mp_code_state_t structure.
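A hedged sketch of the idea (field names are illustrative, not necessarily
the exact layout of mp_code_state_t):

    typedef struct _mp_code_state_t {
        struct _mp_obj_fun_bc_t *fun_bc; // underlying function object;
                                         // code_info, const_table and
                                         // n_state are derived from it
        const byte *ip;                  // current bytecode instruction pointer
        mp_obj_t *sp;                    // Python value stack pointer
        // ... exception handling state ...
        mp_obj_t state[];                // locals, Python stack, exception state
    } mp_code_state_t;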
Allows iterating over the following without allocating on the heap (see the
C-level sketch below):
- tuple
- list
- string, bytes
- bytearray, array
- dict (not dict.keys, dict.values, dict.items)
- set, frozenset
Allows calling the following without heap memory:
- all, any, min, max, sum
TODO: still need to allocate stack memory in bytecode for iter_buf.
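A sketch of the C-level pattern this enables (obj_to_iterate is a
placeholder):

    mp_obj_iter_buf_t iter_buf; // lives on the caller's C stack
    mp_obj_t iterable = mp_getiter(obj_to_iterate, &iter_buf);
    mp_obj_t item;
    while ((item = mp_iternext(iterable)) != MP_OBJ_STOP_ITERATION) {
        // ... process item ...
    }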
Checks for the number of args are removed where guaranteed by the function
descriptor, and self checking is replaced with mp_check_self(). In a few
cases, an exception is raised instead of an assert.
This patch changes the type signature of the .make_new and .call object
method slots to use size_t for n_args and n_kw (previously mp_uint_t). This
makes the code more efficient when mp_uint_t is larger than a machine word,
and doesn't affect ports where size_t and mp_uint_t have the same size.
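The new slot signatures look like this (the mytype_* names are
placeholders):

    STATIC mp_obj_t mytype_make_new(const mp_obj_type_t *type,
        size_t n_args, size_t n_kw, const mp_obj_t *args);
    STATIC mp_obj_t mytype_call(mp_obj_t self_in,
        size_t n_args, size_t n_kw, const mp_obj_t *args);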
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
Unfortunately, MP_OBJ_STOP_ITERATION doesn't have a means to pass an
associated value, so we can't optimize a StopIteration exception with a
(non-None) argument down to MP_OBJ_STOP_ITERATION.