mp_make_raise_obj must be used to convert a possible exception type to an
instance object, otherwise the VM may raise a non-exception object.
An existing test is adjusted to test this case, with the original test
already moved to generator_throw.py.
This matches how bytecode does it, and matches the signature of
mp_emit_glue_assign_native. Since the native emitter doesn't support
nan-boxing, uintptr_t and mp_uint_t are anyway the same bit-width.
After the previous commit this macro is no longer needed by the native
emitter because live heap pointers are no longer stored in generated native
machine code.
This commit changes native code to handle constant objects like bytecode:
instead of storing the pointers inside the native code they are now stored
in a separate constant table (such pointers include objects like bignum,
bytes, and raw code for nested functions). This removes the need for the
GC to scan native code for root pointers, and takes a step towards making
native code independent of the runtime (eg so it can be compiled offline by
mpy-cross).
Note that the changes to the struct scope_t did not increase its size: on a
32-bit architecture it is still 48 bytes, and on a 64-bit architecture it
decreased from 80 to 72 bytes.
NaN and inf (positive and negative) are also handled correctly by using
signbit (they were already handled correctly by the "val<0" test, but that
didn't handle -0.0 correctly). A test case is added for this behaviour.
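A hedged illustration of the -0.0 case (the exact format conversion used here
is incidental):

    print("%f" % -0.0)           # -0.000000, the sign now preserved via signbit
    print("%f" % float("-inf"))  # -inf
    print("%f" % float("nan"))   # nan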
When obj.h is compiled as C++ code, the cl compiler emits a warning about
possibly unsafe mixing of size_t and bool types in the or operation in
MP_OBJ_FUN_MAKE_SIG. Similarly there's an implicit narrowing integer
conversion in runtime.h. This commit fixes this by being explicit.
This is an improvement over the previous behaviour, where str was returned
for both str and bytes input to format(). This new behaviour is also consistent
with how the % operator works, as well as many other str/bytes methods.
It should be noted that it's not how current versions of CPython work,
where there's a gap in the functionality and bytes.format() is not
supported.
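For example, under this change (bytes.format() is a MicroPython extension, as
noted above):

    "x={}".format(1)   # -> 'x=1'   (str in, str out, as before)
    b"x={}".format(1)  # -> b'x=1'  (bytes in, now bytes out)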
This commit adds the math.factorial function in two variants:
- squared difference, which is faster than the naive version, relatively
compact, and non-recursive;
- a mildly optimised recursive version, faster than the above one.
There are some more optimisations that could be done, but they tend to take
more code, and more storage space. The recursive version seems like a
sensible compromise.
The new function is disabled by default; when enabled, the non-optimised
version is used unless the optimised one is selected. The options are
MICROPY_PY_MATH_FACTORIAL (to enable the function) and
MICROPY_OPT_MATH_FACTORIAL (to select the optimised version).
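A minimal usage sketch, assuming a build with MICROPY_PY_MATH_FACTORIAL
enabled:

    import math

    print(math.factorial(0))   # 1
    print(math.factorial(5))   # 120
    print(math.factorial(10))  # 3628800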
This patch avoids multiplying with negative powers-of-10 when parsing
floating-point values, when those powers-of-10 can be exactly represented
as a positive power. When represented as a positive power and used to
divide, the resulting float will not have any rounding errors.
The issue is that mp_parse_num_decimal will sometimes not give the closest
floating-point representation of the input string. Eg for "0.3", which can't be
represented exactly in floating point, mp_parse_num_decimal gives a
slightly high (by 1LSB) result. This is because it computes the answer as
3 * 0.1, and since 0.1 also can't be represented exactly, multiplying by 3
multiplies up the rounding error in the 0.1. Computing it as 3 / 10, as
now done by the change in this commit, gives an answer which is as close to
the true value of "0.3" as possible.
This commit implements PEP 479, which disallows raising StopIteration inside
a generator to signal that it should be finished. Instead, the generator
should simply return when it is complete.
See https://www.python.org/dev/peps/pep-0479/ for details.
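For example, under PEP 479 a StopIteration raised inside a generator body is
converted to a RuntimeError instead of silently ending the iteration:

    def bad_gen():
        yield 1
        raise StopIteration  # now surfaces as a RuntimeError (per PEP 479)

    def good_gen():
        yield 1
        return               # the correct way to finish a generator

    print(list(good_gen()))  # [1]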
In 0e80f345f8 the inplace operations __iadd__
and __isub__ were made unconditionally available, so the comment about this
section is changed to reflect that.
Loading a pointer by indexing into the native function table mp_fun_table,
rather than loading an immediate value (via a PC-relative load), uses less
code space.
This commit makes viper functions have the same signature as native
functions, at the level of the emitter/assembler. This means that viper
functions can now be wrapped in the same uPy object as native functions.
Viper functions are now responsible for parsing their arguments (before it
was done by the runtime), and this makes calling them more efficient (in
most cases) because the viper entry code can be custom generated to suit
the signature of the function.
This change also opens the way forward for viper functions to take
arbitrary numbers of arguments, and for them to handle globals correctly,
among other things.
Now that the compiler can store the results of the viper types in the
scope, the viper parameter annotation compilation stage can be merged with
the normal parameter compilation stage.
With 5 arguments to mp_arg_check_num(), some architectures need to pass
values on the stack. So compressing n_args_min, n_args_max, takes_kw into
a single word and passing only 3 arguments makes the call more efficient,
because almost all calls to this function pass in constant values. Code
size is also reduced by a decent amount:
bare-arm: -116
minimal x86: -64
unix x64: -256
unix nanbox: -112
stm32: -324
cc3200: -192
esp8266: -192
esp32: -144
Prior to this commit a function compiled with the native decorator
@micropython.native would not work correctly when accessing global
variables, because the globals dict was not being set upon function entry.
This commit fixes this problem by, upon function entry, setting the current
globals dict to the globals dict of the context the function was defined
within, as per normal Python semantics, and as bytecode does. Upon function
exit the original globals dict is restored.
In order to restore the globals dict when an exception is raised the native
function must guard its internals with an nlr_push/nlr_pop pair. Because
this push/pop is relatively expensive, in both C stack usage for the
nlr_buf_t and CPU execution time, the implementation here optimises things
as much as possible. First, the compiler keeps track of whether a function
even needs to access global variables. Using this information the native
emitter then generates three different kinds of code:
1. no globals used, no exception handlers: no nlr handling code and no
setting of the globals dict.
2. globals used, no exception handlers: an nlr_buf_t is allocated on the
C stack but it is not used if the globals dict is unchanged, saving
execution time because nlr_push/nlr_pop don't need to run.
3. function has exception handlers, may use globals: an nlr_buf_t is
allocated and nlr_push/nlr_pop are always called.
In the end, native functions that don't access globals and don't have
exception handlers will run more efficiently than those that do.
Fixes issue #1573.
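A hedged sketch of the kind of code that now behaves correctly (names are
illustrative):

    import micropython

    X = 42

    @micropython.native
    def get_x():
        # Before this fix the globals dict was not switched on entry, so when
        # get_x() was called from another module this lookup of X could fail
        # or find the wrong object.
        return X

    print(get_x())  # 42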
If a bytearray is constructed from a str, a second argument giving the
encoding is required (in CPython), and a third argument specifying the
Unicode error handling is allowed, e.g.:
bytearray("str", "utf-8", "strict")
This is similar to bytes:
bytes("str", "utf-8", "strict")
This patch just allows the 2nd/3rd arguments to be passed to bytearray, but
doesn't try to validate them, so as not to impact code size. (This is also
similar to how the bytes constructor is handled, though it does a bit more
validation, e.g. it checks that, in the case of a str argument, the encoding
argument is passed.)
This removes the need for a separate axtls build stage, and builds all
axtls object files along with other code. This simplifies and cleans up
the build process, automatically builds axtls when needed, and puts the
axtls object files in the correct $(BUILD) location.
The MicroPython axtls configuration file is provided in
extmod/axtls-include/config.h
This patch adds full support for unwinding jumps to the native emitter.
This means that return/break/continue can be used in try-except,
try-finally and with statements. For code that doesn't use unwinding jumps
there is almost no overhead added to the generated code.
The native emitter keeps the current exception in a slot in its C stack
(instead of on its Python value stack), so when it catches an exception it
must explicitly clear that slot so the same exception is not reraised later
on.
Back in 8047340d75 basic support was added in
the VM to handle return statements within a finally block. But it didn't
cover all cases, in particular when some finally's were active and others
inactive when the "return" was executed.
This patch adds further support for return-within-finally by correctly
managing the currently_in_except_block flag, and should fix all cases. The
main point is that finally handlers remain on the exception stack even if
they are active (currently being executed), and the unwind return code
should only execute those finally's which are inactive.
New tests are added for the cases which now pass.
Prior to this patch, native code would use a full nlr_buf_t for each
exception handler (try-except, try-finally, with). For nested exception
handlers this would use a lot of C stack and be rather inefficient.
This patch changes how exceptions are handled in native code by setting up
only a single nlr_buf_t context for the entire function, and then manages a
state machine (using the PC) to work out which exception handler to run
when an exception is raised by an nlr_jump. This keeps the C stack usage
at a constant level regardless of the depth of Python exception blocks.
The patch also fixes an existing bug whereby, when local variables were
written to within an exception handler, their value was incorrectly restored
if an exception was raised (since the nlr_jump would restore register values
back to the point of the nlr_push).
And it also gets nested try-finally+with working with the viper emitter.
Broadly speaking, efficiency of executing native code that doesn't use
any exception blocks is unchanged, and emitted code size is only slightly
increased for such functions. C stack usage of all native functions is
either equal to or less than before. Emitted code size for native functions
that use exception blocks is increased by roughly 10% (due in part to
fixing the above-mentioned bugs).
But, most importantly, this patch allows more Python features to be
implemented in native code, like unwind jumps and yielding from within nested
exception blocks.
These POSIX wrappers are assumed to be passed a concrete stream object so
it is more efficient (eg on nan-boxing builds) to pass in the pointer
rather than mp_obj_t, because then the users of these functions only need
to store a void* (and mp_obj_t may be wider than a pointer). And things
would be further improved if the stream protocol functions eventually took
a pointer as their first argument (instead of an mp_obj_t).
This patch is a step to getting ussl/axtls compiling on nan-boxing builds.
See issue #3085.
Otherwise there is the possibility that n_free starts out non-zero from the
previous iteration, which may have found a few (but not enough) free blocks
at the end of the heap. If this is the case, and if the very first blocks
that are scanned the second time around (starting at
gc_last_free_atb_index) are found to give enough memory (including the
blocks at the end of the heap from the previous iteration that left n_free
non-zero) then memory will be allocated starting before the location that
gc_last_free_atb_index points to, most likely leading to corruption.
This serious bug did not manifest itself in the past because a gc_collect
always resets gc_last_free_atb_index to point to the start of the GC heap,
and the first block there is almost always allocated to a long-lived
object (eg entries from sys.path, or mounted filesystem objects), which
means that n_free would be reset at the start of the search loop.
But with threading enabled with the GIL disabled it is possible to trigger
the bug via the following sequence of events:
1. Thread A runs gc_alloc, fails to find enough memory, and has a non-zero
n_free at the end of the search.
2. Thread A calls gc_collect and frees a bunch of blocks on the GC heap.
3. Just after gc_collect finishes in thread A, thread B takes gc_mutex and
does an allocation, moving gc_last_free_atb_index to point to the
interior of the heap, to a place where there is most likely a run of
available blocks.
4. Thread A regains gc_mutex and does its second search for free memory,
starting with a non-zero n_free. Since it's likely that the first block
it searches is available it will allocate memory which overlaps with the
memory before gc_last_free_atb_index.
Without this patch, on 64-bit architectures the "1 << (small_int_bits - 1)"
is computed using only 32-bit values (since small_int_bits is a uint8_t)
and so will overflow (and give the wrong result) if small_int_bits is
larger than 32.
There is no need to have three copies of the exception object on the top of
the native value stack. Instead, the values on the stack should be the
first two items in an nlr_buf_t: the prev pointer and the ret_val pointer.
This is all that is needed and is what the rest of the native emitter
expects to be on the stack.
This patch is essentially an optimisation. Behaviour is unchanged,
although the stack layout for native exception handling now makes more
sense.
A native function allocates space on its C stack for mp_code_state_t,
followed by its Python stack, then its locals. This patch makes sure that
the native function's Python stack actually starts at the start of the
allocated Python stack area, rather than at the start of mp_code_state_t
(which didn't lead to any issues so far because the mp_code_state_t is unused
after the native function sets itself up).
On x86 archs (both 32 and 64 bit) a bool return value only sets the 8-bit
al register, and the higher bits of the full return register (eax/rax) have
an undefined value. When testing the return value in such cases it is
required to just
test al for zero/non-zero. On the other hand, checking for truth or
zero/non-zero on an integer return value requires checking all bits of the
register. These two cases must be distinguished and handled correctly in
generated native code. This patch makes sure of this.
For other supported native archs (ARM, Thumb2, Xtensa) there is no such
distinction and this patch does not change anything for them.
DEBUG_printf and MICROPY_DEBUG_PRINTER are now used instead of normal
printf, and a fault is fixed in mp_obj_class_lookup with debugging enabled;
see issue #3999. Debugging can now be enabled on all ports including when
nan-boxing is used.
This patch in effect renames MICROPY_DEBUG_PRINTER_DEST to
MICROPY_DEBUG_PRINTER, moving its default definition from
lib/utils/printf.c to py/mpconfig.h to make it official and documented, and
makes this macro a pointer rather than the actual mp_print_t struct. This
is done to get consistency with MICROPY_ERROR_PRINTER, and provide this
macro for use outside just lib/utils/printf.c.
Ports are updated to use the new macro name.
This patch makes the Thumb-2 native emitter use wide ldr instructions to
call into the runtime, when the index into the native glue function table
is 32 or greater. This reduces the generated assembler code from 10 bytes
to 6 bytes, saving RAM and making native code run about 0.8% faster.
This error message did not consume all of its variable args, a bug
introduced long ago in baf6f14deb. By fixing
it to use %s (instead of keeping the string as-is and deleting the last
arg) the same error message string is now reused three times in this format
function and gives a code size reduction of around 130 bytes. It also now
gives a better error message when a non-string is passed in as an argument
to format, eg '{:d}'.format([]).
There's no need to call mp_obj_new_int() which will just fail the check for
small int and call mp_obj_new_int_from_ll() anyway.
Thanks to @Jongy for prompting this change.
In non-debug mode MP_OBJ_STOP_ITERATION is zero and comparing something to
zero can be done more efficiently in assembler than comparing to a non-zero
value.
With the recent change b488a4a848, a
generator function now has the same layout in memory as a normal bytecode
function, and so can reuse the latter's attribute accessor code to
implement __name__.
Because this function is simple it saves code size to have it inlined.
Being an auxiliary helper function (and only used in the py/ core) the
argument should always be an mp_obj_module_t*, so there's no need for the
assert (and having it would require including assert.h in obj.h).
It's a very simple function and saves code, and improves efficiency, by
being inline. Note that this is an auxiliary helper function and so
doesn't need mp_check_self -- that's used for functions that can be
accessed directly from Python code (eg from a method table).
mp_obj_module_get_globals() returns an mp_obj_dict_t*, and type->locals_dict
is an mp_obj_dict_t*, so access the map entry of the dict directly instead
of needing to cast this mp_obj_dict_t* up to an object and then calling the
mp_obj_dict_get_map() helper function.
For generator functions there is no need to wrap the bytecode function in
a generator wrapper instance. Instead the type of the bytecode function
can be changed to mp_type_gen_wrap. This reduces code size and saves a
block of GC heap RAM for each generator.
This feature is controlled at compile time by MICROPY_PY_URE_SUB, disabled
by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by
MICROPY_PY_URE_MATCH_SPAN_START_END, disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by MICROPY_PY_URE_MATCH_GROUPS,
disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
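A combined usage sketch of the three features above (assuming the relevant
config options are enabled):

    import ure

    print(ure.sub("a+", "-", "caaat"))       # c-t

    m = ure.match(r"(\d+)-(\d+)", "12-34")
    print(m.groups())                        # ('12', '34')
    print(m.span(), m.span(1), m.span(2))    # (0, 5) (0, 2) (3, 5)
    print(m.start(2), m.end(2))              # 3 5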
Before this patch the context manager's __aexit__() method would not be
executed if a return/break/continue statement was used to exit an async
with block. async with now has the same semantics as normal with.
The fix here applies purely to the compiler, and does not modify the
runtime at all. It might (eventually) be better to define new bytecode(s)
to handle async with (and maybe other async constructs) in a cleaner, more
efficient way.
One minor drawback with addressing this issue purely in the compiler is
that it wasn't possible to get 100% CPython semantics. The thing that is
different here to CPython is that the __aexit__ method is not looked up in
the context manager until it is needed, which is after the body of the
async with statement has executed. So if a context manager doesn't have
__aexit__ then CPython raises an exception before the async with is
executed, whereas uPy will raise it after it is executed. Note that
__aenter__ is looked up at the beginning in uPy because it needs to be
called straightaway, so if the object isn't a context manager at all then
it'll still raise an exception at the same location as CPython. The only
difference is if the context manager has the __aenter__ method but not the
__aexit__ method, then in that case uPy has different behaviour. But this
is a very minor, and acceptable, difference.
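A sketch of the behaviour being fixed (the scheduler needed to actually run
the coroutine is omitted; any way of driving it to completion will do):

    class ACM:
        async def __aenter__(self):
            print("enter")
            return self

        async def __aexit__(self, exc_type, exc, tb):
            print("exit")

    async def f():
        async with ACM():
            return 1  # before this patch, "exit" was never printed on this path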
Allow including crypto consts based on compilation settings. They are
disabled by default to reduce code size; if one wants extra code readability
they can be enabled.
The API follows the guidelines of https://www.python.org/dev/peps/pep-0272/,
but is optimized for code size, with the idea that full PEP 0272
compatibility can be added with a simple Python wrapper module.
The naming of the module follows (u)hashlib pattern.
At the bare minimum, this module is expected to provide:
* AES128, ECB (i.e. "null") mode, encrypt only
The implementation in this commit is based on axTLS routines, and implements
the following:
* AES 128 and 256
* ECB and CBC modes
* encrypt and decrypt
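A minimal usage sketch, assuming the module is exposed as ucryptolib; the
numeric mode value is an assumption standing in for the MODE_ECB constant
described above, which is disabled by default:

    import ucryptolib

    key = b"0123456789abcdef"              # 16 bytes -> AES-128
    enc = ucryptolib.aes(key, 1)           # assumed: 1 selects ECB mode
    ct = enc.encrypt(b"sixteen byte msg")  # input must be a multiple of 16 bytes
    dec = ucryptolib.aes(key, 1)           # a fresh object is used to decrypt
    print(dec.decrypt(ct))                 # b'sixteen byte msg'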
The existing mp_get_stream_raise() helper does explicit checks that the
input object is a real pointer object, has a non-NULL stream protocol, and
has the desired stream C method (read/write/ioctl). In most cases it is
not necessary to do these checks because it is guaranteed that the input
object has the stream protocol and desired C methods. For example, native
objects that use the stream wrappers (eg mp_stream_readinto_obj) in their
locals dict always have the stream protocol (or else they shouldn't have
these wrappers in their locals dict).
This patch introduces an efficient mp_get_stream() which doesn't do any
checks and just extracts the stream protocol struct. This should be used
in all cases where the argument object is known to be a stream. The
existing mp_get_stream_raise() should be used primarily to verify that an
object does have the correct stream protocol methods.
All uses of mp_get_stream_raise() in py/stream.c have been converted to use
mp_get_stream() because the argument is guaranteed to be a proper stream
object.
This patch improves efficiency of stream operations and reduces code size.
This patch changes dupterm to call the native C stream methods on the
connected stream objects, instead of calling the Python readinto/write
methods. This is much more efficient for native stream objects like UART
and webrepl and doesn't require allocating a special dupterm array.
This change is a minor breaking change from the user's perspective because
dupterm no longer accepts pure user stream objects to duplicate on. But
with the recent addition of uio.IOBase it is possible to still create such
classes just by inheriting from uio.IOBase, for example:
    import uio, uos

    class MyStream(uio.IOBase):
        def write(self, buf):
            # existing write implementation
        def readinto(self, buf):
            # existing readinto implementation

    uos.dupterm(MyStream())
Via the config value MICROPY_PY_UHASHLIB_SHA256. It defaults to enabled, to
keep backwards compatibility.
Also add a default value for the sha1 class, to at least document its
existence.
A user class derived from IOBase and implementing readinto/write/ioctl can
now be used anywhere a native stream object is accepted.
The mapping from C to Python is:
stream_p->read --> readinto(buf)
stream_p->write --> write(buf)
stream_p->ioctl --> ioctl(request, arg)
Among other things it allows the user to:
- create an object which can be passed as the file argument to print:
print(..., file=myobj), and then print will pass all the data to the
object via the objects write method (same as CPython)
- pass a user object to uio.BufferedWriter to buffer the writes (same as
CPython)
- use select.select on a user object
- register user objects with select.poll, in particular so user objects can
be used with uasyncio
- create user files that can be returned from user filesystems, and import
can import scripts from these user files
For example:
    class MyOut(io.IOBase):
        def write(self, buf):
            print('write', repr(buf))
            return len(buf)

    print('hello', file=MyOut())
The feature is enabled via MICROPY_PY_IO_IOBASE which is disabled by
default.
This patch adds the gc_sweep_all() function which does a garbage collection
without tracing any root pointers, so frees all the memory, and most
importantly runs any remaining finalisers.
This helps primarily for soft reset: it will close any open files, any open
sockets, and help to get the system back to a clean state upon soft reset.
This patch is a code optimisation, trading text bytes for speed. On
pyboard it's an increase of 0.06% in code size for a gain (in pystone
performance) of roughly 6.5%.
The patch optimises load/store/delete of attributes in user defined classes
by not looking up special accessors (@property, __get__, __delete__,
__set__, __setattr__ and __getattr__) if they are guaranteed not to exist in
the class.
Currently, if you do my_obj.foo() then the runtime has to do a few checks
to see if foo is a property or has __get__, and if so delegate the call.
And for stores, things like my_obj.foo = 1 have to first check if foo is a
property or has __set__ defined on it.
Doing all those checks each and every time the attribute is accessed has a
performance penalty. This patch eliminates all those checks for cases when
it's guaranteed that the checks will always fail, ie no attributes are
properties nor have any special accessor methods defined on them.
To make this guarantee it checks all attributes of a user-defined class
when it is first created. If any of the attributes of the user class are
properties or have special accessors, or any of the base classes of the
user class have them, then it sets a flag in the class to indicate that
special accessors must be checked for. Then in the load/store/delete code
it checks this flag to see if it can take the shortcut and optimise the
lookup.
It's an optimisation that's pretty widely applicable because it improves
lookup performance for all methods of user defined classes, and stores of
attributes, at least for those that don't have special accessors. And it
allows descriptors to be enabled with minimal additional runtime overhead if
they are not used for a particular user class.
There is one restriction on dynamic class creation that has been introduced
by this patch: a user-defined class cannot go from zero special accessors
to one special accessor (or more) after that class has been subclassed. If
the script attempts this an AttributeError is raised (see addition to
tests/misc/non_compliant.py for an example of this case).
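A hedged sketch of the restriction (class names are illustrative):

    class A:
        pass

    class B(A):  # A is subclassed while it has no special accessors
        pass

    # Adding the first special accessor to A after it has been subclassed is
    # not supported and now raises AttributeError:
    A.foo = property(lambda self: 1)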
The cost in code space bytes for the optimisation in this patch is:
unix x64: +528
unix nanbox: +508
stm32: +192
cc3200: +200
esp8266: +332
esp32: +244
Performance tests that were done:
- on unix x86-64, pystone improved by about 5%
- on pyboard, pystone improved by about 6.5%, from 1683 up to 1794
- on pyboard, bm_chaos (from CPython benchmark suite) improved by about 5%
- on esp32, pystone improved by about 30% (but there are caching effects)
- on esp32, bm_chaos improved by about 11%
This VFS component allows a host POSIX filesystem to be mounted within the
uPy VFS sub-system. All traditional POSIX file access then goes through the
VFS, allowing a uPy process to be sandboxed to a certain sub-dir of the host
system, as well as allowing other filesystem types to be mounted alongside
the host filesystem.
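A usage sketch, assuming the component is exposed as uos.VfsPosix (the host
path is illustrative):

    import uos

    # Confine all uPy file access to a sub-directory of the host filesystem
    # by mounting it at the root of the uPy VFS.
    uos.mount(uos.VfsPosix("/home/user/sandbox"), "/")
    print(uos.listdir("/"))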
For a long time now mp_obj_type_t has not referred explicitly to
mp_stream_p_t but rather to an abstract "const void *protocol". So there's
no longer any need to define mp_stream_p_t in obj.h and it can go with all
its associated definitions in stream.h. Pretty much all users of this type
will already include the stream header.
The code_state.old_globals variable is there to save the globals state so
should be used for this purpose, to avoid the need for additional local
variables on the C stack.
Without this, if GC threshold is hit and there is not enough memory left to
satisfy the request, gc_collect() will run a second time and the search for
memory will happen again and will fail again.
Thanks to @adritium for pointing out this issue, see #3786.
Under ubsan, when evaluating hash(-0.) the following diagnostic occurs:
../../py/objfloat.c:102:15: runtime error: negation of
-9223372036854775808 cannot be represented in type 'mp_int_t' (aka
'long'); cast to an unsigned type to negate this value to itself
So do just that, to tell the compiler that we want to perform this
operation using modulo arithmetic rules.
Before this, ubsan would detect a problem when executing
hash(006699999999999999999999999999999999999999999999999999999999999999999999)
../../py/mpz.c:1539:20: runtime error: left shift of 1067371580458 by
32 places cannot be represented in type 'mp_int_t' (aka 'long')
When the overflow does occur it now happens as defined by the rules of
unsigned arithmetic.
When computing e.g. hash(0.4e3) with ubsan enabled, a diagnostic like the
following would occur:
../../py/objfloat.c:91:30: runtime error: shift exponent 44 is too
large for 32-bit type 'int'
By casting constant "1" to the right type the intended value is preserved.
Fuzz testing combined with the undefined behavior sanitizer found that
parsing unreasonable float literals like 1e+9999999999999 resulted in
undefined behavior due to overflow in signed integer arithmetic, and a
wrong result being returned.
There is no need to use the mp_int_t type which may be 64-bits wide, there
is enough bit-width in a normal int to parse reasonable exponents. Using
int helps to reduce code size for 64-bit ports, especially nan-boxing
builds. (Similarly for the "dig" variable which is now an unsigned int.)
Calling memset(NULL, value, 0) is not standards compliant so we must add an
explicit check that emit->label_offsets is indeed not NULL before calling
memset (this pointer will be NULL on the first pass of the parse tree and
it's more logical / safer to check this pointer rather than check that the
pass is not the first one).
Code sanitizers will warn if NULL is passed as the first value to memset,
and compilers may optimise the code based on the knowledge that any pointer
passed to memset is guaranteed not to be NULL.
Before this patch:
>>> print(')
... ')
Traceback (most recent call last):
File "<stdin>", line 1
SyntaxError: invalid syntax
After this patch:
>>> print(')
Traceback (most recent call last):
File "<stdin>", line 1
SyntaxError: invalid syntax
This matches CPython and prevents getting stuck in REPL continuation when a
single-quoted string is left unterminated.
Before this patch, when using the switch statement for dispatch in the VM
(not computed goto) a pending exception check was done after each opcode.
This is not necessary and this patch makes the pending exception check only
happen when explicitly requested by certain opcodes, like jump. This
improves performance of the VM by about 2.5% when using the switch.
This patch fixes the macro so you can pass any name in, and the macro will
make more sense if you're reading it on its own. It worked previously
because n_state is always passed in as n_state_out_var.
gcc 8.0 supports the naked attribute for x86 systems so it can now be used
here. And in fact it is necessary to use this for nlr_push because gcc 8.0
no longer generates a prelude for this function (even without the naked
attribute).
This patch moves the start of the root pointer section in mp_state_ctx_t
so that it skips entries that are not pointers and don't need scanning.
Previously, the start of the root pointer section was at the very beginning
of the mp_state_ctx_t struct (which is the beginning of mp_state_thread_t).
This was because the original assembler version of the NLR code was
hard-coded to have the nlr_top pointer at the start of this state structure.
But now that the NLR code is partially written in C there is no longer this
restriction on the location of nlr_top (and a comment to this effect has been
removed in this patch).
So now the root pointer section starts part way through the
mp_state_thread_t structure, after the entries which are not root pointers.
This patch also moves the non-pointer entries for MICROPY_ENABLE_SCHEDULER
outside the root pointer section.
Moving non-pointer entries out of the root pointer section helps to make
the GC more precise and should help to prevent some cases of collectable
garbage being kept.
This patch also has a measurable improvement in performance of the
pystone.py benchmark: on unix x86-64 and stm32 there was an improvement of
roughly 0.6% (tested with both gcc 7.3 and gcc 8.1).
This patch changes 2 things in the endianness detection:
1. Don't assume that __BYTE_ORDER__ not being __ORDER_LITTLE_ENDIAN__ means
that the machine is big endian, so add an explicit check that this macro
is indeed __ORDER_BIG_ENDIAN__ (same with __BYTE_ORDER, __LITTLE_ENDIAN
and __BIG_ENDIAN). A machine could have PDP endianness.
2. Remove the checks which base their autodetection decision on whether any
little or big endian macros are defined (eg __LITTLE_ENDIAN__ or
__BIG_ENDIAN__). Just because a system defines these does not mean it
has that endianness.
See issue #3760.
For cases where size_t is smaller than mp_int_t (eg nan-boxing builds) the
difference between two size_t's is not sign extended into mp_int_t and so
the result is never negative. This patch fixes this bug by using ssize_t
for the type of the result.
This gives dir() better behaviour when listing the attributes of a user
type that defines __getattr__: it will now not list those attributes for
which __getattr__ raises AttributeError (meaning the attribute is not
supported by the object).