There was an assumption that all names in a module dict are qstrs.
However, they can be dynamically generated (by assigning to globals()),
and in the case of a long name, it won't be a qstr. Handle this situation
properly, including taking care not to create superfluous qstrs for
names starting with "_" (which aren't imported by "import *").
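As a hedged illustration (the names here are hypothetical):

    # A module-dict name created at runtime; if long enough it won't be an
    # interned qstr.
    name = "dynamically_generated_" + "x" * 40
    globals()[name] = 1
    # Names starting with "_" are skipped by "import *", so no qstr needs
    # to be created for them during such an import.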
Taking the address of a local variable is mildly expensive, in code size
and stack usage. So optimise scope_find_or_add_id() to not need to take a
pointer to the "added" variable, and instead take the kind to use for newly
added identifiers.
This ensures that implicit variables are only converted to implicit
closed-over variables (nonlocals) at the very end of the function scope.
If variables are closed over when first used (read from, as was done prior
to this commit) then this can be incorrect, because the variable may be
assigned to later on in the function, which means it is just a plain
local, not closed over.
Fixes issue #4272.
Building axtls gives a lot of warnings with -Wall enabled, and explicitly
disabling all of them cannot be done in a way compatible with gcc and
clang, and likely other compilers. So just use -Wno-all to prevent all of
the extra warnings (in addition to the necessary -Wno-unused-parameter,
-Wno-uninitialized, -Wno-sign-compare and -Wno-old-style-definition).
Fixes issue #4182.
Configurable via MICROPY_MODULE_GETATTR, disabled by default. Among other
things, __getattr__ for modules can help to implement lazy loading / code
unloading at runtime.
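A hedged sketch of lazy loading with a module-level __getattr__ (module
and attribute names hypothetical):

    # in mymodule.py, with MICROPY_MODULE_GETATTR enabled
    def __getattr__(name):
        if name == "heavy":
            import heavy_impl  # loaded only on first attribute access
            return heavy_impl
        raise AttributeError(name)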
Configurable via MICROPY_PY_BUILTINS_STR_COUNT. Default is enabled.
Disabled for bare-arm, minimal, unix-minimal and zephyr ports. Disabling
it saves 408 bytes on x86.
So these constant objects can be loaded by dereferencing the REG_FUN_TABLE
pointer instead of loading immediate values. This reduces the size of
generated native code (when such constants are used), and means that
pointers to these constants are no longer stored in the assembly code.
The maximum index into mp_fun_table is currently less than 1024 and should
stay that way to keep things efficient for all architectures, so there is
no need to handle loading the pointer directly via a literal in this
function.
All architectures now have a dedicated register to hold the pointer to the
native function table mp_fun_table, and so they all need to load this
register at the start of the native function. This commit makes the
loading of this register uniform across architectures by passing the
pointer in the constant table for the native function, and then loading the
register from the constant table. Doing it this way means that the pointer
is not stored in the assembly code, helping to make the code more portable.
Instead of storing the function pointer directly in the assembly code.
This makes the generated code more independent of the runtime (so easier to
relocate the code), and reduces the generated code size.
The esp register is always a fixed distance below ebp, and using esp to
reference locals on the stack frees up the ebp register for general purpose
use (which is important for an architecture with only 8 user registers).
Instead of storing the function pointer directly in the assembly code.
This makes the generated code more independent of the runtime (so easier to
relocate the code), and reduces the generated code size.
The rsp register is always a fixed distance below rbp, and using rsp to
reference locals on the stack frees up the rbp register for general purpose
use.
This commit adds first class support for yield and yield-from in the native
emitter, including send and throw support, and yields enclosed in exception
handlers (which requires pulling down the NLR stack before yielding, then
rebuilding it when resuming).
This has been fully tested and is working on unix x86 and x86-64, and
stm32. Also basic tests have been done with the esp8266 port. Performance
of existing native code is unchanged.
The nlr_buf_t doesn't need to be part of the Python value stack (as it was
before this commit), it's simpler to have it separated as auxiliary state
that lives on the C stack. This will help adding yield support because in
that case the nlr_buf_t and Python value stack live in separate memory
areas (C stack and heap respectively).
Instead of at the end of the state, at index n_state - 1. It was
originally (way back in
v1.0) put at the end of the state because the VM didn't have a pointer to
the start. But now that the VM takes a mp_code_state_t pointer it does
have a pointer to the start of the state so can put the exception object
there.
This commit saves about 30 bytes of code on all architectures, and, more
importantly, reduces C-stack usage by a couple of words (8 bytes on Thumb2
and 16 bytes on x86-64) for every (non-generator) call of a bytecode
function because fun_bc_call no longer needs to remember the n_state
variable.
This makes these special methods have the same calling behaviour as other
methods in a class instance (mp_convert_member_lookup() is already called
by mp_obj_class_lookup()).
And remove related comment about needing such protection when calling send.
Reasoning for removal is as follows:
- mp_resume is only called by the VM in YIELD_FROM opcode
- if send_value != MP_OBJ_NULL then throw_value == MP_OBJ_NULL
- so if __next__ or send are called then throw_value == MP_OBJ_NULL
- if __next__ or send raise an exception without nlr protection then the
exception will be handled by the global exception handler of the VM
- this handler already has code to handle exceptions raised in YIELD_FROM,
including correct handling of StopIteration
- this handler doesn't handle the case of injection of GeneratorExit, but
this won't be needed because throw_value == MP_OBJ_NULL
Note that it's already possible for mp_resume() to raise an exception
(including StopIteration) from the unprotected call to type->iternext(), so
that's why the VM already has code to handle the case of exceptions coming
out of mp_resume().
This commit reduces code size by a bit, and significantly reduces C stack
usage when using yield-from, from 88 bytes down to 40 for Thumb2, and 152
down to 72 bytes for x86-64 (better than half). (Note that gcc doesn't
seem to tail-call optimise the call from mp_resume() to mp_obj_gen_resume()
so this saving in C stack usage helps all uses of yield-from.)
mp_make_raise_obj must be used to convert a possible exception type to an
instance object, otherwise the VM may raise a non-exception object.
An existing test is adjusted to test this case, with the original test
already moved to generator_throw.py.
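A minimal illustration of the case being fixed:

    def gen():
        yield

    g = gen()
    next(g)
    g.throw(ValueError)  # a type, not an instance: it must be converted
                         # to an instance before the VM raises it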
This matches how bytecode does it, and matches the signature of
mp_emit_glue_assign_native. Since the native emitter doesn't support
nan-boxing, uintptr_t and mp_uint_t are the same bit-width anyway.
After the previous commit this macro is no longer needed by the native
emitter because live heap pointers are no longer stored in generated native
machine code.
This commit changes native code to handle constant objects like bytecode:
instead of storing the pointers inside the native code they are now stored
in a separate constant table (such pointers include objects like bignum,
bytes, and raw code for nested functions). This removes the need for the
GC to scan native code for root pointers, and takes a step towards making
native code independent of the runtime (eg so it can be compiled offline by
mpy-cross).
Note that the changes to the struct scope_t did not increase its size: on a
32-bit architecture it is still 48 bytes, and on a 64-bit architecture it
decreased from 80 to 72 bytes.
NaN and inf (positive and negative) are also handled correctly by using
signbit (they were also handled correctly with "val<0", but that didn't
handle -0.0 correctly). A test case is added for this behaviour.
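For example:

    print('%f' % -0.0)           # now prints -0.000000; "val<0" missed the sign
    print('%f' % float('-inf'))  # -inf, via the same signbit check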
When obj.h is compiled as C++ code, the cl compiler emits a warning about
possibly unsafe mixing of size_t and bool types in the or operation in
MP_OBJ_FUN_MAKE_SIG. Similarly there's an implicit narrowing integer
conversion in runtime.h. This commit fixes this by being explicit.
This is an improvement over the previous behaviour, where str was returned
for both str and bytes input. This new behaviour is also consistent
with how the % operator works, as well as many other str/bytes methods.
It should be noted that it's not how current versions of CPython work,
where there's a gap in the functionality and bytes.format() is not
supported.
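A hedged example of the new behaviour (bytes.format() is not available in
current CPython):

    print('x={}'.format(1))   # 'x=1'  (str in, str out)
    print(b'x={}'.format(1))  # b'x=1' (bytes in, bytes out)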
This commit adds the math.factorial function in two variants:
- squared difference, which is faster than the naive version, relatively
compact, and non-recursive;
- a mildly optimised recursive version, faster than the above one.
There are some more optimisations that could be done, but they tend to take
more code, and more storage space. The recursive version seems like a
sensible compromise.
The new function is disabled by default, and uses the non-optimised version
by default if it is enabled. The options are MICROPY_PY_MATH_FACTORIAL
and MICROPY_OPT_MATH_FACTORIAL.
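Usage, assuming MICROPY_PY_MATH_FACTORIAL is enabled in the build:

    import math
    print(math.factorial(10))  # 3628800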
This patch avoids multiplying by negative powers-of-10 when parsing
floating-point values, when those powers-of-10 can be exactly represented
as a positive power. When represented as a positive power and used to
divide, the resulting float will not have any rounding errors.
The issue is that mp_parse_num_decimal will sometimes not give the closest
floating-point representation of the input string. Eg for "0.3", which
can't be
represented exactly in floating point, mp_parse_num_decimal gives a
slightly high (by 1LSB) result. This is because it computes the answer as
3 * 0.1, and since 0.1 also can't be represented exactly, multiplying by 3
multiplies up the rounding error in the 0.1. Computing it as 3 / 10, as
now done by the change in this commit, gives an answer which is as close to
the true value of "0.3" as possible.
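The difference can be demonstrated with ordinary double-precision
arithmetic:

    print(3 * 0.1 == 0.3)  # False: the rounding error in 0.1 is tripled
    print(3 / 10 == 0.3)   # True: division gives the closest float to 0.3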
This commit implements PEP479 which disallows raising StopIteration inside
a generator to signal that it should be finished. Instead, the generator
should simply return when it is complete.
See https://www.python.org/dev/peps/pep-0479/ for details.
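A minimal example of the change in behaviour:

    def gen():
        yield 1
        raise StopIteration  # with PEP 479 this becomes a RuntimeError

    g = gen()
    print(next(g))  # 1
    next(g)         # RuntimeError: generator raised StopIteration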
In 0e80f345f8 the inplace operations __iadd__
and __isub__ were made unconditionally available, so the comment about this
section is changed to reflect that.
Loading a pointer by indexing into the native function table mp_fun_table,
rather than loading an immediate value (via a PC-relative load), uses less
code space.
This commit makes viper functions have the same signature as native
functions, at the level of the emitter/assembler. This means that viper
functions can now be wrapped in the same uPy object as native functions.
Viper functions are now responsible for parsing their arguments
(previously it was done by the runtime), and this makes calling them more
efficient (in
most cases) because the viper entry code can be custom generated to suit
the signature of the function.
This change also opens the way forward for viper functions to take
arbitrary numbers of arguments, and for them to handle globals correctly,
among other things.
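A hedged sketch of a viper function with a typed signature:

    import micropython

    @micropython.viper
    def add(x: int, y: int) -> int:
        # argument-parsing entry code is now generated for this signature
        return x + y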
Now that the compiler can store the results of the viper types in the
scope, the viper parameter annotation compilation stage can be merged with
the normal parameter compilation stage.
With 5 arguments to mp_arg_check_num(), some architectures need to pass
values on the stack. So compressing n_args_min, n_args_max, takes_kw into
a single word and passing only 3 arguments makes the call more efficient,
because almost all calls to this function pass in constant values. Code
size is also reduced by a decent amount:
bare-arm: -116
minimal x86: -64
unix x64: -256
unix nanbox: -112
stm32: -324
cc3200: -192
esp8266: -192
esp32: -144
Prior to this commit a function compiled with the native decorator
@micropython.native would not work correctly when accessing global
variables, because the globals dict was not being set upon function entry.
This commit fixes this problem by, upon function entry, setting as the
current globals dict the globals dict context the function was defined
within, as per normal Python semantics, and as bytecode does. Upon
function exit the original globals dict is restored.
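A minimal sketch of the now-correct behaviour:

    import micropython

    X = 42

    @micropython.native
    def f():
        return X  # resolved in the defining module's globals, as bytecode does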
In order to restore the globals dict when an exception is raised the native
function must guard its internals with an nlr_push/nlr_pop pair. Because
this push/pop is relatively expensive, in both C stack usage for the
nlr_buf_t and CPU execution time, the implementation here optimises things
as much as possible. First, the compiler keeps track of whether a function
even needs to access global variables. Using this information the native
emitter then generates three different kinds of code:
1. no globals used, no exception handlers: no nlr handling code and no
setting of the globals dict.
2. globals used, no exception handlers: an nlr_buf_t is allocated on the
C stack but it is not used if the globals dict is unchanged, saving
execution time because nlr_push/nlr_pop don't need to run.
3. function has exception handlers, may use globals: an nlr_buf_t is
allocated and nlr_push/nlr_pop are always called.
In the end, native functions that don't access globals and don't have
exception handlers will run more efficiently than those that do.
Fixes issue #1573.
If bytearray is constructed from a str, a second argument giving the
encoding is required (in CPython), and a third argument for Unicode error
handling is allowed, e.g.:

    bytearray("str", "utf-8", "strict")

This is similar to bytes:

    bytes("str", "utf-8", "strict")
This patch just allows passing the 2nd/3rd arguments to bytearray, but
doesn't try to validate them, so as not to impact code size. (This is
also similar to how the bytes constructor is handled, though it does a bit
more validation, eg checking that in the case of a str arg an encoding
argument is passed.)
This removes the need for a separate axtls build stage, and builds all
axtls object files along with other code. This simplifies and cleans up
the build process, automatically builds axtls when needed, and puts the
axtls object files in the correct $(BUILD) location.
The MicroPython axtls configuration file is provided in
extmod/axtls-include/config.h
This patch adds full support for unwinding jumps to the native emitter.
This means that return/break/continue can be used in try-except,
try-finally and with statements. For code that doesn't use unwinding jumps
there is almost no overhead added to the generated code.
The native emitter keeps the current exception in a slot in its C stack
(instead of on its Python value stack), so when it catches an exception it
must explicitly clear that slot so the same exception is not reraised later
on.
Back in 8047340d75 basic support was added in
the VM to handle return statements within a finally block. But it didn't
cover all cases, in particular when some finally's were active and others
inactive when the "return" was executed.
This patch adds further support for return-within-finally by correctly
managing the currently_in_except_block flag, and should fix all cases. The
main point is that finally handlers remain on the exception stack even if
they are active (currently being executed), and the unwind return code
should only execute those finally's which are inactive.
New tests are added for the cases which now pass.
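A hedged illustration of the tricky case, where one finally is active
(currently being executed) and another is inactive when the return runs:

    def f():
        try:               # outer finally: inactive at the return
            try:
                pass
            finally:       # inner finally: active when the return executes
                return 1
        finally:
            print('outer finally still runs')

    print(f())  # prints 'outer finally still runs', then 1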
Prior to this patch, native code would use a full nlr_buf_t for each
exception handler (try-except, try-finally, with). For nested exception
handlers this would use a lot of C stack and be rather inefficient.
This patch changes how exceptions are handled in native code by setting up
only a single nlr_buf_t context for the entire function, and then manages a
state machine (using the PC) to work out which exception handler to run
when an exception is raised by an nlr_jump. This keeps the C stack usage
at a constant level regardless of the depth of Python exception blocks.
The patch also fixes an existing bug: when local variables were written to
within an exception handler, their value was incorrectly restored if an
exception was raised (since the nlr_jump would restore register values back
to those at the point of the nlr_push).
And it also gets nested try-finally+with working with the viper emitter.
Broadly speaking, efficiency of executing native code that doesn't use
any exception blocks is unchanged, and emitted code size is only slightly
increased for such functions. C stack usage of all native functions is
either equal or less than before. Emitted code size for native functions
that use exception blocks is increased by roughly 10% (due in part to
fixing of above-mentioned bugs).
But, most importantly, this patch makes it possible to implement more
Python features
in native code, like unwind jumps and yielding from within nested exception
blocks.
These POSIX wrappers are assumed to be passed a concrete stream object so
it is more efficient (eg on nan-boxing builds) to pass in the pointer
rather than mp_obj_t, because then the users of these functions only need
to store a void* (and mp_obj_t may be wider than a pointer). And things
would be further improved if the stream protocol functions eventually took
a pointer as their first argument (instead of an mp_obj_t).
This patch is a step to getting ussl/axtls compiling on nan-boxing builds.
See issue #3085.
Otherwise there is the possibility that n_free starts out non-zero from the
previous iteration, which may have found a few (but not enough) free blocks
at the end of the heap. If this is the case, and if the very first blocks
that are scanned the second time around (starting at
gc_last_free_atb_index) are found to give enough memory (including the
blocks at the end of the heap from the previous iteration that left n_free
non-zero) then memory will be allocated starting before the location that
gc_last_free_atb_index points to, most likely leading to corruption.
This serious bug did not manifest itself in the past because a gc_collect
always resets gc_last_free_atb_index to point to the start of the GC heap,
and the first block there is almost always allocated to a long-lived
object (eg entries from sys.path, or mounted filesystem objects), which
means that n_free would be reset at the start of the search loop.
But with threading enabled with the GIL disabled it is possible to trigger
the bug via the following sequence of events:
1. Thread A runs gc_alloc, fails to find enough memory, and has a non-zero
n_free at the end of the search.
2. Thread A calls gc_collect and frees a bunch of blocks on the GC heap.
3. Just after gc_collect finishes in thread A, thread B takes gc_mutex and
does an allocation, moving gc_last_free_atb_index to point to the
interior of the heap, to a place where there is most likely a run of
available blocks.
4. Thread A regains gc_mutex and does its second search for free memory,
starting with a non-zero n_free. Since it's likely that the first block
it searches is available it will allocate memory which overlaps with the
memory before gc_last_free_atb_index.
Without this patch, on 64-bit architectures the "1 << (small_int_bits - 1)"
is computed using only 32-bit values (since small_int_bits is a uint8_t)
and so will overflow (and give the wrong result) if small_int_bits is
larger than 32.
There is no need to have three copies of the exception object on the top of
the native value stack. Instead, the values on the stack should be the
first two items in an nlr_buf_t: the prev pointer and the ret_val pointer.
This is all that is needed and is what the rest of the native emitter
expects to be on the stack.
This patch is essentially an optimisation. Behaviour is unchanged,
although the stack layout for native exception handling now makes more
sense.
A native function allocates space on its C stack for mp_code_state_t,
followed by its Python stack, then its locals. This patch makes sure that
the native function actually starts at the start of its Python stack,
rather than at the start of mp_code_state_t (which didn't lead to any
issues so far because the mp_code_state_t is unused after the native
function sets itself up).
On x86 archs (both 32 and 64 bit) a bool return value only sets the 8-bit
al register, and the higher bits of the ax register have an undefined
value. When testing the return value of such cases it is required to just
test al for zero/non-zero. On the other hand, checking for truth or
zero/non-zero on an integer return value requires checking all bits of the
register. These two cases must be distinguished and handled correctly in
generated native code. This patch makes sure of this.
For other supported native archs (ARM, Thumb2, Xtensa) there is no such
distinction and this patch does not change anything for them.
DEBUG_printf and MICROPY_DEBUG_PRINTER is now used instead of normal
printf, and a fault is fixed in mp_obj_class_lookup with debugging enabled;
see issue #3999. Debugging can now be enabled on all ports including when
nan-boxing is used.
This patch in effect renames MICROPY_DEBUG_PRINTER_DEST to
MICROPY_DEBUG_PRINTER, moving its default definition from
lib/utils/printf.c to py/mpconfig.h to make it official and documented, and
makes this macro a pointer rather than the actual mp_print_t struct. This
is done to get consistency with MICROPY_ERROR_PRINTER, and provide this
macro for use outside just lib/utils/printf.c.
Ports are updated to use the new macro name.
This patch makes the Thumb-2 native emitter use wide ldr instructions to
call into the runtime, when the index into the native glue function table
is 32 or greater. This reduces the generated assembler code from 10 bytes
to 6 bytes, saving RAM and making native code run about 0.8% faster.
This error message did not consume all of its variable args, a bug
introduced long ago in baf6f14deb. By fixing
it to use %s (instead of keeping the string as-is and deleting the last
arg) the same error message string is now reused three times in this format
function and gives a code size reduction of around 130 bytes. It also now
gives a better error message when a non-string is passed in as an argument
to format, eg '{:d}'.format([]).
There's no need to call mp_obj_new_int() which will just fail the check for
small int and call mp_obj_new_int_from_ll() anyway.
Thanks to @Jongy for prompting this change.
In non-debug mode MP_OBJ_STOP_ITERATION is zero and comparing something to
zero can be done more efficiently in assembler than comparing to a non-zero
value.
With the recent change b488a4a848, a
generating function now has the same layout in memory as a normal bytecode
function, and so can reuse the latter's attribute accessor code to
implement __name__.
Because this function is simple it saves code size to have it inlined.
Being an auxiliary helper function (and only used in the py/ core) the
argument should always be an mp_obj_module_t*, so there's no need for the
assert (and having it would require including assert.h in obj.h).
It's a very simple function and saves code, and improves efficiency, by
being inline. Note that this is an auxiliary helper function and so
doesn't need mp_check_self -- that's used for functions that can be
accessed directly from Python code (eg from a method table).
mp_obj_module_get_globals() returns a mp_obj_dict_t*, and type->locals_dict
is a mp_obj_dict_t*, so access the map entry of the dict directly instead
of needing to cast this mp_obj_dict_t* up to an object and then calling the
mp_obj_dict_get_map() helper function.
For generating functions there is no need to wrap the bytecode function in
a generator wrapper instance. Instead the type of the bytecode function
can be changed to mp_type_gen_wrap. This reduces code size and saves a
block of GC heap RAM for each generator.
This feature is controlled at compile time by MICROPY_PY_URE_SUB, disabled
by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by
MICROPY_PY_URE_MATCH_SPAN_START_END, disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by MICROPY_PY_URE_MATCH_GROUPS,
disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
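A hedged example combining the three features above, assuming all three
options are enabled:

    import ure
    m = ure.match(r'(\d+)-(\d+)', '12-34')
    print(m.groups())             # ('12', '34')
    print(m.span(1), m.start(2))  # (0, 2) 3
    print(ure.sub(r'\d+', 'N', '12-34'))  # N-N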
Before this patch the context manager's __aexit__() method would not be
executed if a return/break/continue statement was used to exit an async
with block. async with now has the same semantics as normal with.
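A hedged sketch of the fixed behaviour (the context manager passed in is
hypothetical):

    async def task(lock):
        async with lock:
            return 1  # lock.__aexit__ is now executed before returning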
The fix here applies purely to the compiler, and does not modify the
runtime at all. It might (eventually) be better to define new bytecode(s)
to handle async with (and maybe other async constructs) in a cleaner, more
efficient way.
One minor drawback with addressing this issue purely in the compiler is
that it wasn't possible to get 100% CPython semantics. The difference from
CPython is that the __aexit__ method is not looked up in the context
manager until it is needed, which is after the body of the
async with statement has executed. So if a context manager doesn't have
__aexit__ then CPython raises an exception before the async with is
executed, whereas uPy will raise it after it is executed. Note that
__aenter__ is looked up at the beginning in uPy because it needs to be
called straightaway, so if the context manager isn't a context manager then
it'll still raise an exception at the same location as CPython. The only
difference is if the context manager has the __aenter__ method but not the
__aexit__ method, then in that case uPy has different behaviour. But this
is a very minor, and acceptable, difference.
Allow including crypto consts based on compilation settings. Disabled by
default to reduce code size; they can be enabled if one wants extra code
readability.
The API follows the guidelines of https://www.python.org/dev/peps/pep-0272/,
but is optimized for code size, with the idea that full PEP 0272
compatibility can be added via a simple Python wrapper module.
The naming of the module follows (u)hashlib pattern.
At the bare minimum, this module is expected to provide:
* AES128, ECB (i.e. "null") mode, encrypt only
Implementation in this commit is based on axTLS routines, and implements
the following:
* AES 128 and 256
* ECB and CBC modes
* encrypt and decrypt
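A hedged usage sketch, assuming the module is exposed as ucryptolib and
that mode value 1 selects ECB:

    from ucryptolib import aes
    c = aes(b'0123456789abcdef', 1)      # 16-byte key, ECB mode (assumed =1)
    ct = c.encrypt(b'sixteen byte blk')  # input must be a multiple of 16
    print(ct)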
The existing mp_get_stream_raise() helper does explicit checks that the
input object is a real pointer object, has a non-NULL stream protocol, and
has the desired stream C method (read/write/ioctl). In most cases it is
not necessary to do these checks because it is guaranteed that the input
object has the stream protocol and desired C methods. For example, native
objects that use the stream wrappers (eg mp_stream_readinto_obj) in their
locals dict always have the stream protocol (or else they shouldn't have
these wrappers in their locals dict).
This patch introduces an efficient mp_get_stream() which doesn't do any
checks and just extracts the stream protocol struct. This should be used
in all cases where the argument object is known to be a stream. The
existing mp_get_stream_raise() should be used primarily to verify that an
object does have the correct stream protocol methods.
All uses of mp_get_stream_raise() in py/stream.c have been converted to use
mp_get_stream() because the argument is guaranteed to be a proper stream
object.
This patch improves efficiency of stream operations and reduces code size.
This patch changes dupterm to call the native C stream methods on the
connected stream objects, instead of calling the Python readinto/write
methods. This is much more efficient for native stream objects like UART
and webrepl and doesn't require allocating a special dupterm array.
This change is a minor breaking change from the user's perspective because
dupterm no longer accepts pure user stream objects to duplicate on. But
with the recent addition of uio.IOBase it is possible to still create such
classes just by inheriting from uio.IOBase, for example:
    import uio, uos

    class MyStream(uio.IOBase):
        def write(self, buf):
            pass  # existing write implementation
        def readinto(self, buf):
            pass  # existing readinto implementation

    uos.dupterm(MyStream())
Via the config value MICROPY_PY_UHASHLIB_SHA256. Defaults to enabled, to
keep backwards compatibility.
Also add a default config value for the sha1 class, to at least document
its existence.
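Usage, with MICROPY_PY_UHASHLIB_SHA256 enabled:

    import uhashlib
    h = uhashlib.sha256(b'abc')
    print(h.digest())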
A user class derived from IOBase and implementing readinto/write/ioctl can
now be used anywhere a native stream object is accepted.
The mapping from C to Python is:
    stream_p->read  --> readinto(buf)
    stream_p->write --> write(buf)
    stream_p->ioctl --> ioctl(request, arg)
Among other things it allows the user to:
- create an object which can be passed as the file argument to print:
print(..., file=myobj), and then print will pass all the data to the
object via the objects write method (same as CPython)
- pass a user object to uio.BufferedWriter to buffer the writes (same as
CPython)
- use select.select on a user object
- register user objects with select.poll, in particular so user objects can
be used with uasyncio
- create user files that can be returned from user filesystems, and import
can import scripts from these user files
For example:
    class MyOut(io.IOBase):
        def write(self, buf):
            print('write', repr(buf))
            return len(buf)

    print('hello', file=MyOut())
The feature is enabled via MICROPY_PY_IO_IOBASE which is disabled by
default.
This patch adds the gc_sweep_all() function which does a garbage collection
without tracing any root pointers, so frees all the memory, and most
importantly runs any remaining finalisers.
This helps primarily for soft reset: it will close any open files, any open
sockets, and help to get the system back to a clean state upon soft reset.
This patch is a code optimisation, trading text bytes for speed. On
pyboard it's an increase of 0.06% in code size for a gain (in pystone
performance) of roughly 6.5%.
The patch optimises load/store/delete of attributes in user defined classes
by not looking up special accessors (@property, __get__, __delete__,
__set__, __setattr__ and __getattr__) if they are guaranteed not to exist
in
the class.
Currently, if you do my_obj.foo() then the runtime has to do a few checks
to see if foo is a property or has __get__, and if so delegate the call.
And for stores, something like my_obj.foo = 1 has to first check if foo is
a
property or has __set__ defined on it.
Doing all those checks each and every time the attribute is accessed has a
performance penalty. This patch eliminates all those checks for cases when
it's guaranteed that the checks will always fail, ie no attributes are
properties nor have any special accessor methods defined on them.
To make this guarantee it checks all attributes of a user-defined class
when it is first created. If any of the attributes of the user class are
properties or have special accessors, or any of the base classes of the
user class have them, then it sets a flag in the class to indicate that
special accessors must be checked for. Then in the load/store/delete code
it checks this flag to see if it can take the shortcut and optimise the
lookup.
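A hedged illustration of the two cases (class names hypothetical):

    class Plain:
        def foo(self):   # no special accessors anywhere in this class:
            return 1     # its attribute lookups take the fast path

    class Flagged:
        @property
        def foo(self):   # a property flags the whole class, so its
            return 1     # attribute lookups keep the full checks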
It's an optimisation that's pretty widely applicable because it improves
lookup performance for all methods of user defined classes, and stores of
attributes, at least for those that don't have special accessors. And it
makes it possible to enable descriptors with minimal additional runtime
overhead if
they are not used for a particular user class.
There is one restriction on dynamic class creation that has been introduced
by this patch: a user-defined class cannot go from zero special accessors
to one special accessor (or more) after that class has been subclassed. If
the script attempts this an AttributeError is raised (see addition to
tests/misc/non_compliant.py for an example of this case).
The cost in code space bytes for the optimisation in this patch is:
unix x64: +528
unix nanbox: +508
stm32: +192
cc3200: +200
esp8266: +332
esp32: +244
Performance tests that were done:
- on unix x86-64, pystone improved by about 5%
- on pyboard, pystone improved by about 6.5%, from 1683 up to 1794
- on pyboard, bm_chaos (from CPython benchmark suite) improved by about 5%
- on esp32, pystone improved by about 30% (but there are caching effects)
- on esp32, bm_chaos improved by about 11%
This VFS component allows a host POSIX filesystem to be mounted within the
uPy VFS sub-system. All traditional POSIX file access then goes through
the VFS, allowing a uPy process to be sandboxed to a certain sub-dir of the
host system, as well as mounting other filesystem types alongside the host
filesystem.
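A hedged usage sketch, assuming the component is exposed as uos.VfsPosix
and mounts like other VFS objects (the directory is hypothetical):

    import uos
    uos.mount(uos.VfsPosix('/tmp/sandbox'), '/')
    # all POSIX file access now goes via the VFS, rooted at /tmp/sandbox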
For a long time now, mp_obj_type_t no longer refers explicitly to
mp_stream_p_t but rather to an abstract "const void *protocol". So there's
no longer any need to define mp_stream_p_t in obj.h and it can go with all
its associated definitions in stream.h. Pretty much all users of this type
will already include the stream header.
The code_state.old_globals variable is there to save the globals state so
should be used for this purpose, to avoid the need for additional local
variables on the C stack.
Without this, if GC threshold is hit and there is not enough memory left to
satisfy the request, gc_collect() will run a second time and the search for
memory will happen again and will fail again.
Thanks to @adritium for pointing out this issue, see #3786.
Under ubsan, when evaluating hash(-0.) the following diagnostic occurs:

    ../../py/objfloat.c:102:15: runtime error: negation of
    -9223372036854775808 cannot be represented in type 'mp_int_t' (aka
    'long'); cast to an unsigned type to negate this value to itself
So do just that, to tell the compiler that we want to perform this
operation using modulo arithmetic rules.
Before this, ubsan would detect a problem when executing

    hash(006699999999999999999999999999999999999999999999999999999999999999999999)

    ../../py/mpz.c:1539:20: runtime error: left shift of 1067371580458 by
    32 places cannot be represented in type 'mp_int_t' (aka 'long')
When the overflow does occur it now happens as defined by the rules of
unsigned arithmetic.
When computing e.g. hash(0.4e3) with ubsan enabled, a diagnostic like the
following would occur:

    ../../py/objfloat.c:91:30: runtime error: shift exponent 44 is too
    large for 32-bit type 'int'
By casting constant "1" to the right type the intended value is preserved.
Fuzz testing combined with the undefined behavior sanitizer found that
parsing unreasonable float literals like 1e+9999999999999 resulted in
undefined behavior due to overflow in signed integer arithmetic, and a
wrong result being returned.
There is no need to use the mp_int_t type which may be 64-bits wide, there
is enough bit-width in a normal int to parse reasonable exponents. Using
int helps to reduce code size for 64-bit ports, especially nan-boxing
builds. (Similarly for the "dig" variable which is now an unsigned int.)
Calling memset(NULL, value, 0) is not standards compliant so we must add an
explicit check that emit->label_offsets is indeed not NULL before calling
memset (this pointer will be NULL on the first pass of the parse tree and
it's more logical / safer to check this pointer rather than check that the
pass is not the first one).
Code sanitizers will warn if NULL is passed as the first value to memset,
and compilers may optimise the code based on the knowledge that any pointer
passed to memset is guaranteed not to be NULL.
Before this patch:

    >>> print(')
    ... ')
    Traceback (most recent call last):
      File "<stdin>", line 1
    SyntaxError: invalid syntax

After this patch:

    >>> print(')
    Traceback (most recent call last):
      File "<stdin>", line 1
    SyntaxError: invalid syntax
This matches CPython and prevents getting stuck in REPL continuation when a
single quote character is unmatched.
Before this patch, when using the switch statement for dispatch in the VM
(not computed goto) a pending exception check was done after each opcode.
This is not necessary and this patch makes the pending exception check only
happen when explicitly requested by certain opcodes, like jump. This
improves performance of the VM by about 2.5% when using the switch.
This patch fixes the macro so you can pass any name in, and the macro will
make more sense if you're reading it on its own. It worked previously
because n_state is always passed in as n_state_out_var.
gcc 8.0 supports the naked attribute for x86 systems so it can now be used
here. And in fact it is necessary to use this for nlr_push because gcc 8.0
no longer generates a prelude for this function (even without the naked
attribute).
This patch moves the start of the root pointer section in mp_state_ctx_t
so that it skips entries that are not pointers and don't need scanning.
Previously, the start of the root pointer section was at the very beginning
of the mp_state_ctx_t struct (which is the beginning of mp_state_thread_t).
This was because the original assembler version of the NLR code was
hard-coded to
have the nlr_top pointer at the start of this state structure. But now
that the NLR code is partially written in C there is no longer this
restriction on the location of nlr_top (and a comment to this effect has
been removed in this patch).
So now the root pointer section starts part way through the
mp_state_thread_t structure, after the entries which are not root pointers.
This patch also moves the non-pointer entries for MICROPY_ENABLE_SCHEDULER
outside the root pointer section.
Moving non-pointer entries out of the root pointer section helps to make
the GC more precise and should help to prevent some cases of collectable
garbage being kept.
This patch also has a measurable improvement in performance of the
pystone.py benchmark: on unix x86-64 and stm32 there was an improvement of
roughly 0.6% (tested with both gcc 7.3 and gcc 8.1).
This patch changes 2 things in the endianness detection:
1. Don't assume that __BYTE_ORDER__ not being __ORDER_LITTLE_ENDIAN__ means
that the machine is big endian, so add an explicit check that this macro
is indeed __ORDER_BIG_ENDIAN__ (same with __BYTE_ORDER, __LITTLE_ENDIAN
and __BIG_ENDIAN). A machine could have PDP endianness.
2. Remove the checks which base their autodetection decision on whether any
little or big endian macros are defined (eg __LITTLE_ENDIAN__ or
__BIG_ENDIAN__). Just because a system defines these does not mean it
has that endianness.
See issue #3760.
For cases where size_t is smaller than mp_int_t (eg nan-boxing builds) the
difference between two size_t's is not sign extended into mp_int_t and so
the result is never negative. This patch fixes this bug by using ssize_t
for the type of the result.
This gives dir() better behaviour when listing the attributes of a user
type that defines __getattr__: it will now not list those attributes for
which __getattr__ raises AttributeError (meaning the attribute is not
supported by the object).
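A hedged example of the new behaviour (this differs from CPython, whose
dir() does not probe __getattr__):

    class A:
        def __getattr__(self, name):
            if name == 'foo':
                return 1
            raise AttributeError(name)

    a = A()
    print('foo' in dir(a))  # True: __getattr__ returned a value
    print('bar' in dir(a))  # False: __getattr__ raised AttributeError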
This patch fixes the possibility of a crash of the REPL when tab-completing
an object which raises an exception when its attributes are accessed.
See issue #3729.
This new helper function acts like mp_load_method_maybe but is wrapped in
an NLR handler so it can catch exceptions. It prevents AttributeError from
propagating out, and optionally all other exceptions. This helper can be
used to fully implement hasattr (see follow-up commit), and also for cases
where mp_load_method_maybe is used but it must not raise an exception.
This is a more consistent use of errno codes. For example, it may be that
a stream returns MP_EAGAIN but the mp_is_nonblocking_error() macro doesn't
catch this value because it checks for EAGAIN instead (which may have a
different value than MP_EAGAIN when MICROPY_USE_INTERNAL_ERRNO is enabled).
Most modern systems have EWOULDBLOCK aliased to EAGAIN, ie they have the
same value. But some systems use different values for these errnos and if
a uPy port is using the system errno values (ie not the internal uPy
values) then it's important to be able to distinguish EWOULDBLOCK from
EAGAIN. Eg if a system call returned EWOULDBLOCK it must be possible to
check for this return value, and this patch makes this now possible.
Instead of emitnative.c having configuration code for each supported
architecture, and then compiling this file multiple times with different
macros defined, this patch adds a file per architecture with the necessary
code to configure the native emitter. These files then #include the
emitnative.c file.
This simplifies emitnative.c (which is already very large), and simplifies
the build system because emitnative.c no longer needs special handling for
compilation and qstr extraction.
This patch moves the implementation of stream closure from a dedicated
method to the ioctl of the stream protocol, for each type that implements
closing. The benefits of this are:
1. Rounds out the stream ioctl function, which already includes flush,
seek and poll (among other things).
2. Makes calling mp_stream_close() on an object slightly more efficient
because it now no longer needs to lookup the close method and call it,
rather it just delegates straight to the ioctl function (if it exists).
3. Reduces code size and allows future types that implement the stream
protocol to be smaller because they don't need a dedicated close method.
Code size is reduced by around 200 bytes on x86 archs and around 30 bytes
on the bare-metal archs.
The LHS passed to mp_obj_int_binary_op() will always be an integer, either
a small int or a big int, so the test for this type doesn't need to include
an "other, unsupported type" case.
Without the compiler enabled, mp_optimise_value is unused and the
micropython.opt_level() function is not useful, so exclude these from the
build to save RAM and code size.
When pystack is enabled mp_obj_fun_bc_prepare_codestate() will always
return a valid pointer, and if there is no more pystack available then it
will raise an exception (a RuntimeError). So having pystack enabled with
stackless enabled automatically gives strict stackless mode. There is
therefore no need to have code for strict stackless mode when pystack is
enabled, and this patch optimises the VM for such a case.
The VM expects that, if mp_resume() returns MP_VM_RETURN_EXCEPTION, then
the returned value is an exception instance (eg to add a traceback to it).
It's possible that a value passed to a generator's throw() is not an
exception so must be explicitly checked for if the thrown value is not
intercepted by the generator.
Thanks to @jepler for finding the bug.
Prior to this patch the code would crash if a key in a ** dict was anything
other than a str or qstr. This is because mp_setup_code_state() assumes
that keys in kwargs are qstrs (for efficiency).
Thanks to @jepler for finding the bug.
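A minimal illustration of the case that previously crashed:

    def f(**kwargs):
        return kwargs

    f(**{'a': 1})  # fine: str key
    f(**{1: 'a'})  # non-string key: previously crashed the VM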
By using pre-compiled regexs, using startswith(), and explicitly checking
for empty lines (of which around 30% of the input lines are), automatic
qstr extraction is sped up by about 10%.
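A hedged sketch of the pattern (names hypothetical, not the actual
makeqstrdefs code):

    import re
    QSTR_RE = re.compile(r'Q\((.*)\)')  # compiled once, reused per line

    def extract(lines):
        for line in lines:
            if not line or not line.startswith('Q('):
                continue  # cheap rejects before the regex runs
            m = QSTR_RE.match(line)
            if m:
                yield m.group(1)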
All callers of mp_obj_int_formatted() are expected to pass in a valid int
object, and they do:
- mp_obj_int_print() should always pass through an int object because it is
the print special method for int instances.
- mp_print_mp_int() checks that the argument is an int, and if not converts
it to a small int.
This patch saves around 20-50 bytes of code space.
Prior to this patch, some architectures (eg unix x86) could render floats
with "negative" digits, like ")". For example, '%.23e' % 1e-80 would come
out as "1.0000000000000000/)/(,*0e-80". This patch fixes the known cases.
Prior to this patch, some architectures (eg unix x86) could render floats
with a ":" character in them, eg 1e+39 would come out as ":e+38" (":" is
just after "9" in ASCII so this is like 10e+38). This patch fixes some of
these cases.
Prior to this patch the %f formatting of some FP values could be off by up
to 1, eg '%.0f' % 123 would return "122" (unix x64). Depending on the FP
precision (single vs double) certain numbers would format correctly, but
others would not. This patch should fix all cases of rounding for %f.
There's no need to have MP_OBJ_NULL a special case, the code can re-use
the MP_OBJ_STOP_ITERATION value to signal the special case and the VM can
detect this with only one check (for MP_OBJ_STOP_ITERATION).
This patch concerns the handling of an NLR-raised StopIteration, raised
during a call to mp_resume() which is handling the yield from opcode.
Previously, commit 6738c1dded introduced code
to handle this case, along with a test. It seems that it was lucky that
the test worked because the code did not correctly handle the stack pointer
(sp).
Furthermore, commit 79d996a57b improved the
way mp_resume() propagated certain exceptions: it changed raising an NLR
value to returning MP_VM_RETURN_EXCEPTION. This change meant that the
test introduced in gen_yield_from_ducktype.py was no longer hitting the
code introduced in 6738c1dded.
The patch here does two things:
1. Fixes the handling of sp in the VM for the case that yield from is
interrupted by a StopIteration raised via NLR.
2. Introduces a new test to check this handling of sp and re-covers the
code in the VM.
This path for src->dig==NULL is never used because mpz_clone() is always
called with an argument that has a non-zero integer value, and hence has
some digits allocated to it (mpz_clone() is a static function private to
mpz.c; all callers of this function first check if the integer value is
zero and if so take a special-case path, bypassing the call to
mpz_clone()).
There are some unused and commented-out functions that may actually pass a
zero-valued mpz to mpz_clone(), so some TODOs are added to these functions
in case they are needed in the future.
All callers of the asm entry function guarantee that num_locals>=0, so no
need to add an explicit check for it. Use an assertion instead.
Also, the signature of asm_x86_entry is changed to match the other asm
entry functions.
If a port only needs the core files then it can now use the $(PY_CORE_O)
variable instead of $(PY_O). $(PY_EXTMOD_O) contains the list of extmod
files (including some files from lib/). $(PY_O) retains its original
definition as the list of all object files (including those for frozen
code)
and is a convenience variable for ports that want everything.
Saves a few bytes of code space, and is more efficient because with
MICROPY_GC_CONSERVATIVE_CLEAR enabled by default all memory is already
cleared when allocated.
Otherwise passing -1 as maxlen will lead to a zero allocation and
subsequent unbounded buffer overflow in deque.append() because i_put is
allowed to grow without bound.
So far, this implements just the append() and popleft() methods, required
for a normal queue. The constructor doesn't accept an arbitrary sequence
to initialize from (an empty deque is always created), so an empty tuple
must be passed as such. Only fixed-size deques are supported, so the 2nd
argument (size) is required.
There's also an extension over CPython: if True is passed as the 3rd
argument then append(), instead of silently overwriting the oldest item on
queue overflow, will throw IndexError. This behavior is desired in many
cases, where queues should store information reliably, instead of
silently losing some items.
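A hedged usage sketch, assuming the class is exposed via ucollections:

    from ucollections import deque
    d = deque((), 2)    # empty tuple required; max size 2
    d.append(1)
    d.append(2)
    d.append(3)         # silently overwrites the oldest item (1)
    print(d.popleft())  # 2

    strict = deque((), 2, True)  # extension: raise on overflow
    strict.append(1)
    strict.append(2)
    # strict.append(3) would now raise IndexError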
The micropython.stack_use() function is useful to query the current C stack
usage, and its inclusion in the micropython module doesn't need to be tied
to the inclusion of mem_info()/qstr_info() because it doesn't rely on any
of the code from these functions. So this patch introduces the config
option MICROPY_PY_MICROPYTHON_STACK_USE which can be used to independently
control the inclusion of stack_use(). By default it is enabled if
MICROPY_PY_MICROPYTHON_MEM_INFO is enabled (thus not changing any of the
existing ports).
The new option is MICROPY_ENABLE_EXTERNAL_IMPORT and is enabled by default
so that the default behaviour is the same as before. With it disabled
import is only supported for built-in modules, not for external files nor
frozen modules. This makes it possible to support targets with no
filesystem of
any kind and that only have access to pre-supplied built-in modules
implemented natively.
Prior to this patch uPy (on a 32-bit arch) would have severe issues when
calling bytes(-1): such a call would call vstr_init_len(vstr, -1) which
would then +1 on the len and call vstr_init(vstr, 0), which would then
round this up and allocate a small amount of memory for the vstr. The
bytes constructor would then attempt to zero out all this memory, thinking
it had allocated 2^32-1 bytes.
This patch changes the way REPL autocomplete finds matches. It now probes
the target object for all qstrs via mp_load_method_maybe to look for a
match with the given input string. Similar to how the builtin dir()
function works, this new algorithm now finds all methods and instances of
user-defined classes, including attributes of their parent classes. This
helps a lot at the REPL prompt for user-discovery and to autocomplete names
even for classes that are derived.
The downside is that this new algorithm is slower than the previous one,
and in particular will be slower the more qstrs there are in the system.
But because REPL autocomplete is primarily used in an interactive way it is
not that important to make it fast, as long as it is "fast enough" compared
to human reaction.
On a slow microcontroller (CPU running at 16MHz) the autocomplete time for
a list of 35 names in the outer namespace (pressing tab at a bare prompt)
takes about 160ms with this algorithm, compared to about 40ms for the
previous implementation (this time includes the actual printing of the
names as well). This time of 160ms is very reasonable especially given the
new functionality of listing all the names.
This patch also decreases code size by:
bare-arm: +0
minimal x86: -128
unix x64: -128
unix nanbox: -224
stm32: -88
cc3200: -80
esp8266: -92
esp32: -84
This patch improves the builtin dir() function by probing the target object
with all possible qstrs via mp_load_method_maybe. This is very simple (in
terms of implementation), doesn't require recursion, and makes it possible
to list all
methods of user-defined classes (without duplicates) even if they have
multiple inheritance with a common parent. The downside is that it can be
slow because it has to iterate through all the qstrs in the system, but
the "dir()" function is anyway mostly used for testing frameworks and user
introspection of types, so speed is not considered a priority.
In addition to providing a more complete implementation of dir(), this
patch is simpler than the previous implementation and saves some code
space:
bare-arm: -80
minimal x86: -80
unix x64: -56
unix nanbox: -48
stm32: -80
cc3200: -80
esp8266: -104
esp32: -64
This macro is written out explicitly in the two locations that it is used
and then the code is optimised, opening possibilities for further
optimisations and reducing code size:
unix: -48
minimal CROSS=1: -32
stm32: -32
Using the message "maximum recursion depth exceeded" for when the pystack
runs out of memory can be misleading because the pystack can run out for
reasons other than deep recursion (although in most cases pystack
exhaustion is probably indirectly related to deep recursion). And it's
important to give the user more precise feedback as to the reason for the
error: if they know precisely that the pystack was exhausted then they have
a chance to increase the amount of memory available to the pystack (as
opposed to not knowing if it was the C stack or pystack that ran out).
Also, C stack exhaustion is more serious than pystack exhaustion because it
could have been that the C stack overflowed and overwrote/corrupted some
data and so the system must be restarted. The pystack can never corrupt
data in this way so pystack exhaustion does not require a system restart.
Knowing the difference between these two cases is therefore important.
The actual exception type for pystack exhaustion remains as RuntimeError so
that programmatically it behaves the same as a C stack exhaustion.
By adding __builtin_unreachable() at the end of nlr_push, we're
essentially telling the compiler that this function will never return.
When GCC LTO is in use, this means that any time nlr_push() is called
(which is often), the compiler thinks this function will never return
and thus eliminates all code following the call.
Note: I've added a 'return 0' for older GCC versions like 4.6 which
complain about not returning anything (which doesn't make sense in a
naked function). Newer GCC versions (tested 4.8, 5.4 and some others)
don't complain about this.
This constant exception instance was once used by m_malloc_fail() to raise
a MemoryError without allocating memory, but it was made obsolete long ago
by 3556e45711. The functionality is now
replaced by the use of mp_emergency_exception_obj which lives in the global
uPy state, and which can handle any exception type, not just MemoryError.
This feature is not often used so is guarded by the config option
MICROPY_PY_BUILTINS_RANGE_BINOP which is disabled by default. With this
option disabled MicroPython will always return false when comparing two
range objects for equality (unless they are exactly the same object
instance). This does not match CPython so if (in)equality between range
objects is needed then this option should be enabled.
Enabling this option costs between 100 and 200 bytes of code space
depending on the machine architecture.
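An example of what the option enables:

    # True with the option enabled (both ranges are empty);
    # False when it is disabled (they are different objects)
    print(range(0) == range(2, 2))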
This patch provides inline versions of the utf8 helper functions for the
case when unicode is disabled (MICROPY_PY_BUILTINS_STR_UNICODE set to 0).
This saves code size.
The unichar_charlen function is also renamed to utf8_charlen to match the
other utf8 helper functions, and the signature of this function is adjusted
for consistency (const char* -> const byte*, mp_uint_t -> size_t).
Prior to this patch, a float literal that was close to subnormal would
have a loss of precision when parsed. The worst case was something like
float('10000000000000000000e-326') which returned 0.0.
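The worst case from above, as a quick check:

    # previously printed 0.0; should now print roughly 1e-307
    print(float('10000000000000000000e-326'))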
This patch simplifies how sentinel values are stored on the stack when
doing an unwind return or jump. Instead of storing two values on the stack
for an unwind jump it now stores only one: a negative small integer means
unwind-return and a non-negative small integer means unwind-jump with the
value being the number of exceptions to unwind. The savings in code size
are:
bare-arm: -56
minimal x86: -68
unix x64: -80
unix nanbox: -4
stm32: -56
cc3200: -64
esp8266: -76
esp32: -156
The array should be of type unsigned byte because that is the type of the
values being stored. And changing to uint8_t helps to prevent warnings
from some static analysers.
Note that the check for elem!=NULL is removed for the
MP_MAP_LOOKUP_ADD_IF_NOT_FOUND case because mp_map_lookup will always
return non-NULL for such a case.
This patch combines the compiler optimisation code for double and triple
tuple-to-tuple assignment, taking it from two separate if-blocks to one
combined if-block. This can be done because the code for both of these
optimisations has a lot in common. Combining them together reduces code
size for ports that have the triple-tuple optimisation enabled (and doesn't
change code size for ports that have it disabled).
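The two forms now optimised by the combined code:

    a, b = 1, 2
    a, b = b, a        # double tuple-to-tuple assignment
    x, y, z = 1, 2, 3
    x, y, z = z, y, x  # triple form (when enabled) shares the same path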
The number of registers used should be 10, not 12, to match the assembly
code in nlrx64.c. With this change the 64-bit mingw builds don't need to
use the setjmp implementation, and this fixes miscellaneous crashes and
assertion failures as reported in #1751 for instance.
To avoid mistakes in the future where something gcc-related for Windows
only gets fixed for one particular compiler/environment combination,
make use of a MICROPY_NLR_OS_WINDOWS macro.
To make sure everything nlr-related is now ok when built with gcc this
has been verified with:
- unix port built with gcc on Cygwin (i686-pc-cygwin-gcc and
x86_64-pc-cygwin-gcc, version 6.4.0)
- windows port built with mingw-w64's gcc from Cygwin
(i686-w64-mingw32-gcc and x86_64-w64-mingw32-gcc, version 6.4.0)
and MSYS2 (like the ones on Cygwin but version 7.2.0)
There are two checks that are always false so can be converted to (negated)
assertions to save code space and execution time. They are:
1. The check of the str parameter, which is required to be non-NULL as per
the original comment that it has enough space in it as calculated by
mp_int_format_size. And for all uses of this function str is indeed
non-NULL.
2. The check of the base parameter, which is already required to be between
2 and 16 (inclusive) via the assertion in mp_int_format_size.
The motivation behind this patch is to remove unreachable code in mpn_div.
This unreachable code was added some time ago in
9a21d2e070, when a loop in mpn_div was copied
and adjusted to work when mpz_dig_t was exactly half of the size of
mpz_dbl_dig_t (a common case). The loop was copied correctly but it wasn't
noticed at the time that the final part of the calculation of num-quo*den
could be optimised, and hence unreachable code was left for a case that
never occurred.
The observation for the optimisation is that the initial value of quo in
mpn_div is either exact or too large (never too small), and therefore the
subtraction of quo*den from num may subtract exactly enough or too much
(but never too little). Using this observation the part of the algorithm
that handles the borrow value can be simplified, and most importantly this
eliminates the unreachable code.
The new code has been tested with DIG_SIZE=3 and DIG_SIZE=4 by dividing all
possible combinations of non-negative integers with between 0 and 3
(inclusive) mpz digits.
Empty __VA_ARGS__ are not allowed in the C preprocessor so adjust the rule
arg offset calculation to not use them. Also, some compilers (eg MSVC)
require an extra layer of macro expansion.
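As an illustration of the pattern (a generic sketch, not the actual parser
macros): argument counting can be arranged so that __VA_ARGS__ is never
empty, with an extra expansion layer for compilers like MSVC:

    #define EXPAND(x) x
    #define COUNT_IMPL(_1, _2, _3, n, ...) n
    // __VA_ARGS__ is never empty here because the fixed "3, 2, 1" tail
    // always follows it; EXPAND forces MSVC to expand __VA_ARGS__ before
    // COUNT_IMPL is applied.
    #define COUNT(...) EXPAND(COUNT_IMPL(__VA_ARGS__, 3, 2, 1))
    // COUNT(a) == 1, COUNT(a, b) == 2, COUNT(a, b, c) == 3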
This is the sixth and final patch in a series of patches to the parser that
aims to reduce code size by compressing the data corresponding to the rules
of the grammar.
Prior to this set of patches the rules were stored as rule_t structs with
rule_id, act and arg members. And then there was a big table of pointers
which allowed the address of a rule_t struct to be looked up given the id of that
rule.
The changes that have been made are:
- Breaking up of the rule_t struct into individual components, with each
component in a separate array.
- Removal of the rule_id part of the struct because it's not needed.
- Put all the rule arg data in a big array.
- Change the table of pointers to rules to a table of offsets within the
array of rule arg data.
The last point is what is done in this patch here and brings about the
biggest decreases in code size, because an array of pointers is now an
array of bytes.
Code size changes for the six patches combined is:
bare-arm: -644
minimal x86: -1856
unix x64: -5408
unix nanbox: -2080
stm32: -720
esp8266: -812
cc3200: -712
For the change in parser performance: it was measured on pyboard that these
six patches combined gave an increase in script parse time of about 0.4%.
This is due to the slightly more complicated way of looking up the data for
a rule (since the 9th bit of the offset into the rule arg data table is
calculated with an if statement). This is an acceptable increase in parse
time considering that parsing is only done once per script (if compiled on
the target).
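One plausible shape of that lookup (a sketch only, with assumed names; the
actual parser code may differ in its details):

    #include <stddef.h>
    #include <stdint.h>

    extern const uint8_t rule_arg_offset_table[]; // low 8 bits of offsets
    extern const size_t first_rule_with_9th_bit;  // assumed precomputed

    static size_t rule_arg_offset(size_t rule_id) {
        size_t off = rule_arg_offset_table[rule_id];
        if (rule_id >= first_rule_with_9th_bit) {
            off |= 0x100; // the if statement that recovers the 9th bit
        }
        return off;
    }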
Instead of each rule being stored in ROM as a struct with rule_id, act and
arg, the act and arg parts are now in separate arrays and the rule_id part
is removed because it's not needed. This reduces code size, by roughly one
byte per grammar rule, around 150 bytes.
The rule name is only used for debugging, and this patch makes things a bit
cleaner by completely separating out the rule name from the rest of the
rule data.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers. This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c, along with a helper macro in nlr.h. This eliminates duplicated
code.
If MICROPY_NLR_SETJMP is not enabled and the machine is auto-detected then
nlr.h now defines some convenience macros for the individual NLR
implementations to use (eg MICROPY_NLR_THUMB). This keeps nlr.h and the
implementation in sync, and also makes the nlr_buf_t struct easier to read.
A function with a naked attribute must only contain basic inline asm
statements and no C code.
For nlr_push this means removing the "return 0" statement. But for some
gcc versions this induces a compiler warning so the __builtin_unreachable()
line needs to be added.
For nlr_jump, this function contains a combination of C code and inline asm
so cannot be naked.
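A sketch of what this constraint looks like in practice (an illustrative
Thumb-style body, not the actual nlrthumb.c code):

    typedef struct _nlr_buf_t nlr_buf_t; // defined in py/nlr.h

    __attribute__((naked)) unsigned int nlr_push(nlr_buf_t *nlr) {
        __asm volatile (
            "str    r4, [r0, #12]   \n" // save a callee-saved register
            "b      nlr_push_tail   \n" // tail-call the generic C part
        );
        // no "return 0" allowed in a naked function; this silences the
        // missing-return warning on the gcc versions that emit one
        __builtin_unreachable();
    }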
This reverts commit 6a3a742a6c.
The above commit has number of faults starting from the motivation down
to the actual implementation.
1. Faulty implementation.
The original code contained functions like:
NORETURN void nlr_jump(void *val) {
nlr_buf_t **top_ptr = &MP_STATE_THREAD(nlr_top);
nlr_buf_t *top = *top_ptr;
...
__asm volatile (
"mov %0, %%edx \n" // %edx points to nlr_buf
"mov 28(%%edx), %%esi \n" // load saved %esi
"mov 24(%%edx), %%edi \n" // load saved %edi
"mov 20(%%edx), %%ebx \n" // load saved %ebx
"mov 16(%%edx), %%esp \n" // load saved %esp
"mov 12(%%edx), %%ebp \n" // load saved %ebp
"mov 8(%%edx), %%eax \n" // load saved %eip
"mov %%eax, (%%esp) \n" // store saved %eip to stack
"xor %%eax, %%eax \n" // clear return register
"inc %%al \n" // increase to make 1, non-local return
"ret \n" // return
: // output operands
: "r"(top) // input operands
: // clobbered registers
);
}
Which clearly stated that the C-level variable should be a parameter of the
assembly, which then moved it into the correct register.
Whereas now it's:
NORETURN void nlr_jump_tail(nlr_buf_t *top) {
(void)top;
__asm volatile (
"mov 28(%edx), %esi \n" // load saved %esi
"mov 24(%edx), %edi \n" // load saved %edi
"mov 20(%edx), %ebx \n" // load saved %ebx
"mov 16(%edx), %esp \n" // load saved %esp
"mov 12(%edx), %ebp \n" // load saved %ebp
"mov 8(%edx), %eax \n" // load saved %eip
"mov %eax, (%esp) \n" // store saved %eip to stack
"xor %eax, %eax \n" // clear return register
"inc %al \n" // increase to make 1, non-local return
"ret \n" // return
);
for (;;); // needed to silence compiler warning
}
Which just tries to perform operations on a completely random register (edx
in this case). The outcome is as expected: barring the pure random luck of
the compiler putting the right value in that random register, there's
a crash.
2. Non-critical assessment.
The original commit message says "There is a small overhead introduced
(typically 1 machine instruction)". That machine instruction is a call
if a compiler doesn't perform tail optimization (which happens regularly),
and it's 1 instruction only with the broken code shown above; fixing it
requires adding more. With the inefficiencies already present in the NLR
code, the overhead becomes "considerable" (several times more than 1%),
not "small".
The commit message also says "This eliminates duplicated code.". An
obvious way to eliminate duplication would be to factor out common code
to macros, not introduce overhead and breakage like above.
3. Faulty motivation.
All this started with a report of warnings/errors happening for a niche
compiler. It could have been solved in one of the direct ways: a) fixing it
just for the affected compiler(s); b) rewriting it in proper assembly (like
it was before, BTW); c) by not doing anything at all, as MICROPY_NLR_SETJMP
exists exactly to address minor-impact cases like that (where a) or b) are
not applicable). Instead, a backwards "solution" was put forward, leading
to all the issues above.
The best action thus appears to be revert and rework, not trying to work
around what went haywire in the first place.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers. This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c. This eliminates duplicated code.
The factoring also allows to make the machine-specific NLR code pure
assembler code, thus allowing nlrthumb.c to use naked function attributes
in the correct way (naked functions can only have basic inline assembler
code in them).
There is a small overhead introduced (typically 1 machine instruction)
because now the generic nlr_jump() must call nlr_jump_tail() rather than
them being one combined function.
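A sketch of the factored-out generic part (close to, but not necessarily
verbatim, what lands in nlr.c; only the linked-list maintenance is generic,
while the register save/restore stays in per-architecture assembler):

    // assumes py/mpstate.h for MP_STATE_THREAD and the nlr_buf_t definition
    unsigned int nlr_push_tail(nlr_buf_t *nlr) {
        nlr_buf_t **top = &MP_STATE_THREAD(nlr_top);
        nlr->prev = *top;    // link the new buffer into the list
        *top = nlr;
        return 0;            // the normal (non-exceptional) path of nlr_push
    }

    void nlr_pop(void) {
        nlr_buf_t **top = &MP_STATE_THREAD(nlr_top);
        *top = (*top)->prev; // unlink the top buffer
    }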
set_equal is called only from set_binary_op, and this guarantees that the
second arg to set_equal is always a set or frozenset. So there is no need
to do a further check.
This implements .pend_throw(exc) method, which sets up an exception to be
triggered on the next call to generator's .__next__() or .send() method.
This is unlike .throw(), which immediately starts to execute the generator
to process the exception. This effectively adds Future-like capabilities
to generator protocol (exception will be raised in the future).
The need for such a method arose to implement the uasyncio wait_for()
function efficiently (its behavior is clearly "Future" like, and would
normally require introducing an expensive Future wrapper around all native
coroutines, like upstream asyncio does).
py/objgenerator: pend_throw: Return previous pended value.
This effectively allows storing an additional value (not necessarily an
exception) in a coroutine while it's not being executed. uasyncio has
exactly this use case: to mark a coro waiting in the I/O queue (and thus
not executed in the normal scheduling queue), for the purpose of
implementing the wait_for() function (cancellation of such a waiting coro
by a timeout).
Some compilers can treat enum types as signed, in which case 3 bits is not
enough to encode all mp_raw_code_kind_t values. So change the type to
mp_uint_t.
This is a bit of a clumsy way of doing it but solves the issue of __init__
not running when a module is imported via its weak-link name. Ideally a
better solution would be found.
Before this patch, if a user defined the __new__() function for a class
then two instances of that class would be created: once before __new__ is
called and once during the __new__ call (assuming the user creates some
instance, eg using super().__new__, which is most of the time). The first
one was then discarded. This refactor makes it so that a new instance is
only created if the user __new__ function doesn't exist.
This patch cleans up and generalises part of the code which handles
overriding and calling a native base-class's __init__ method. It defers
the call to the native make_new() function until after the user (Python)
__init__() method has run. That user method now has the chance to call the
native __init__/make_new and pass it different arguments. If the user
doesn't call the super().__init__ method then it will be called
automatically after the user code finishes, to finalise construction of the
instance.
The nan-boxing representation has an extra 16 bits of space to store
small-int values, and making use of it allows full 32-bit positive integers
(ie up to 0xffffffff) to be created and manipulated without using the heap.
This patch introduces the MICROPY_ENABLE_PYSTACK option (disabled by
default) which enables a "Python stack" that allows memory to be allocated
and freed in a scoped, or Last-In-First-Out (LIFO), way, similar to
alloca().
A new memory allocation API is introduced along with this Py-stack. It
includes both "local" and "nonlocal" LIFO allocation. Local allocation is
intended to be equivalent to using alloca(), whereby the same function must
free the memory. Nonlocal allocation is where another function may free
the memory, so long as it's still LIFO.
Follow-up patches will convert all uses of alloca() and VLA to the new
scoped allocation API. The old behaviour (using alloca()) will still be
available, but when MICROPY_ENABLE_PYSTACK is enabled then alloca() is no
longer required or used.
The benefits of enabling this option are (or will be once subsequent
patches are made to convert alloca()/VLA):
- Toolchains without alloca() can use this feature to obtain correct and
efficient scoped memory allocation (compared to using the heap instead
of alloca(), which is slower).
- Even if alloca() is available, enabling the Py-stack gives slightly more
efficient use of stack space when calling nested Python functions, due to
the way that compilers implement alloca().
- Enabling the Py-stack with the stackless mode allows for even more
efficient stack usage, as well as retaining high performance (because the
heap is no longer used to build and destroy stackless code states).
- With Py-stack and stackless enabled, Python-calling-Python is no longer
recursive in the C mp_execute_bytecode function.
The micropython.pystack_use() function is included to measure usage of the
Python stack.
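A minimal sketch of a LIFO allocator with the properties described (the
names and the failure path are illustrative, not the actual py/pystack
API):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *start, *cur, *end;
    } pystack_t;

    static void *pystack_alloc(pystack_t *ps, size_t n) {
        n = (n + sizeof(void *) - 1) & ~(sizeof(void *) - 1); // keep aligned
        if ((size_t)(ps->end - ps->cur) < n) {
            return NULL; // the real code would raise a RuntimeError here
        }
        void *p = ps->cur;
        ps->cur += n;
        return p;
    }

    static void pystack_free(pystack_t *ps, void *p) {
        // LIFO property: freeing rewinds to the given allocation, which
        // also frees everything allocated after it
        ps->cur = (uint8_t *)p;
    }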
This function was implemented as an experiment, and was enabled only in the
unix port. As a reminder, it allows access to arbitrary files frozen as
source modules (vs bytecode).
However, further experimentation showed that the same functionality can
be implemented with frozen bytecode. The process requires more steps, but
with a suitable toolset it doesn't matter much. This process is:
1. Convert binary files into "Python resource module" with
tools/mpy_bin2res.py.
2. Freeze as the bytecode.
3. Use micropython-lib's pkg_resources.resource_stream() to access it.
In other words, the extra step is using tools/mpy_bin2res.py (because
there would need to be a wrapper for uio.resource_stream() anyway).
Going the frozen bytecode route allows more flexibility, and the same or
additional efficiency:
1. Frozen source support can be disabled altogether for additional code
savings.
2. Resources could be also accessed as a buffer, not just as a stream.
There are a few caveats too:
1. The overhead of storing a resource in a "Python resource module" vs
storing it directly wasn't actually profiled, but it's assumed that the
overhead is small.
2. The "efficiency" claim above applies to the case when resource
file is frozen as the bytecode. If it's not, it actually will take a
lot of RAM on loading. But in this case, the resource file should not
be used (i.e. generated) in the first place, and micropython-lib's
pkg_resources.resource_stream() implementation has the appropriate
fallback to read the raw files instead. This still poses some distribution
issues, e.g. to deployable to baremetal ports (which almost certainly
would require freezeing as the bytecode), a distribution package should
include the resource module. But for non-freezing deployment, presense
of resource module will lead to memory inefficiency.
All the discussion above is a reminder of why uio.resource_stream() was
implemented in the first place - to address some of the issues above.
However, since then, the frozen bytecode approach seems to prevail, so,
while there are still some issues to address with it, this change is being
made.
This change saves 488 bytes for the unix x86_64 port.
This target removes any stray files (i.e. something not committed to git)
from scripts/ and modules/ dirs (or whatever FROZEN_DIR and FROZEN_MPY_DIR
is set to).
The expected workflow is:
1. make clean-frozen
2. micropython -m upip -p modules <packages_to_freeze>
3. make
As it can be expected that people may drop random things into those dirs
and miss them later, the content is actually backed up before cleaning.
This is second part of fun_bc_call() vs mp_obj_fun_bc_prepare_codestate()
common code refactor. This factors out code to initialize codestate
object. After this patch, mp_obj_fun_bc_prepare_codestate() is effectively
DECODE_CODESTATE_SIZE() followed by allocation followed by
INIT_CODESTATE(), and fun_bc_call() starts with that too.
fun_bc_call() starts with almost the same code as
mp_obj_fun_bc_prepare_codestate(), the only difference is a way to
allocate the codestate object (heap vs stack with heap fallback).
Still, it would be nice to avoid code duplication to make further
refactoring easier.
So, this commit factors out the common code before the allocation -
decoding and calculating the codestate size. It produces two values,
so it is structured as a macro which writes to 2 variables passed as
arguments.
The assembler back-end for most architectures needs to know if a jump is
backwards in order to emit optimised machine code, and they do this by
checking if the destination label has been set or not. So always reset
label offsets to -1 (this reverts partially the previous commit, with some
minor optimisation for the if-logic with the pass variable).
Clearing the labels to -1 is purely a debugging measure. For release
builds there is no need to do it as the label offset table should always
have the correct value assigned.
Accessing them will now crash immediately, instead of still working for
some time until overwritten by some other data, which led to much less
deterministic crashes.
This is mostly a workaround for the forceful rebuilding of mpy-cross on
every codebase change: if this file had debug logging enabled (by
patching), the mpy-cross build failed.
Before that, the output was truncated to 32 bits. Only "%x" format is
handled, because a typical use is for addresses.
This refactor actually decreased x86_64 code size by 30 bytes.
This allows the function to raise an exception when unknown keyword args
are passed in. This patch also reduces code size by (in bytes):
bare-arm: -24
minimal x86: -76
unix x64: -56
unix nanbox: -84
stm32: -40
esp8266: -68
cc3200: -48
Furthermore, this patch adds space (" ") to the set of ROM qstrs which
means it doesn't need to be put in RAM if it's ever used.
Return the result of the called function. If an exception happened, return
MP_OBJ_NULL. This allows mp_call_function_*_protected() to be used with
callbacks returning values, etc.
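A sketch of the protected-call pattern based on the existing
nlr_push/nlr_pop idiom (the exact body in py/runtime.c may differ):

    // assumes py/runtime.h and py/nlr.h
    mp_obj_t mp_call_function_1_protected(mp_obj_t fun, mp_obj_t arg) {
        nlr_buf_t nlr;
        if (nlr_push(&nlr) == 0) {
            mp_obj_t ret = mp_call_function_1(fun, arg);
            nlr_pop();
            return ret; // the result of the called function
        } else {
            mp_obj_print_exception(&mp_plat_print, MP_OBJ_FROM_PTR(nlr.ret_val));
            return MP_OBJ_NULL; // an exception happened
        }
    }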
This commit essentially reverts aa9dbb1b03
where this if-condition was added. It seems that even when that commit
was made the code was never reached by any tests, nor reachable by
analysis (see below). The same is true with the code as it currently
stands: no test triggers this if-condition, nor any uasyncio examples.
Analysing the flow of the program also shows that it's not reachable:
==START==
-> to trigger this if condition mp_execute_bytecode() must return
MP_VM_RETURN_YIELD with *sp==MP_OBJ_STOP_ITERATION
-> mp_execute_bytecode() can only return MP_VM_RETURN_YIELD from the
MP_BC_YIELD_VALUE bytecode, which can happen in 2 ways:
-> 1) from a "yield <x>" in bytecode, but <x> must always be a proper
object, never MP_OBJ_STOP_ITERATION; ==END1==
-> 2) via yield from, via mp_resume() which must return
MP_VM_RETURN_YIELD with ret_value==MP_OBJ_STOP_ITERATION, which
can happen in 3 ways:
-> 1) it delegates to mp_obj_gen_resume(); go back to ==START==
-> 2) it returns MP_VM_RETURN_YIELD directly but with a guard that
ret_val!=MP_OBJ_STOP_ITERATION; ==END2==
-> 3) it returns MP_VM_RETURN_YIELD with ret_val set from
mp_call_method_n_kw(), but mp_call_method_n_kw() must return a
proper object, never MP_OBJ_STOP_ITERATION; ==END3==
The above shows there is no way to trigger the if-condition and it can be
removed.
These checks are assumed to be true in all cases where gc_realloc is
called with a valid pointer, so no need to waste code space and time
checking them in a non-debug build.
So long as the input qstr identifier is valid (below the maximum number of
qstrs) the function will always return a valid pointer. This patch
eliminates the "return 0" dead-code.
This patch improves parsing of floating point numbers by converting all the
digits (integer and fractional) together into a number 1 or greater, and
then applying the correct power of 10 at the very end. In particular the
multiple "multiply by 0.1" operations to build a fraction are now combined
together and applied at the same time as the exponent, at the very end.
This helps to retain precision during parsing of floats, and also includes
a check that the number doesn't overflow during the parsing. One benefit
is that a float will have the same value no matter where the decimal point
is located, eg 1.23 == 123e-2.
Before this patch MP_BINARY_OP_IN had two meanings: coming from bytecode it
meant that the args needed to be swapped, but coming from within the
runtime meant that the args were already in the correct order. This led
to some confusion in the code and comments stating how args were reversed.
It also led to 2 bugs: 1) containment for a subclass of a native type
didn't work; 2) the expression "{True} in True" would illegally succeed and
return True. In both of these cases it was because the args to
MP_BINARY_OP_IN ended up being reversed twice.
To fix these things this patch introduces MP_BINARY_OP_CONTAINS which
corresponds exactly to the __contains__ special method, and this is the
operator that built-in types should implement. MP_BINARY_OP_IN is now only
emitted by the compiler and is converted to MP_BINARY_OP_CONTAINS by
swapping the arguments.
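A sketch of the conversion (the real swap happens where the runtime
dispatches the bytecode's binary op):

    // assumes py/runtime.h for mp_binary_op and the op enum
    static mp_obj_t binary_op_in(mp_obj_t lhs, mp_obj_t rhs) {
        // "lhs in rhs" is evaluated as rhs.__contains__(lhs)
        return mp_binary_op(MP_BINARY_OP_CONTAINS, rhs, lhs);
    }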
In mp_binary_op, there is no need to explicitly check for type->getiter
being non-null and raising an exception because this is handled exactly by
mp_getiter(). So just call the latter unconditionally.
This patch introduces a new compile-time config option to disable multiple
inheritance at the Python level: MICROPY_MULTIPLE_INHERITANCE. It is
enabled by default.
Disabling multiple inheritance eliminates a lot of recursion in the call
graph (which is important for some embedded systems), and can be used to
reduce code size for ports that are really constrained (by around 200 bytes
for Thumb2 archs).
With multiple inheritance disabled all tests in the test-suite pass except
those that explicitly test for multiple inheritance.
The function mp_obj_new_str_of_type is a general str object constructor
used in many places in the code to create either a str or bytes object.
When creating a str it should first check if the string data already exists
as an interned qstr, and if so then return the qstr object. This patch
makes the function have such behaviour, which helps to reduce heap usage by
reusing existing interned data where possible.
The old behaviour of mp_obj_new_str_of_type (which didn't check for
existing interned data) is made available through the function
mp_obj_new_str_copy, but should only be used in very special cases.
One consequence of this patch is that the following expression is now True:
'abc' is ' abc '.split()[0]
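A sketch of the new behaviour using the qstr API (the function name here is
illustrative):

    // assumes py/obj.h and py/qstr.h
    mp_obj_t new_str_reusing_qstrs(const char *data, size_t len) {
        qstr q = qstr_find_strn(data, len); // finds only interned data
        if (q != MP_QSTR_NULL) {
            return MP_OBJ_NEW_QSTR(q); // reuse the existing interned data
        }
        // not interned: fall back to a plain copy on the heap
        return mp_obj_new_str_copy(&mp_type_str, (const byte *)data, len);
    }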
This patch simplifies the str creation API to favour the common case of
creating a str object that is not forced to be interned. To force
interning of a new str the new mp_obj_new_str_via_qstr function is added,
and should only be used if warranted.
Apart from simplifying the mp_obj_new_str function (and making it have the
same signature as mp_obj_new_bytes), this patch also reduces code size by a
bit (-16 bytes for bare-arm and roughly -40 bytes on the bare-metal archs).
Rationale:
* Calling Python build tool scripts from makefiles should be done
consistently using `python </path/to/script>`, instead of relying on the
correct she-bang line in the script [1] and the executable bit on the
script being set. This is more platform-independent.
* The name/path of the Python executable should always be used via the
makefile variable `PYTHON` set in `py/mkenv.mk`. This way it can be
easily overwritten by the user with `make PYTHON=/path/to/my/python`.
* The Python executable name should be part of the value of the makefile
variable, which stands for the build tool command (e.g. `MAKE_FROZEN` and
`MPY_TOOL`), not part of the command line where it is used. If a Python
tool is substituted by another (non-python) program, no change to the
Makefiles is necessary, except in `py/mkenv.mk`.
* This also solves #3369 and #1616.
[1] There are systems, where even the assumption that `/usr/bin/env` always
exists, doesn't hold true, for example on Android (where otherwise the unix
port compiles perfectly well).
All the asm macro names that convert a particular architecture to a generic
interface now follow the convention whereby the "destination" (usually a
register) is specified first.
Macros to convert big-endian values to host byte order and vice-versa.
These were defined in an ad hoc way for some ports (e.g. esp8266); this
change allows reuse and provides default implementations, while allowing
ports to override them.
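Plausible default implementations of this kind of macro for a little-endian
host (the actual names and definitions in the tree may differ; big-endian
hosts would define them as the identity):

    #include <stdint.h>

    #define MP_HTOBE16(x) ((uint16_t)((((x) & 0xff) << 8) | (((x) >> 8) & 0xff)))
    #define MP_BE16TOH(x) MP_HTOBE16(x) // a byte swap is its own inverse
    #define MP_HTOBE32(x) ((uint32_t)( \
        (((x) & 0x000000ffU) << 24) | (((x) & 0x0000ff00U) << 8) | \
        (((x) & 0x00ff0000U) >> 8) | (((x) & 0xff000000U) >> 24)))
    #define MP_BE32TOH(x) MP_HTOBE32(x)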
The technique of using alloca is how dotted import names are composed in
mp_import_from and mp_builtin___import__, so use the same technique in the
compiler. This puts less pressure on the heap (only the stack is used if
the qstr already exists, and if it doesn't exist then the standard qstr
block memory is used for the new qstr rather than a separate chunk of the
heap) and reduces overall code size.
This reverts commit 3289b9b7a7.
The commit broke building on MINGW because the filename became
micropython.exe.exe. A proper solution to support more Windows build
environments requires more thought and testing.
Per the comment found here
https://github.com/micropython/micropython-esp32/issues/209#issuecomment-339855157,
this patch adds finaliser code to prevent memory leaks from ussl objects,
which is especially useful when memory for a ussl context is allocated
outside the uPy heap. This patch is in-line with the finaliser code found
in many modsocket implementations for various ports.
This feature is configured via MICROPY_PY_USSL_FINALISER and is disabled by
default because there may be issues using it when the ussl state *is*
allocated on the uPy heap, rather than externally.
This allows support for inplace special methods to be configured
separately, similar to "normal" and reverse special methods. This is
useful because inplace methods are "the most optional" ones: for example,
if inplace methods aren't defined, the operation will be executed using
normal methods instead.
As a caveat, __iadd__ and __isub__ are implemented even if
MICROPY_PY_ALL_INPLACE_SPECIAL_METHODS isn't defined. This is similar
to the state of affairs before the binary operations refactor, and allows
existing tests to run even if MICROPY_PY_ALL_INPLACE_SPECIAL_METHODS
isn't defined.
If MICROPY_PY_ALL_SPECIAL_METHODS is defined, actually define all special
methods (still subject to gating by e.g. MICROPY_PY_REVERSE_SPECIAL_METHODS).
This adds quite a number of qstr's, so should be used sparingly.
Update makeqstrdata.py to sort strings starting with "__" to the beginning
of the qstr list, so they get low qstr id's, guaranteed to fit in 8 bits.
Then use this property to further compact the op_id => qstr mapping arrays.
Per https://docs.python.org/3/library/sys.html#sys.getsizeof:
getsizeof() calls the object’s __sizeof__ method. Previously, "getsizeof"
was used mostly to save on a new qstr, as we don't really support calling
this method on arbitrary objects (so it was used only for reporting).
However, normalize it all now.
Not all compilers/analysers are smart enough to realise that this function
is never called if MICROPY_ERROR_REPORTING is not TERSE, because the logic
in the code uses if statements rather than #if to select whether to call
this function or not (MSC in debug mode is an example of this, but there
are others). So just unconditionally compile this helper function. The
code-base anyway relies on the linker to remove unused functions.
The uos.dupterm() signature and behaviour is updated to reflect the latest
enhancements in the docs. It has minor backwards incompatibility in that
it no longer accepts zero arguments.
The dupterm_rx helper function is moved from esp8266 to extmod and
generalised to support multiple dupterm slots.
A port can specify multiple slots by defining the MICROPY_PY_OS_DUPTERM
config macro to an integer, being the number of slots it wants to have;
0 means to disable the dupterm feature altogether.
The unix and esp8266 ports are updated to work with the new interface and
are otherwise unchanged with respect to functionality.
So that a pointer to it can be passed as a pointer to math_generic_1. This
patch also makes the function work for single and double precision floating
point.
This patch changes how most of the plain math functions are implemented:
there are now two generic math wrapper functions that take a pointer to a
math function (like sin, cos) and perform the necessary conversion to and
from MicroPython types. This helps to reduce code size. The generic
functions can also check for math domain errors in a generic way, by
testing if the result is NaN or infinity combined with finite inputs.
The result is that, with this patch, all math functions now have full
domain error checking (even gamma and lgamma) and code size has decreased
for most ports. Code size changes in bytes for those with the math module
are:
unix x64: -432
unix nanbox: -792
stm32: -88
esp8266: +12
Tests are also added to check domain errors are handled correctly.
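A sketch of the generic one-argument wrapper described (simplified; the
real math_generic_1 also deals with single/double precision builds and
error-message details):

    #include <math.h>
    // assumes py/runtime.h and a float-enabled build (mp_float_t etc)

    typedef mp_float_t (*math_fun_1_t)(mp_float_t);

    static mp_obj_t math_generic_1(math_fun_1_t f, mp_obj_t x_in) {
        mp_float_t x = mp_obj_get_float(x_in);
        mp_float_t res = f(x);
        // generic domain-error check: finite input, NaN/infinite output
        if ((isnan(res) || isinf(res)) && !isnan(x) && !isinf(x)) {
            mp_raise_ValueError("math domain error");
        }
        return mp_obj_new_float(res);
    }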
Printing "(null)" when a NULL string pointer is passed to %s is a debugging
feature and not a feature that's relied upon by the code. So it only needs
to be compiled in when debugging (such as assert) is enabled, and saves
roughly 30 bytes of code when disabled.
This patch also fixes this NULL check to not do the check if the precision
is specified as zero.
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything to use the general runtime support functions
Additional, specific headers (eg py/objlist.h) can be included if needed.
Qstr values fit in 16-bits (and this fact is used elsewhere in the code) so
no need to use more than that for the large lookup tables. The compiler
will anyway give a warning if the qstr values don't fit in 16 bits. Saves
around 80 bytes of code space for Thumb2 archs.
Building mpy-cross: this patch adds .exe to the PROG name when building
executables for host (eg mpy-cross) on Windows. make clean now removes
mpy-cross.exe under Windows.
Building MicroPython: this patch sets MPY_CROSS to mpy-cross.exe or
mpy-cross so they can coexist and use cygwin or WSL without rebuilding
mpy-cross. The dependency in the mpy rule now uses mpy-cross.exe for
Windows and mpy-cross for Linux.
CPython docs explicitly state that the RHS of a set/frozenset binary op
must be a set to prevent user errors. It also preserves commutativity of
the ops, eg: "abc" & set() is a TypeError, and so should be set() & "abc".
This change actually decreases unix (x64) code by 160 bytes; it increases
stm32 by 4 bytes and esp8266 by 28 bytes (but the previous patch already
introduced a much larger saving).
A lot of set's methods (the mutable ones) are not allowed to operate on a
frozenset, and giving frozenset a separate locals dict with only the
methods that it supports allows the logic that verifies if args are a set
or a frozenset to be simplified. Even though the new frozenset locals dict
is relatively large (88 bytes on 32-bit archs) there is a much bigger
saving coming from the removal of a const string for an error message,
along with the removal of some checks for set or frozenset type.
Changes in code size due to this patch are (for ports that changed at all):
unix x64: -56
unix nanbox: -304
stm32: -64
esp8266: -124
cc3200: -40
Apart from the reduced code, frozenset now has better tab-completion
because it only lists the valid methods. And the error message for
accessing an invalid method is now more detailed (it includes the
method name that wasn't found).
This returns a complex number, following CPython behaviour. For ports that
don't have complex numbers enabled this will raise a ValueError which gives
a fail-safe for scripts that were written assuming complex numbers exist.
This adds a new configuration option to print runtime warnings and errors to
stderr. On Unix, CPython prints warnings and unhandled exceptions to stderr,
so the unix port here is configured to use this option.
The unix port already printed unhandled exceptions on the main thread to
stderr. This patch fixes unhandled exceptions on other threads and warnings
(issue #2838) not printing on stderr.
Additionally, a couple of tests needed to be fixed to handle this new
behavior. This is done by also capturing stderr when running tests.
Current users of fixed vstr buffers (building file paths) assume that there
is no overflow and do not check for overflow after building the vstr. This
has the potential to lead to NULL pointer dereferences
(when vstr_null_terminated_str returns NULL because it can't allocate RAM
for the terminating byte) and stat'ing and loading invalid path names (due
to the path being truncated). The safest and simplest thing to do in these
cases is just raise an exception if a write goes beyond the end of a fixed
vstr buffer, which is what this patch does. It also simplifies the vstr
code.
The vstr argument to the calls to vstr_add_len are dynamically allocated
(ie fixed_buf=false) and so vstr_add_len will never return NULL. So
there's no need to check for it. Any out-of-memory errors are raised by
the call to m_renew in vstr_ensure_extra.
The aim of this patch is to rewrite the functions that create exception
instances (mp_obj_exception_make_new and mp_obj_new_exception_msg_varg) so
that they do not call any functions that may raise an exception. Otherwise
it's possible to create infinite recursion with an exception being raised
while trying to create an exception object.
The main things that are done to accomplish this are:
1. Change mp_obj_new_exception_msg_varg to just format the string, then
call mp_obj_exception_make_new to actually create the exception object.
2. In mp_obj_exception_make_new and mp_obj_new_exception_msg_varg try to
allocate all memory first using functions that don't raise exceptions.
If any of the memory allocations fail (return NULL) then degrade
gracefully by trying other options for memory allocation, eg using the
emergency exception buffer.
3. Use a custom printer backend to conservatively format strings: if it
can't allocate memory then it just truncates the string.
As part of this rewrite, raising an exception without a message, like
KeyError(123), will now use the emergency buffer to store the arg and
traceback data if there is no heap memory available.
Memory use with this patch is unchanged. Code size is increased by:
bare-arm: +136
minimal x86: +124
unix x64: +72
unix nanbox: +96
stm32: +88
esp8266: +92
cc3200: +80
This allows user classes to implement the __abs__ special method, and saves
code size (104 bytes for x86_64), even though during the refactor an issue
was fixed and a few optimizations were made:
* abs() of the minimum (negative) small int value is calculated properly.
* objint_longlong and objint_mpz avoid allocating a new object if the
argument is already non-negative.
If, for class X, X.__add__(Y) doesn't exist (or returns NotImplemented),
try Y.__radd__(X) instead.
This patch could be simpler, but requires undoing the operand swap and
operation switch to get a non-confusing error message in case __radd__
doesn't exist.
This is to allow reverse ops to be placed immediately after normal ops, so
they can be tested as one range (an optimization for the introduction of
reverse ops in the next patch).
Originally, they were grouped in blocks of 5, to make it easier e.g.
to assess the numeric code of each. But now it makes more sense to
group them by semantics/properties, and then still split them into chunks,
which usually leads to chunks of ~6 ops.
It starts a dichotomy of mp_binary_op_t values which can't appear in the
bytecode. Another reason to move it is to make the values of OP_* and
OP_INPLACE_* nicely adjacent. This will also be needed for OP_REVERSE_*,
to be introduced soon.
This patch adds a function utf8_check() to check for a valid UTF-8 encoded
string, and calls it when constructing a str from raw bytes. The feature
is selectable at compile time via MICROPY_PY_BUILTINS_STR_UNICODE_CHECK and
is enabled if unicode is enabled. It costs about 110 bytes on Thumb-2, 150
bytes on Xtensa and 170 bytes on x86-64.
IEEE floating point is specified such that a comparison of NaN with itself
returns false, and Python respects these semantics. This patch makes uPy
also have these semantics. The fix has a minor impact on the speed of the
object-equality fast-path, but that seems to be unavoidable and it's much
more important to have correct behaviour (especially in this case where
the wrong answer for nan==nan is silently returned).
These are now returned as "operation not supported" instead of raising
TypeError. In particular, this fixes equality for float vs incompatible
types, which now properly results in False instead of exception. This
also paves the road to support reverse operation (e.g. __radd__) with
float objects.
This is achieved by introducing mp_obj_get_float_maybe(), similar to
existing mp_obj_get_int_maybe().
Prior to this patch, the size of the buffer given to pack_into() was checked
for being too small by using the count of the arguments, not their actual
size. For example, a format spec of '4I' would only check that there was 4
bytes available, not 16; and 'I' would check for 1 byte, not 4.
The pack() function is ok because its buffer is created to be exactly the
correct size.
The fix in this patch calculates the total size of the format spec at the
start of pack_into() and verifies that the buffer is large enough. This
adds some computational overhead, to iterate through the whole format spec.
The alternative is to check during the packing, but that requires extra
code to handle alignment, and the check is anyway not needed for pack().
So to maintain minimal code size the check is done using struct_calcsize.
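A hedged sketch of that up-front check (calcsize_of_format and
raise_value_error are hypothetical stand-ins):

    #include <stddef.h>

    extern size_t calcsize_of_format(const char *fmt); // hypothetical
    extern void raise_value_error(const char *msg);    // hypothetical

    static void check_pack_into_fits(const char *fmt, size_t offset,
                                     size_t buf_len) {
        size_t total_size = calcsize_of_format(fmt); // total format size
        if (offset + total_size > buf_len) {
            raise_value_error("buffer too small"); // before any packing
        }
    }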
Prior to this patch, the size of the buffer given to unpack/unpack_from was
checked for being too small by using the count of the arguments, not their
actual size. For example, a format spec of '4I' would only check that
there was 4 bytes available, not 16; and 'I' would check for 1 byte, not 4.
This bug is fixed in this patch by calculating the total size of the format
spec at the start of the unpacking function. This function anyway needs to
calculate the number of items at the start, so calculating the total size
can be done at the same time.
This patch makes a repeat counter behave the same as repeating the
typecode, when there are not enough args. For example:
struct.pack('2I', 1) now behaves the same as struct.pack('II', 1).
NotImplemented means "try other fallbacks (like calling __rop__
instead of __op__) and if nothing works, raise TypeError". As
MicroPython doesn't implement any fallbacks, signal to raise
TypeError right away.
The unary-op/binary-op enums are already defined, and there are no
arithmetic tricks used with these types, so it makes sense to use the
correct enum type for arguments that take these values. It also reduces
code size quite a bit for nan-boxing builds.
Otherwise, it will silently get an incorrect result on other value types,
including the CPython tuple form like "foo.png".endswith(("png", "jpg"))
(which MicroPython doesn't support, for unbloatedness).
For SEEK_SET, offset should be treated as unsigned, to allow full-width
stream sizes (e.g. 32-bit instead of 31-bit). This is now fully documented
in stream.h. Also, seek symbolic constants are added.
Too big positive, or too big negative offset values could lead to overflow
and address space wraparound and thus access to unrelated areas of memory
(a security issue).
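A sketch of the sign handling (checked_signed_add is a hypothetical
overflow-checked helper; the constants follow the new MP_SEEK_* names):

    #include <stdint.h>

    enum { MP_SEEK_SET = 0, MP_SEEK_CUR = 1, MP_SEEK_END = 2 };

    extern uintptr_t checked_signed_add(uintptr_t cur, intptr_t off); // hypothetical

    static uintptr_t apply_seek(uintptr_t cur, intptr_t offset, int whence) {
        if (whence == MP_SEEK_SET) {
            // reinterpret as unsigned so a 32-bit stream can use its
            // full width
            return (uintptr_t)offset;
        }
        // MP_SEEK_CUR / MP_SEEK_END keep signed semantics, range-checked
        return checked_signed_add(cur, offset);
    }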
The value of 0 can't be used because otherwise mp_binary_get_size will let
a null byte through as the type code (interpreted as bytearray). This can
lead to invalid type-specifier strings being let through without an error
in the struct module, and even buffer overruns.
- Changed: ValueError, TypeError, NotImplementedError
- OSError invocations unchanged, because the corresponding utility
function takes ints, not strings like the long form invocation.
- OverflowError, IndexError and RuntimeError etc. not changed for now
until we decide whether to add new utility functions.
Before this patch the mperrno.h file could be included and would silently
succeed with incorrect config settings, because mpconfig.h was not yet
included.
If constants (eg mp_const_none_obj) are placed in very high memory
locations that require 64-bits for the pointer then the assembler must be
able to emit instructions to move such pointers to one of the top 8
registers (ie r8-r15).
It's not used anywhere else in the VM loop, and clashes with (is shadowed
by) the n_state variable that's redeclared towards the end of the
mp_execute_bytecode function. Code size is unchanged.
The code conventions suggest using header guards, but do not define what
those should look like, instead pointing to existing files. However, not
all existing files follow the same scheme, sometimes omitting header guards
altogether, sometimes using non-standard names, making it easy to
accidentally pick a "wrong" example.
This commit ensures that all header files of the MicroPython project (that
were not simply copied from somewhere else) follow the same pattern, that
was already present in the majority of files, especially in the py folder.
The rules are as follows.
Naming convention:
* start with the words MICROPY_INCLUDED
* contain the full path to the file
* replace special characters with _
In addition, there are no empty lines before #ifndef or between #ifndef
and #define, and there is one empty line before #endif. #endif is followed
by a comment containing the name of the guard macro.
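For example, a hypothetical file py/example.h would be guarded as:

    #ifndef MICROPY_INCLUDED_PY_EXAMPLE_H
    #define MICROPY_INCLUDED_PY_EXAMPLE_H

    // ... declarations ...

    #endif // MICROPY_INCLUDED_PY_EXAMPLE_H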
py/grammar.h cannot use header guards by design, since it has to be
included multiple times in a single C file. Several other files also do not
need header guards as they are only used internally and guaranteed to be
included only once:
* MICROPY_MPHALPORT_H
* mpconfigboard.h
* mpconfigport.h
* mpthreadport.h
* pin_defs_*.h
* qstrdefs*.h
Prior to this patch there were 2 paths for creating the namedtuple, one for
when no keyword args were passed, and one when there were keyword args.
And alloca was used in the keyword-arg path to temporarily create the array
of elements for the namedtuple, which would then be copied to a
heap-allocated object (the namedtuple itself).
This patch simplifies the code by combining the no-keyword and keyword
paths, and removing the need for the alloca by constructing the namedtuple
on the heap before populating it.
Heap usage is unchanged, stack usage is reduced, use of alloca is removed,
and code size is not increased and is actually reduced by between 20-30
bytes for most ports.
The while-loop that calls chop_component will guarantee that level==-1 at
the end of the loop. Hence the code following it is unnecessary.
The check for p==this_name will catch imports that are beyond the
top-level, and also covers the case of new_mod_q==MP_QSTR_ (equivalent to
new_mod_l==0) so that check is removed.
There is also a new check at the start for level>=0 to guard against
__import__ being called with bad level values.
Previous to this patch, a label with value "0" was used to indicate an
invalid label, but that meant a wasted word (at slot 0) in the array of
label offsets. This patch adjusts the label indices so the first one
starts at 0, and the maximum value indicates an invalid label.
This patch fixes a bug whereby the Python stack was not correctly reset if
there was a break/continue statement in the else block of an optimised
for-range loop.
For example, in the following code the "j" variable from the inner for loop
was not being popped off the Python stack:
for i in range(4):
    for j in range(4):
        pass
    else:
        continue
This is now fixed with this patch.
In CPython 3.4 this raises a SyntaxError. In CPython 3.5+ having a
positional after * is allowed but uPy has the wrong semantics and passes
the arguments in the incorrect order. To prevent incorrect use of a
function going unnoticed it is important to raise the SyntaxError in uPy,
until the behaviour is fixed to follow CPython 3.5+.
This patch fixes 2 things when printing a floating-point number that
requires rounding up of the mantissa:
- retain the correct precision; eg 0.99 becomes 1.0, not 1.00
- if the exponent goes from -1 to 0 then render it as +0, not -0
Taking the address of a local variable leads to increased stack usage, so
the mp_decode_uint_skip() function is added to reduce the need for taking
addresses. The changes in this patch reduce stack usage of a Python call
by 8 bytes on ARM Thumb, by 16 bytes on non-windowing Xtensa archs, and by
16 bytes on x86-64. Code size is also slightly reduced on most archs by
around 32 bytes.
The implementation is taken from stmhal/input.c, with code added to handle
ctrl-C. This built-in is controlled by MICROPY_PY_BUILTINS_INPUT and is
disabled by default. It uses readline() to capture input but this can be
overridden by defining the mp_hal_readline macro.
For make v3.81, using "make -B" can set $? to empty and in this case the
auto-qstr generation needs to pass all args (ie $^) to cpp. The previous
fix for this (which was removed in 23a693ec2d)
used if statements in the shell command, which gave very long lines that
didn't work on certain systems (eg cygwin).
The fix in this patch is to use a $(if ...) expression, which will evaluate
to $? (only newer prerequisites) if it's non empty, otherwise it will use
$^ (all prerequisites).
Previous to this patch the mp_emit_bc_adjust_stack_size function would
adjust the current stack size but would not increase the maximum stack size
if the current size went above it. This meant that certain Python code
(eg a try-finally block with no statements inside it) would not have enough
Python stack allocated to it.
This patch fixes the problem by always checking if the current stack size
goes above the maximum, and adjusting the latter if it does.
This patch fixes a regression introduced by
71a3d6ec3b
Previous to this patch the n_state variable was referring to that computed
at the very start of the mp_execute_bytecode function. This patch fixes it
so that n_state is recomputed when the code_state changes.
This arose when working on a build with PY_IO enabled (for PY_UJSON
support) but PY_SYS_STDFILES disabled (no filesystem). There are multiple
references to mp_sys_stdout_obj that should only be enabled if both PY_IO
and PY_SYS_STDFILES are enabled.
This ensures that mpy-cross is automatically built (and is up-to-date) for
ports that use frozen bytecode. It also makes sure that .mpy files are
re-built if mpy-cross is changed.
Now consistently uses the EOL processing ("\r" and "\r\n" convert to "\n")
and EOF processing (ensure "\n" before EOF) provided by next_char().
In particular the lexer can now correctly handle input that starts with CR.
Prior to this patch only 'q' and 'Q' type arrays could store big-int
values. With this patch any big int that is stored to an array is handled
by the big-int implementation, regardless of the typecode of the array.
This allows arrays to work with all type sizes on all architectures.
The semantics of this function are close to the
pkg_resources.resource_stream() function from setuptools, which
is the canonical way to access non-source files belonging to a package
(resources), regardless of what medium the package uses (e.g. individual
source files vs zip archive). In the case of MicroPython, this function
allows access to resources which are frozen into the executable, besides
accessing resources in the file system.
This is the initial stage of the implementation, which doesn't actually
implement the "package" part of the semantics, and just accesses frozen
resources from the "root", or filesystem resources from the current dir.
The standard preprocessor definition to differentiate debug and non-debug
builds is NDEBUG, not DEBUG, so don't rely on the latter:
- just delete the use of it in objint_longlong.c as it has been stale code
for years anyway (since commit [c4029e5]): SUFFIX isn't used anywhere.
- replace DEBUG with MICROPY_DEBUG_NLR in nlr.h: it is rarely used anymore
so can be off by default
This patch allows the following code to run without allocating on the heap:
super().foo(...)
Before this patch such a call would allocate a super object on the heap and
then load the foo method and call it right away. The super object is only
needed to perform the lookup of the method and not needed after that. This
patch makes an optimisation to allocate the super object on the C stack and
discard it right after use.
Changes in code size due to this patch are:
bare-arm: +128
minimal: +232
unix x64: +416
unix nanbox: +364
stmhal: +184
esp8266: +340
cc3200: +128
This patch refactors the handling of the special super() call within the
compiler. It removes the need for a global (to the compiler) state variable
which keeps track of whether the subject of an expression is super. The
handling of super() is now done entirely within one function, which makes
the compiler a bit cleaner and makes it easy to add more optimisations to
super calls.
Changes to the code size are:
bare-arm: +12
minimal: +0
unix x64: +48
unix nanbox: -16
stmhal: +4
cc3200: +0
esp8266: -56
With this optimisation enabled the compiler optimises the if-else
expression within a return statement. The optimisation reduces bytecode
size by 2 bytes for each use of such a return-if-else statement. Since
such a statement is not often used, and costs bytes for the code, the
feature is disabled by default.
For example the following code:
def f(x):
return 1 if x else 2
compiles to this bytecode with the optimisation disabled (left column is
bytecode offset in bytes):
00 LOAD_FAST 0
01 POP_JUMP_IF_FALSE 8
04 LOAD_CONST_SMALL_INT 1
05 JUMP 9
08 LOAD_CONST_SMALL_INT 2
09 RETURN_VALUE
and to this bytecode with the optimisation enabled:
00 LOAD_FAST 0
01 POP_JUMP_IF_FALSE 6
04 LOAD_CONST_SMALL_INT 1
05 RETURN_VALUE
06 LOAD_CONST_SMALL_INT 2
07 RETURN_VALUE
So the JUMP to RETURN_VALUE is optimised and replaced by RETURN_VALUE,
saving 2 bytes and making the code a bit faster.
Otherwise the type of the parse node and its kind have to be re-extracted
multiple times. This optimisation reduces code size by a bit (16 bytes on
bare-arm).
It controls the character that's used to (asynchronously) raise a
KeyboardInterrupt exception. Passing "-1" allows the interception of the
interrupt character to be disabled (as long as a port allows such
behaviour).
If a finaliser raises an exception then it must not propagate through the
GC sweep function. This patch protects against such a thing by running
finaliser code via the mp_call_function_1_protected call.
This patch also adds scheduler lock/unlock calls around the finaliser
execution to further protect against any possible reentrancy issues: the
memory manager is already locked when doing a collection, but we also don't
want to allow any scheduled code to run, KeyboardInterrupts to interrupt the
code, nor threads to switch.
The common cases for inheritance are 0 or 1 parent types, for both built-in
types (eg built-in exceptions) as well as user defined types. So it makes
sense to optimise the case of 1 parent type by storing just the type and
not a tuple of 1 value (that value being the single parent type).
This patch makes such an optimisation. Even though there is a bit more
code to handle the two cases (either a single type or a tuple with 2 or
more values) it helps reduce overall code size because it eliminates the
need to create a static tuple to hold single parents (eg for the built-in
exceptions). It also helps reduce RAM usage for user defined types that
only derive from a single parent.
Changes in code size (in bytes) due to this patch:
bare-arm: -16
minimal (x86): -176
unix (x86-64): -320
unix nanbox: -384
stmhal: -64
cc3200: -32
esp8266: -108
This buffer is used to allocate objects temporarily, and such objects
require that their underlying memory be correctly aligned for their data
type. Aligning for mp_obj_t should be sufficient for emergency exceptions,
but in general the memory buffer should be aligned to the maximum alignment of
the machine (eg on a 32-bit machine with mp_obj_t being 4 bytes, a double
may not be correctly aligned).
This patch fixes a bug for certain nan-boxing builds, where mp_obj_t is 8
bytes and must be aligned to 8 bytes (even though the machine is 32 bit).
Hashing of float and complex numbers that are exact (real) integers should
return the same integer hash value as hashing the corresponding integer
value. Eg hash(1), hash(1.0) and hash(1+0j) should all be the same (this
is how Python is specified: if x==y then hash(x)==hash(y)).
This patch implements the simplest way of doing float/complex hashing by
just converting the value to int and returning that value.
Split this setting from MICROPY_CPYTHON_COMPAT. The idea is to be able to
keep MICROPY_CPYTHON_COMPAT disabled, but still pass more of the regression
testsuite. In particular, this fixes the last failing test in basics/ for
the Zephyr port.
The first memmove now copies less bytes in some cases (because len_adj <=
slice_len), and the memcpy is replaced with memmove to support the
possibility that dest and slice regions are overlapping.
This follows the pattern of how all other headers are now included, and
makes it explicit where the header file comes from. This patch also
removes -I options from Makefile's that specify the mp-readline/timeutils/
netutils directories, which are no longer needed.
Build happens in 3 stages:
1. Zephyr config header and make vars are generated from prj.conf.
2. libmicropython is built using them.
3. Zephyr is built and final link happens.
This patch changes mp_uint_t to size_t for the len argument of the
following public facing C functions:
mp_obj_tuple_get
mp_obj_list_get
mp_obj_get_array
These functions take a pointer to the len argument (to be filled in by the
function) and callers of these functions should update their code so the
type of len is changed to size_t. For ports that don't use nan-boxing
there should be no change in generated code because the size of the type
remains the same (word sized), and in a lot of cases there won't even be a
compiler warning if the type remains as mp_uint_t.
The reason for this change is to standardise on the use of size_t for
variables that count memory (or memory related) sizes/lengths. It helps
builds that use nan-boxing.
With this patch all illegal assignments are reported as "can't assign to
expression". Before the patch there were special cases for a literal on
the LHS, and for augmented assignments (eg +=), but it seems a waste of
bytes (and there are lots of bytes used in error messages) to spend on
distinguishing such errors which a user will rarely encounter.
By removing the 'E' code from the operator token encoding mini-language the
tokenising can be simplified. The 'E' code was only used for the !=
operator which is now handled as a special case; the optimisations for the
general case more than make up for the addition of this single, special
case. Furthermore, the . and ... operators can be handled in the same way
as != which reduces the code size a little further.
This simplification also removes a "goto".
Changes in code size for this patch are (measured in bytes):
bare-arm: -48
minimal x86: -64
unix x86-64: -112
unix nanbox: -64
stmhal: -48
cc3200: -48
esp8266: -76
The self variable may be closed-over in the function, and in that case the
call to super() should load the contents of the closure cell using
LOAD_DEREF (before this patch it would just load the cell directly).
Previous to this patch, if the result of the round function overflowed a
small int, or was inf or nan, then a garbage value was returned. With
this patch the correct big-int is returned if necessary and exceptions are
raised for inf or nan.
The C nearbyint function has exactly the semantics that Python's round()
requires, whereas C's round() requires extra steps to handle rounding of
numbers half way between integers. So using nearbyint reduces code size
and potentially eliminates any source of errors in the handling of half-way
numbers.
Also, bare-metal implementations of nearbyint can be more efficient than
round, so further code size is saved (and efficiency improved).
nearbyint is provided in the C99 standard so it should be available on all
supported platforms.
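A quick check of the semantics (ties round to even in the default rounding
mode, which is what Python's round() requires):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        printf("%g %g %g\n", nearbyint(0.5), nearbyint(1.5), nearbyint(2.5));
        // prints: 0 2 2
        return 0;
    }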
Previous to this patch, if the result of the trunc/ceil/floor functions
overflowed a small int, or was inf or nan, then a garbage value was
returned. With this patch the correct big-int is returned if necessary,
and exceptions are raised for inf or nan.
It improves readability of the code and reduces the chance of making a
mistake.
This patch also fixes a bug with nan-boxing builds by rounding up the
calculation of the new NSLOTS variable, giving the correct number of slots
(being 4) even if mp_obj_t is larger than the native machine size.
Now, passing a keyword argument that is not expected will correctly report
that fact. If normal or detailed error messages are enabled then the name
of the unexpected argument will be reported.
This patch decreases the code size of bare-arm and stmhal by 12 bytes, and
cc3200 by 8 bytes. Other ports (minimal, unix, esp8266) remain the same in
code size. For terse error message configuration this is because the new
message is shorter than the old one. For normal (and detailed) error
message configuration this is because the new error message already exists
in py/objnamedtuple.c so there's no extra space in ROM needed for the
string.
The scheduler being locked generally means we are running a scheduled
function, and switching to another thread violates that, so don't switch in
such a case (even though we technically could).
And if we are running a scheduled function then we want to finish it ASAP,
so we shouldn't switch to another thread.
Furthermore, ports with threading enabled will lock the scheduler during a
hard IRQ, and this patch to the VM will make sure that threads are not
switched during a hard IRQ (which would crash the VM).
Instead of always reporting that some object cannot be implicitly converted
to a 'str', even when it is a 'bytes' object, adjust the logic so that a
failed conversion from str to bytes is reported as exactly that.
This will still report a bad implicit conversion from e.g. 'int to bytes'
as 'int to str', but it no longer produces the confusing
'can't convert 'str' object to str implicitly' for calls like
b'somestring'.count('a').
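For example (the exact message text depends on the configured
error-reporting level):

    try:
        b"somestring".count("a")   # str argument where bytes is expected
    except TypeError as e:
        print(e)   # now reported as a str -> bytes conversion problem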
Instead of caching data that is constant (code_info, const_table and
n_state), store just a pointer to the underlying function object from which
this data can be derived.
This helps reduce stack usage for the case when the mp_code_state_t
structure is stored on the stack, as well as heap usage when it's stored
on the heap.
The downside is that the VM becomes a little more complex because it now
needs to derive the data from the underlying function object. But this
doesn't impact the performance by much (if at all) because most of the
decoding of data is done outside the main opcode loop. Measurements using
pystone show that little to no performance is lost.
This patch also fixes a nasty bug whereby the bytecode can be reclaimed by
the GC during execution. With this patch there is always a pointer to the
function object held by the VM during execution, since it's stored in the
mp_code_state_t structure.
When make is passed "-B" it seems that everything is considered out-of-date
and so $? expands to all prerequisites. Thus there is no need for a
special check to see if $? is empty.
Some stack is allocated to format ints, and when the int implementation uses
long-long there should be additional stack allocated compared with the other
cases. This patch uses the existing "fmt_int_t" type to determine the
amount of stack to allocate.
This patch refactors the error handling in the lexer, to simplify it (ie
reduce code size).
A long time ago, when the lexer/parser/compiler were first written, the
lexer and parser were designed so they didn't use exceptions (ie nlr) to
report errors but rather returned an error code. Over time that has
gradually changed, the parser in particular has more and more ways of
raising exceptions. Also, the lexer never really handled all errors without
raising, eg there were some memory errors which could raise an exception
(and in these rare cases one would get a fatal nlr-not-handled fault).
This patch accepts the fact that the lexer can raise exceptions in some
cases and allows it to raise exceptions to handle all its errors, which are
for the most part just out-of-memory errors during construction of the
lexer. This makes the lexer a bit simpler, and also the persistent code
stuff is simplified.
What this means for users of the lexer is that calls to it must be wrapped
in a nlr handler. But all uses of the lexer already have such an nlr
handler for the parser (and compiler) so that doesn't put any extra burden
on the callers.
INT_MAX, used previously, is indeed the maximum value for int, whereas on
LP64 platforms long is used for mp_int_t. Using MP_SMALL_INT_MAX is the
correct way to do it anyway.
Each thread needs to have its own private references to its current
locals/globals dicts, otherwise functions running within different
contexts (eg imported from different files) can behave very strangely.
There were 2 bugs, now fixed by this patch:
- after deleting an element the len of the dict did not decrease by 1
- after deleting an element searching through the dict could lead to
a seg fault due to there being an MP_OBJ_SENTINEL in the ordered array
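A sketch of the fixed behaviour (the module name is an assumption;
MicroPython provides OrderedDict via collections/ucollections, depending on
the port):

    from collections import OrderedDict

    d = OrderedDict([("a", 1), ("b", 2)])
    del d["a"]
    print(len(d))   # now 1; previously the len did not decrease
    print(d["b"])   # searching no longer trips over the deleted slot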
In this case, raise an exception without a message.
This saves a few bytes of code compared to the currently used
mp_raise_msg(..., "") pattern. (Actual savings depend on the function code
alignment used by a particular platform.)
The parser was originally written to work without raising any exceptions
and instead return an error value to the caller. But it's now required
that a call to the parser be wrapped in an nlr handler, so we may as well
make use of that fact and simplify the parser so that it doesn't need to
keep track of any memory errors that it had. The parser anyway explicitly
raises an exception at the end if there was an error.
This patch simplifies the parser by letting the underlying memory
allocation functions raise an exception if they fail to allocate any
memory. And if there is an error parsing the "<id> = const(<val>)" pattern
then that also raises an exception right away instead of trying to recover
gracefully and then raise.
Previous to this patch any non-interned str/bytes objects would create a
special parse node that held a copy of the str/bytes data. Then in the
compiler this data would be turned into a str/bytes object. This actually
led to 2 copies of the data, one in the parse node and one in the object.
The parse node's copy of the data would be freed at the end of the compile
stage but nevertheless it meant that the peak memory usage of the
parse/compile stage was higher than it needed to be (by an amount equal to
the number of bytes in all the non-interned str/bytes objects).
This patch changes the behaviour so that str/bytes objects are created
directly in the parser and the object stored in a const-object parse node
(which already exists for bignum, float and complex const objects). This
reduces peak RAM usage of the parse/compile stage, simplifies the parser
and compiler, and reduces code size by about 170 bytes on Thumb2 archs,
and by about 300 bytes on Xtensa archs.
This patch allows uPy consts to be bignums, eg:
    X = const(1 << 100)
The infrastructure for consts to be a bignum (rather than restricted to
small integers) has been in place for a while, ever since constant folding
was upgraded to allow bignums. It just required a small change (in this
patch) to enable it.
It's configured by MICROPY_PY_UERRNO_ERRORCODE and enabled by default
(since that's the behaviour before this patch).
Without this dict the lookup of errno codes to strings must use the
uerrno module itself.
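For example, on a build with this option enabled:

    import uerrno

    # map an errno value back to its symbolic name via the dict,
    # instead of scanning the module's attributes
    print(uerrno.errorcode[uerrno.EAGAIN])   # prints 'EAGAIN'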
It's much more efficient in RAM and code size to do implicit literal string
concatenation in the lexer, as opposed to the compiler.
RAM usage is reduced because the concatenation can be done right away in the
tokeniser by just accumulating the string/bytes literals into the lexer's
vstr. Prior to this patch adjacent strings/bytes would create a parse tree
(one node per string/bytes) and then in the compiler a whole new chunk of
memory was allocated to store the concatenated string, which used more than
double the memory compared to just accumulating in the lexer.
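For example, adjacent literals such as the following are now joined during
tokenising:

    s = "abc" "def" "ghi"   # accumulated into one "abcdefghi" in the lexer
    b = b"\x00" b"\x01"     # adjacent bytes literals are joined the same way
    print(s, b)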
This patch also significantly reduces code size:
    bare-arm: -204
    minimal:  -204
    unix x64: -328
    stmhal:   -208
    esp8266:  -284
    cc3200:   -224
Previous to this patch there was an explicit check for errors with line
continuation (where backslash was not immediately followed by a newline).
But this check is not necessary: if there is an error then the remaining
logic of the tokeniser will reject the backslash and correctly produce a
syntax error.
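For example, a backslash not immediately followed by a newline is still
rejected (a hypothetical check; compile() requires
MICROPY_PY_BUILTINS_COMPILE):

    try:
        compile("x = 1 \\ + 2", "<test>", "exec")
    except SyntaxError as e:
        print(e)   # the stray backslash produces an ordinary syntax error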
Since the table of keywords is sorted, we can use strcmp to do the search
and stop part way through the search if the comparison is less-than.
Because all tokens that are names are subject to this search, this
optimisation will improve the overall speed of the lexer when processing
a script.
The change also decreases code size by a little bit because we now use
strcmp instead of the custom str_strn_equal function.
Keywords only need to be searched for if the token is an MP_TOKEN_NAME, so
we can move the search to the part of the code that does the tokenising for
MP_TOKEN_NAME.