There are two checks that are always false, so they can be converted to
(negated) assertions to save code space and execution time. They are:
1. The check of the str parameter: it is required to be non-NULL, as per
the original comment that it must have enough space in it as calculated
by mp_int_format_size. And for all uses of this function str is indeed
non-NULL.
2. The check of the base parameter, which is already required to be between
2 and 16 (inclusive) via the assertion in mp_int_format_size.
The motivation behind this patch is to remove unreachable code in mpn_div.
This unreachable code was added some time ago in
9a21d2e070, when a loop in mpn_div was copied
and adjusted to work when mpz_dig_t was exactly half of the size of
mpz_dbl_dig_t (a common case). The loop was copied correctly but it wasn't
noticed at the time that the final part of the calculation of num-quo*den
could be optimised, and hence unreachable code was left for a case that
never occurred.
The observation for the optimisation is that the initial value of quo in
mpn_div is either exact or too large (never too small), and therefore the
subtraction of quo*den from num may subtract exactly enough or too much
(but never too little). Using this observation the part of the algorithm
that handles the borrow value can be simplified, and most importantly this
eliminates the unreachable code.
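To illustrate the observation, here is a hedged sketch of a single
schoolbook division step (not the actual mpn_div code):
#include <stdint.h>
// The initial estimate quo is exact or too large, never too small, so
// the only fix-up ever needed is to decrement quo and conceptually add
// den back; no other borrow handling is required.
static uint32_t div_step(uint64_t num, uint32_t den, uint32_t quo) {
    uint64_t prod = (uint64_t)quo * den;
    while (prod > num) { // subtracting quo*den would borrow: quo too big
        quo -= 1;        // bring the estimate down by one...
        prod -= den;     // ...which reduces the product by one den
    }
    return quo;          // remainder num - prod is now >= 0
}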
The new code has been tested with DIG_SIZE=3 and DIG_SIZE=4 by dividing all
possible combinations of non-negative integers with between 0 and 3
(inclusive) mpz digits.
Empty __VA_ARGS__ are not allowed in the C preprocessor so adjust the rule
arg offset calculation to not use them. Also, some compilers (eg MSVC)
require an extra layer of macro expansion.
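As an illustration of both points (a hedged sketch; these macro names are
hypothetical, not the actual parser macros):
// The argument-counting idiom requires at least one argument so that
// __VA_ARGS__ is never empty, and the extra EXPAND layer is needed
// because MSVC otherwise passes __VA_ARGS__ to the inner macro as a
// single token.
#define EXPAND(x) x
#define ARG_N(a1, a2, a3, N, ...) N
#define COUNT_ARGS(...) EXPAND(ARG_N(__VA_ARGS__, 3, 2, 1, 0))
// COUNT_ARGS(x, y) expands to 2 on both gcc and MSVC.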
This is the sixth and final patch in a series of patches to the parser that
aims to reduce code size by compressing the data corresponding to the rules
of the grammar.
Prior to this set of patches the rules were stored as rule_t structs with
rule_id, act and arg members. And then there was a big table of pointers
which allowed looking up the address of a rule_t struct given the id of that
rule.
The changes that have been made are:
- Breaking up the rule_t struct into individual components, with each
component in a separate array.
- Removing the rule_id part of the struct because it's not needed.
- Putting all the rule arg data in one big array.
- Changing the table of pointers to rules into a table of offsets within
the array of rule arg data.
The last point is what is done in this patch, and it brings about the
biggest decrease in code size, because an array of pointers is now an
array of bytes.
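A hedged sketch of the resulting lookup (all names here are hypothetical):
#include <stddef.h>
#include <stdint.h>
// All rule arg data is concatenated into one byte array, and the
// per-rule table stores only the low 8 bits of each offset; the 9th bit
// is recovered from a range check on the rule id, since offsets grow
// monotonically with the rule id.
#define FIRST_RULE_WITH_OFFSET_ABOVE_255 (100) // assumed boundary
static const uint8_t rule_arg_combined_table[512] = { 0 }; // all arg data
static const uint8_t rule_arg_offset_table[128] = { 0 };   // low 8 bits
static const uint8_t *get_rule_args(size_t rule_id) {
    size_t off = rule_arg_offset_table[rule_id];
    if (rule_id >= FIRST_RULE_WITH_OFFSET_ABOVE_255) {
        off |= 0x100; // the 9th offset bit, computed with an if statement
    }
    return &rule_arg_combined_table[off];
}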
Code size changes for the six patches combined are:
bare-arm: -644
minimal x86: -1856
unix x64: -5408
unix nanbox: -2080
stm32: -720
esp8266: -812
cc3200: -712
For the change in parser performance: it was measured on pyboard that these
six patches combined gave an increase in script parse time of about 0.4%.
This is due to the slightly more complicated way of looking up the data for
a rule (since the 9th bit of the offset into the rule arg data table is
calculated with an if statement). This is an acceptable increase in parse
time considering that parsing is only done once per script (if compiled on
the target).
Instead of each rule being stored in ROM as a struct with rule_id, act and
arg, the act and arg parts are now in separate arrays and the rule_id part
is removed because it's not needed. This reduces code size by roughly one
byte per grammar rule, around 150 bytes in total.
The rule name is only used for debugging, and this patch makes things a bit
cleaner by completely separating out the rule name from the rest of the
rule data.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers. This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c, along with a helper macro in nlr.h. This eliminates duplicated
code.
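The generic push helper looks essentially like this (a sketch modelled on
the description above; the real code lives in py/nlr.c):
#include "py/mpstate.h"
// Each machine-specific nlr_push() saves its registers into the buffer
// and then tail-calls this common helper, which links the buffer into
// the per-thread list of NLR buffers.
unsigned int nlr_push_tail(nlr_buf_t *nlr) {
    nlr_buf_t **top = &MP_STATE_THREAD(nlr_top);
    nlr->prev = *top; // chain onto the current innermost handler
    *top = nlr;       // this buffer is now the top of the NLR list
    return 0;         // 0 means "normal return", as with setjmp()
}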
If MICROPY_NLR_SETJMP is not enabled and the machine is auto-detected then
nlr.h now defines some convenience macros for the individual NLR
implementations to use (eg MICROPY_NLR_THUMB). This keeps nlr.h and the
implementation in sync, and also makes the nlr_buf_t struct easier to read.
A function with a naked attribute must only contain basic inline asm
statements and no C code.
For nlr_push this means removing the "return 0" statement. But for some
gcc versions this induces a compiler warning so the __builtin_unreachable()
line needs to be added.
For nlr_jump, this function contains a combination of C code and inline
asm so it cannot be naked.
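A minimal sketch of the nlr_push pattern (register saves elided here; see
nlrthumb.c for a real body):
#include "py/nlr.h"
// The naked function contains only basic asm; the trailing
// __builtin_unreachable() silences the missing-return warning that some
// gcc versions emit.
__attribute__((naked)) unsigned int nlr_push(nlr_buf_t *nlr) {
    __asm volatile (
        "b      nlr_push_tail \n" // tail-call the generic C helper
    );
    __builtin_unreachable();
}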
This reverts commit 6a3a742a6c.
The above commit has a number of faults, starting from the motivation
down to the actual implementation.
1. Faulty implementation.
The original code contained functions like:
NORETURN void nlr_jump(void *val) {
    nlr_buf_t **top_ptr = &MP_STATE_THREAD(nlr_top);
    nlr_buf_t *top = *top_ptr;
    ...
    __asm volatile (
        "mov %0, %%edx \n"        // %edx points to nlr_buf
        "mov 28(%%edx), %%esi \n" // load saved %esi
        "mov 24(%%edx), %%edi \n" // load saved %edi
        "mov 20(%%edx), %%ebx \n" // load saved %ebx
        "mov 16(%%edx), %%esp \n" // load saved %esp
        "mov 12(%%edx), %%ebp \n" // load saved %ebp
        "mov 8(%%edx), %%eax \n"  // load saved %eip
        "mov %%eax, (%%esp) \n"   // store saved %eip to stack
        "xor %%eax, %%eax \n"     // clear return register
        "inc %%al \n"             // increase to make 1, non-local return
        "ret \n"                  // return
        :          // output operands
        : "r"(top) // input operands
        :          // clobbered registers
    );
}
This clearly states that the C-level variable should be passed as a
parameter of the inline assembly, which then moves it into the correct
register. Whereas now it's:
NORETURN void nlr_jump_tail(nlr_buf_t *top) {
    (void)top;
    __asm volatile (
        "mov 28(%edx), %esi \n" // load saved %esi
        "mov 24(%edx), %edi \n" // load saved %edi
        "mov 20(%edx), %ebx \n" // load saved %ebx
        "mov 16(%edx), %esp \n" // load saved %esp
        "mov 12(%edx), %ebp \n" // load saved %ebp
        "mov 8(%edx), %eax \n"  // load saved %eip
        "mov %eax, (%esp) \n"   // store saved %eip to stack
        "xor %eax, %eax \n"     // clear return register
        "inc %al \n"            // increase to make 1, non-local return
        "ret \n"                // return
    );
    for (;;); // needed to silence compiler warning
}
This just tries to perform operations on a completely random register
(%edx in this case). The outcome is as expected: barring the pure random
luck of the compiler putting the right value into that register, there's
a crash.
2. Non-critical assessment.
The original commit message says "There is a small overhead introduced
(typically 1 machine instruction)". That machine instruction is a call
if the compiler doesn't perform tail-call optimization (which happens
regularly), and it's 1 instruction only with the broken code shown above;
fixing it requires adding more. With the inefficiencies already present
in the NLR code, the overhead becomes "considerable" (several times more
than 1%), not "small".
The commit message also says "This eliminates duplicated code." An
obvious way to eliminate duplication would have been to factor out the
common code into macros, not to introduce overhead and breakage like the
above.
3. Faulty motivation.
All this started with a report of warnings/errors happening for a niche
compiler. It could have been solved in one of several direct ways: a)
fixing it just for the affected compiler(s); b) rewriting it in proper
assembly (like it was before, by the way); c) not doing anything at all,
as MICROPY_NLR_SETJMP exists exactly to address minor-impact cases like
that (where a) or b) are not applicable). Instead, a backwards "solution"
was put forward, leading to all the issues above.
The best action thus appears to be to revert and rework, rather than
trying to work around what went haywire in the first place.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers. This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c. This eliminates duplicated code.
The factoring also allows making the machine-specific NLR code pure
assembler code, thus allowing nlrthumb.c to use the naked function
attribute in the correct way (naked functions can only have basic inline
assembler code in them).
There is a small overhead introduced (typically 1 machine instruction)
because now the generic nlr_jump() must call nlr_jump_tail() rather than
them being one combined function.
set_equal is called only from set_binary_op, and this guarantees that the
second arg to set_equal is always a set or frozenset. So there is no need
to do a further check.
This implements .pend_throw(exc) method, which sets up an exception to be
triggered on the next call to generator's .__next__() or .send() method.
This is unlike .throw(), which immediately starts to execute the generator
to process the exception. This effectively adds Future-like capabilities
to generator protocol (exception will be raised in the future).
The need for such a method arose when implementing the uasyncio wait_for()
function efficiently (its behavior is clearly "Future"-like, and would
normally require introducing an expensive Future wrapper around all native
coroutines, like upstream asyncio does).
py/objgenerator: pend_throw: Return previous pended value.
This effectively allows storing an additional value (not necessarily an
exception) in a coroutine while it's not being executed. uasyncio has
exactly this use case: to mark a coro waiting in the I/O queue (and thus
not executed in the normal scheduling queue), for the purpose of
implementing the wait_for() function (cancellation of such a waiting coro
by a timeout).
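A hedged sketch of the behaviour (the storage location and field names
here are assumed; the real implementation may store the value elsewhere):
STATIC mp_obj_t gen_instance_pend_throw(mp_obj_t self_in, mp_obj_t exc_in) {
    mp_obj_gen_instance_t *self = MP_OBJ_TO_PTR(self_in);
    mp_obj_t prev = self->pend_exc; // previously pended value
    self->pend_exc = exc_in;        // raised on the next __next__()/send()
    return prev;                    // hand the old value back to the caller
}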
Some compilers can treat enum types as signed, in which case 3 bits is not
enough to encode all mp_raw_code_kind_t values. So change the type to
mp_uint_t.
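For example, a signed 3-bit bitfield can only hold -4..3, so enum values
of 4 or 5 would be corrupted by sign extension. The struct now has the
following shape (abbreviated from py/emitglue.h):
typedef struct _mp_raw_code_t {
    mp_uint_t kind : 3; // mp_raw_code_kind_t value, stored unsigned: 0..7
    mp_uint_t scope_flags : 7;
    // remaining members omitted
} mp_raw_code_t;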
This is a bit of a clumsy way of doing it but solves the issue of __init__
not running when a module is imported via its weak-link name. Ideally a
better solution would be found.
Before this patch, if a user defined the __new__() function for a class
then two instances of that class would be created: one before __new__ is
called and one during the __new__ call (assuming the user creates some
instance, eg using super().__new__, which is most of the time). The first
one was then discarded. This refactor makes it so that a new instance is
only created if the user __new__ function doesn't exist.
This patch cleans up and generalises part of the code which handles
overriding and calling a native base-class's __init__ method. It defers
the call to the native make_new() function until after the user (Python)
__init__() method has run. That user method now has the chance to call the
native __init__/make_new and pass it different arguments. If the user
doesn't call the super().__init__ method then it will be called
automatically after the user code finishes, to finalise construction of the
instance.
The nan-boxing representation has an extra 16-bits of space to store
small-int values, and making use of it allows creating and manipulating
full 32-bit positive integers (ie up to 0xffffffff) without using the heap.
This patch introduces the MICROPY_ENABLE_PYSTACK option (disabled by
default) which enables a "Python stack" that allows allocating and
freeing memory in a scoped, or Last-In-First-Out (LIFO), way, similar to
alloca().
A new memory allocation API is introduced along with this Py-stack. It
includes both "local" and "nonlocal" LIFO allocation. Local allocation is
intended to be equivalent to using alloca(), whereby the same function must
free the memory. Nonlocal allocation is where another function may free
the memory, so long as it's still LIFO.
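A hedged usage sketch of the local variant (the function names follow the
description above, but treat the exact API as an assumption):
#include "py/runtime.h"
void example(size_t n) {
    byte *buf = mp_local_alloc(n); // scoped, LIFO, like alloca(n)
    // ... use buf ...
    mp_local_free(buf);            // freed by the same function, LIFO order
}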
Follow-up patches will convert all uses of alloca() and VLA to the new
scoped allocation API. The old behaviour (using alloca()) will still be
available, but when MICROPY_ENABLE_PYSTACK is enabled then alloca() is no
longer required or used.
The benefits of enabling this option are (or will be once subsequent
patches are made to convert alloca()/VLA):
- Toolchains without alloca() can use this feature to obtain correct and
efficient scoped memory allocation (compared to using the heap instead
of alloca(), which is slower).
- Even if alloca() is available, enabling the Py-stack gives slightly more
efficient use of stack space when calling nested Python functions, due to
the way that compilers implement alloca().
- Enabling the Py-stack with the stackless mode allows for even more
efficient stack usage, as well as retaining high performance (because the
heap is no longer used to build and destroy stackless code states).
- With Py-stack and stackless enabled, Python-calling-Python is no longer
recursive in the C mp_execute_bytecode function.
The micropython.pystack_use() function is included to measure usage of the
Python stack.
This function was implemented as an experiment, and was enabled only in
the unix port. As a reminder, it allows accessing arbitrary files frozen
as source modules (vs bytecode).
However, further experimentation showed that the same functionality can
be implemented with frozen bytecode. The process requires more steps, but
with a suitable toolset that doesn't matter much. This process is:
1. Convert binary files into "Python resource module" with
tools/mpy_bin2res.py.
2. Freeze as the bytecode.
3. Use micropython-lib's pkg_resources.resource_stream() to access it.
In other words, the extra step is using tools/mpy_bin2res.py (because
there would be a wrapper for uio.resource_stream() anyway).
Going the frozen bytecode route allows more flexibility, and the same or
additional efficiency:
1. Frozen source support can be disabled altogether for additional code
savings.
2. Resources could be also accessed as a buffer, not just as a stream.
There are a few caveats too:
1. The overhead of storing a resource in a "Python resource module" vs
storing it directly wasn't actually profiled, but the assumption is that
the overhead is small.
2. The "efficiency" claim above applies to the case when resource
file is frozen as the bytecode. If it's not, it actually will take a
lot of RAM on loading. But in this case, the resource file should not
be used (i.e. generated) in the first place, and micropython-lib's
pkg_resources.resource_stream() implementation has the appropriate
fallback to read the raw files instead. This still poses some distribution
issues, e.g. to deployable to baremetal ports (which almost certainly
would require freezeing as the bytecode), a distribution package should
include the resource module. But for non-freezing deployment, presense
of resource module will lead to memory inefficiency.
All the discussion above is a reminder of why uio.resource_stream() was
implemented in the first place - to address some of the issues above.
However, since then, the frozen bytecode approach seems to prevail, so,
while there are still some issues to address with it, this change is
being made.
This change saves 488 bytes for the unix x86_64 port.
This target removes any stray files (i.e. anything not committed to git)
from the scripts/ and modules/ dirs (or whatever FROZEN_DIR and
FROZEN_MPY_DIR are set to).
The expected workflow is:
1. make clean-frozen
2. micropython -m upip -p modules <packages_to_freeze>
3. make
As it can be expected that people may drop random things into those dirs
and miss them later, the content is actually backed up before cleaning.
This is the second part of the fun_bc_call() vs
mp_obj_fun_bc_prepare_codestate() common code refactor. This factors out
the code to initialize the codestate object. After this patch,
mp_obj_fun_bc_prepare_codestate() is effectively DECODE_CODESTATE_SIZE()
followed by allocation followed by INIT_CODESTATE(), and fun_bc_call()
starts with the same sequence.
fun_bc_call() starts with almost the same code as
mp_obj_fun_bc_prepare_codestate(); the only difference is the way the
codestate object is allocated (heap vs stack with heap fallback).
Still, it would be nice to avoid code duplication, to make further
refactoring easier.
So, this commit factors out the common code that comes before the
allocation - decoding and calculating the codestate size. It produces two
values, so it is structured as a macro which writes to 2 variables passed
as arguments.
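A hedged sketch of the common shape after this refactor (the macros are
file-local to py/objfun.c and their exact signatures may differ):
mp_code_state_t *prepare(mp_obj_fun_bc_t *self, size_t n_args, size_t n_kw,
                         const mp_obj_t *args) {
    size_t n_state, state_size;
    DECODE_CODESTATE_SIZE(self->bytecode, n_state, state_size); // 2 outputs
    // heap allocation here; fun_bc_call() instead tries the C stack
    // first, with the heap as a fallback
    mp_code_state_t *code_state =
        m_new_obj_var(mp_code_state_t, byte, state_size);
    INIT_CODESTATE(code_state, self, n_args, n_kw, args);
    return code_state;
}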
The assembler back-end for most architectures needs to know if a jump is
backwards in order to emit optimised machine code, and they do this by
checking if the destination label has been set or not. So always reset
label offsets to -1 (this partially reverts the previous commit, with
some minor optimisation for the if-logic with the pass variable).
Clearing the labels to -1 is purely a debugging measure. For release
builds there is no need to do it as the label offset table should always
have the correct value assigned.
Accessing them will then crash immediately, instead of still working for
some time until overwritten by some other data, which would lead to much
less deterministic crashes.
This is mostly a workaround for the forceful rebuilding of mpy-cross on
every codebase change: if this file had debug logging enabled (by
patching), the mpy-cross build failed.
Before that, the output was truncated to 32 bits. Only the "%x" format
is handled, because a typical use is for printing addresses.
This refactor actually decreased x86_64 code size by 30 bytes.
This allows the function to raise an exception when unknown keyword args
are passed in. This patch also reduces code size by (in bytes):
bare-arm: -24
minimal x86: -76
unix x64: -56
unix nanbox: -84
stm32: -40
esp8266: -68
cc3200: -48
Furthermore, this patch adds space (" ") to the set of ROM qstrs which
means it doesn't need to be put in RAM if it's ever used.
Return the result of the called function. If an exception happened,
return MP_OBJ_NULL. This allows using mp_call_function_*_protected() with
callbacks that return values, etc.
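A hedged usage sketch:
#include "py/runtime.h"
bool call_cb(mp_obj_t callback, mp_obj_t arg) {
    mp_obj_t ret = mp_call_function_1_protected(callback, arg);
    if (ret == MP_OBJ_NULL) {
        return false; // the callback raised; the exception was reported
    }
    return mp_obj_is_true(ret); // otherwise use the returned value
}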
This commit essentially reverts aa9dbb1b03
where this if-condition was added. It seems that even when that commit
was made the code was never reached by any tests, nor reachable by
analysis (see below). The same is true with the code as it currently
stands: no test triggers this if-condition, nor any uasyncio examples.
Analysing the flow of the program also shows that it's not reachable:
==START==
-> to trigger this if-condition mp_execute_bytecode() must return
   MP_VM_RETURN_YIELD with *sp==MP_OBJ_STOP_ITERATION
-> mp_execute_bytecode() can only return MP_VM_RETURN_YIELD from the
   MP_BC_YIELD_VALUE bytecode, which can happen in 2 ways:
   -> 1) from a "yield <x>" in bytecode, but <x> must always be a proper
         object, never MP_OBJ_STOP_ITERATION; ==END1==
   -> 2) via yield from, via mp_resume() which must return
         MP_VM_RETURN_YIELD with ret_value==MP_OBJ_STOP_ITERATION, which
         can happen in 3 ways:
         -> 1) it delegates to mp_obj_gen_resume(); go back to ==START==
         -> 2) it returns MP_VM_RETURN_YIELD directly but with a guard
               that ret_val!=MP_OBJ_STOP_ITERATION; ==END2==
         -> 3) it returns MP_VM_RETURN_YIELD with ret_val set from
               mp_call_method_n_kw(), but mp_call_method_n_kw() must
               return a proper object, never MP_OBJ_STOP_ITERATION;
               ==END3==
The above shows there is no way to trigger the if-condition and it can be
removed.
These checks are assumed to be true in all cases where gc_realloc is
called with a valid pointer, so no need to waste code space and time
checking them in a non-debug build.
So long as the input qstr identifier is valid (below the maximum number of
qstrs) the function will always return a valid pointer. This patch
eliminates the "return 0" dead-code.
This patch improves parsing of floating point numbers by converting all the
digits (integer and fractional) together into a number 1 or greater, and
then applying the correct power of 10 at the very end. In particular the
multiple "multiply by 0.1" operations to build a fraction are now combined
together and applied at the same time as the exponent, at the very end.
This helps to retain precision during parsing of floats, and also includes
a check that the number doesn't overflow during the parsing. One benefit
is that a float will have the same value no matter where the decimal point
is located, eg 1.23 == 123e-2.
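The idea can be sketched as follows (illustrative only: the real parser
also handles signs, exponents, rounding and overflow):
#include <math.h>
#include <stdbool.h>
static double parse_simple_float(const char *s) {
    double mant = 0;   // all digits accumulated as an integer value
    int exp10 = 0;     // decimal exponent to apply at the end
    bool frac = false; // seen the decimal point yet?
    for (; *s; s++) {
        if (*s == '.') {
            frac = true;
            continue;
        }
        mant = mant * 10 + (*s - '0');
        if (frac) {
            exp10 -= 1; // each fractional digit shifts the result by 10^-1
        }
    }
    return mant * pow(10, exp10); // one application of the power of 10
}
With this scheme "1.23" and "123e-2" accumulate the same mantissa (123)
and the same final exponent (-2), hence the identical values.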
Before this patch MP_BINARY_OP_IN had two meanings: coming from bytecode it
meant that the args needed to be swapped, but coming from within the
runtime meant that the args were already in the correct order. This lead
to some confusion in the code and comments stating how args were reversed.
It also lead to 2 bugs: 1) containment for a subclass of a native type
didn't work; 2) the expression "{True} in True" would illegally succeed and
return True. In both of these cases it was because the args to
MP_BINARY_OP_IN ended up being reversed twice.
To fix these things this patch introduces MP_BINARY_OP_CONTAINS which
corresponds exactly to the __contains__ special method, and this is the
operator that built-in types should implement. MP_BINARY_OP_IN is now only
emitted by the compiler and is converted to MP_BINARY_OP_CONTAINS by
swapping the arguments.
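A hedged sketch of the conversion (the real code is in py/runtime.c and
details may differ):
#include "py/runtime.h"
mp_obj_t binary_op_in(mp_obj_t lhs, mp_obj_t rhs) {
    // "lhs in rhs" is dispatched as rhs.__contains__(lhs)
    return mp_binary_op(MP_BINARY_OP_CONTAINS, rhs, lhs);
}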
In mp_binary_op, there is no need to explicitly check for type->getiter
being non-null and raising an exception because this is handled exactly by
mp_getiter(). So just call the latter unconditionally.
This patch introduces a new compile-time config option to disable multiple
inheritance at the Python level: MICROPY_MULTIPLE_INHERITANCE. It is
enabled by default.
Disabling multiple inheritance eliminates a lot of recursion in the call
graph (which is important for some embedded systems), and can be used to
reduce code size for ports that are really constrained (by around 200 bytes
for Thumb2 archs).
With multiple inheritance disabled all tests in the test-suite pass except
those that explicitly test for multiple inheritance.
The function mp_obj_new_str_of_type is a general str object constructor
used in many places in the code to create either a str or bytes object.
When creating a str it should first check if the string data already exists
as an interned qstr, and if so then return the qstr object. This patch
makes the function have such behaviour, which helps to reduce heap usage by
reusing existing interned data where possible.
The old behaviour of mp_obj_new_str_of_type (which didn't check for
existing interned data) is made available through the function
mp_obj_new_str_copy, but should only be used in very special cases.
One consequence of this patch is that the following expression is now True:
'abc' is ' abc '.split()[0]
This patch simplifies the str creation API to favour the common case of
creating a str object that is not forced to be interned. To force
interning of a new str the new mp_obj_new_str_via_qstr function is added,
and should only be used if warranted.
Apart from simplifying the mp_obj_new_str function (and making it have the
same signature as mp_obj_new_bytes), this patch also reduces code size by a
bit (-16 bytes for bare-arm and roughly -40 bytes on the bare-metal archs).
Rationale:
* Calling Python build tool scripts from makefiles should be done
consistently using `python </path/to/script>`, instead of relying on the
correct she-bang line in the script [1] and the executable bit on the
script being set. This is more platform-independent.
* The name/path of the Python executable should always be used via the
makefile variable `PYTHON` set in `py/mkenv.mk`. This way it can be
easily overwritten by the user with `make PYTHON=/path/to/my/python`.
* The Python executable name should be part of the value of the makefile
variable, which stands for the build tool command (e.g. `MAKE_FROZEN` and
`MPY_TOOL`), not part of the command line where it is used. If a Python
tool is substituted by another (non-python) program, no change to the
Makefiles is necessary, except in `py/mkenv.mk`.
* This also solves #3369 and #1616.
[1] There are systems, where even the assumption that `/usr/bin/env` always
exists, doesn't hold true, for example on Android (where otherwise the unix
port compiles perfectly well).
All the asm macro names that convert a particular architecture to a generic
interface now follow the convention whereby the "destination" (usually a
register) is specified first.
Macros to convert big-endian values to host byte order and vice-versa.
These were defined in an ad-hoc way for some ports (e.g. esp8266); this
change allows reuse by providing default implementations, while still
allowing ports to override them.
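A hedged sketch of such macros (the names are assumed; a port could
override them, e.g. with compiler builtins):
#include <stdint.h>
#define MP_BE16TOH(x) ((uint16_t)((((x) & 0xff) << 8) | (((x) >> 8) & 0xff)))
#define MP_HTOBE16(x) MP_BE16TOH(x) // byte swapping is its own inverse
#define MP_BE32TOH(x) ((uint32_t)__builtin_bswap32(x)) // gcc/clang builtin
#define MP_HTOBE32(x) MP_BE32TOH(x)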
The technique of using alloca is how dotted import names are composed in
mp_import_from and mp_builtin___import__, so use the same technique in the
compiler. This puts less pressure on the heap (only the stack is used if
the qstr already exists, and if it doesn't exist then the standard qstr
block memory is used for the new qstr rather than a separate chunk of the
heap) and reduces overall code size.
This reverts commit 3289b9b7a7.
The commit broke building on MINGW because the filename became
micropython.exe.exe. A proper solution to support more Windows build
environments requires more thought and testing.
Per the comment found here
https://github.com/micropython/micropython-esp32/issues/209#issuecomment-339855157,
this patch adds finaliser code to prevent memory leaks from ussl objects,
which is especially useful when memory for a ussl context is allocated
outside the uPy heap. This patch is in-line with the finaliser code found
in many modsocket implementations for various ports.
This feature is configured via MICROPY_PY_USSL_FINALISER and is disabled by
default because there may be issues using it when the ussl state *is*
allocated on the uPy heap, rather than externally.
This allows configuring support for inplace special methods separately,
similar to "normal" and reverse special methods. This is useful because
inplace methods are "the most optional" ones: for example, if inplace
methods aren't defined, the operation will be executed using normal
methods instead.
As a caveat, __iadd__ and __isub__ are implemented even if
MICROPY_PY_ALL_INPLACE_SPECIAL_METHODS isn't defined. This is similar
to the state of affairs before the binary operations refactor, and allows
running existing tests even if MICROPY_PY_ALL_INPLACE_SPECIAL_METHODS
isn't defined.
If MICROPY_PY_ALL_SPECIAL_METHODS is defined, actually define all special
methods (still subject to gating by e.g. MICROPY_PY_REVERSE_SPECIAL_METHODS).
This adds quite a number of qstr's, so should be used sparingly.
Update makeqstrdata.py to sort strings starting with "__" to the
beginning of the qstr list, so they get low qstr id's, guaranteed to fit
in 8 bits. Then use this property to further compact the op_id => qstr
mapping arrays.
Per https://docs.python.org/3/library/sys.html#sys.getsizeof:
getsizeof() calls the object’s __sizeof__ method. Previously, "getsizeof"
was used mostly to save on a new qstr, as we don't really support calling
this method on arbitrary objects (so it was used only for reporting).
However, normalize it all now.
Not all compilers/analysers are smart enough to realise that this function
is never called if MICROPY_ERROR_REPORTING is not TERSE, because the logic
in the code uses if statements rather than #if to select whether to call
this function or not (MSC in debug mode is an example of this, but there
are others). So just unconditionally compile this helper function. The
code-base anyway relies on the linker to remove unused functions.
The uos.dupterm() signature and behaviour is updated to reflect the latest
enhancements in the docs. It has minor backwards incompatibility in that
it no longer accepts zero arguments.
The dupterm_rx helper function is moved from esp8266 to extmod and
generalised to support multiple dupterm slots.
A port can specify multiple slots by defining the MICROPY_PY_OS_DUPTERM
config macro to an integer, being the number of slots it wants to have;
0 means to disable the dupterm feature altogether.
The unix and esp8266 ports are updated to work with the new interface and
are otherwise unchanged with respect to functionality.
So that a pointer to it can be passed as a function pointer to
math_generic_1. This patch also makes the function work for both single
and double precision floating point.
This patch changes how most of the plain math functions are implemented:
there are now two generic math wrapper functions that take a pointer to a
math function (like sin, cos) and perform the necessary conversion to and
from MicroPython types. This helps to reduce code size. The generic
functions can also check for math domain errors in a generic way, by
testing if the result is NaN or infinity combined with finite inputs.
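A hedged sketch of such a wrapper (modelled on the math_generic_1
mentioned above; the exact signature may differ):
#include <math.h>
#include "py/runtime.h"
STATIC mp_obj_t math_generic_1(mp_obj_t x_obj, mp_float_t (*f)(mp_float_t)) {
    mp_float_t x = mp_obj_get_float(x_obj);
    mp_float_t ans = f(x); // call the C library function via the pointer
    if ((isnan(ans) && !isnan(x)) || (isinf(ans) && !isinf(x))) {
        mp_raise_ValueError("math domain error"); // finite in, non-finite out
    }
    return mp_obj_new_float(ans);
}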
The result is that, with this patch, all math functions now have full
domain error checking (even gamma and lgamma) and code size has decreased
for most ports. Code size changes in bytes for those with the math module
are:
unix x64: -432
unix nanbox: -792
stm32: -88
esp8266: +12
Tests are also added to check domain errors are handled correctly.
Printing "(null)" when a NULL string pointer is passed to %s is a debugging
feature and not a feature that's relied upon by the code. So it only needs
to be compiled in when debugging (such as assert) is enabled, and this
saves roughly 30 bytes of code when disabled.
This patch also fixes this NULL check to not do the check if the precision
is specified as zero.
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything to use the general runtime support functions
Additional, specific headers (eg py/objlist.h) can be included if needed.
Qstr values fit in 16-bits (and this fact is used elsewhere in the code) so
no need to use more than that for the large lookup tables. The compiler
will anyway give a warning if the qstr values don't fit in 16 bits. Saves
around 80 bytes of code space for Thumb2 archs.
Building mpy-cross: this patch adds .exe to the PROG name when building
executables for host (eg mpy-cross) on Windows. make clean now removes
mpy-cross.exe under Windows.
Building MicroPython: this patch sets MPY_CROSS to mpy-cross.exe or
mpy-cross, so the two can coexist and one can use cygwin or WSL without
rebuilding mpy-cross. The dependency in the mpy rule now uses
mpy-cross.exe for Windows and mpy-cross for Linux.
CPython docs explicitly state that the RHS of a set/frozenset binary op
must be a set to prevent user errors. It also preserves commutativity of
the ops, eg: "abc" & set() is a TypeError, and so should be set() & "abc".
This change actually decreases unix (x64) code by 160 bytes; it increases
stm32 by 4 bytes and esp8266 by 28 bytes (but the previous patch already
introduced a much larger saving).
A lot of set's methods (the mutable ones) are not allowed to operate on a
frozenset, and giving frozenset a separate locals dict with only the
methods that it supports allows to simplify the logic that verifies if
args are a set or a frozenset. Even though the new frozenset locals dict
is relatively large (88 bytes on 32-bit archs) there is a much bigger
saving coming from the removal of a const string for an error message,
along with the removal of some checks for set or frozenset type.
Changes in code size due to this patch are (for ports that changed at all):
unix x64: -56
unix nanbox: -304
stm32: -64
esp8266: -124
cc3200: -40
Apart from the reduced code, frozenset now has better tab-completion
because it only lists the valid methods. And the error message for
accessing an invalid method is now more detailed (it includes the
method name that wasn't found).
This returns a complex number, following CPython behaviour. For ports that
don't have complex numbers enabled this will raise a ValueError which gives
a fail-safe for scripts that were written assuming complex numbers exist.
This adds a new configuration option to print runtime warnings and errors to
stderr. On Unix, CPython prints warnings and unhandled exceptions to stderr,
so the unix port here is configured to use this option.
The unix port already printed unhandled exceptions on the main thread to
stderr. This patch fixes unhandled exceptions on other threads and warnings
(issue #2838) not printing on stderr.
Additionally, a couple of tests needed to be fixed to handle this new behavior.
This is done by also capturing stderr when running tests.
Current users of fixed vstr buffers (building file paths) assume that there
is no overflow and do not check for overflow after building the vstr. This
has the potential to lead to NULL pointer dereferences
(when vstr_null_terminated_str returns NULL because it can't allocate RAM
for the terminating byte) and stat'ing and loading invalid path names (due
to the path being truncated). The safest and simplest thing to do in these
cases is just raise an exception if a write goes beyond the end of a fixed
vstr buffer, which is what this patch does. It also simplifies the vstr
code.
The vstr arguments to the calls to vstr_add_len are dynamically allocated
(ie fixed_buf=false) and so vstr_add_len will never return NULL. So
there's no need to check for it. Any out-of-memory errors are raised by
the call to m_renew in vstr_ensure_extra.
The aim of this patch is to rewrite the functions that create exception
instances (mp_obj_exception_make_new and mp_obj_new_exception_msg_varg) so
that they do not call any functions that may raise an exception. Otherwise
it's possible to create infinite recursion with an exception being raised
while trying to create an exception object.
The three main things that are done to accomplish this are:
1. Change mp_obj_new_exception_msg_varg to just format the string, then
call mp_obj_exception_make_new to actually create the exception object.
2. In mp_obj_exception_make_new and mp_obj_new_exception_msg_varg try to
allocate all memory first using functions that don't raise exceptions.
If any of the memory allocations fail (return NULL) then degrade
gracefully by trying other options for memory allocation, eg using the
emergency exception buffer.
3. Use a custom printer backend to conservatively format strings: if it
can't allocate memory then it just truncates the string.
As part of this rewrite, raising an exception without a message, like
KeyError(123), will now use the emergency buffer to store the arg and
traceback data if there is no heap memory available.
Memory use with this patch is unchanged. Code size is increased by:
bare-arm: +136
minimal x86: +124
unix x64: +72
unix nanbox: +96
stm32: +88
esp8266: +92
cc3200: +80
This allows user classes to implement the __abs__ special method, and it
saves code size (104 bytes for x86_64), even though during the refactor an
issue was fixed and a few optimizations were made:
* abs() of the minimum (negative) small int value is calculated properly.
* objint_longlong and objint_mpz avoid allocating a new object if the
argument is already non-negative.
If, for class X, X.__add__(Y) doesn't exist (or returns NotImplemented),
try Y.__radd__(X) instead.
This patch could be simpler, but it needs to undo the operand swap and
operation switch in order to produce a non-confusing error message in
case __radd__ doesn't exist.
This is to allow placing reverse ops immediately after normal ops, so
they can be tested as one range (which is an optimization for the reverse
ops introduced in the next patch).
Originally, they were grouped in blocks of 5, to make it easier e.g. to
assess the numeric code of each. But now it makes more sense to group
them by semantics/properties, and then still split them into chunks,
which usually leads to chunks of ~6 ops.
It starts a dichotomy of mp_binary_op_t values which can't appear in the
bytecode. Another reason to move it is to make the values of OP_* and
OP_INPLACE_* nicely adjacent. This will also be needed for OP_REVERSE_*,
to be introduced soon.
This patch adds a function utf8_check() to check for a valid UTF-8 encoded
string, and calls it when constructing a str from raw bytes. The feature
is selectable at compile time via MICROPY_PY_BUILTINS_STR_UNICODE_CHECK and
is enabled if unicode is enabled. It costs about 110 bytes on Thumb-2, 150
bytes on Xtensa and 170 bytes on x86-64.
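An illustrative validator showing the principle (this is not the actual
utf8_check, and for brevity it doesn't reject overlong encodings or
surrogates):
#include <stdbool.h>
#include <stddef.h>
bool utf8_valid(const unsigned char *s, size_t len) {
    for (size_t i = 0; i < len;) {
        unsigned char c = s[i++];
        size_t cont; // number of continuation bytes expected
        if (c < 0x80) {
            cont = 0; // plain ASCII
        } else if ((c & 0xe0) == 0xc0) {
            cont = 1;
        } else if ((c & 0xf0) == 0xe0) {
            cont = 2;
        } else if ((c & 0xf8) == 0xf0) {
            cont = 3;
        } else {
            return false; // invalid lead byte
        }
        for (; cont; cont--) {
            if (i >= len || (s[i++] & 0xc0) != 0x80) {
                return false; // missing or malformed continuation byte
            }
        }
    }
    return true;
}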
IEEE floating point is specified such that a comparison of NaN with itself
returns false, and Python respects these semantics. This patch makes uPy
also have these semantics. The fix has a minor impact on the speed of the
object-equality fast-path, but that seems to be unavoidable and it's much
more important to have correct behaviour (especially in this case where
the wrong answer for nan==nan is silently returned).
These are now returned as "operation not supported" instead of raising
TypeError. In particular, this fixes equality for float vs incompatible
types, which now properly results in False instead of exception. This
also paves the road to support reverse operation (e.g. __radd__) with
float objects.
This is achieved by introducing mp_obj_get_float_maybe(), similar to
existing mp_obj_get_int_maybe().
Prior to this patch, the size of the buffer given to pack_into() was checked
for being too small by using the count of the arguments, not their actual
size. For example, a format spec of '4I' would only check that there was 4
bytes available, not 16; and 'I' would check for 1 byte, not 4.
The pack() function is ok because its buffer is created to be exactly the
correct size.
The fix in this patch calculates the total size of the format spec at the
start of pack_into() and verifies that the buffer is large enough. This
adds some computational overhead, to iterate through the whole format spec.
The alternative is to check during the packing, but that requires extra
code to handle alignment, and the check is anyway not needed for pack().
So to maintain minimal code size the check is done using struct_calcsize.
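A hedged sketch of the up-front check (names follow py/modstruct.c, but
the details are assumed):
STATIC void check_buffer_fits(mp_obj_t fmt_in, mp_buffer_info_t *bufinfo,
                              mp_int_t offset) {
    // total byte size of the whole format spec, computed once
    mp_int_t need = MP_OBJ_SMALL_INT_VALUE(struct_calcsize(fmt_in));
    if (offset + need > (mp_int_t)bufinfo->len) {
        mp_raise_ValueError("buffer too small");
    }
}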
Prior to this patch, the size of the buffer given to unpack/unpack_from was
checked for being too small by using the count of the arguments, not their
actual size. For example, a format spec of '4I' would only check that
there were 4 bytes available, not 16; and 'I' would check for 1 byte, not 4.
This bug is fixed in this patch by calculating the total size of the format
spec at the start of the unpacking function. This function anyway needs to
calculate the number of items at the start, so calculating the total size
can be done at the same time.
This patch makes a repeat counter behave the same as repeating the
typecode, when there are not enough args. For example,
struct.pack('2I', 1) now behaves the same as struct.pack('II', 1).
NotImplemented means "try other fallbacks (like calling __rop__
instead of __op__) and if nothing works, raise TypeError". As
MicroPython doesn't implement any fallbacks, signal to raise
TypeError right away.
The unary-op/binary-op enums are already defined, and there are no
arithmetic tricks used with these types, so it makes sense to use the
correct enum type for arguments that take these values. It also reduces
code size quite a bit for nan-boxing builds.
Otherwise, it will silently give an incorrect result on other value
types, including the CPython tuple form like
"foo.png".endswith(("png", "jpg")) (which MicroPython doesn't support,
for unbloatedness).
For SEEK_SET, offset should be treated as unsigned, to allow full-width
stream sizes (e.g. 32-bit instead of 31-bit). This is now fully documented
in stream.h. Also, seek symbolic constants are added.
Too big positive, or too big negative offset values could lead to overflow
and address space wraparound and thus access to unrelated areas of memory
(a security issue).
The value of 0 can't be used because otherwise mp_binary_get_size will let
a null byte through as the type code (interpreted as bytearray). This can
lead to invalid type-specifier strings being let through without an error
in the struct module, and even buffer overruns.
- Changed: ValueError, TypeError, NotImplementedError
- OSError invocations unchanged, because the corresponding utility
function takes ints, not strings like the long form invocation.
- OverflowError, IndexError and RuntimeError etc. not changed for now
until we decide whether to add new utility functions.
Before this patch the mperrno.h file could be included and would silently
succeed with incorrect config settings, because mpconfig.h was not yet
included.
If constants (eg mp_const_none_obj) are placed in very high memory
locations that require 64-bits for the pointer then the assembler must be
able to emit instructions to move such pointers to one of the top 8
registers (ie r8-r15).
It's not used anywhere else in the VM loop, and clashes with (is shadowed
by) the n_state variable that's redeclared towards the end of the
mp_execute_bytecode function. Code size is unchanged.
The code conventions suggest using header guards, but do not define what
those should look like and instead point to existing files. However, not
all existing files follow the same scheme, sometimes omitting header guards
altogether, sometimes using non-standard names, making it easy to
accidentally pick a "wrong" example.
This commit ensures that all header files of the MicroPython project (that
were not simply copied from somewhere else) follow the same pattern, that
was already present in the majority of files, especially in the py folder.
The rules are as follows.
Naming convention:
* start with the words MICROPY_INCLUDED
* contain the full path to the file
* replace special characters with _
In addition, there are no empty lines before #ifndef, none between
#ifndef and #define, and one empty line before #endif. #endif is followed
by a comment containing the name of the guard macro.
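For example, a hypothetical py/example.h would look like:
#ifndef MICROPY_INCLUDED_PY_EXAMPLE_H
#define MICROPY_INCLUDED_PY_EXAMPLE_H
// ... declarations ...

#endif // MICROPY_INCLUDED_PY_EXAMPLE_H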
py/grammar.h cannot use header guards by design, since it has to be
included multiple times in a single C file. Several other files also do not
need header guards as they are only used internally and guaranteed to be
included only once:
* MICROPY_MPHALPORT_H
* mpconfigboard.h
* mpconfigport.h
* mpthreadport.h
* pin_defs_*.h
* qstrdefs*.h