If constants (eg mp_const_none_obj) are placed in very high memory
locations that require 64 bits for the pointer, then the assembler must be
able to emit instructions to move such pointers to one of the top 8
registers (ie r8-r15).
It's not used anywhere else in the VM loop, and clashes with (is shadowed
by) the n_state variable that's redeclared towards the end of the
mp_execute_bytecode function. Code size is unchanged.
The code conventions suggest using header guards, but do not define what
those should look like, and instead point to existing files. However, not
all existing files follow the same scheme, sometimes omitting header guards
altogether, sometimes using non-standard names, making it easy to
accidentally pick a "wrong" example.
This commit ensures that all header files of the MicroPython project (that
were not simply copied from somewhere else) follow the same pattern, which
was already present in the majority of files, especially in the py folder.
The rules are as follows.
Naming convention:
* start with the words MICROPY_INCLUDED
* contain the full path to the file
* replace special characters with _
In addition, there are no empty lines before #ifndef or between #ifndef and
#define, and there is one empty line before #endif. #endif is followed by a
comment containing the name of the guard macro. For example, py/obj.h is
guarded by MICROPY_INCLUDED_PY_OBJ_H.
py/grammar.h cannot use header guards by design, since it has to be
included multiple times in a single C file. Several other files also do not
need header guards as they are only used internally and guaranteed to be
included only once:
* MICROPY_MPHALPORT_H
* mpconfigboard.h
* mpconfigport.h
* mpthreadport.h
* pin_defs_*.h
* qstrdefs*.h
Prior to this patch there were 2 paths for creating the namedtuple, one for
when no keyword args were passed, and one for when there were keyword args.
And alloca was used in the keyword-arg path to temporarily create the array
of elements for the namedtuple, which would then be copied to a
heap-allocated object (the namedtuple itself).
This patch simplifies the code by combining the no-keyword and keyword
paths, and removing the need for the alloca by constructing the namedtuple
on the heap before populating it.
Heap usage is unchanged, stack usage is reduced, use of alloca is removed,
and code size is not increased; in fact it is reduced by 20-30 bytes for
most ports.
The while-loop that calls chop_component will guarantee that level==-1 at
the end of the loop. Hence the code following it is unnecessary.
The check for p==this_name will catch imports that are beyond the
top-level, and also covers the case of new_mod_q==MP_QSTR_ (equivalent to
new_mod_l==0) so that check is removed.
There is also a new check at the start for level>=0 to guard against
__import__ being called with bad level values.
Previous to this patch, a label with value "0" was used to indicate an
invalid label, but that meant a wasted word (at slot 0) in the array of
label offsets. This patch adjusts the label indices so the first one
starts at 0, and the maximum value indicates an invalid label.
This patch fixes a bug whereby the Python stack was not correctly reset if
there was a break/continue statement in the else block of an optimised
for-range loop.
For example, in the following code the "j" variable from the inner for loop
was not being popped off the Python stack:
for i in range(4):
    for j in range(4):
        pass
    else:
        continue
This is now fixed with this patch.
In CPython 3.4 this raises a SyntaxError. In CPython 3.5+ having a
positional after * is allowed but uPy has the wrong semantics and passes
the arguments in the incorrect order. To prevent incorrect use of a
function going unnoticed it is important to raise the SyntaxError in uPy,
until the behaviour is fixed to follow CPython 3.5+.
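For reference, a sketch of the kind of call affected (f and args here are
just example names):
def f(a, b, c):
    pass
args = (1, 2)
f(*args, 3)    # accepted by CPython 3.5+, raises SyntaxError in uPy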
This patch fixes 2 things when printing a floating-point number that
requires rounding up of the mantissa:
- retain the correct precision; eg 0.99 becomes 1.0, not 1.00
- if the exponent goes from -1 to 0 then render it as +0, not -0
Taking the address of a local variable leads to increased stack usage, so
the mp_decode_uint_skip() function is added to reduce the need for taking
addresses. The changes in this patch reduce stack usage of a Python call
by 8 bytes on ARM Thumb, by 16 bytes on non-windowing Xtensa archs, and by
16 bytes on x86-64. Code size is also slightly reduced on most archs by
around 32 bytes.
The implementation is taken from stmhal/input.c, with code added to handle
ctrl-C. This built-in is controlled by MICROPY_PY_BUILTINS_INPUT and is
disabled by default. It uses readline() to capture input but this can be
overridden by defining the mp_hal_readline macro.
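A usage sketch, assuming MICROPY_PY_BUILTINS_INPUT is enabled on the port:
name = input("enter your name: ")
print("hello", name)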
For make v3.81, using "make -B" can set $? to empty and in this case the
auto-qstr generation needs to pass all args (ie $^) to cpp. The previous
fix for this (which was removed in 23a693ec2d)
used if statements in the shell command, which gave very long lines that
didn't work on certain systems (eg cygwin).
The fix in this patch is to use a $(if ...) expression, which will evaluate
to $? (only newer prerequisites) if it's non-empty, otherwise it will use
$^ (all prerequisites).
Previous to this patch the mp_emit_bc_adjust_stack_size function would
adjust the current stack size but would not increase the maximum stack size
if the current size went above it. This meant that certain Python code
(eg a try-finally block with no statements inside it) would not have enough
Python stack allocated to it.
This patch fixes the problem by always checking if the current stack size
goes above the maximum, and adjusting the latter if it does.
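For example, code along the following lines (a try-finally block with an
effectively empty body) previously had too little Python stack allocated:
def f():
    try:
        pass
    finally:
        pass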
This patch fixes a regression introduced by commit 71a3d6ec3b.
Previous to this patch the n_state variable was referring to that computed
at the very start of the mp_execute_bytecode function. This patch fixes it
so that n_state is recomputed when the code_state changes.
This issue arose when working on a build with PY_IO enabled (for PY_UJSON
support) but PY_SYS_STDFILES disabled (no filesystem): there are multiple
references to mp_sys_stdout_obj that should only be compiled in if both
PY_IO and PY_SYS_STDFILES are enabled.
This ensures that mpy-cross is automatically built (and is up-to-date) for
ports that use frozen bytecode. It also makes sure that .mpy files are
re-built if mpy-cross is changed.
Now consistently uses the EOL processing ("\r" and "\r\n" convert to "\n")
and EOF processing (ensure "\n" before EOF) provided by next_char().
In particular the lexer can now correctly handle input that starts with CR.
Prior to this patch only 'q' and 'Q' type arrays could store big-int
values. With this patch any big int that is stored to an array is handled
by the big-int implementation, regardless of the typecode of the array.
This allows arrays to work with all type sizes on all architectures.
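A sketch of what now works for any typecode (on a 32-bit build, values above
2**30 - 1 are big ints internally):
import array
a = array.array('i', [0])
a[0] = 0x40000000    # stored via the big-int path
print(a[0])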
The semantics of this function are close to the
pkg_resources.resource_stream() function from setuptools, which
is the canonical way to access non-source files belonging to a package
(resources), regardless of what medium the package uses (e.g. individual
source files vs zip archive). In the case of MicroPython, this function
allows access to resources which are frozen into the executable, in addition
to resources in the file system.
This is the initial stage of the implementation, which doesn't yet implement
the "package" part of the semantics: it just accesses frozen resources from
the "root", or filesystem resources from the current dir.
The standard preprocessor definition to differentiate debug and non-debug
builds is NDEBUG, not DEBUG, so don't rely on the latter:
- just delete the use of it in objint_longlong.c as it has been stale code
for years anyway (since commit [c4029e5]): SUFFIX isn't used anywhere.
- replace DEBUG with MICROPY_DEBUG_NLR in nlr.h: it is rarely used anymore
so can be off by default
This patch allows the following code to run without allocating on the heap:
super().foo(...)
Before this patch such a call would allocate a super object on the heap and
then load the foo method and call it right away. The super object is only
needed to perform the lookup of the method and not needed after that. This
patch makes an optimisation to allocate the super object on the C stack and
discard it right after use.
Changes in code size due to this patch are:
bare-arm: +128
minimal: +232
unix x64: +416
unix nanbox: +364
stmhal: +184
esp8266: +340
cc3200: +128
This patch refactors the handling of the special super() call within the
compiler. It removes the need for a global (to the compiler) state variable
which keeps track of whether the subject of an expression is super. The
handling of super() is now done entirely within one function, which makes
the compiler a bit cleaner and allows more optimisations to be easily added
to super calls.
Changes to the code size are:
bare-arm: +12
minimal: +0
unix x64: +48
unix nanbox: -16
stmhal: +4
cc3200: +0
esp8266: -56
With this optimisation enabled the compiler optimises the if-else
expression within a return statement. The optimisation reduces bytecode
size by 2 bytes for each use of such a return-if-else statement. Since
such a statement is not often used, and costs bytes for the code, the
feature is disabled by default.
For example the following code:
def f(x):
    return 1 if x else 2
compiles to this bytecode with the optimisation disabled (left column is
bytecode offset in bytes):
00 LOAD_FAST 0
01 POP_JUMP_IF_FALSE 8
04 LOAD_CONST_SMALL_INT 1
05 JUMP 9
08 LOAD_CONST_SMALL_INT 2
09 RETURN_VALUE
and to this bytecode with the optimisation enabled:
00 LOAD_FAST 0
01 POP_JUMP_IF_FALSE 6
04 LOAD_CONST_SMALL_INT 1
05 RETURN_VALUE
06 LOAD_CONST_SMALL_INT 2
07 RETURN_VALUE
So the JUMP to RETURN_VALUE is optimised and replaced by RETURN_VALUE,
saving 2 bytes and making the code a bit faster.
Otherwise the type of the parse node and its kind have to be re-extracted
multiple times. This optimisation reduces code size by a bit (16 bytes on
bare-arm).
It controls the character that's used to (asynchronously) raise a
KeyboardInterrupt exception. Passing -1 disables interception of the
interrupt character (as long as a port allows such behaviour).
If a finaliser raises an exception then it must not propagate through the
GC sweep function. This patch protects against such a thing by running
finaliser code via the mp_call_function_1_protected call.
This patch also adds scheduler lock/unlock calls around the finaliser
execution to further protect against any possible reentrancy issues: the
memory manager is already locked when doing a collection, but we also don't
want to allow any scheduled code to run, KeyboardInterrupts to interrupt the
code, nor threads to switch.
The common cases for inheritance are 0 or 1 parent types, for both built-in
types (eg built-in exceptions) as well as user defined types. So it makes
sense to optimise the case of 1 parent type by storing just the type and
not a tuple of 1 value (that value being the single parent type).
This patch makes such an optimisation. Even though there is a bit more
code to handle the two cases (either a single type or a tuple with 2 or
more values) it helps reduce overall code size because it eliminates the
need to create a static tuple to hold single parents (eg for the built-in
exceptions). It also helps reduce RAM usage for user defined types that
only derive from a single parent.
Changes in code size (in bytes) due to this patch:
bare-arm: -16
minimal (x86): -176
unix (x86-64): -320
unix nanbox: -384
stmhal: -64
cc3200: -32
esp8266: -108
This buffer is used to allocate objects temporarily, and such objects
require that their underlying memory be correctly aligned for their data
type. Aligning for mp_obj_t should be sufficient for emergency exceptions,
but in general the memory buffer should be aligned to the maximum alignment of
the machine (eg on a 32-bit machine with mp_obj_t being 4 bytes, a double
may not be correctly aligned).
This patch fixes a bug for certain nan-boxing builds, where mp_obj_t is 8
bytes and must be aligned to 8 bytes (even though the machine is 32 bit).
Hashing of float and complex numbers that are exact (real) integers should
return the same integer hash value as hashing the corresponding integer
value. Eg hash(1), hash(1.0) and hash(1+0j) should all be the same (this
is how Python is specified: if x==y then hash(x)==hash(y)).
This patch implements the simplest way of doing float/complex hashing by
just converting the value to int and returning that value.
Split this setting from MICROPY_CPYTHON_COMPAT. The idea is to be able to
keep MICROPY_CPYTHON_COMPAT disabled, but still pass more of the regression
testsuite. In particular, this fixes the last failing test in basics/ for
the Zephyr port.
The first memmove now copies fewer bytes in some cases (because len_adj <=
slice_len), and the memcpy is replaced with memmove to support the
possibility that dest and slice regions are overlapping.
This follows the pattern of how all other headers are now included, and
makes it explicit where the header file comes from. This patch also
removes -I options from Makefiles that specify the mp-readline/timeutils/
netutils directories, which are no longer needed.
Build happens in 3 stages:
1. Zephyr config header and make vars are generated from prj.conf.
2. libmicropython is built using them.
3. Zephyr is built and final link happens.
This patch changes mp_uint_t to size_t for the len argument of the
following public facing C functions:
mp_obj_tuple_get
mp_obj_list_get
mp_obj_get_array
These functions take a pointer to the len argument (to be filled in by the
function) and callers of these functions should update their code so the
type of len is changed to size_t. For ports that don't use nan-boxing
there should be no change in generated code because the size of the type
remains the same (word sized), and in a lot of cases there won't even be a
compiler warning if the type remains as mp_uint_t.
The reason for this change is to standardise on the use of size_t for
variables that count memory (or memory related) sizes/lengths. It helps
builds that use nan-boxing.
With this patch all illegal assignments are reported as "can't assign to
expression". Before the patch there were special cases for a literal on
the LHS, and for augmented assignments (eg +=), but it seems a waste of
bytes (and there are lots of bytes used in error messages) to spend on
distinguishing such errors which a user will rarely encounter.
By removing the 'E' code from the operator token encoding mini-language the
tokenising can be simplified. The 'E' code was only used for the !=
operator which is now handled as a special case; the optimisations for the
general case more than make up for the addition of this single, special
case. Furthermore, the . and ... operators can be handled in the same way
as != which reduces the code size a little further.
This simplification also removes a "goto".
Changes in code size for this patch are (measured in bytes):
bare-arm: -48
minimal x86: -64
unix x86-64: -112
unix nanbox: -64
stmhal: -48
cc3200: -48
esp8266: -76
The self variable may be closed-over in the function, and in that case the
call to super() should load the contents of the closure cell using
LOAD_DEREF (before this patch it would just load the cell directly).
Previous to this patch, if the result of the round function overflowed a
small int, or was inf or nan, then a garbage value was returned. With
this patch the correct big-int is returned if necessary and exceptions are
raised for inf or nan.
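A sketch of the changed behaviour (the exact exception types are assumed to
follow CPython):
round(1e25)           # now returns the correct big int instead of garbage
round(float('inf'))   # now raises an exception (OverflowError in CPython)
round(float('nan'))   # now raises an exception (ValueError in CPython)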
The C nearbyint function has exactly the semantics that Python's round()
requires, whereas C's round() requires extra steps to handle rounding of
numbers half way between integers. So using nearbyint reduces code size
and potentially eliminates any source of errors in the handling of half-way
numbers.
Also, bare-metal implementations of nearbyint can be more efficient than
round, so further code size is saved (and efficiency improved).
nearbyint is provided in the C99 standard so it should be available on all
supported platforms.
Previous to this patch, if the result of the trunc/ceil/floor functions
overflowed a small int, or was inf or nan, then a garbage value was
returned. With this patch the correct big-int is returned if necessary,
and exceptions are raised for inf or nan.
It improves readability of the code and reduces the chance of making a mistake.
This patch also fixes a bug with nan-boxing builds by rounding up the
calculation of the new NSLOTS variable, giving the correct number of slots
(being 4) even if mp_obj_t is larger than the native machine size.
Now, passing a keyword argument that is not expected will correctly report
that fact. If normal or detailed error messages are enabled then the name
of the unexpected argument will be reported.
This patch decreases the code size of bare-arm and stmhal by 12 bytes, and
cc3200 by 8 bytes. Other ports (minimal, unix, esp8266) remain the same in
code size. For terse error message configuration this is because the new
message is shorter than the old one. For normal (and detailed) error
message configuration this is because the new error message already exists
in py/objnamedtuple.c so there's no extra space in ROM needed for the
string.
The scheduler being locked generally means we are running a scheduled
function, and switching to another thread violates that, so don't switch in
such a case (even though we technically could).
And if we are running a scheduled function then we want to finish it ASAP,
so we shouldn't switch to another thread.
Furthermore, ports with threading enabled will lock the scheduler during a
hard IRQ, and this patch to the VM will make sure that threads are not
switched during a hard IRQ (which would crash the VM).
Instead of always reporting that some object cannot be implicitly converted
to a 'str', even when it is a 'bytes' object, adjust the logic so that a
failed conversion of str to bytes is reported as exactly that.
This will still report bad implicit conversion from e.g. 'int to bytes'
as 'int to str' but it will not result in the confusing
'can't convert 'str' object to str implicitly' anymore for calls like
b'somestring'.count('a').
Instead of caching data that is constant (code_info, const_table and
n_state), store just a pointer to the underlying function object from which
this data can be derived.
This helps reduce stack usage for the case when the mp_code_state_t
structure is stored on the stack, as well as heap usage when it's stored
on the heap.
The downside is that the VM becomes a little more complex because it now
needs to derive the data from the underlying function object. But this
doesn't impact the performance by much (if at all) because most of the
decoding of data is done outside the main opcode loop. Measurements using
pystone show that little to no performance is lost.
This patch also fixes a nasty bug whereby the bytecode can be reclaimed by
the GC during execution. With this patch there is always a pointer to the
function object held by the VM during execution, since it's stored in the
mp_code_state_t structure.
When make is passed "-B" it seems that everything is considered out-of-date
and so $? expands to all prerequisites. Thus there is no need for a
special check to see if $? is empty.
Some stack is allocated to format ints, and when the int implementation uses
long-long there should be additional stack allocated compared with the other
cases. This patch uses the existing "fmt_int_t" type to determine the
amount of stack to allocate.
This patch refactors the error handling in the lexer, to simplify it (ie
reduce code size).
A long time ago, when the lexer/parser/compiler were first written, the
lexer and parser were designed so they didn't use exceptions (ie nlr) to
report errors but rather returned an error code. Over time that has
gradually changed, the parser in particular has more and more ways of
raising exceptions. Also, the lexer never really handled all errors without
raising, eg there were some memory errors which could raise an exception
(and in these rare cases one would get a fatal nlr-not-handled fault).
This patch accepts the fact that the lexer can raise exceptions in some
cases and allows it to raise exceptions to handle all its errors, which are
for the most part just out-of-memory errors during construction of the
lexer. This makes the lexer a bit simpler, and also the persistent code
stuff is simplified.
What this means for users of the lexer is that calls to it must be wrapped
in a nlr handler. But all uses of the lexer already have such an nlr
handler for the parser (and compiler) so that doesn't put any extra burden
on the callers.
INT_MAX, used previously, is indeed the max value for an int, whereas on
LP64 platforms long is used for mp_int_t. Using MP_SMALL_INT_MAX is the
correct way to do it anyway.
Each thread needs to have its own private references to its current
locals/globals dicts, otherwise functions running within different
contexts (eg imported from different files) can behave very strangely.
There were 2 bugs, now fixed by this patch:
- after deleting an element the len of the dict did not decrease by 1
- after deleting an element searching through the dict could lead to
a seg fault due to there being an MP_OBJ_SENTINEL in the ordered array
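A sketch of the now-correct behaviour, assuming OrderedDict from
ucollections:
from ucollections import OrderedDict
d = OrderedDict([('a', 1), ('b', 2)])
del d['a']
print(len(d))    # now 1; previously the len did not decrease
print(d['b'])    # searching after a delete no longer risks a seg fault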
In this case, raise an exception without a message.
This allows a few code bytes to be shaved off compared to the currently used
mp_raise_msg(..., "") pattern. (Actual savings depend on the function code
alignment used by a particular platform.)
The parser was originally written to work without raising any exceptions
and instead return an error value to the caller. But it's now required
that a call to the parser be wrapped in an nlr handler, so we may as well
make use of that fact and simplify the parser so that it doesn't need to
keep track of any memory errors that it had. The parser anyway explicitly
raises an exception at the end if there was an error.
This patch simplifies the parser by letting the underlying memory
allocation functions raise an exception if they fail to allocate any
memory. And if there is an error parsing the "<id> = const(<val>)" pattern
then that also raises an exception right away instead of trying to recover
gracefully and then raise.
Previous to this patch any non-interned str/bytes objects would create a
special parse node that held a copy of the str/bytes data. Then in the
compiler this data would be turned into a str/bytes object. This actually
led to 2 copies of the data, one in the parse node and one in the object.
The parse node's copy of the data would be freed at the end of the compile
stage but nevertheless it meant that the peak memory usage of the
parse/compile stage was higher than it needed to be (by an amount equal to
the number of bytes in all the non-interned str/bytes objects).
This patch changes the behaviour so that str/bytes objects are created
directly in the parser and the object stored in a const-object parse node
(which already exists for bignum, float and complex const objects). This
reduces peak RAM usage of the parse/compile stage, simplifies the parser
and compiler, and reduces code size by about 170 bytes on Thumb2 archs,
and by about 300 bytes on Xtensa archs.
This patch allows uPy consts to be bignums, eg:
X = const(1 << 100)
The infrastructure for consts to be a bignum (rather than restricted to
small integers) has been in place for a while, ever since constant folding
was upgraded to allow bignums. It just required a small change (in this
patch) to enable it.
It's configured by MICROPY_PY_UERRNO_ERRORCODE and enabled by default
(since that's the behaviour before this patch).
Without this dict the lookup of errno codes to strings must use the
uerrno module itself.
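With the dict enabled, error numbers map back to their names directly, eg:
import uerrno
print(uerrno.errorcode[uerrno.EAGAIN])    # prints 'EAGAIN'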
It's much more efficient in RAM and code size to do implicit literal string
concatenation in the lexer, as opposed to the compiler.
RAM usage is reduced because the concatenation can be done right away in the
tokeniser by just accumulating the string/bytes literals into the lexer's
vstr. Prior to this patch adjacent strings/bytes would create a parse tree
(one node per string/bytes) and then in the compiler a whole new chunk of
memory was allocated to store the concatenated string, which used more than
double the memory compared to just accumulating in the lexer.
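The concatenation in question is the standard adjacent-literal form, eg:
msg = ("the quick brown fox "
       "jumps over the lazy dog")    # tokenised directly as one string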
This patch also significantly reduces code size:
bare-arm: -204
minimal: -204
unix x64: -328
stmhal: -208
esp8266: -284
cc3200: -224
Previous to this patch there was an explicit check for errors with line
continuation (where backslash was not immediately followed by a newline).
But this check is not necessary: if there is an error then the remaining
logic of the tokeniser will reject the backslash and correctly produce a
syntax error.
Since the table of keywords is sorted, we can use strcmp to do the search
and stop part way through the search if the comparison is less-than.
Because all tokens that are names are subject to this search, this
optimisation will improve the overall speed of the lexer when processing
a script.
The change also decreases code size by a little bit because we now use
strcmp instead of the custom str_strn_equal function.
Keywords only need to be searched for if the token is an MP_TOKEN_NAME, so
we can move the search to the part of the code that does the tokenising for
MP_TOKEN_NAME.
Grammar rules have 2 variants: ones that are attached to a specific
compile function which is called to compile that grammar node, and ones
that don't have a compile function and are instead just inspected to see
what form they take.
In the compiler there is a table of all grammar rules, with each entry
having a pointer to the associated compile function. Those rules with no
compile function have a null pointer. There are 120 such rules, so that's
120 words of essentially wasted code space.
By grouping together the compile vs no-compile rules we can put all the
no-compile rules at the end of the list of rules, and then we don't need
to store the null pointers. We just have a truncated table and it's
guaranteed that when indexing this table we only index the first half,
the half with populated pointers.
This patch implements such a grouping by having a specific macro for the
compile vs no-compile grammar rules (DEF_RULE vs DEF_RULE_NC). It saves
around 460 bytes of code on 32-bit archs.
Allows iteration over the following without allocating on the heap:
- tuple
- list
- string, bytes
- bytearray, array
- dict (not dict.keys, dict.values, dict.items)
- set, frozenset
Allows the following to be called without heap memory:
- all, any, min, max, sum
TODO: still need to allocate stack memory in bytecode for iter_buf.
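A sketch of code that now runs without heap allocation for the iteration
itself (the iterator lives in iter_buf):
total = 0
for x in (1, 2, 3):
    total += x
m = min((4, 5, 6))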
This improves efficiency of GIL release within the VM, by only doing the
release after a fixed number of jump-opcodes have executed in the current
thread.
It's more efficient to use the system mutexes instead of synthetic ones with
a busy-wait loop. The system can do proper scheduling and blocking of the
threads waiting on the mutex.
Previous to this patch, for large chunks of bytecode that originated from
a single source-code line, the bytecode-line mapping would generate
something like (for 42 bytecode bytes and 1 line):
BC_SKIP=31 LINE_SKIP=1
BC_SKIP=11 LINE_SKIP=0
This would mean that any errors in the last 11 bytecode bytes would be
reported on the following line. This patch fixes it to generate instead:
BC_SKIP=31 LINE_SKIP=0
BC_SKIP=11 LINE_SKIP=1
This patch implements support for class methods __delattr__ and __setattr__
for customising attribute access. It is controlled by the config option
MICROPY_PY_DELATTR_SETATTR and is disabled by default.
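A minimal sketch of what this enables, with MICROPY_PY_DELATTR_SETATTR
enabled:
class ReadOnly:
    def __setattr__(self, name, value):
        raise AttributeError('read-only')
    def __delattr__(self, name):
        raise AttributeError('read-only')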
It seems that the gcc toolchain on the Raspberry Pi likes %progbits instead
of @progbits. I verified that %progbits also works under x86, so this should
fix #2848 and fix #2842.
I verified that unix and mpy-cross both compile on my Raspberry Pi and on my
x64 machine.
The internal map/set functions now use size_t exclusively for computing
addresses. size_t is enough to reach all of available memory when
computing addresses so is the right type to use. In particular, for
nanbox builds it saves quite a bit of code size and RAM compared to the
original use of mp_uint_t (which is 64-bits on nanbox builds).
For archs that have 16-bit pointers, the asmxtensa.h file can give compiler
warnings about left-shift being greater than the width of the type (due to
the inline functions in this header file). Explicitly casting the
constants to uint32_t stops these warnings.
This patch fixes two main things:
- dicts can be printed directly using '%s' % dict
- %-formatting should not crash when a non-dict is passed to, eg, '%(foo)s'
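A sketch of the two fixed cases:
print('%s' % {'a': 1})     # prints the dict itself
print('%(a)s' % {'a': 1})  # prints 1
'%(a)s' % 1                # now raises an exception instead of crashing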
Updated modbuiltin.c to add conditional support for 3-arg calls to
pow() using the MICROPY_PY_BUILTINS_POW3 config parameter. Added support in
objint_mpz.c for an optimised implementation.
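Usage sketch of the 3-argument form (with MICROPY_PY_BUILTINS_POW3 enabled):
print(pow(3, 4, 5))       # (3**4) % 5 == 1
print(pow(3, 2**100, 7))  # big-int case handled by objint_mpz.c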
A signal is like a pin, but can also be inverted (active low). As such, it
abstracts properties of various physical devices, like LEDs, buttons,
relays, buzzers, etc. To instantiate a Signal:
pin = machine.Pin(...)
signal = machine.Signal(pin, inverted=True)
signal has the same .value() and __call__() methods as a pin.
This provides mp_vfs_XXX functions (eg mount, open, listdir) which are
agnostic to the underlying filesystem type, and just require an object with
the relevant filesystem-like methods (eg .mount, .open, .listdir) which can
then be mounted.
These mp_vfs_XXX functions would typically be used by a port to implement
the "uos" module, and mp_vfs_open would be the builtin open function.
This feature is controlled by MICROPY_VFS, disabled by default.
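A rough sketch of the idea, assuming a port exposes the mp_vfs_* functions
as a uos module (the RAMFS class and its method set are illustrative only):
class RAMFS:
    def mount(self, readonly, mkfs):
        pass
    def listdir(self, path):
        return []
    def open(self, path, mode):
        raise OSError(2)    # ENOENT
import uos
uos.mount(RAMFS(), '/ramdisk')
print(uos.listdir('/ramdisk'))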
In this case, don't allocate a copy, just return the non-empty string. This
helps with a standard pattern of buffering data in case of short reads:
buf = b""
while ...:
    s = f.read(...)
    buf += s
    ...
For the typical case when a single read returns all the data needed, there
won't be an extra allocation. This optimization helps uasyncio.
They are one-line functions and having them inline in mp_init/mp_deinit
eliminates the overhead of a function call, and matches how other state
is initialised in mp_init.
This is how CPython does it, and it's very useful to help users discover
the available modules for a given port, especially built-in and frozen
modules. The function does not list modules that are in the filesystem
because this would require a fair bit of work to do correctly, and is very
port specific (depending on the filesystem).
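Typical usage (following CPython's form):
help('modules')    # lists built-in and frozen modules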
If the result is guaranteed to fit in a small int, it is handled in objint.c.
Otherwise, it is delegated to mp_obj_int_from_bytes_impl(), which should
be implemented by individual objint_*.c, similar to
mp_obj_int_to_bytes_impl().
If GeneratorExit is injected as a throw-value then that should lead to
the close() method being called, if it exists. If close() does not exist
then throw() should not be called, and this patch fixes this.
The commit d9047d3c8a introduced a bug
whereby "from a.b import c" stopped working for frozen packages. This is
because the path was not properly truncated and became "a//b". Such a
path resolves correctly for a "real" filesystem, but not for a search in
the list of frozen modules.
UART REPL support was lost in os.dupterm() refactorings, etc. As
os.dupterm() is there, implement UART REPL support at the high level:
if MICROPY_STDIO_UART is set, make the default boot.py contain an
os.dupterm() call for a UART. This means that changing the MICROPY_STDIO_UART
value will
also require erasing flash on a module to force boot.py re-creation.
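A sketch of what the generated boot.py would contain when MICROPY_STDIO_UART
is set (the UART id and baudrate here are port-specific assumptions):
import os, machine
uart = machine.UART(0, 115200)
os.dupterm(uart)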
This check always fails (ie chr0 is never EOF) because the callers of this
function never call it past the end of the input stream. And even if they
did it would be harmless because 1) reader.readbyte must continue to
return an EOF char if the stream is exhausted; 2) next_char would just
count the subsequent EOF's as characters worth 1 column.
import utimeq, utime
# Max queue size, the queue allocated statically on creation
q = utimeq.utimeq(10)
q.push(utime.ticks_ms(), data1, data2)
res = [0, 0, 0]
# Items in res are filled up with results
q.pop(res)
Defining and initialising mp_kbd_exception is boiler-plate code and so the
core runtime can provide it, instead of each port needing to do it
themselves.
The exception object is placed in the VM state rather than on the heap.
sys.exit() is an important function to terminate a program. In particular,
the testsuite relies on it to skip tests (i.e. any other functionality may
be disabled, but sys.exit() is required to at least report that properly).
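For example, the testsuite's skip pattern looks roughly like this:
import sys
try:
    import ujson
except ImportError:
    print('SKIP')
    sys.exit()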
For all but the last pass the assembler only needs to count how much space
is needed for the machine code, it doesn't actually need to emit anything.
The dummy_data just uses unnecessary RAM and without it the code is not
any more complex (and code size does not increase for Thumb and Xtensa
archs).
This patch moves some common code from the individual inline assemblers to
the compiler, the code that calls the emit-glue to assign the machine code
to the function's scope.
This patch adds the MICROPY_EMIT_INLINE_XTENSA option, which, when
enabled, allows the @micropython.asm_xtensa decorator to be used.
The following opcodes are currently supported (ax is a register, a0-a15):
ret_n()
callx0(ax)
j(label)
jx(ax)
beqz(ax, label)
bnez(ax, label)
mov(ax, ay)
movi(ax, imm) # imm can be full 32-bit, uses l32r if needed
and_(ax, ay, az)
or_(ax, ay, az)
xor(ax, ay, az)
add(ax, ay, az)
sub(ax, ay, az)
mull(ax, ay, az)
l8ui(ax, ay, imm)
l16ui(ax, ay, imm)
l32i(ax, ay, imm)
s8i(ax, ay, imm)
s16i(ax, ay, imm)
s32i(ax, ay, imm)
l16si(ax, ay, imm)
addi(ax, ay, imm)
ball(ax, ay, label)
bany(ax, ay, label)
bbc(ax, ay, label)
bbs(ax, ay, label)
beq(ax, ay, label)
bge(ax, ay, label)
bgeu(ax, ay, label)
blt(ax, ay, label)
bnall(ax, ay, label)
bne(ax, ay, label)
bnone(ax, ay, label)
Upon entry to the assembly function the registers a0, a12, a13, a14 are
pushed to the stack and the stack pointer (a1) decreased by 16. Upon
exit, these registers and the stack pointer are restored, and ret.n is
executed to return to the caller (caller address is in a0).
Note that the ABI for the Xtensa emitters is non-windowing.
If a port defines MP_PLAT_COMMIT_EXEC then this function is used to turn
RAM data into executable code. For example a port may want to write the
data to flash for execution. The function must return a pointer to the
executable data.
The constants MP_IOCTL_POLL_xxx, which were stmhal-specific, are moved
from stmhal/pybioctl.h (now deleted) to py/stream.h. And they are renamed
to MP_STREAM_POLL_xxx to be consistent with other such constants.
All uses of these constants have been updated.
If a port defines MICROPY_READER_POSIX or MICROPY_READER_FATFS then
lexer.c now provides an implementation of mp_lexer_new_from_file using
the mp_reader_new_file function.
Implementations of persistent-code reader are provided for POSIX systems
and systems using FatFS. Macros to use these are MICROPY_READER_POSIX and
MICROPY_READER_FATFS respectively. If an alternative implementation is
needed then a port can define the function mp_reader_new_file.
It is split into 2 functions, one to make small ints and the other to make
a non-small-int leaf node. This reduces code size by 32 bytes on
bare-arm, 64 bytes on unix (x86-64) and 144 bytes on stmhal.
This includes StopIteration and thus is important for making Python-coded
iterables work with yield from/await.
Exceptions in Python send() are still not handled and left for future
consideration and optimization.
We allow 'exc.__traceback__ = None' assignment as a low-level optimization
for pre-allocating an exception instance and raising it repeatedly - this
avoids memory allocation during raise. However, uPy will keep adding
traceback entries to such an exception instance, so before throwing it the
traceback should be cleared with that assignment.
The 'exc.__traceback__ = None' syntax is CPython compatible. However, unlike
in CPython, reading that attribute or setting it to any other value is not
supported (and not intended to be supported, again, the only reason for
adding this feature is to allow zero-memalloc exception raising).
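A sketch of the zero-memalloc raise pattern this enables (the exception type
and values are arbitrary):
exc = OSError(110)               # pre-allocate once
for _ in range(3):
    exc.__traceback__ = None     # clear accumulated traceback entries
    try:
        raise exc
    except OSError:
        pass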
Its addition was due to an early exploration of how to add a CPython-like
stream interface. It's clear that it's not needed and just takes up
bytes in all ports.
With this patch one can now do "make FROZEN_MPY_DIR=../../frozen" to
specify a directory containing scripts to be frozen (as well as absolute
paths).
The compiled .mpy files are now stored in $(BUILD)/frozen_mpy/.
Now, to use frozen bytecode all a port needs to do is define
FROZEN_MPY_DIR to the directory containing the .py files to freeze, and
define MICROPY_MODULE_FROZEN_MPY and MICROPY_QSTR_EXTRA_POOL.
In both parse.c and qstr.c, an internal chunking allocator tidies up
by calling m_renew to shrink an allocated chunk to the size used, and
assumes that the chunk will not move. However, when MICROPY_ENABLE_GC
is false, m_renew calls the system realloc, which does not guarantee
this behaviour. Environments where realloc may return a different
pointer include:
(1) mbed-os with MBED_HEAP_STATS_ENABLED (which adds a wrapper around
malloc & friends; this is where I was hit by the bug);
(2) valgrind on linux (how I diagnosed it).
The fix is to call m_renew_maybe with allow_move=false.
Builtin functions with a fixed number of arguments (0, 1, 2 or 3) are
quite common. Before this patch the wrapper for such a function cost
3 machine words. After this patch it only takes 2, which can reduce the
code size by quite a bit (and pays off even more, the more functions are
added). It also makes function dispatch slightly more efficient in CPU
usage, and furthermore reduces stack usage for these cases. On x86 and
Thumb archs the dispatch functions are now tail-call optimised by the
compiler.
The bare-arm port has its code size increase by 76 bytes, but stmhal drops
by 904 bytes. Stack usage by these builtin functions is decreased by 48
bytes on Thumb2 archs.
In order to have more fine-grained control over how builtin functions are
constructed, the MP_DECLARE_CONST_FUN_OBJ macros are made more specific,
with a suffix of _0, _1, _2, _3, _VAR, _VAR_BETWEEN or _KW. These names now
match the MP_DEFINE_CONST_FUN_OBJ macros.
As long as a port implements the mp_hal_sleep_ms(), mp_hal_ticks_ms(), etc.
functions, it can just use the standard implementations of the
utime.sleep_ms(), utime.ticks_ms(), etc. Python-level functions.
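The resulting API is then the usual one, eg:
import utime
utime.sleep_ms(100)
print(utime.ticks_ms())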
Now there is just one function to allocate a new vstr, namely vstr_new
(in addition to vstr_init etc). The caller of this function should know
what initial size to allocate for the buffer, or at least have some policy
or config option, instead of leaving it to a default (as it was before).
This refactors ujson.loads(s) to behave as ujson.load(StringIO(s)).
Increase in code size is: 366 bytes for unix x86-64, 180 bytes for
stmhal, 84 bytes for esp8266.
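In other words, the two calls below are now equivalent (uio provides
StringIO):
import ujson, uio
print(ujson.loads('{"a": 1}'))
print(ujson.load(uio.StringIO('{"a": 1}')))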
Setting emit_dent=0 is unnecessary because arriving in that part of the
if-logic will guarantee that emit_dent is already zero.
The block to check indent_top(lex)>0 is unreachable because a newline is
always inserted at the end of the input stream, and hence dedents are
always processed before EOF.
Similar to how binary op already works. Common unary operations already
have fast paths for bool so there's no need to have explicit handling of
ops in bool_unary_op, especially since they have the same behaviour as
integers.
On 32-bit archs this makes the scope_t struct 48 bytes in size, which fits
in 3 GC blocks (previously it used 4 GC blocks). This will lead to some
savings when compiling scripts because there are usually quite a few scopes,
one for each function and class.
Note that qstrs will fit in 16 bits; this assumption is made in a few other
places.
Following how other objects work, set/frozenset methods should use the
mp_check_self() macro to check the type of the self argument, because in
most cases this check can be a null operation.
Saves about 100-180 bytes of code for builds with set and frozenset
enabled.
Having a micropython.const identity function, and writing "from micropython
import const" at the start of scripts that use the const feature, allows to
write scripts which are compatible with CPython, and with uPy builds that
don't include const optimisation.
This patch adds such a function and updates the tests to do the import.
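The resulting pattern (the name _FOO is just an example):
from micropython import const
_FOO = const(1)    # compile-time constant in uPy builds with const support
print(_FOO)        # under CPython, const() is simply the identity function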
When an exception is raised and is to be handled by the VM, it is stored
on the Python value stack so the bytecode can access it. CPython stores
3 objects on the stack for each exception: exc type, exc instance and
traceback. uPy followed this approach, but it turns out not to be
necessary. Instead, it is enough to store just the exception instance on
the Python value stack. The only place where the 3 values are needed
explicitly is for the __exit__ handler of a with-statement context, but
for these cases the 3 values can be extracted from the single exception
instance.
This patch removes the need to store 3 values on the stack, and instead
just stores the exception instance.
Code size is reduced by about 50-100 bytes, the compiler and VM are
slightly simpler, generated bytecode is smaller (by 2 bytes for each try
block), and the Python value stack is reduced in size for functions that
handle exceptions.