When make is passed "-B" it seems that everything is considered out-of-date
and so $? expands to all prerequisites. Thus there is no need for a
special check to see if $? is empty.
Some stack is allocated to format ints, and when the int implementation uses
long-long more stack must be allocated than in the other cases. This patch
uses the existing "fmt_int_t" type to determine the
amount of stack to allocate.
This patch refactors the error handling in the lexer, to simplify it (ie
reduce code size).
A long time ago, when the lexer/parser/compiler were first written, the
lexer and parser were designed so they didn't use exceptions (ie nlr) to
report errors but rather returned an error code. Over time that has
gradually changed: the parser in particular has gained more and more ways of
raising exceptions. Also, the lexer never really handled all errors without
raising, eg there were some memory errors which could raise an exception
(and in these rare cases one would get a fatal nlr-not-handled fault).
This patch accepts the fact that the lexer can raise exceptions in some
cases and allows it to raise exceptions to handle all its errors, which are
for the most part just out-of-memory errors during construction of the
lexer. This makes the lexer a bit simpler, and also the persistent code
stuff is simplified.
What this means for users of the lexer is that calls to it must be wrapped
in an nlr handler. But all uses of the lexer already have such an nlr
handler for the parser (and compiler) so that doesn't put any extra burden
on the callers.
INT_MAX, used previously, is indeed the maximum value for int, whereas on
LP64 platforms long is used for mp_int_t. Using MP_SMALL_INT_MAX is the
correct way to do it anyway.
Each thread needs to have its own private references to its current
locals/globals dicts, otherwise functions running within different
contexts (eg imported from different files) can behave very strangely.
There were 2 bugs, now fixed by this patch:
- after deleting an element the len of the dict did not decrease by 1
- after deleting an element searching through the dict could lead to
  a seg fault due to there being an MP_OBJ_SENTINEL in the ordered array
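A minimal check of the fixed behaviour (assuming, for illustration, that the
ordered-array case is exercised via ucollections.OrderedDict):
import ucollections
d = ucollections.OrderedDict([("a", 1), ("b", 2), ("c", 3)])
del d["b"]
print(len(d))   # now correctly 2
print(d["c"])   # searching past the deleted slot no longer seg-faults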
In this case, raise an exception without a message.
This allows shaving a few code bytes compared to the currently used
mp_raise_msg(..., "") pattern. (Actual savings depend on the function code
alignment used by a particular platform.)
The parser was originally written to work without raising any exceptions
and instead return an error value to the caller. But it's now required
that a call to the parser be wrapped in an nlr handler, so we may as well
make use of that fact and simplify the parser so that it doesn't need to
keep track of any memory errors it encountered. In any case, the parser
explicitly raises an exception at the end if there was an error.
This patch simplifies the parser by letting the underlying memory
allocation functions raise an exception if they fail to allocate any
memory. And if there is an error parsing the "<id> = const(<val>)" pattern
then an exception is also raised right away instead of trying to recover
gracefully and raising later.
Previous to this patch any non-interned str/bytes objects would create a
special parse node that held a copy of the str/bytes data. Then in the
compiler this data would be turned into a str/bytes object. This actually
led to 2 copies of the data, one in the parse node and one in the object.
The parse node's copy of the data would be freed at the end of the compile
stage but nevertheless it meant that the peak memory usage of the
parse/compile stage was higher than it needed to be (by an amount equal to
the number of bytes in all the non-interned str/bytes objects).
This patch changes the behaviour so that str/bytes objects are created
directly in the parser and the object stored in a const-object parse node
(which already exists for bignum, float and complex const objects). This
reduces peak RAM usage of the parse/compile stage, simplifies the parser
and compiler, and reduces code size by about 170 bytes on Thumb2 archs,
and by about 300 bytes on Xtensa archs.
This patch allows uPy consts to be bignums, eg:
X = const(1 << 100)
The infrastructure for consts to be a bignum (rather than restricted to
small integers) has been in place for a while, ever since constant folding
was upgraded to allow bignums. It just required a small change (in this
patch) to enable it.
It's configured by MICROPY_PY_UERRNO_ERRORCODE and enabled by default
(since that's the behaviour before this patch).
Without this dict the lookup of errno codes to strings must use the
uerrno module itself.
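For example, on a port with uerrno enabled:
import uerrno
print(uerrno.errorcode[uerrno.ENOENT])   # prints 'ENOENT'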
It's much more efficient in RAM and code size to do implicit literal string
concatenation in the lexer, as opposed to the compiler.
RAM usage is reduced because the concatenation can be done right away in the
tokeniser by just accumulating the string/bytes literals into the lexer's
vstr. Prior to this patch adjacent strings/bytes would create a parse tree
(one node per string/bytes) and then in the compiler a whole new chunk of
memory was allocated to store the concatenated string, which used more than
double the memory compared to just accumulating in the lexer.
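For example, the adjacent literals below are now joined directly by the
tokeniser rather than producing separate parse nodes:
msg = "implicit " "literal " "concatenation"
data = b"\x00\x01" b"\x02\x03"
print(msg)    # implicit literal concatenation
print(data)   # b'\x00\x01\x02\x03'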
This patch also significantly reduces code size:
bare-arm: -204
minimal: -204
unix x64: -328
stmhal: -208
esp8266: -284
cc3200: -224
Previous to this patch there was an explicit check for errors with line
continuation (where backslash was not immediately followed by a newline).
But this check is not necessary: if there is an error then the remaining
logic of the tokeniser will reject the backslash and correctly produce a
syntax error.
Since the table of keywords is sorted, we can use strcmp to do the search
and stop part way through the search if the comparison is less-than.
Because all tokens that are names are subject to this search, this
optimisation will improve the overall speed of the lexer when processing
a script.
The change also decreases code size by a little bit because we now use
strcmp instead of the custom str_strn_equal function.
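The idea, sketched here in Python for illustration (the real code is C and
uses strcmp over the sorted keyword table):
def is_keyword(name, keywords):   # keywords is a sorted list of strings
    for kw in keywords:
        if kw == name:
            return True
        if kw > name:             # already past where name would sort: stop early
            return False
    return False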
Keywords only need to be searched for if the token is an MP_TOKEN_NAME, so
we can move the search to the part of the code that does the tokenising for
MP_TOKEN_NAME.
Grammar rules have 2 variants: ones that are attached to a specific
compile function which is called to compile that grammar node, and ones
that don't have a compile function and are instead just inspected to see
what form they take.
In the compiler there is a table of all grammar rules, with each entry
having a pointer to the associated compile function. Those rules with no
compile function have a null pointer. There are 120 such rules, so that's
120 words of essentially wasted code space.
By grouping together the compile vs no-compile rules we can put all the
no-compile rules at the end of the list of rules, and then we don't need
to store the null pointers. We just have a truncated table and it's
guaranteed that when indexing this table we only index the first half,
the half with populated pointers.
This patch implements such a grouping by having a specific macro for the
compile vs no-compile grammar rules (DEF_RULE vs DEF_RULE_NC). It saves
around 460 bytes of code on 32-bit archs.
Allows iterating over the following without allocating on the heap:
- tuple
- list
- string, bytes
- bytearray, array
- dict (not dict.keys, dict.values, dict.items)
- set, frozenset
Allows calling the following without heap memory:
- all, any, min, max, sum
TODO: still need to allocate stack memory in bytecode for iter_buf.
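For example, all of the following now run without a heap-allocated iterator:
print(sum((1, 2, 3)))        # iterate a tuple
print(min([4, 5, 6]))        # iterate a list
print(any(b"\x00\x01"))      # iterate bytes
for k in {"a": 1, "b": 2}:   # iterate a dict (its keys)
    print(k)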
This improves efficiency of GIL release within the VM, by only doing the
release after a fixed number of jump-opcodes have executed in the current
thread.
It's more efficient to use the system mutexes instead of synthetic ones with
a busy-wait loop. The system can do proper scheduling and blocking of the
threads waiting on the mutex.
Previous to this patch, for large chunks of bytecode that originated from
a single source-code line, the bytecode-line mapping would generate
something like (for 42 bytecode bytes and 1 line):
BC_SKIP=31 LINE_SKIP=1
BC_SKIP=11 LINE_SKIP=0
This would mean that any errors in the last 11 bytecode bytes would be
reported on the following line. This patch fixes it to generate instead:
BC_SKIP=31 LINE_SKIP=0
BC_SKIP=11 LINE_SKIP=1
This patch implements support for the special methods __delattr__ and
__setattr__ for customising attribute access. It is controlled by the config
option
MICROPY_PY_DELATTR_SETATTR and is disabled by default.
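A minimal illustration (the methods here just report the access; a real class
would also store or remove the attribute):
class Guarded:
    def __setattr__(self, attr, value):
        print("setting", attr, "to", value)
    def __delattr__(self, attr):
        print("deleting", attr)
g = Guarded()
g.x = 1    # prints: setting x to 1
del g.x    # prints: deleting x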
It seems that the gcc toolchain on the Raspberry Pi likes %progbits instead
of @progbits. I verified that %progbits also works under x86, so this should
fix #2848 and fix #2842.
I verified that unix and mpy-cross both compile on my Raspberry Pi and on my
x64 machine.
The internal map/set functions now use size_t exclusively for computing
addresses. size_t is enough to reach all of available memory when
computing addresses so is the right type to use. In particular, for
nanbox builds it saves quite a bit of code size and RAM compared to the
original use of mp_uint_t (which is 64-bits on nanbox builds).
For archs that have 16-bit pointers, the asmxtensa.h file can give compiler
warnings about left-shift being greater than the width of the type (due to
the inline functions in this header file). Explicitly casting the
constants to uint32_t stops these warnings.
This patch fixes two main things:
- dicts can be printed directly using '%s' % dict
- %-formatting should not crash when a non-dict is passed to, eg, '%(foo)s'
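For example (TypeError is what CPython raises in the last case):
d = {"foo": 1}
print("%s" % d)        # prints the dict itself: {'foo': 1}
print("%(foo)s" % d)   # named field looked up in the dict: 1
try:
    "%(foo)s" % 42     # a non-dict here now raises instead of crashing
except TypeError:
    print("TypeError")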
Updated modbuiltin.c to add conditional support for 3-arg calls to
pow() using the MICROPY_PY_BUILTINS_POW3 config parameter. Added support in
objint_mpz.c for an optimised implementation.
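For example, with the option enabled:
print(pow(3, 4, 5))   # (3 ** 4) % 5 == 1, computed as modular exponentiation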
A signal is like a pin, but can also be inverted (active low). As such, it
abstracts properties of various physical devices, like LEDs, buttons,
relays, buzzers, etc. To instantiate a Signal:
pin = machine.Pin(...)
signal = machine.Signal(pin, inverted=True)
signal has the same .value() and __call__() methods as a pin.
This provides mp_vfs_XXX functions (eg mount, open, listdir) which are
agnostic to the underlying filesystem type, and just require an object with
the relevant filesystem-like methods (eg .mount, .open, .listdir) which can
then be mounted.
These mp_vfs_XXX functions would typically be used by a port to implement
the "uos" module, and mp_vfs_open would be the builtin open function.
This feature is controlled by MICROPY_VFS, disabled by default.
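As a rough sketch, a Python object like the hypothetical one below (class name,
method bodies and mount point are illustrative only) could be mounted and then
used through the generic functions:
class RamFS:
    def mount(self, readonly, mkfs):
        pass
    def umount(self):
        pass
    def open(self, path, mode):
        raise OSError(2)   # this toy filesystem has no files
    def listdir(self, path):
        return []
import uos
uos.mount(RamFS(), "/ram")   # assuming the port's uos module is built on mp_vfs_mount
print(uos.listdir("/ram"))   # []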
In this case, don't allocate a copy, just return the non-empty string. This
helps with a standard pattern of buffering data in case of short reads:
buf = b""
while ...:
s = f.read(...)
buf += s
...
For the typical case where a single read returns all the data needed, there
won't be an extra allocation. This optimization helps uasyncio.
They are one-line functions and having them inline in mp_init/mp_deinit
eliminates the overhead of a function call, and matches how other state
is initialised in mp_init.
This is how CPython does it, and it's very useful to help users discover
the available modules for a given port, especially built-in and frozen
modules. The function does not list modules that are in the filesystem
because this would require a fair bit of work to do correctly, and is very
port specific (depending on the filesystem).
If the result is guaranteed to fit in a small int, it is handled in objint.c.
Otherwise, it is delegated to mp_obj_int_from_bytes_impl(), which should
be implemented by the individual objint_*.c files, similar to
mp_obj_int_to_bytes_impl().
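For example:
print(int.from_bytes(b"\x01\x02", "big"))    # 258, fits in a small int
print(int.from_bytes(b"\xff" * 16, "big"))   # too large for a small int, delegated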
If GeneratorExit is injected as a throw-value then that should lead to
the close() method being called, if it exists. If close() does not exist
then throw() should not be called, and this patch fixes this.
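A minimal illustration of the fixed behaviour, using a hypothetical delegate
class that provides close() (CPython behaves the same way):
class Delegate:
    def __iter__(self):
        return self
    def __next__(self):
        return 1
    def close(self):
        print("delegate closed")
def gen():
    yield from Delegate()
g = gen()
next(g)
try:
    g.throw(GeneratorExit)   # calls Delegate.close(), not Delegate.throw()
except (GeneratorExit, StopIteration):
    pass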
The commit d9047d3c8a introduced a bug
whereby "from a.b import c" stopped working for frozen packages. This is
because the path was not properly truncated and became "a//b". Such a
path resolves correctly for a "real" filesystem, but not for a search in
the list of frozen modules.
UART REPL support was lost in os.dupterm() refactorings, etc. As
os.dupterm() is there, implement UART REPL support at the high level -
if MICROPY_STDIO_UART is set, make the default boot.py contain an
os.dupterm() call for a UART. This means that changing the MICROPY_STDIO_UART
value will also require erasing the flash on a module to force boot.py
re-creation.
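For illustration, the generated boot.py would then contain something along
these lines (the exact UART constructor arguments are port-specific and only
assumed here):
import uos, machine
uart = machine.UART(0, 115200)
uos.dupterm(uart)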
This check always fails (ie chr0 is never EOF) because the callers of this
function never call it past the end of the input stream. And even if they
did it would be harmless because 1) reader.readbyte must continue to
return an EOF char if the stream is exhausted; 2) next_char would just
count the subsequent EOFs as characters worth 1 column.
import utimeq, utime
# Max queue size; the queue is allocated statically on creation
q = utimeq.utimeq(10)
data1, data2 = "callback", "arg"  # hypothetical payload values for this example
q.push(utime.ticks_ms(), data1, data2)
res = [0, 0, 0]
# Items in res are filled up with results
q.pop(res)
Defining and initialising mp_kbd_exception is boiler-plate code and so the
core runtime can provide it, instead of each port needing to do it
itself.
The exception object is placed in the VM state rather than on the heap.
sys.exit() is an important function to terminate a program. In particular,
the testsuite relies on it to skip tests (i.e. any other functionality may
be disabled, but sys.exit() is required to at least report that properly).
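For example, a test can skip itself along these lines (the module chosen here
is only for illustration; the "SKIP" marker is what the test harness looks for):
import sys
try:
    import uhashlib
except ImportError:
    print("SKIP")
    sys.exit()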
For all but the last pass the assembler only needs to count how much space
is needed for the machine code; it doesn't actually need to emit anything.
The dummy_data just uses unnecessary RAM and without it the code is not
any more complex (and code size does not increase for Thumb and Xtensa
archs).
This patch moves some common code from the individual inline assemblers to
the compiler, the code that calls the emit-glue to assign the machine code
to the function's scope.
This patch adds the MICROPY_EMIT_INLINE_XTENSA option, which, when
enabled, allows the @micropython.asm_xtensa decorator to be used.
The following opcodes are currently supported (ax is a register, a0-a15):
ret_n()
callx0(ax)
j(label)
jx(ax)
beqz(ax, label)
bnez(ax, label)
mov(ax, ay)
movi(ax, imm) # imm can be full 32-bit, uses l32r if needed
and_(ax, ay, az)
or_(ax, ay, az)
xor(ax, ay, az)
add(ax, ay, az)
sub(ax, ay, az)
mull(ax, ay, az)
l8ui(ax, ay, imm)
l16ui(ax, ay, imm)
l32i(ax, ay, imm)
s8i(ax, ay, imm)
s16i(ax, ay, imm)
s32i(ax, ay, imm)
l16si(ax, ay, imm)
addi(ax, ay, imm)
ball(ax, ay, label)
bany(ax, ay, label)
bbc(ax, ay, label)
bbs(ax, ay, label)
beq(ax, ay, label)
bge(ax, ay, label)
bgeu(ax, ay, label)
blt(ax, ay, label)
bnall(ax, ay, label)
bne(ax, ay, label)
bnone(ax, ay, label)
Upon entry to the assembly function the registers a0, a12, a13, a14 are
pushed to the stack and the stack pointer (a1) decreased by 16. Upon
exit, these registers and the stack pointer are restored, and ret.n is
executed to return to the caller (caller address is in a0).
Note that the ABI for the Xtensa emitters is non-windowing.
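A minimal usage sketch (assuming, by analogy with the other inline assemblers,
that integer arguments arrive in a2/a3 and the value left in a2 is returned):
import micropython
@micropython.asm_xtensa
def asm_add(a2, a3):
    add(a2, a2, a3)   # a2 = a2 + a3; a2 holds the return value
print(asm_add(1, 2))  # 3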
If a port defines MP_PLAT_COMMIT_EXEC then this function is used to turn
RAM data into executable code. For example a port may want to write the
data to flash for execution. The function must return a pointer to the
executable data.