This buffer is used to allocate objects temporarily, and such objects
require that their underlying memory be correctly aligned for their data
type. Aligning for mp_obj_t should be sufficient for emergency exceptions,
but in general the memory buffer should be aligned to the maximum alignment of
the machine (eg on a 32-bit machine with mp_obj_t being 4 bytes, a double
may not be correctly aligned).
This patch fixes a bug for certain nan-boxing builds, where mp_obj_t is 8
bytes and must be aligned to 8 bytes (even though the machine is 32 bit).
Hashing of float and complex numbers that are exact (real) integers should
return the same integer hash value as hashing the corresponding integer
value. Eg hash(1), hash(1.0) and hash(1+0j) should all be the same (this
is how Python is specified: if x==y then hash(x)==hash(y)).
This patch implements the simplest way of doing float/complex hashing by
just converting the value to int and returning that value.
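For example, the following should now hold (a minimal sketch of the expected
behaviour):
assert hash(1) == hash(1.0) == hash(1 + 0j)
assert hash(-7) == hash(-7.0) == hash(-7 + 0j)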
Split this setting from MICROPY_CPYTHON_COMPAT. The idea is to be able to
keep MICROPY_CPYTHON_COMPAT disabled, but still pass more of the regression
testsuite. In particular, this fixes the last failing test in basics/ for the
Zephyr port.
The first memmove now copies fewer bytes in some cases (because len_adj <=
slice_len), and the memcpy is replaced with memmove to support the
possibility that dest and slice regions are overlapping.
This follows the pattern of how all other headers are now included, and
makes it explicit where the header file comes from. This patch also
removes -I options from Makefiles that specify the mp-readline/timeutils/
netutils directories, which are no longer needed.
Build happens in 3 stages:
1. Zephyr config header and make vars are generated from prj.conf.
2. libmicropython is built using them.
3. Zephyr is built and final link happens.
This patch changes mp_uint_t to size_t for the len argument of the
following public facing C functions:
mp_obj_tuple_get
mp_obj_list_get
mp_obj_get_array
These functions take a pointer to the len argument (to be filled in by the
function) and callers of these functions should update their code so the
type of len is changed to size_t. For ports that don't use nan-boxing
there should be no change in generated code because the size of the type
remains the same (word sized), and in a lot of cases there won't even be a
compiler warning if the type remains as mp_uint_t.
The reason for this change is to standardise on the use of size_t for
variables that count memory (or memory related) sizes/lengths. It helps
builds that use nan-boxing.
With this patch all illegal assignments are reported as "can't assign to
expression". Before the patch there were special cases for a literal on
the LHS, and for augmented assignments (eg +=), but it seems a waste of
bytes (and there are lots of bytes used in error messages) to distinguish
such errors, which a user will rarely encounter.
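For illustration, all of the following now produce the same generic error
(a sketch; the exact wording depends on the error-message configuration):
1 = x        # literal on the LHS
x + 1 = y    # general expression on the LHS
f(x) += 2    # augmented assignment to an expression
# each of these reports: SyntaxError: can't assign to expression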
By removing the 'E' code from the operator token encoding mini-language the
tokenising can be simplified. The 'E' code was only used for the !=
operator which is now handled as a special case; the optimisations for the
general case more than make up for the addition of this single, special
case. Furthermore, the . and ... operators can be handled in the same way
as != which reduces the code size a little further.
This simplification also removes a "goto".
Changes in code size for this patch are (measured in bytes):
bare-arm: -48
minimal x86: -64
unix x86-64: -112
unix nanbox: -64
stmhal: -48
cc3200: -48
esp8266: -76
The self variable may be closed-over in the function, and in that case the
call to super() should load the contents of the closure cell using
LOAD_DEREF (before this patch it would just load the cell directly).
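A minimal sketch of the case being fixed (hypothetical class names): capturing
self in a nested function turns it into a closure cell, so the implicit super()
lookup must load it with LOAD_DEREF:
class A:
    def f(self):
        return 42
class B(A):
    def f(self):
        def g():
            return self        # self is closed-over, so it lives in a cell
        return super().f()     # super() must now load self from the closure cell
print(B().f())                 # 42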
Previous to this patch, if the result of the round function overflowed a
small int, or was inf or nan, then a garbage value was returned. With
this patch the correct big-int is returned if necessary and exceptions are
raised for inf or nan.
The C nearbyint function has exactly the semantics that Python's round()
requires, whereas C's round() requires extra steps to handle rounding of
numbers half way between integers. So using nearbyint reduces code size
and potentially eliminates any source of errors in the handling of half-way
numbers.
Also, bare-metal implementations of nearbyint can be more efficient than
round, so further code size is saved (and efficiency improved).
nearbyint is provided in the C99 standard so it should be available on all
supported platforms.
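A rough illustration of the intended behaviour (a sketch, assuming
CPython-compatible exception types for inf and nan):
print(round(2.5))        # 2: half-way cases round to even, per nearbyint
print(round(1e18))       # a correct (big) integer instead of garbage on overflow
try:
    round(float('inf'))
except OverflowError:
    print('inf raises')
try:
    round(float('nan'))
except ValueError:
    print('nan raises')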
Previous to this patch, if the result of the trunc/ceil/floor functions
overflowed a small int, or was inf or nan, then a garbage value was
returned. With this patch the correct big-int is returned if necessary,
and exceptions are raised for inf or nan.
It improves readability of the code and reduces the chance of making a mistake.
This patch also fixes a bug with nan-boxing builds by rounding up the
calculation of the new NSLOTS variable, giving the correct number of slots
(being 4) even if mp_obj_t is larger than the native machine size.
Now, passing a keyword argument that is not expected will correctly report
that fact. If normal or detailed error messages are enabled then the name
of the unexpected argument will be reported.
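For example (a sketch; the argument name is shown only for normal/detailed
error messages):
def f(x):
    return x
f(y=1)    # TypeError: unexpected keyword argument 'y'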
This patch decreases the code size of bare-arm and stmhal by 12 bytes, and
cc3200 by 8 bytes. Other ports (minimal, unix, esp8266) remain the same in
code size. For terse error message configuration this is because the new
message is shorter than the old one. For normal (and detailed) error
message configuration this is because the new error message already exists
in py/objnamedtuple.c so there's no extra space in ROM needed for the
string.
The scheduler being locked generally means we are running a scheduled
function, and switching to another thread violates that, so don't switch in
such a case (even though we technically could).
And if we are running a scheduled function then we want to finish it ASAP,
so we shouldn't switch to another thread.
Furthermore, ports with threading enabled will lock the scheduler during a
hard IRQ, and this patch to the VM will make sure that threads are not
switched during a hard IRQ (which would crash the VM).
Instead of always reporting that some object cannot be implicitly converted
to a 'str', even when it is a 'bytes' object, adjust the logic so that
when trying to convert str to bytes the error reports exactly that.
This will still report bad implicit conversion from e.g. 'int to bytes'
as 'int to str' but it will not result in the confusing
'can't convert 'str' object to str implicitly' anymore for calls like
b'somestring'.count('a').
Instead of caching data that is constant (code_info, const_table and
n_state), store just a pointer to the underlying function object from which
this data can be derived.
This helps reduce stack usage for the case when the mp_code_state_t
structure is stored on the stack, as well as heap usage when it's stored
on the heap.
The downside is that the VM becomes a little more complex because it now
needs to derive the data from the underlying function object. But this
doesn't impact the performance by much (if at all) because most of the
decoding of data is done outside the main opcode loop. Measurements using
pystone show that little to no performance is lost.
This patch also fixes a nasty bug whereby the bytecode can be reclaimed by
the GC during execution. With this patch there is always a pointer to the
function object held by the VM during execution, since it's stored in the
mp_code_state_t structure.
When make is passed "-B" it seems that everything is considered out-of-date
and so $? expands to all prerequisites. Thus there is no need for a
special check to see if $? is empty.
Some stack is allocated to format ints, and when the int implementation uses
long-long additional stack must be allocated compared with the other cases.
This patch uses the existing "fmt_int_t" type to determine the amount of
stack to allocate.
This patch refactors the error handling in the lexer, to simplify it (ie
reduce code size).
A long time ago, when the lexer/parser/compiler were first written, the
lexer and parser were designed so they didn't use exceptions (ie nlr) to
report errors but rather returned an error code. Over time that has
gradually changed; the parser in particular has more and more ways of
raising exceptions. Also, the lexer never really handled all errors without
raising, eg there were some memory errors which could raise an exception
(and in these rare cases one would get a fatal nlr-not-handled fault).
This patch accepts the fact that the lexer can raise exceptions in some
cases and allows it to raise exceptions to handle all its errors, which are
for the most part just out-of-memory errors during construction of the
lexer. This makes the lexer a bit simpler, and also the persistent code
stuff is simplified.
What this means for users of the lexer is that calls to it must be wrapped
in a nlr handler. But all uses of the lexer already have such an nlr
handler for the parser (and compiler) so that doesn't put any extra burden
on the callers.
INT_MAX, as used previously, is indeed the max value for int, whereas on LP64
platforms long is used for mp_int_t. Using MP_SMALL_INT_MAX is the
correct way to do it anyway.
Each thread needs to have its own private references to its current
locals/globals dicts, otherwise functions running within different
contexts (eg imported from different files) can behave very strangely.
There were 2 bugs, now fixed by this patch:
- after deleting an element the len of the dict did not decrease by 1
- after deleting an element searching through the dict could lead to
a seg fault due to there being an MP_OBJ_SENTINEL in the ordered array
In this case, raise an exception without a message.
This allows shaving a few code bytes compared to the currently used
mp_raise_msg(..., "") pattern. (Actual savings depend on the function code
alignment used by a particular platform.)
The parser was originally written to work without raising any exceptions
and instead return an error value to the caller. But it's now required
that a call to the parser be wrapped in an nlr handler, so we may as well
make use of that fact and simplify the parser so that it doesn't need to
keep track of any memory errors that it had. The parser anyway explicitly
raises an exception at the end if there was an error.
This patch simplifies the parser by letting the underlying memory
allocation functions raise an exception if they fail to allocate any
memory. And if there is an error parsing the "<id> = const(<val>)" pattern
then that also raises an exception right away instead of trying to recover
gracefully and then raise.
Previous to this patch any non-interned str/bytes objects would create a
special parse node that held a copy of the str/bytes data. Then in the
compiler this data would be turned into a str/bytes object. This actually
led to 2 copies of the data, one in the parse node and one in the object.
The parse node's copy of the data would be freed at the end of the compile
stage but nevertheless it meant that the peak memory usage of the
parse/compile stage was higher than it needed to be (by an amount equal to
the number of bytes in all the non-interned str/bytes objects).
This patch changes the behaviour so that str/bytes objects are created
directly in the parser and the object stored in a const-object parse node
(which already exists for bignum, float and complex const objects). This
reduces peak RAM usage of the parse/compile stage, simplifies the parser
and compiler, and reduces code size by about 170 bytes on Thumb2 archs,
and by about 300 bytes on Xtensa archs.
This patch allows uPy consts to be bignums, eg:
X = const(1 << 100)
The infrastructure for consts to be a bignum (rather than restricted to
small integers) has been in place for a while, ever since constant folding
was upgraded to allow bignums. It just required a small change (in this
patch) to enable it.
It's configured by MICROPY_PY_UERRNO_ERRORCODE and enabled by default
(since that's the behaviour before this patch).
Without this dict the lookup of errno codes to strings must use the
uerrno module itself.
It's much more efficient in RAM and code size to do implicit literal string
concatenation in the lexer, as opposed to the compiler.
RAM usage is reduced because the concatenation can be done right away in the
tokeniser by just accumulating the string/bytes literals into the lexer's
vstr. Prior to this patch adjacent strings/bytes would create a parse tree
(one node per string/bytes) and then in the compiler a whole new chunk of
memory was allocated to store the concatenated string, which used more than
double the memory compared to just accumulating in the lexer.
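For example, adjacent literals like the following are now joined directly in
the lexer while tokenising, with no intermediate parse nodes or extra buffer:
msg = "Hello, " "world"            # tokenised as the single string "Hello, world"
data = b"\x00\x01" b"\x02\x03"     # likewise for adjacent bytes literals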
This patch also significantly reduces code size:
bare-arm: -204
minimal: -204
unix x64: -328
stmhal: -208
esp8266: -284
cc3200: -224
Previous to this patch there was an explicit check for errors with line
continuation (where backslash was not immediately followed by a newline).
But this check is not necessary: if there is an error then the remaining
logic of the tokeniser will reject the backslash and correctly produce a
syntax error.
Since the table of keywords is sorted, we can use strcmp to do the search
and stop part way through the search if the comparison is less-than.
Because all tokens that are names are subject to this search, this
optimisation will improve the overall speed of the lexer when processing
a script.
The change also decreases code size by a little bit because we now use
strcmp instead of the custom str_strn_equal function.
Keywords only need to be searched for if the token is a MP_TOKEN_NAME, so
we can move the search to the part of the code that does the tokenising for
MP_TOKEN_NAME.
Grammar rules have 2 variants: ones that are attached to a specific
compile function which is called to compile that grammar node, and ones
that don't have a compile function and are instead just inspected to see
what form they take.
In the compiler there is a table of all grammar rules, with each entry
having a pointer to the associated compile function. Those rules with no
compile function have a null pointer. There are 120 such rules, so that's
120 words of essentially wasted code space.
By grouping together the compile vs no-compile rules we can put all the
no-compile rules at the end of the list of rules, and then we don't need
to store the null pointers. We just have a truncated table and it's
guaranteed that when indexing this table we only index the first half,
the half with populated pointers.
This patch implements such a grouping by having a specific macro for the
compile vs no-compile grammar rules (DEF_RULE vs DEF_RULE_NC). It saves
around 460 bytes of code on 32-bit archs.
Allows iterating over the following without allocating on the heap:
- tuple
- list
- string, bytes
- bytearray, array
- dict (not dict.keys, dict.values, dict.items)
- set, frozenset
Allows calling the following without heap memory (see the sketch below):
- all, any, min, max, sum
TODO: still need to allocate stack memory in bytecode for iter_buf.
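One rough way to observe this on a build with the micropython module
(a sketch; heap_lock() makes any heap allocation raise MemoryError, so the
calls below only succeed if they are allocation-free):
import micropython
t = (1, 2, 3)
micropython.heap_lock()          # any heap allocation now raises MemoryError
print(sum(t), min(t), max(t))    # iterate the tuple without touching the heap
micropython.heap_unlock()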
This improves efficiency of GIL release within the VM, by only doing the
release after a fixed number of jump-opcodes have executed in the current
thread.
It's more efficient to use the system mutexes instead of synthetic ones with
a busy-wait loop. The system can do proper scheduling and blocking of the
threads waiting on the mutex.
Previous to this patch, for large chunks of bytecode that originated from
a single source-code line, the bytecode-line mapping would generate
something like (for 42 bytecode bytes and 1 line):
BC_SKIP=31 LINE_SKIP=1
BC_SKIP=11 LINE_SKIP=0
This would mean that any errors in the last 11 bytecode bytes would be
reported on the following line. This patch fixes it to generate instead:
BC_SKIP=31 LINE_SKIP=0
BC_SKIP=11 LINE_SKIP=1
This patch implements support for class methods __delattr__ and __setattr__
for customising attribute access. It is controlled by the config option
MICROPY_PY_DELATTR_SETATTR and is disabled by default.
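When enabled, the hooks work along these lines (a minimal sketch):
class Logged:
    def __setattr__(self, name, value):
        print('set', name)
        super().__setattr__(name, value)
    def __delattr__(self, name):
        print('del', name)
        super().__delattr__(name)
o = Logged()
o.x = 1      # prints: set x
del o.x      # prints: del x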
It seems that the gcc toolchain on the Raspberry Pi likes %progbits instead
of @progbits. I verified that %progbits also works under x86, so this
should fix #2848 and fix #2842.
I verified that unix and mpy-cross both compile on my Raspberry Pi and on
my x64 machine.
The internal map/set functions now use size_t exclusively for computing
addresses. size_t is enough to reach all of available memory when
computing addresses so is the right type to use. In particular, for
nanbox builds it saves quite a bit of code size and RAM compared to the
original use of mp_uint_t (which is 64-bits on nanbox builds).
For archs that have 16-bit pointers, the asmxtensa.h file can give compiler
warnings about left-shift being greater than the width of the type (due to
the inline functions in this header file). Explicitly casting the
constants to uint32_t stops these warnings.
This patch fixes two main things:
- dicts can be printed directly using '%s' % dict
- %-formatting should not crash when passed a non-dict to, eg, '%(foo)s'
Updated modbuiltin.c to add conditional support for 3-arg calls to
pow() using the MICROPY_PY_BUILTINS_POW3 config parameter. Added support in
objint_mpz.c for an optimised implementation.
A signal is like a pin, but can also be inverted (active low). As such, it
abstracts properties of various physical devices, like LEDs, buttons,
relays, buzzers, etc. To instantiate a Signal:
pin = machine.Pin(...)
signal = machine.Signal(pin, inverted=True)
signal has the same .value() and __call__() methods as a pin.
This provides mp_vfs_XXX functions (eg mount, open, listdir) which are
agnostic to the underlying filesystem type, and just require an object with
the relevant filesystem-like methods (eg .mount, .open, .listdir) which can
then be mounted.
These mp_vfs_XXX functions would typically be used by a port to implement
the "uos" module, and mp_vfs_open would be the builtin open function.
This feature is controlled by MICROPY_VFS, disabled by default.
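At the Python level a port would then typically expose these as follows (a
rough sketch with a hypothetical filesystem-like object; method names follow
the description above, signatures are illustrative):
import uos
class DummyFS:
    def mount(self, *args):
        pass
    def umount(self):
        pass
    def listdir(self, path='/'):
        return []
uos.mount(DummyFS(), '/dummy')     # handled by mp_vfs_mount
print(uos.listdir('/dummy'))       # forwarded to the mounted object's listdir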
In this case, don't allocate a copy, just return the non-empty string. This helps
with a standard pattern of buffering data in case of short reads:
buf = b""
while ...:
s = f.read(...)
buf += s
...
For a typical case when single read returns all data needed, there won't
be extra allocation. This optimization helps uasyncio.
They are one-line functions and having them inline in mp_init/mp_deinit
eliminates the overhead of a function call, and matches how other state
is initialised in mp_init.
This is how CPython does it, and it's very useful to help users discover
the available modules for a given port, especially built-in and frozen
modules. The function does not list modules that are in the filesystem
because this would require a fair bit of work to do correctly, and is very
port specific (depending on the filesystem).
If the result is guaranteed to fit in a small int, it is handled in objint.c.
Otherwise, it is delegated to mp_obj_int_from_bytes_impl(), which should
be implemented by individual objint_*.c, similar to
mp_obj_int_to_bytes_impl().
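At the Python level (the byteorder argument is now required; only "little" is
currently accepted):
print(int.from_bytes(b'\x01\x02', 'little'))    # 513, handled as a small int
print(int.from_bytes(b'\xff' * 16, 'little'))   # delegated to the big-int implementation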
If GeneratorExit is injected as a throw-value then that should lead to
the close() method being called, if it exists. If close() does not exist
then throw() should not be called, and this patch fixes this.
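A sketch of the delegation case this refers to (hypothetical class name):
class Sub:
    def __iter__(self):
        return self
    def __next__(self):
        return 1
    def close(self):
        print('Sub.close called')
def gen():
    yield from Sub()
g = gen()
next(g)
try:
    g.throw(GeneratorExit)   # calls Sub.close(); if Sub had no close() method,
                             # throw() would simply not be forwarded to it
except GeneratorExit:
    pass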
Support for Xtensa emitter and assembler, and upgraded F4 and F7 STM HAL
This release adds support for the Xtensa architecture as a target for the
native emitter, as well as Xtensa inline assembler. The int.from_bytes
and int.to_bytes methods now require a second argument (the byte order)
per CPython (only "little" is supported at this time). The "readall"
method has been removed from all stream classes that used it; "read" with
no arguments should be used instead. There is now support for importing
packages from compiled .mpy files. Test coverage is increased to 96%.
The generic I2C driver has improvements: configurable clock stretching
timeout, "stop" argument added to readfrom/writeto methods, "nack"
argument added to readinto, and write[to] now returns num of ACKs
received. The framebuf module now handles 16-bit depth (generic colour
format) and has hline, vline, rect, line methods. A new utimeq module is
added for efficient queue ordering defined by modulo time (to be
compatible with time.ticks_xxx functions). The pyboard.py script has been
modified so that the target board is not reset between scripts or commands
that are given on a single command line.
For the stmhal port the STM Cube HAL has been upgraded: Cube F4 HAL to
v1.13.1 (CMSIS 2.5.1, HAL v1.5.2) and Cube F7 HAL to v1.1.2. There is a
more robust pyb.I2C implementation (DMA is now disabled by default, can be
enabled via an option), and there is an implementation of machine.I2C with
robust error handling and hardware acceleration on F4 MCUs. It is now
recommended to use machine.I2C instead of pyb.I2C. The UART class is now
more robust with better handling of errors/timeouts. There is also more
accurate VBAT and VREFINT measurements for the ADC. New boards that are
supported include: NUCLEO_F767ZI, STM32F769DISC and NUCLEO_L476RG.
For the esp8266 port select/poll is now supported for sockets using the
uselect module. There is support for native and viper emitters, as well
as an inline assembler (with limited iRAM for storage of native functions,
or the option to store code to flash). There is improved software I2C
with a slight API change: scl/sda pins can be specified as positional only
when "-1" is passed as the first argument to indicate the use of software
I2C. It is recommended to use keyword arguments for scl/sda. There is
very early support for over-the-air (OTA) updates using the yaota8266
project.
A detailed list of changes follows.
py core:
- emitnative: fix native import emitter when in viper mode
- remove readall() method, which is equivalent to read() w/o args
- objexcept: allow clearing traceback with 'exc.__traceback__ = None'
- runtime: mp_resume: handle exceptions in Python __next__()
- mkrules.mk: rework find command so it works on OSX
- *.mk: replace uses of 'sed' with $(SED)
- parse: move function to check for const parse node to parse.[ch]
- parse: make mp_parse_node_new_leaf an inline function
- parse: add code to fold logical constants in or/and/not operations
- factor persistent code load/save funcs into persistentcode.[ch]
- factor out persistent-code reader into separate files
- lexer: rewrite mp_lexer_new_from_str_len in terms of mp_reader_mem
- lexer: provide generic mp_lexer_new_from_file based on mp_reader
- lexer: rewrite mp_lexer_new_from_fd in terms of mp_reader
- lexer: make lexer use an mp_reader as its source
- objtype: implement __call__ handling for an instance w/o heap alloc
- factor out common code from assemblers into asmbase.[ch]
- stream: move ad-hoc ioctl constants to stream.h and rename them
- compile: simplify configuration of native emitter
- emit.h: remove long-obsolete declarations for cpython emitter
- move arch-specific assembler macros from emitnative to asmXXX.h
- asmbase: add MP_PLAT_COMMIT_EXEC option for handling exec code
- asmxtensa: add low-level Xtensa assembler
- integrate Xtensa assembler into native emitter
- allow inline-assembler emitter to be generic
- add inline Xtensa assembler
- emitinline: embed entire asm struct instead of a pointer to it
- emitinline: move inline-asm align and data methods to compiler
- emitinline: move common code for end of final pass to compiler
- asm: remove need for dummy_data when doing initial assembler passes
- objint: from_bytes, to_bytes: require byteorder arg, require "little"
- binary: do zero extension when storing a value larger than word size
- builtinimport: support importing packages from compiled .mpy files
- mpz: remove unreachable code in mpn_or_neg functions
- runtime: zero out fs_user_mount array in mp_init
- mpconfig.h: enable MICROPY_PY_SYS_EXIT by default
- add MICROPY_KBD_EXCEPTION config option to provide mp_kbd_exception
- compile: add an extra pass for Xtensa inline assembler
- modbuiltins: remove unreachable code
- objint: rename mp_obj_int_as_float to mp_obj_int_as_float_impl
- emitglue: refactor to remove assert(0), to improve coverage
- lexer: remove unreachable code in string tokeniser
- lexer: remove unnecessary check for EOF in lexer's next_char func
- lexer: permanently disable the mp_lexer_show_token function
- parsenum: simplify and generalise decoding of digit values
- mpz: fix assertion in mpz_set_from_str which checks value of base
- mpprint: add assertion for, and comment about, valid base values
- objint: simplify mp_int_format_size and remove unreachable code
- unicode: comment-out unused function unichar_isprint
- consistently update signatures of .make_new and .call methods
- mkrules.mk: add MPY_CROSS_FLAGS option to pass flags to mpy-cross
- builtinimport: fix bug when importing names from frozen packages
extmod:
- machine_i2c: make the clock stretching timeout configurable
- machine_i2c: raise an error when clock stretching times out
- machine_i2c: release SDA on bus error
- machine_i2c: add a C-level I2C-protocol, refactoring soft I2C
- machine_i2c: add argument to C funcs to control stop generation
- machine_i2c: rewrite i2c.scan in terms of C-level protocol
- machine_i2c: rewrite mem xfer funcs in terms of C-level protocol
- machine_i2c: remove unneeded i2c_write_mem/i2c_read_mem funcs
- machine_i2c: make C-level functions return -errno on I2C error
- machine_i2c: add 'nack' argument to i2c.readinto
- machine_i2c: make i2c.write[to] methods return num of ACKs recvd
- machine_i2c: add 'stop' argument to i2c readfrom/writeto meths
- machine_i2c: remove trivial function wrappers
- machine_i2c: expose soft I2C obj and readfrom/writeto funcs
- machine_i2c: add hook to constructor to call port-specific code
- modurandom: allow to build with float disabled
- modframebuf: make FrameBuffer handle 16bit depth
- modframebuf: add back legacy FrameBuffer1 "class"
- modframebuf: optimise fill and fill_rect methods
- vfs_fat: implement POSIX behaviour of rename, allow to overwrite
- moduselect: use stream helper function instead of ad-hoc code
- moduselect: use configurable EVENT_POLL_HOOK instead of WFI
- modlwip: add ioctl method to socket, with poll implementation
- vfs_fat_file: allow file obj to respond to ioctl flush request
- modbtree: add method to sync the database
- modbtree: rename "sync" method to "flush" for consistency
- modframebuf: add hline, vline, rect and line methods
- machine_spi: provide reusable software SPI class
- modframebuf: make framebuf implement the buffer protocol
- modframebuf: store underlying buffer object to prevent GC free
- modutimeq: copy of current moduheapq with timeq support for refactoring
- modutimeq: refactor into optimized class
- modutimeq: make time_less_than be actually "less than", not less/eq
lib:
- utils/interrupt_char: use core-provided mp_kbd_exception if enabled
drivers:
- display/ssd1306.py: update to use FrameBuffer not FrameBuffer1
- onewire: enable pull up on data pin
- onewire/ds18x20: fix negative temperature calc for DS18B20
tools:
- tinytest-codegen: blacklist recently added uheapq_timeq test (qemu-arm)
- pyboard.py: refactor so target is not reset between scripts/cmd
- mpy-tool.py: add support for OPT_CACHE_MAP_LOOKUP_IN_BYTECODE
tests:
- micropython: add test for import from within viper function
- use read() instead of readall()
- basics: add test for logical constant folding
- micropython: add test for creating traceback without allocation
- micropython: move alloc-less traceback test to separate test file
- extmod: improve ujson coverage
- basics: improve user class coverage
- basics: add test for dict.fromkeys where arg is a generator
- basics: add tests for if-expressions
- basics: change dict_fromkeys test so it doesn't use generators
- basics: enable tests for list slice getting with 3rd arg
- extmod/vfs_fat_fileio: add test for constructor of FileIO type
- extmod/btree1: exercise btree.flush()
- extmod/framebuf1: add basics tests for hline, vline, rect, line
- update for required byteorder arg for int.from_bytes()/to_bytes()
- extmod: improve moductypes test coverage
- extmod: improve modframebuf test coverage
- micropython: get heapalloc_traceback test running on baremetal
- struct*: make skippable
- basics: improve mpz test coverage
- float/builtin_float_round: test round() with second arg
- basics/builtin_dir: add test for dir() of a type
- basics: add test for builtin locals()
- basics/set_pop: improve coverage of set functions
- run-tests: for REPL tests make sure the REPL is exited at the end
- basics: improve test coverage for generators
- import: add a test which uses ... in from-import statement
- add tests to improve coverage of runtime.c
- add tests to improve coverage of objarray.c
- extmod: add test for utimeq module
- basics/lexer: add a test for newline-escaping within a string
- add a coverage test for printing the parse-tree
- utimeq_stable: test for partial stability of utimeq queuing
- heapalloc_inst_call: test for no alloc for simple object calls
- basics: add tests for parsing of ints with base 36
- basics: add tests to improve coverage of binary.c
- micropython: add test for micropython.stack_use() function
- extmod: improve ubinascii.c test coverage
- thread: improve modthread.c test coverage
- cmdline: improve repl.c autocomplete test coverage
- unix: improve runtime_utils.c test coverage
- pyb/uart: update test to match recent change to UART timeout_char
- run-tests: allow to skip set tests
- improve warning.c test coverage
- float: improve formatfloat.c test coverage using Python
- unix: improve formatfloat.c test coverage using C
- unix/extra_coverage: add basic tests to import frozen str and mpy
- types1: split out set type test to set_types
- array: allow to skip test if "array" is unavailable
- unix/extra_coverage: add tests for importing frozen packages
unix port:
- rename define for unix moduselect to MICROPY_PY_USELECT_POSIX
- Makefile: update freedos target for change of USELECT config name
- enable utimeq module
- main: allow to print the parse tree in coverage build
- Makefile: make "coverage_test" target mirror Travis test actions
- moduselect: if file object passed to .register(), return it in .poll()
- Makefile: split long line for coverage target, easier to modify
- enable and add basic frozen str and frozen mpy in coverage build
- Makefile: allow cache-map-lookup optimisation with frozen bytecode
windows port:
- enable READER_POSIX to get access to lexer_new_from_file
stmhal port:
- dma: de-init the DMA peripheral properly before initialising
- i2c: add option to I2C to enable/disable use of DMA transfers
- i2c: reset the I2C peripheral if there was an error on the bus
- rename mp_hal_pin_set_af to _config_alt, to simplify alt config
- upgrade to STM32CubeF4 v1.13.0 - CMSIS/Device 2.5.1
- upgrade to STM32CubeF4 v1.13.0 - HAL v1.5.1
- apply STM32CubeF4 v1.13.1 patch - upgrade HAL driver to v1.5.2
- hal/i2c: reapply HAL commit ea040a4 for f4
- hal/sd: reapply HAL commit 1d7fb82 for f4
- hal: reapply HAL commit 9db719b for f4
- hal/rcc: reapply HAL commit c568a2b for f4
- hal/sd: reapply HAL commit 09de030 for f4
- boards: configure all F4 boards to work with new HAL
- make-stmconst.py: fix regex's to work with current CMSIS
- i2c: handle I2C IRQs
- dma: precalculate register base and bitshift on handle init
- dma: mark DMA state as READY even if HAL_DMA_Init is skipped
- can: clear FIFO flags in IRQ handler
- i2c: provide custom IRQ handlers
- hal: do not include <stdio.h> in HAL headers
- mphalport.h: use single GPIOx->BSRR register
- make-stmconst.py: add support for files with invalid utf8 bytes
- update HALCOMMITS due to change to hal
- make-stmconst.py: restore Python 2 compatibility
- update HALCOMMITS due to change to hal
- moduselect: move to extmod/ for reuse by other ports
- i2c: use the HAL's I2C IRQ handler for F7 and L4 MCUs
- updates to get F411 MCUs compiling with latest ST HAL
- i2c: remove use of legacy I2C_NOSTRETCH_DISABLED option
- add beginnings of port-specific machine.I2C implementation
- i2c: add support for I2C4 hardware block on F7 MCUs
- i2c: expose the pyb_i2c_obj_t struct and some relevant functions
- machine_i2c: provide HW implementation of I2C peripherals for F4
- add support for flash storage on STM32F415
- add back GPIO_BSRRL and GPIO_BSRRH constants to stm module
- add OpenOCD configuration for STM32L4
- add address parameters to openocd config files
- adc: add "mask" selection parameter to pyb.ADCAll constructor
- adc: provide more accurate measure of VBAT and VREFINT
- adc: make ADCAll.read_core_temp return accurate float value
- adc: add ADCAll.read_vref method, returning "3.3v" value
- adc: add support for F767 MCU
- adc: make channel "16" always map to the temperature sensor
- sdcard: clean/invalidate cache before DMA transfers with SD card
- moduos: implement POSIX behaviour of rename, allow to overwrite
- adc: use constants from new HAL version
- refactor UART configuration to use pin objects
- uart: add support for UART7 and UART8 on F7 MCUs
- uart: add check that UART id is valid for the given board
- cmsis: update STM32F7 CMSIS device include files to V1.1.2
- hal: update ST32CubeF7 HAL files to V1.1.2
- port of f4 hal commit c568a2b to updated f7 hal
- port of f4 hal commit 09de030 to updated f7 hal
- port of f4 hal commit 1d7fb82 to updated f7 hal
- declare and initialise PrescTables for F7 MCUs
- boards/STM32F7DISC: define LSE_STARTUP_TIMEOUT
- hal: update HALCOMMITS due to change in f7 hal files
- refactor to use extmod implementation of software SPI class
- cmsis: add CMSIS file stm32f767xx.h, V1.1.2
- add NUCLEO_F767ZI board, with openocd config for stm32f7
- cmsis: add CMSIS file stm32f769xx.h, V1.1.2
- add STM32F769DISC board files
- move PY_SYS_PLATFORM config from board to general config file
- mpconfigport: add weak-module links for io, collections, random
- rename mp_const_vcp_interrupt to mp_kbd_exception
- usb: always use the mp_kbd_exception object for VCP interrupt
- use core-provided keyboard exception object
- led: properly initialise timer handle to zero before using it
- mphalport.h: explicitly use HAL's GPIO constants for pull modes
- usrsw: use mp_hal_pin_config function instead of HAL_GPIO_Init
- led: use mp_hal_pin_config function instead of HAL_GPIO_Init
- sdcard: use mp_hal_pin_config function instead of HAL_GPIO_Init
- add support for STM32 Nucleo64 L476RG
- uart: provide a custom function to transmit over UART
- uart: increase inter-character timeout by 1ms
- enable utimeq module
cc3200 port:
- tools/smoke.py: change readall() to read()
- pybspi: remove static mode=SPI.MASTER parameter for latest HW API
- mods/pybspi: remove SPI.MASTER constant, it's no longer needed
- update for moduselect moved to extmod/
- re-add support for UART REPL (MICROPY_STDIO_UART setting)
- enable UART REPL by default
- README: (re)add information about accessing REPL on serial
- make: rename "deploy" target to "deploy-ota"
- add targets to erase flash, deploy firmware using cc3200tool
- README: reorganize and update to the current state of affairs
- modwlan: add network.WLAN.print_ver() diagnostic function
esp8266 port:
- enable uselect module
- move websocket_helper.py from scripts to modules for frozen BC
- refactor to use extmod implementation of software SPI class
- mpconfigport_512k: disable framebuf module for 512k build
- enable native emitter for Xtensa arch
- enable inline Xtensa assembler
- add "ota" target to produce firmware binary for use with yaota8266
- use core-provided keyboard exception object
- add "erase" target to Makefile, to erase entire flash
- when doing GC be sure to trace the memory holding native code
- modesp: flash_user_start(): support configuration with yaota8266
- force relinking OTA firmware image if built after normal one
- scripts/inisetup: dump FS starting sector/size on error
- Makefile: produce OTA firmware as firmware-ota.bin
- modesp: make check_fw() work with OTA firmware
- enable utimeq module
- Makefile: put firmware-ota.bin in build/, for consistency
- modules/flashbdev: add RESERVED_SECS before the filesystem
- modules/flashbdev: remove code to patch bootloader flash size
- modules/flashbdev: remove now-unused function set_bl_flash_size
- modules/flashbdev: change RESERVED_SECS to 0
zephyr port:
- add .gitignore to ignore Zephyr's "outdir" directory
- zephyr_getchar: update to Zephyr 1.6 unified kernel API
- switch to Zephyr 1.6 unified kernel API
- support raw REPL
- implement soft reset feature
- main: initialize sys.path and sys.argv
- use core-provided keyboard exception object
- uart_core: access console UART directly instead of printk() hack
- enable slice subscription
docs:
- remove references to readall() and update stream read() docs
- library/index: elaborate on u-modules
- library/machine.I2C: refine definitions of I2C methods
- library/pyb.Accel: add hardware note about pins used by accel
- library/pyb.UART: added clarification about timeouts
- library/pyb.UART: moved writechar doc to sit with other writes
- esp8266/tutorial: update intro to add Getting the firmware section
- library/machine.I2C: fix I2C constructor docs to match impl
- esp8266/tutorial: close socket after reading page content
- esp8266/general: add "Scarcity of runtime resources" section
- library/esp: document esp.set_native_code_location() function
- library/esp: remove para and add further warning about flash
- usocket: clarify that socket timeout raises OSError exception
travis:
- build STM32 F7 and L4 boards under Travis CI
- include persistent bytecode with floats in coverage tests
examples:
- hwapi: button_led: Add GPIO pin read example
- hwapi: add soft_pwm example converted to uasyncio
- http_client: use read() instead of readall()
- hwapi: add uasyncio example of fading 2 LEDs in parallel
- hwapi: add example for machine.time_pulse_us()
- hwapi: add hwconfig for console tracing of LED operations
- accellog.py: change 1: to /sd/, and update comment about FS
- hwapi/hwconfig_console: don't alloc memory in value()
The commit d9047d3c8a introduced a bug
whereby "from a.b import c" stopped working for frozen packages. This is
because the path was not properly truncated and became "a//b". Such a
path resolves correctly for a "real" filesystem, but not for a search in
the list of frozen modules.
UART REPL support was lost in os.dupterm() refactorings, etc. As
os.dupterm() is there, implement UART REPL support at the high level -
if MICROPY_STDIO_UART is set, make default boot.py contain os.dupterm()
call for a UART. This means that changing MICROPY_STDIO_UART value will
also require erasing flash on a module to force boot.py re-creation.
This check always fails (ie chr0 is never EOF) because the callers of this
function never call it past the end of the input stream. And even if they
did it would be harmless because 1) reader.readbyte must continue to
return an EOF char if the stream is exhausted; 2) next_char would just
count the subsequent EOF's as characters worth 1 column.
import utimeq, utime
# Max queue size; the queue is allocated statically on creation
q = utimeq.utimeq(10)
q.push(utime.ticks_ms(), data1, data2)
res = [0, 0, 0]
# Items in res are filled up with results
q.pop(res)
Defining and initialising mp_kbd_exception is boiler-plate code and so the
core runtime can provide it, instead of each port needing to do it
itself.
The exception object is placed in the VM state rather than on the heap.
sys.exit() is an important function to terminate a program. In particular,
the testsuite relies on it to skip tests (i.e. any other functionality may
be disabled, but sys.exit() is required to at least report that properly).
For all but the last pass the assembler only needs to count how much space
is needed for the machine code; it doesn't actually need to emit anything.
The dummy_data just uses unnecessary RAM and without it the code is not
any more complex (and code size does not increase for Thumb and Xtensa
archs).
This patch moves some common code from the individual inline assemblers to
the compiler, the code that calls the emit-glue to assign the machine code
to the function's scope.
This patch adds the MICROPY_EMIT_INLINE_XTENSA option, which, when
enabled, allows the @micropython.asm_xtensa decorator to be used.
The following opcodes are currently supported (ax is a register, a0-a15):
ret_n()
callx0(ax)
j(label)
jx(ax)
beqz(ax, label)
bnez(ax, label)
mov(ax, ay)
movi(ax, imm) # imm can be full 32-bit, uses l32r if needed
and_(ax, ay, az)
or_(ax, ay, az)
xor(ax, ay, az)
add(ax, ay, az)
sub(ax, ay, az)
mull(ax, ay, az)
l8ui(ax, ay, imm)
l16ui(ax, ay, imm)
l32i(ax, ay, imm)
s8i(ax, ay, imm)
s16i(ax, ay, imm)
s32i(ax, ay, imm)
l16si(ax, ay, imm)
addi(ax, ay, imm)
ball(ax, ay, label)
bany(ax, ay, label)
bbc(ax, ay, label)
bbs(ax, ay, label)
beq(ax, ay, label)
bge(ax, ay, label)
bgeu(ax, ay, label)
blt(ax, ay, label)
bnall(ax, ay, label)
bne(ax, ay, label)
bnone(ax, ay, label)
Upon entry to the assembly function the registers a0, a12, a13, a14 are
pushed to the stack and the stack pointer (a1) decreased by 16. Upon
exit, these registers and the stack pointer are restored, and ret.n is
executed to return to the caller (caller address is in a0).
Note that the ABI for the Xtensa emitters is non-windowing.
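A minimal usage sketch (assuming, as with the other inline assemblers, that
arguments are named after the registers they arrive in, starting at a2, and
that the result is returned in a2):
import micropython
@micropython.asm_xtensa
def asm_add_one(a2):
    addi(a2, a2, 1)       # increment the incoming argument; a2 holds the return value
print(asm_add_one(41))    # 42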
If a port defines MP_PLAT_COMMIT_EXEC then this function is used to turn
RAM data into executable code. For example a port may want to write the
data to flash for execution. The function must return a pointer to the
executable data.
The constants MP_IOCTL_POLL_xxx, which were stmhal-specific, are moved
from stmhal/pybioctl.h (now deleted) to py/stream.h. And they are renamed
to MP_STREAM_POLL_xxx to be consistent with other such constants.
All uses of these constants have been updated.
Docs are here: http://tannewt-micropython.readthedocs.io/en/microcontroller/
It differs from upstream's machine in the following ways:
* Python API is identical across ports due to code structure. (Lives in shared-bindings)
* Focuses on abstracting common functionality (AnalogIn) and not representing structure (ADC).
* Documentation lives with code making it easy to ensure they match.
* Pin is split into references (board.D13 and microcontroller.pin.PA17) and functionality (DigitalInOut).
* All nativeio classes claim underlying hardware resources when inited on construction, support Context Managers (aka with statements) and have deinit methods which release the claimed hardware.
* All constructors take pin references rather than peripheral ids. It's up to the implementation to find the hardware or throw an exception.
If a port defines MICROPY_READER_POSIX or MICROPY_READER_FATFS then
lexer.c now provides an implementation of mp_lexer_new_from_file using
the mp_reader_new_file function.
Implementations of persistent-code reader are provided for POSIX systems
and systems using FatFS. Macros to use these are MICROPY_READER_POSIX and
MICROPY_READER_FATFS respectively. If an alternative implementation is
needed then a port can define the function mp_reader_new_file.
It is split into 2 functions, one to make small ints and the other to make
a non-small-int leaf node. This reduces code size by 32 bytes on
bare-arm, 64 bytes on unix (x86-64) and 144 bytes on stmhal.
This includes StopIteration and is thus important for making Python-coded
iterables work with yield from/await.
Exceptions in Python send() are still not handled and left for future
consideration and optimization.
ESP8266 port uses SDK 2.0, has more heap, has support for 512k devices
This release brings some code size reductions to the core as well as
more tests and improved coverage which is now at 94.3%.
The time.ticks_diff(a, b) function has changed: the order of the arguments
has been swapped so that it behaves like "a - b", and it can now return a
negative number if "a" came before "b" (modulo the period of the ticks
functions).
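For example, with the new semantics elapsed time is measured as (a sketch):
import utime
start = utime.ticks_ms()
# ... do some work ...
elapsed = utime.ticks_diff(utime.ticks_ms(), start)   # behaves like "now - start"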
For the ESP8266 port the Espressif SDK has been updated to 2.0.0, the
heap has been increased from 28k to 36k, and there is support for 512k
devices via "make 512k". upip is included by default as frozen bytecode.
The network module now allows access-point reconnection without WiFi
credentials, and exposes configuration for the station DHCP hostname. The
DS18B20 driver now handles negative temperatures, and NeoPixel and APA102
drivers handle 4 bytes-per-pixel LEDs.
For the CC3200 port there is now support for loading of precompiled .mpy
files and threading now works properly with interrupts.
A detailed list of changes follows.
py core:
- py.mk: automatically add frozen.c to source list if FROZEN_DIR is defined
- be more specific with MP_DECLARE_CONST_FUN_OBJ macros
- specialise builtin funcs to use separate type for fixed arg count
- {modbuiltins,obj}: use MP_PYTHON_PRINTER where possible
- modbuiltins: add builtin "slice", pointing to existing slice type
- add "delattr" builtin, conditional on MICROPY_CPYTHON_COMPAT
- sequence: fix reverse slicing of lists
- fix null pointer dereference in mpz.c, fix missing va_end in warning.c
- remove asserts that are always true in emitbc.c
- fix wrong assumption that m_renew will not move if shrinking
- change config default so m_malloc0 uses memset if GC not enabled
- add MICROPY_FLOAT_CONST macro for defining float constants
- move frozen bytecode Makefile rules from ports to common mk files
- strip leading dirs from frozen mpy files, so any path can be used
extmod:
- vfs_fat_file: check fatfs f_sync() and f_close() returns for errors
- vfs_fat_file: make file.close() a no-op if file already closed
- utime_mphal: ticks_diff(): switch arg order, return signed value
- utime_mphal: add MP_THREAD_GIL_EXIT/ENTER wrappers for sleep functions
- utime_mphal: implement ticks_add(), add to all maintained ports
- utime_mphal: allow ticks functions period be configurable by a port
lib:
- utils/pyhelp.c: use mp_printf() instead of printf()
- utils/pyexec: add mp_hal_set_interrupt_char() prototype
- libm: move Thumb-specific sqrtf function to separate file
drivers:
- add "from micropython import const" when const is used
tools:
- upgrade upip to 1.1.4: fix error on unix when installing to non-existing
absolute path
- pip-micropython: remove deprecated wrapper tool
- check_code_size.sh: code size validation script for CI
- replace upip tarball with just source file, to make its inclusion as
frozen modules in multiple ports less magic
tests:
- extmod/vfs_fat: improve VFS test coverage
- basics/builtin_slice: add test for "slice" builtin name
- basics: add test for builtin "delattr"
- extmod/vfs_fat_fsusermount: improve fsusermount test coverage
- extmod/vfs_fat_oldproto: test old block device protocol
- basics/gc1: garbage collector threshold() coverage
- extmod/uhashlib_sha1: coverage for SHA1 algorithm
- extmod/uhashlib_sha256: rename sha256.py test
- btree1: fix out of memory error running on esp8266
- extmod/ticks_diff: test for new semantics of ticks_diff()
- extmod/framebuf1: test framebuffer pixel clear, and text function
minimal port:
- Makefile: split rule for firmware.bin generation
unix port:
- Makefile: remove references to deprecated pip-micropython
- modtime: use ticks_diff() implementation from extmod/utime_mphal.c
- mphalport.h: add warning of mp_hal_delay_ms() implementation
- modtime: switch ticks/sleep_ms/us() to utime_mphal
- fix symbol references for x86 Mac
- replace upip tarball with just source file
windows port:
- enable utime_mphal following unix, define mp_hal_ticks_*
- fix utime_mphal compilation for msvc
- implement mp_hal_ticks_cpu in terms of QueryPerformanceCounter
qemu-arm port:
- exclude ticks_diff test for qemu-arm port
- exclude extmod/vfs_fat_fileio.py test
- exclude new vfs_fat tests
- enable software floating point support, and float tests
stmhal port:
- modutime: refactor to use extmod's version of ticks_cpu
- refactor pin usage to use mp_hal_pin API
- led: refactor LED to use mp_hal_pin_output() init function
- Makefile: use standard rules for frozen module generation
- modutime: consistently convert to MP_ROM_QSTR/MP_ROM_PTR
- enable SD power save (disable CLK on idle)
cc3200 port:
- use mp_raise_XXX helper functions to reduce code size
- mods/pybspi: allow "write" arg of read/readinto to be positional
- enable loading of precompiled .mpy files
- fix thread mutex's so threading works with interrupts
teensy port:
- update to provide new mp_hal_pin_XXX functions following stmhal
esp8266 port:
- Makefile: use latest esptool.py flash size auto-detection
- esp_init_data: auto-initialize system params with vendor SDK 2.0.0
- esp8266.ld: move help.o to iROM
- esp8266.ld: move modmachine.o to iROM
- esp8266.ld: move main.o to iROM
- add MP_FASTCODE modifier to put a function to iRAM
- main: mark nlr_jump_fail() as MP_FASTCODE
- modules/webrepl: enforce only one concurrent WebREPL connection
- etshal.h: add few more ESP8266 vendor lib prototypes
- modesp: add flash_user_start() function
- add support for building firmware version for 512K modules
- scripts: make neopixel/apa102 handle 4bpp LEDs with common code
- modutime: consistently convert to MP_ROM_QSTR/MP_ROM_PTR
- modnetwork: config(): fix copy-paste error in setting "mac"
- scripts/port_diag: add descriptions for esf_buf types
- modnetwork.c: allows AP reconnection without WiFi credentials
- main: bump heap size to 36K
- etshal.h: add prototypes for SPIRead/SPIWrite/SPIEraseSector
- etshal.h: adjust size of MD5_CTX structure
- modules: fix negative temperature in ds18x20 driver
- rename "machine" module implementation to use contemporary naming
- rework webrepl_setup to run over wired REPL
- espneopixel.c: solve glitching LED issues with cpu at 80MHz
- include upip as a standard frozen bytecode module
- update docs for esptool 1.2.1/SDK 2.0 (--flash_size=detect)
- modnetwork.c: expose configuration for station DHCP hostname
zephyr port:
- implement utime module
- use board/SoC values for startup banner based on Zephyr config
- initial implementation of machine.Pin
- zephyr_getchar: update for recent Zephyr refactor of console hooks
- support time -> utime module "weaklink"
- README: update for the current featureset, add more info
- mpconfigport.h: move less important params to the bottom
- Makefile: allow to adjust heap size from make command line
- Makefile: update comments to the current state of affairs
- Makefile: allow to override Zephyr config from make command line
- Makefile: add minimal port
- Makefile: add -fomit-frame-pointer to reduce code size
- mphalport.h: update for new "unified" kernel API (sleep functions)
docs:
- machine.SPI: bring up to date with Hardware API, make vendor-neutral
- machine.SPI: improve descriptions of xfer methods
- library/builtins: add docs for delattr and slice
- library/network: reword intro paragraph
- library/network: typo fixes, consistent acronym capitalization
- library/index: update TOCs so builtins sorted before modules
- utime: document ticks_cpu() in more detail
- utime: describe new semantics of ticks_diff() (signed ring arithmetics)
- utime: add docs for ticks_add(), improvements for other ticks_*()
- esp8266: update for new WebREPL setup procedure
- */quickref.rst: use new semantics of ticks_diff()
- library/machine.Pin: update Pin docs to align with new HW API
travis:
- integrate tools/check_code_size.sh
- minimal: Use CROSS=1, for binary size check
examples:
- http_server_simplistic: add "not suitable for real use" note
- hwapi: example showing best practices for HW API usage in apps
- hwapi: add hwconfig for DragonBoard 410c
We allow 'exc.__traceback__ = None' assignment as a low-level optimization
for pre-allocating an exception instance and raising it repeatedly - this
avoids memory allocation during raise. However, uPy will keep adding
traceback entries to such an exception instance, so before throwing it, the
traceback should be cleared as above.
'exc.__traceback__ = None' syntax is CPython compatible. However, unlike
it, reading that attribute or setting it to any other value is not
supported (and not intended to be supported, again, the only reason for
adding this feature is to allow zero-memalloc exception raising).
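A sketch of the pre-allocate-and-reraise pattern described above (hypothetical
names):
_timeout_exc = OSError(110)              # pre-allocated once, eg at import time
def raise_timeout():
    _timeout_exc.__traceback__ = None    # drop traceback entries from earlier raises
    raise _timeout_exc                   # re-raise the pre-allocated instance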
Its addition was due to an early exploration of how to add a CPython-like
stream interface. It's clear that it's not needed and just takes up
bytes in all ports.
With this patch one can now do "make FROZEN_MPY_DIR=../../frozen" to
specify a directory containing scripts to be frozen (as well as absolute
paths).
The compiled .mpy files are now stored in $(BUILD)/frozen_mpy/.
Now, to use frozen bytecode all a port needs to do is define
FROZEN_MPY_DIR to the directory containing the .py files to freeze, and
define MICROPY_MODULE_FROZEN_MPY and MICROPY_QSTR_EXTRA_POOL.
In both parse.c and qstr.c, an internal chunking allocator tidies up
by calling m_renew to shrink an allocated chunk to the size used, and
assumes that the chunk will not move. However, when MICROPY_ENABLE_GC
is false, m_renew calls the system realloc, which does not guarantee
this behaviour. Environments where realloc may return a different
pointer include:
(1) mbed-os with MBED_HEAP_STATS_ENABLED (which adds a wrapper around
malloc & friends; this is where I was hit by the bug);
(2) valgrind on linux (how I diagnosed it).
The fix is to call m_renew_maybe with allow_move=false.
It will soft-reboot micropython after a burst of writes to the
file system. This means that after you save files on your computer
they will be automatically rerun.
This can be disabled in the build by unsetting AUTORESET_TIMER in
mpconfigboard.h.
Using the REPL will also prevent the soft resets until you reset
with CTRL-D manually.
Builtin functions with a fixed number of arguments (0, 1, 2 or 3) are
quite common. Before this patch the wrapper for such a function cost
3 machine words. After this patch it only takes 2, which can reduce the
code size by quite a bit (and pays off even more, the more functions are
added). It also makes function dispatch slightly more efficient in CPU
usage, and furthermore reduces stack usage for these cases. On x86 and
Thumb archs the dispatch functions are now tail-call optimised by the
compiler.
The bare-arm port has its code size increase by 76 bytes, but stmhal drops
by 904 bytes. Stack usage by these builtin functions is decreased by 48
bytes on Thumb2 archs.
In order to have more fine-grained control over how builtin functions are
constructed, the MP_DECLARE_CONST_FUN_OBJ macros are made more specific,
with suffix of _0, _1, _2, _3, _VAR, _VAR_BETWEEN or _KW. These names now
match the MP_DEFINE_CONST_FUN_OBJ macros.
Conflicts:
README.md - Kept Adafruit README.md intact.
py/emitglue.c - Added xtensa architecture as an OR of the define.
zephyr/README.md - Fixed spelling mistake.
As long as a port implements mp_hal_sleep_ms(), mp_hal_ticks_ms(), etc.
functions, it can just use standard implementations of utime.sleep_ms(),
utime.ticks_ms(), etc. Python-level functions.
Now there is just one function to allocate a new vstr, namely vstr_new
(in addition to vstr_init etc). The caller of this function should know
what initial size to allocate for the buffer, or at least have some policy
or config option, instead of leaving it to a default (as it was before).
This refactors ujson.loads(s) to behave as ujson.load(StringIO(s)).
Increase in code size is: 366 bytes for unix x86-64, 180 bytes for
stmhal, 84 bytes for esp8266.
Setting emit_dent=0 is unnecessary because arriving in that part of the
if-logic will guarantee that emit_dent is already zero.
The block to check indent_top(lex)>0 is unreachable because a newline is
always inserted at the end of the input stream, and hence dedents are
always processed before EOF.
Similar to how binary op already works. Common unary operations already
have fast paths for bool so there's no need to have explicit handling of
ops in bool_unary_op, especially since they have the same behaviour as
integers.
On 32-bit archs this makes the scope_t struct 48 bytes in size, which fits
in 3 GC blocks (previously it used 4 GC blocks). This will lead to some
savings when compiling scripts because there are usually quite a few scopes,
one for each function and class.
Note that qstrs will fit in 16 bits; this assumption is made in a few other
places.
Following how other objects work, set/frozenset methods should use the
mp_check_self() macro to check the type of the self argument, because in
most cases this check can be a null operation.
Saves about 100-180 bytes of code for builds with set and frozenset
enabled.
Having a micropython.const identity function, and writing "from micropython
import const" at the start of scripts that use the const feature, makes it
possible to write scripts which are compatible with CPython, and with uPy
builds that don't include the const optimisation.
This patch adds such a function and updates the tests to do the import.
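A minimal sketch of the intended pattern (the constant name is made up):

    from micropython import const

    _FLAG = const(0x10)   # folded to a constant on uPy builds with the optimisation

    def check(x):
        return x & _FLAG

On CPython, const() is simply the identity function, so the same script runs
unchanged.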
When an exception is raised and is to be handled by the VM, it is stored
on the Python value stack so the bytecode can access it. CPython stores
3 objects on the stack for each exception: exc type, exc instance and
traceback. uPy followed this approach, but it turns out not to be
necessary. Instead, it is enough to store just the exception instance on
the Python value stack. The only place where the 3 values are needed
explicitly is for the __exit__ handler of a with-statement context, but
for these cases the 3 values can be extracted from the single exception
instance.
This patch removes the need to store 3 values on the stack, and instead
just stores the exception instance.
Code size is reduced by about 50-100 bytes, the compiler and VM are
slightly simpler, generated bytecode is smaller (by 2 bytes for each try
block), and the Python value stack is reduced in size for functions that
handle exceptions.
This fixes constant substitution so that only standalone identifiers are
replaced with their constant value (if they have one). I.e. don't
replace NAME in expressions like obj.NAME or NAME = expr.
qstr ids are restricted to fit within 2 bytes already (eg in persistent
bytecode) so it's safe to use a uint16_t to store them in mp_arg_t. And
the flags member only needs a maximum of 2 bytes so can also use uint16_t.
Savings in code size can be significant when many mp_arg_t structs are
used for argument parsing. Eg, this patch reduces stmhal by 480 bytes.
The system printf is no longer used by the core uPy code. Instead, the
platform print stream or DEBUG_printf is used. Using DEBUG_printf in the
showbc functions would mean that the code can't be tested by the test
suite, so use the normal output instead.
This patch also fixes parsing of bytecode-line-number mappings.
The vstr.had_error flag was a relic from the very early days which assumed
that the malloc functions (eg m_new, m_renew) returned NULL if they failed
to allocate. But that's no longer the case: these functions will raise an
exception if they fail.
Since it was impossible for had_error to be set, this patch introduces no
change in behaviour.
An alternative option would be to change the malloc calls to the _maybe
variants, which return NULL instead of raising, but then a lot of code
will need to explicitly check if the vstr had an error and raise if it
did.
The code-size savings for this patch are, in bytes: bare-arm:188,
minimal:456, unix(NDEBUG,x86-64):368, stmhal:228, esp8266:360.
With the previous patch combining 3 emit functions into 1, it now makes
sense to also combine the corresponding VM opcodes, which is what this
patch does. This eliminates 2 opcodes which simplifies the VM and reduces
code size, in bytes: bare-arm:44, minimal:64, unix(NDEBUG,x86-64):272,
stmhal:92, esp8266:200. Profiling (with a simple script that creates many
list/dict/set comprehensions) shows no measurable change in performance.
The 3 kinds of comprehensions are similar enough that merging their emit
functions reduces code size. Decreases in code size in bytes are:
bare-arm:24, minimal:96, unix(NDEBUG,x86-64):328, stmhal:80, esp8266:76.
bool(None) has a fast path in mp_obj_is_true so doesn't need to be
handled in none_unary_op. The only caveat is that subclassing may
bypass the mp_obj_is_true function, but actually you aren't allowed to
subclass classes that have singleton instances like NoneType (see
https://mail.python.org/pipermail/python-dev/2002-March/020822.html for
reference on this point).
py/makeqstrdefs.py declares that it works with Python 2.6, however the
syntax used to initialise a set with values was only added in Python
2.7. This leads to build failures when the host system doesn't have
Python 2.7 or newer.
Instead of using the new syntax, pass a list of initial values through
set() to achieve the same result. This should work for Python versions
from at least 2.6 onwards.
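For illustration, the two forms in question:

    s = {1, 2, 3}        # set literal, only available from Python 2.7
    s = set([1, 2, 3])   # equivalent form that also works on Python 2.6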
Helped-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Chris Packham <judge.packham@gmail.com>
Tested on a STM32F7DISCO at 216MHz. All tests generating code (inlineasm,
native, viper) now pass, except pybnative/while.py, but that's because
there is no LED(2).
This new config option controls whether MicroPython uses its own internal
printf or not (if not, an external one should be linked in).
Accompanying this new option is the inclusion of lib/utils/printf.c in the
core list of source files, so that ports no longer need to include it
themselves.
Arguments of an unknown type cannot be skipped and continuing to parse a
format string after encountering an unknown format specifier leads to
undefined behaviour. This patch helps to find use of unsupported formats.
The idea is that all ports can use these helper methods and only need to
provide initialisation of the SPI bus, as well as a single transfer
function. The coding pattern follows the stream protocol and helper
methods.
There can be stray pointers in memory blocks that are not properly zero'd
after allocation. This patch adds a new config option to always zero all
allocated memory (via gc_alloc and gc_realloc) and hence help to eliminate
stray pointers.
See issue #2195.
In its current state `mp_get_stream_raise` assumes that `self_in` is an object
and always performs a pointer dereference, which may cause a segfault.
This function shall throw an exception whenever `self_in` does not implement
the stream protocol, which includes qstrs and numbers.
Fixes #2331
The machine_ptr_t type is long obsolete as the type of mp_obj_t is now
defined by the object representation, ie by MICROPY_OBJ_REPR. So just use
void* explicitly for the typedef of mp_obj_t.
If a port wants to use something different then they should define a new
object representation.
Only tuple, namedtuple and attrtuple use the tuple_cmp_helper function,
and they all have getiter=mp_obj_tuple_getiter, so the check here is only
to ensure that the self object is consistent. Hence use mp_check_self.
Checks for the number of args are removed where guaranteed by the function
descriptor, and self checking is replaced with mp_check_self(). In a few
cases, an exception is raised instead of an assert.
Intended to replace raw asserts in a bunch of files. Expands to empty
if MICROPY_BUILTIN_METHOD_CHECK_SELF_ARG is defined, otherwise by
default still to an assert, though a particular port may define it to
something else.
Introduce mp_raise_msg(), mp_raise_ValueError(), mp_raise_TypeError()
instead of the previous pattern nlr_raise(mp_obj_new_exception_msg(...)).
Saves a few bytes at each call site, of which there are many.
To filter out even the prototypes of the mp_stream_posix_*() functions, which
require POSIX types like ssize_t & off_t, which may not be available in
some ports.
Helpful when porting existing C libraries to MicroPython. abort()ing in an
embedded environment isn't a good idea, so when compiling such a library,
the -Dabort=abort_ option can be given to redirect the standard abort() to
this "safe" version.
Something like:
if foo == "bar":
will always be false if foo is b"bar". In CPython, a warning is issued if
the interpreter is started as "python3 -b". In MicroPython, the
MICROPY_PY_STR_BYTES_CMP_WARN setting controls it.
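For example (expanding the snippet above):

    foo = b"bar"
    if foo == "bar":     # always False: bytes never compare equal to str
        ...

With MICROPY_PY_STR_BYTES_CMP_WARN enabled, such a comparison causes a warning
to be issued.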
Currently, MicroPython runs the GC when it fails to allocate a block of memory,
which happens when the heap is exhausted. However, that policy can't work well
with "infinite" heaps, e.g. backed by virtual memory - there will be a
lot of swap thrashing long before the VM is exhausted. Instead, in such
cases an "allocation threshold" policy is used: a GC is run after some number of
allocations have been made. Details vary; for example, the number or total amount
of allocations can be used, the threshold may be self-adjusting based on GC
outcome, etc.
This change implements a simple variant of such a policy for MicroPython. The
amount of memory allocated so far is used for the threshold, to make it useful for
the typically finite-sized, and small, heaps used with MicroPython ports. And such
a GC policy is indeed useful for such types of heaps too, as it allows better
control over fragmentation. For example, if the threshold is set to half the size
of the heap, then for an application which usually makes a large number of small
allocations, that will (try to) keep half of the heap memory in a nice
defragmented state for an occasional large allocation.
For an application which doesn't exhibit such behavior, there won't be any
visible effects, except for the GC running more frequently, which however may
affect performance. To address this, the GC threshold is configurable, and
by default is off so far. It's configured with a gc.threshold(amount_in_bytes)
call (and can be queried without an argument).
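A hypothetical configuration sketch (the byte value is made up):

    import gc

    gc.threshold(32 * 1024)   # run a collection after roughly 32 KiB of allocations
    print(gc.threshold())     # query the currently set threshold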
3-arg form:
stream.write(data, offset, length)
2-arg form:
stream.write(data, length)
These allow efficient buffer writing without incurring an extra memory
allocation for slicing or for creating a memoryview() object, which is
important for low-memory ports.
All arguments must be positional. It might not be a bad idea to standardise
on the 3-arg form, but the 2-arg case would then need a check and an exception
raised anyway, so instead it was just made to work.
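A sketch of how the extended forms might be used (stream and buf are
placeholders, semantics as described above):

    buf = bytearray(256)
    stream.write(buf, 128, 64)   # 3-arg form: write 64 bytes starting at offset 128
    stream.write(buf, 64)        # 2-arg form: write the first 64 bytes

Either form avoids allocating a slice or a memoryview just to write part of a
buffer.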
This follows a source code/header file organisation similar to a few other
objects, and is intended to be used only in special cases, where efficiency/
simplicity matters.
Previously, if there was a chain of allocated blocks ending with the last
block of the heap, it wasn't included in the number of 1/2-block or max block
size stats.
Now only the bits that really need to be written in assembler are written
in it, otherwise C is used. This means that the assembler code no longer
needs to know about the global state structure which makes it much easier
to maintain.
GC_EXIT() can cause a pending thread (waiting on the mutex) to be
scheduled right away. This other thread may trigger a garbage
collection. If the pointer to the newly-allocated block (allocated by
the original thread) is not computed before the switch (so it's just left
as a block number) then the block will be wrongly reclaimed.
This patch makes sure the pointer is computed before allowing any thread
switch to occur.
By using a single, global mutex, all memory-related functions (alloc,
free, realloc, collect, etc) are made thread safe. This means that only
one thread can be in such a function at any one time.
This allows an abstract base class to be defined which translates the
C-level protocol to Python method calls, and any subclass inheriting
from it will support this feature. In particular, this enables the
recently introduced machine.PinBase class.
Allows the C-level pin API to be translated to the Python-level pin API. In
other words, allows a pin class to be implemented in Python which will be
usable by efficient C-coded algorithms, like bitbanged SPI/I2C, time_pulse,
etc.
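A rough sketch of the idea, assuming a port that provides machine.PinBase,
machine.Pin and machine.time_pulse_us (pin number and class name are made up):

    import machine

    class InvertedPin(machine.PinBase):
        def __init__(self, pin):
            self.pin = pin

        def value(self, v=None):
            if v is None:
                return 1 - self.pin.value()   # read, inverted
            self.pin.value(1 - v)             # write, inverted

    # a Python-level pin object can now be fed to C-coded algorithms
    width = machine.time_pulse_us(InvertedPin(machine.Pin(4)), 1)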
That's an arbitrary restriction; in the case of embedding, a source file path
may be absolute. For the purpose of filtering out system includes, checking
for a ".c" suffix is enough.
Assignments of the form "_id = const(value)" are treated as private
(following a similar CPython convention) and code is no longer emitted
for the assignment to a global variable.
See issue #2111.
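For illustration (names are made up):

    from micropython import const

    _INTERNAL = const(64)   # private: no store to a global variable is emitted
    PUBLIC = const(128)     # public: still stored as a global so it can be imported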
Using the usual method of virtual method tables. A single virtual method,
ioctl, is currently defined for all operations. This universal and
extensible vtable-based method is also defined as the default MPHAL
GPIO implementation, but a specific port may override it with its
own implementation (e.g. close-ended, but very efficient, e.g. avoiding
virtual method dispatch).
Disabled by default, enabled in the unix port. The need for this method easily
pops up when working with text UI/reporting, and coding a workalike
manually again and again is counter-productive.
Now frozen modules are treated just as a kind of VFS, and all operations
performed on them correspond to operations on a normal filesystem. This allows
packages to be supported properly, and potentially also data files.
This change also includes changes to rework frozen bytecode module support to
use the same framework, but that part is not finished (and actually may not
work, as the older ad-hoc handling of any type of frozen modules is removed).
Both read and write operations support variants where either a) a single
call is made to the underlying stream implementation and the returned buffer
length may be less than requested, or b) calls are repeated until the requested
amount of data is collected, and a shorter amount is returned only in case of
EOF or error.
These operations are available at every level, from C support functions to be
used by other C modules, to implementations of Python methods to be used in
user-facing objects.
The rationale of these changes is to allow concise and robust code to be
written to work with *blocking* streams of types prone to short reads, like
serial interfaces and sockets. Particular object types may select "exact"
vs "once" types of methods depending on their needs. E.g., for sockets,
recv() and send() methods continue to be "once", while read() and write()
are thus converted to "exact" versions.
These changes don't affect non-blocking handling, e.g. trying an "exact"
method on a non-blocking socket will return as much data as is available
without blocking. No data available continues to be signalled as a None
return value from read() and write().
From the point of view of CPython compatibility, this model is a cross
between its io.RawIOBase and io.BufferedIOBase abstract classes. For
blocking streams, it works as the io.BufferedIOBase model (guaranteeing
the absence of short reads/writes), while for non-blocking streams it works
as io.RawIOBase, returning None when no data is available (instead of raising
an expensive exception, as required by io.BufferedIOBase). Such cross-behaviour
should be optimal for MicroPython's needs.
The address printed was truncated anyway and in general confusing to an
outsider. The line which dumps it is still left in the source, commented out,
for peculiar cases when it may be needed (e.g. when running under a debugger).
In some compilation environments (e.g. the mbed online compiler) with
strict standards compliance, <math.h> does not define constants such
as M_PI. Provide fallback definitions of M_E and M_PI where needed.
If an OSError is raised with an integer argument, and that integer
corresponds to an errno, then the string for the errno is used as the
argument to the exception, instead of the integer. Only works if
the uerrno module is enabled.
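A sketch of the behaviour described above (assuming the uerrno module is
enabled):

    import uerrno

    try:
        raise OSError(uerrno.ENOENT)
    except OSError as e:
        print(e.args[0])   # 'ENOENT' rather than the bare integer 2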
These are typical consumers of large chunks of memory, so it's useful to
see at least their number (how much memory they use isn't clearly shown, as
the data for these objects is allocated elsewhere).
Effect measured on esp8266 port:
Before:
>>> pystone_lowmem.main(10000)
Pystone(1.2) time for 10000 passes = 44214 ms
This machine benchmarks at 226 pystones/second
>>> pystone_lowmem.main(10000)
Pystone(1.2) time for 10000 passes = 44246 ms
This machine benchmarks at 226 pystones/second
After:
>>> pystone_lowmem.main(10000)
Pystone(1.2) time for 10000 passes = 44343ms
This machine benchmarks at 225 pystones/second
>>> pystone_lowmem.main(10000)
Pystone(1.2) time for 10000 passes = 44376ms
This machine benchmarks at 225 pystones/second
vstr_null_terminated_str is almost certainly a vstr finalization operation,
so it should add the requested NUL byte, and not try to pre-allocate more.
The previous implementation could actually allocate double of the buffer
size.
Previous to this patch bignum division and modulo would temporarily
modify the RHS argument to the operation (eg x/y would modify y), but on
return the RHS would be restored to its original value. This is not
allowed because arguments to binary operations are const, and in
particular might live in ROM. The modification was to normalise the arg
(and then unnormalise before returning), and this patch makes it so the
normalisation is done on the fly and the arg is now accessed as read-only.
This change doesn't increase the order complexity of the operation, and
actually reduces code size.
When DIG_SIZE=32, a uint32_t is used to store limbs, and when no normalisation
is needed because the MSB is already set, there will be left and
right shifts (in C) by 32 of a 32-bit variable, leading to undefined
behaviour. This patch fixes this bug.
Also do that only for the first word in a line. The idea is that when you
start up the interpreter, there is a high chance that you want to do an import.
With this patch, this can be achieved with "i<tab>".
The type is an unsigned 8-bit value, since bytes objects are exactly
that. And it's also sensible for unicode strings to return unsigned
values when accessed in a byte-wise manner (CPython does not allow this).
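For illustration:

    b = b"\xff\x01"
    print(b[0])   # 255, an unsigned 8-bit value, never a negative number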
While just a websocket is enough for handling the terminal part of WebREPL,
handling file transfer operations requires demultiplexing and acting
upon the data, which is encapsulated in the _webrepl class provided by this
module, which wraps a websocket object.
The C standard says that left-shifting a signed value (on the LHS of the
operator) is undefined. So we cast to an unsigned integer before the
shift. gcc does not issue a warning about this, but clang does.
- msvc preprocessor output contains full paths with backslashes so the
':' and '\' characters need to be erased from the paths as well
- use a regex for extraction of filenames from preprocessor output so it
can handle both gcc and msvc preprocessor output, and spaces in paths
(also thanks to a PR from @travnicekivo for part of that regex)
- os.rename will fail on windows if the destination file already exists,
so simply attempt to delete that file first
Qstr auto-generation is now much faster so this optimisation for start-up
time is no longer needed. And passing "-s -S" breaks some things, like
stmhal's "make deploy".
E.g. for stmhal, the accumulated preprocessed output may grow large due to
bloated vendor headers, and then reprocessing tens of megabytes on each
build may take a couple of seconds on fast hardware (=> potentially dozens
of seconds on slow hardware). So instead, split once after each change,
and only cat repetitively (guaranteed to be fast, as there are thousands
of lines involved at most).
If make -B is run, the rule is run with $? empty. Extract from all files in
this case. But this is getting fragile; really, "make clean" should be used
instead with such build complexity.
When there are C files to be (re)compiled, they're all passed first to the
preprocessor. QSTR references are extracted from the preprocessed output and
split per original C file. Then all available qstr files (including those
generated previously) are concatenated together. Only if the resulting content
has changed is the output file written (causing an almost global rebuild
to pick up potentially renumbered qstrs). Otherwise, it's not updated,
to avoid spurious rebuilds. Related make rules are split to minimise the
number of commands executed in the interim case (when some C files were
updated, but no qstrs were changed).
- any architecture may explicitly build with qstr autogeneration disabled
  (make QSTR_AUTOGEN_DISABLE=1) and provide its own list of qstrings by the
  standard mechanisms (qstrdefsport.h).
- add template rule that converts a specified source file into a qstring file
- add special rule for generating a central header that contains all
  extracted/autogenerated strings - defined by the QSTR_DEFS_COLLECTED
  variable. Each platform appends a list of sources that may contain
  qstrings to a new build variable: SRC_QSTR. Any autogenerated
  prerequisites should be appended to the SRC_QSTR_AUTO_DEPS variable.
- remove most qstrings from py/qstrdefs, keep only qstrings that
contain special characters - these cannot be easily detected in the
sources without additional annotations
- remove most manual qstrdefs, use qstrdef autogen for: py, cc3200,
stmhal, teensy, unix, windows, pic16bit:
- remove all micropython generic qstrdefs except for the special strings that contain special characters (e.g. /,+,<,> etc.)
- remove all port specific qstrdefs except for special strings
- append sources for qstr generation in platform makefiles (SRC_QSTR)
This script will search for patterns of the form Q(...) and generate a
list of them.
The original code by Pavel Moravec has been significantly simplified to
remove the part that searched for C preprocessor directives (eg #if).
This is because all source is now run through CPP before being fed into
this script.
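A rough sketch of the kind of pattern matching involved (an illustration only,
not the actual script):

    import re, sys

    QSTR_RE = re.compile(r"Q\((.+?)\)")

    for line in sys.stdin:                  # preprocessed C source on stdin
        for s in QSTR_RE.findall(line):
            print("Q({})".format(s))        # emit one entry per reference found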
Small hash tables (eg those used in user class instances that only have a
few members) now only use the minimum amount of memory necessary to hold
the key/value pairs. This can reduce performance for instances that have
many members (because then there are many reallocations/rehashings of the
table), but helps to conserve memory.
See issue #1760.
Most grammar rules can optimise to the identity if they only have a single
argument, saving a lot of RAM building the parse tree. Previous to this
patch, whether a given grammar rule could be optimised was defined (mostly
implicitly) by a complicated set of logic rules. With this patch the
definition is always specified explicitly by using "and_ident" in the rule
definition in the grammar. This simplifies the logic of the parser,
making it a bit smaller and faster. RAM usage is unaffected.
The config variable MICROPY_MODULE_FROZEN is now made of two separate
parts: MICROPY_MODULE_FROZEN_STR and MICROPY_MODULE_FROZEN_MPY. This
allows none, either or both of frozen strings and frozen mpy files (aka
frozen bytecode) to be enabled.
They are sugar for marking a function as a generator, for "yield from",
and for the PEP 492 Python "semantic equivalents", respectively.
@dpgeorge was the original author of this patch, but @pohmelie made
changes to implement `async for` and `async with`.
Will call the underlying C virtual methods of the stream interface. This isn't
intended to be added to every stream object (it's not in CPython), but it is a
convenient way to expose extra operations on the Python side without
adding a bunch of Python-level methods.
Features inline get/put operations for the highest performance. Locking
is not part of the implementation; operations should be wrapped with locking
externally as needed.
When taking the logarithm of the float to determine the exponent, there
are some edge cases where the log loop finishes with a value that is too
large. Eg for an input value of 1e32-epsilon, this is actually less than 1e32
from the log-loop table and finishes as 10.0e31 when it should be 1.0e32. It
is thus rendered as :e32 (":" comes after "9" in ASCII).
There was the same problem with numbers less than 1.
Previous to this patch, the "**b" in "a**b" had its own parse node with
just one item (the "b"). Now, the "b" is just the last element of the
power parse-node. This saves (a tiny bit of) RAM when compiling.
Passing an mp_uint_t to a %d printf format is incorrect for builds where
mp_uint_t is larger than word size (eg a nanboxing build). This patch
adds some simple casting to int in these cases.
If the heap is locked, or memory allocation fails, then calling a bound
method will still succeed by allocating the argument state on the stack.
The new code also allocates less stack than before if less than 4
arguments are passed. It's also a tiny bit smaller in code size.
This was done as part of the ESA project.