With this patch parse nodes are allocated sequentially in chunks. This
reduces fragmentation of the heap and prevents waste at the end of
individually allocated parse nodes.
Saves roughly 20% of RAM during the parse stage.
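As a rough illustration (a sketch only; names are hypothetical and the real
parser code differs), a chunked allocator of this kind hands out parse nodes
sequentially from a large block, starting a new chunk only when the current
one is exhausted:

    #include <stddef.h>
    #include <stdlib.h>

    // Hypothetical chunk: nodes are carved off buf sequentially.
    typedef struct chunk_t {
        struct chunk_t *next; // older chunks, kept so they can be freed later
        size_t alloc;         // total bytes available in buf
        size_t used;          // bytes handed out so far
        char buf[];
    } chunk_t;

    static chunk_t *cur_chunk;

    void *chunk_alloc(size_t n) {
        if (cur_chunk == NULL || cur_chunk->used + n > cur_chunk->alloc) {
            size_t alloc = 2048; // chunk size; tune for the target
            if (alloc < n) {
                alloc = n;
            }
            chunk_t *c = malloc(sizeof(chunk_t) + alloc);
            if (c == NULL) {
                return NULL;
            }
            c->next = cur_chunk;
            c->alloc = alloc;
            c->used = 0;
            cur_chunk = c;
        }
        void *p = cur_chunk->buf + cur_chunk->used;
        cur_chunk->used += n;
        return p;
    }

Only the final chunk can have unused space at its end, instead of every node
paying a per-allocation overhead.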
This fixes errors like the following:
modffi.c: In function 'return_ffi_value':
modffi.c:143:29: error: cast to pointer from integer of different size
[-Werror=int-to-pointer-cast]
const char *s = (const char *)val;
^
modffi.c:162:20: error: cast to pointer from integer of different size
[-Werror=int-to-pointer-cast]
return (mp_obj_t)val;
^
modffi.c: In function 'ffifunc_call':
modffi.c:358:25: error: cast from pointer to integer of different size
[-Werror=pointer-to-int-cast]
values[i] = (ffi_arg)a;
^
modffi.c:373:25: error: cast from pointer to integer of different size
[-Werror=pointer-to-int-cast]
values[i] = (ffi_arg)s;
^
modffi.c:381:25: error: cast from pointer to integer of different size
[-Werror=pointer-to-int-cast]
values[i] = (ffi_arg)bufinfo.buf;
^
modffi.c:384:25: error: cast from pointer to integer of different size
[-Werror=pointer-to-int-cast]
values[i] = (ffi_arg)p->func;
^
These errors can occur when building micropython for MIPS64 n32, because
ffi_arg is 64 bits wide while pointers on MIPS64 n32 are 32 bits wide, so
the code is trying to cast an integer to a pointer (or vice-versa) of a
different size. We should first cast the pointer (or the integer) to a
pointer-sized integer (intptr_t) to fix the problem.
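For example, the first error above is fixed by inserting an intermediate
cast through intptr_t (the remaining casts are fixed the same way; the
typedef below stands in for the real libffi one):

    #include <stdint.h>

    typedef uint64_t ffi_arg; // 64-bit on MIPS64 n32, wider than a pointer

    const char *ffi_value_to_str(ffi_arg val) {
        // Before: (const char *)val - rejected by -Werror=int-to-pointer-cast
        // After: go through a pointer-sized integer first:
        return (const char *)(intptr_t)val;
    }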
Signed-off-by: Vicente Olivert Riera <Vincent.Riera@imgtec.com>
Linking against local libffi (and other libs in the future) is triggered by
"make MICROPY_STANDALONE=1". Before that, dependent libs should be built
with "make deplibs".
Indeed, this flag effectively selects the target architecture, and must be
applied consistently to all compiles and links, including 3rd-party
libraries, unlike CFLAGS, which contain MicroPython-specific settings.
inet_pton supports both ipv4 and ipv6 addresses. The interface is also
extensible to other address families, but the underlying libc inet_pton()
function isn't really extensible (e.g. it doesn't return the length of the
binary address, i.e. it's really hardcoded to AF_INET and AF_INET6). But
anyway, on the Python side we could extend it to support other addresses.
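For reference, the libc-level call looks like this; note there is no
out-parameter through which it could report the length of the binary
address, which is what makes it hard to extend beyond AF_INET/AF_INET6:

    #include <arpa/inet.h>
    #include <stdio.h>

    int main(void) {
        unsigned char buf[16]; // room for IPv6; caller must know 4 vs 16
        if (inet_pton(AF_INET, "192.168.0.1", buf) == 1) {
            printf("parsed IPv4 (4 bytes, implied by the family)\n");
        }
        if (inet_pton(AF_INET6, "::1", buf) == 1) {
            printf("parsed IPv6 (16 bytes, implied by the family)\n");
        }
        return 0;
    }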
sendto() turns out to be a mandatory function for working with UDP. It may
seem that connect(addr) + send() would achieve the same effect, but what
connect() appears to do is set a source-address filter on the socket to its
argument. Then everything falls apart: the socket sends to a
broad-/multi-cast address, but the reply is sent from the real peer address,
which doesn't match the filter set by connect(), so the local socket never
sees the reply.
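A minimal POSIX-level sketch of the pattern described above (hypothetical
port and payload): the request goes out with sendto(), and since no
connect() was issued, no source-address filter is installed, so the reply
from the real peer address is still delivered:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

        struct sockaddr_in dest = {0};
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9999); // hypothetical service port
        inet_pton(AF_INET, "255.255.255.255", &dest.sin_addr);

        // Address just this datagram; the socket stays unconnected.
        sendto(s, "ping", 4, 0, (struct sockaddr *)&dest, sizeof(dest));

        // The reply comes from the peer's real (unicast) address and is
        // accepted, because there is no connect()-installed filter.
        char buf[64];
        struct sockaddr_in peer;
        socklen_t peer_len = sizeof(peer);
        recvfrom(s, buf, sizeof(buf), 0, (struct sockaddr *)&peer, &peer_len);

        close(s);
        return 0;
    }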
This requires root access. And on recent Linux kernels, with the
CONFIG_STRICT_DEVMEM option enabled, only address ranges listed in
/proc/iomem can be accessed. That compile-time option can, however, be
overridden with the boot-time option "iomem=relaxed".
This also removes the separate read/write paths - it's unlikely there would
be a case where they differ.
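The underlying access pattern is the classic /dev/mem one (a sketch with a
hypothetical physical address; as noted, this needs root, and with
CONFIG_STRICT_DEVMEM only ranges listed in /proc/iomem are mappable):

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            return 1; // typically EPERM without root
        }
        off_t phys = 0x20000000; // hypothetical peripheral address
        volatile uint32_t *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, phys);
        if (p != MAP_FAILED) {
            uint32_t v = p[0]; // read a word of physical memory ...
            p[0] = v;          // ... and the same path writes it back
            munmap((void *)p, 4096);
        }
        close(fd);
        return 0;
    }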
Previous to this patch a call such as list.append(1, 2) would lead to a
seg fault. This is because list.append is a builtin method and the first
argument to such methods is always assumed to have the correct type.
Now, when a builtin method is extracted like this, it is wrapped in a
checker object which checks the type of the first argument before
calling the builtin function.
This feature is controlled by MICROPY_BUILTIN_METHOD_CHECK_SELF_ARG and
is enabled by default.
See issue #1216.
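Conceptually (a simplified sketch with stand-in types, not the actual
MicroPython object model), the extracted method becomes a small wrapper
that carries the expected self type and verifies it before dispatch:

    #include <stdio.h>

    typedef struct { const char *name; } type_t;
    typedef struct { const type_t *type; } obj_t;

    // Created when a builtin method is extracted from its type.
    typedef struct {
        const type_t *self_type;  // type the first argument must have
        void (*fun)(obj_t *self); // the underlying builtin function
    } checked_method_t;

    void checked_call(const checked_method_t *m, obj_t *self) {
        if (self->type != m->self_type) {
            // the real code raises TypeError instead of printing
            printf("TypeError: self should be a '%s', not a '%s'\n",
                   m->self_type->name, self->type->name);
            return;
        }
        m->fun(self); // type is correct; safe to call the builtin
    }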
MicroPython doesn't come with a standard library included, so it is important
to be able to easily install needed packages in a seamless manner. Bundling
the package manager (upip) inside the executable solves this issue.
upip is bundled only with the standard executable, not the "minimal" or
"fast" builds.
Using the MICROPY_PY_SYS_PATH_DEFAULT macro define. A use case is building a
distribution package, which should not have the user's home path in sys.path
by default. In such a case, MICROPY_PY_SYS_PATH_DEFAULT can be defined on the
make command line (using CFLAGS_EXTRA).
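For example (hypothetical path value), the source-level effect is just a
guarded define that the command line can pre-empt:

    // Hypothetical override for a distribution package, e.g. passed via
    // make CFLAGS_EXTRA='-DMICROPY_PY_SYS_PATH_DEFAULT=...'
    #ifndef MICROPY_PY_SYS_PATH_DEFAULT
    #define MICROPY_PY_SYS_PATH_DEFAULT ":/usr/lib/micropython"
    #endif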
This gets uPy readline working with the unix port, with tab completion and
history. GNU readline is still supported; configure using the
MICROPY_USE_READLINE variable.
The function and corresponding command-line option are only enabled for
the coverage build. They are used to exercise uPy features that can't
be properly tested by Python scripts.
From https://docs.python.org/3/library/constants.html#NotImplemented :
"Special value which should be returned by the binary special methods
(e.g. __eq__(), __lt__(), __add__(), __rsub__(), etc.) to indicate
that the operation is not implemented with respect to the other type;
may be returned by the in-place binary special methods (e.g. __imul__(),
__iand__(), etc.) for the same purpose. Its truth value is true."
Some people however appear to abuse it to mean "no value" when None is
a legitimate value (don't do that).
The implementation is very basic and non-compliant, and provided solely for
CPython compatibility. The function itself is bad Python 2 heritage, and its
usage is discouraged.
Previous to this patch the printing mechanism was a bit of a tangled
mess. This patch attempts to consolidate printing into one interface.
All (non-debug) printing now uses the mp_print* family of functions,
mainly mp_printf. All these functions take an mp_print_t structure as
their first argument, and this structure defines the printing backend
through the "print_strn" function of said structure.
Printing from the uPy core can reach the platform-defined print code via
two paths: either through mp_sys_stdout_obj (defined per port) in
conjunction with mp_stream_write; or through the mp_plat_print structure,
which uses the MP_PLAT_PRINT_STRN macro to define how strings are printed
on the platform. The former is only used when MICROPY_PY_IO is defined.
With this new scheme printing is generally more efficient (fewer layers
to go through, fewer arguments to pass), and, given an mp_print_t*
structure, one can call mp_print_str for efficiency instead of
mp_printf("%s", ...). Code size is also reduced by around 200 bytes on
Thumb2 archs.
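Schematically (a simplified sketch with stand-in names, not the exact
definitions), the backend boils down to a print_strn callback plus a
context pointer:

    #include <stdio.h>
    #include <string.h>

    typedef struct _print_iface_t {
        void *data; // backend-specific context
        void (*print_strn)(void *data, const char *str, size_t len);
    } print_iface_t;

    static void stdout_print_strn(void *data, const char *str, size_t len) {
        (void)data;
        fwrite(str, 1, len, stdout);
    }

    static const print_iface_t plat_print = {NULL, stdout_print_strn};

    // Analogue of mp_print_str: no format string to parse.
    static void print_str(const print_iface_t *print, const char *s) {
        print->print_strn(print->data, s, strlen(s));
    }

    int main(void) {
        print_str(&plat_print, "hello\n");
        return 0;
    }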
splitlines() occurs ~179 times in the CPython3 standard library, so it was
deemed worthy to implement. The method has subtle semantic differences
from just .split("\n"): for example, "a\nb\n".splitlines() gives
["a", "b"], while "a\nb\n".split("\n") gives ["a", "b", ""]. It is also
defined as working for any end-of-line combination, but this is currently
not implemented - it works only with LF line-endings (which should be OK
for text strings on any platform, but not OK for bytes).
Given that there's already support for "fixed table" maps, which are
essentially ordered maps, the implementation of OrderedDict just extends
"fixed table" maps by adding an "is ordered" flag and add/remove
operations, and reuses 95% of objdict code, just making methods tolerant
to both dict and OrderedDict.
Some things are missing so far, like CPython-compatible repr and comparison.
OrderedDict is disabled by default; enabled on unix and stmhal ports.
These allow fine-tuning of the compiler, to select whether it optimises
tuple assignments of the form a, b = c, d and a, b, c = d, e, f.
Sensible defaults are provided.
This is a rarely used feature which takes a fair amount of code to
implement, so it is controlled by the MICROPY_PY_ARRAY_SLICE_ASSIGN config
setting, default off. But otherwise it may be useful, as it allows updating
arbitrary-sized data buffers in-place.
Full slice support is yet to be implemented; and actually, slice assignment
is implemented in such a way that the RHS of the assignment must be an array
of the exact same item typecode as the LHS. CPython is more relaxed here:
the RHS can be any sequence of compatible types (e.g. it's possible to
assign a list of ints to a bytearray slice).
Overall, when all "slice write" features are implemented, it may cost ~1KB
of code.
The implementation of these functions is very large (on the order of 4k) and
they are rarely used, so we don't enable them by default.
They are, however, enabled in stmhal and unix, since we have the room.
To enable parsing constants more efficiently, mp_parse should be allowed
to raise an exception, and mp_compile can already raise a MemoryError.
So these functions need to be protected by an nlr push/pop block.
This patch adds that feature in all places. This allows simplifying how
mp_parse and mp_compile are called: they now raise an exception if they
encounter an error, so explicit checking is not needed anymore.
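nlr push/pop behaves much like setjmp/longjmp (which is essentially what nlr
is built on); a standalone sketch of the protection pattern, with a
hypothetical parse stand-in:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf nlr_buf; // stand-in for nlr_push/nlr_pop state

    static void my_parse(const char *src) {
        if (src == NULL) {
            longjmp(nlr_buf, 1); // "raise" instead of returning an error code
        }
    }

    int main(void) {
        if (setjmp(nlr_buf) == 0) {
            // protected region: parse/compile may raise
            my_parse(NULL);
            printf("compiled OK\n");
        } else {
            // single exception path; no per-call error checks needed
            printf("caught parse/compile error\n");
        }
        return 0;
    }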
Native code has GC-heap pointers in it, so it must be scanned. But on the
unix port, memory for native functions is mmap'd, so there must be
explicit code to scan it for root pointers.
The compiler optimises lookup of module.CONST when enabled (an existing
feature). Disabled by default; enabled for unix, windows, stmhal.
Costs about 100 bytes of ROM on stmhal.
This allows enabling the mem-info functions in the micropython module, even
if MICROPY_MEM_STATS is not enabled. In this case, you get mem_info and
qstr_info but not mem_{total,current,peak}.
GC for unix/windows builds doesn't make use of the bss section anymore, so
we no longer need the (sometimes complicated) build features and code
related to it.
This is a simple optimisation inspired by JITing technology: we cache in
the bytecode (using 1 byte) the offset of the last successful lookup in
a map. This allows us next time round to check in that location in the
hash table (mp_map_t) for the desired entry, and if it's there we use that
entry straight away. Otherwise we fall back to a normal map lookup.
Works for the LOAD_NAME, LOAD_GLOBAL, LOAD_ATTR and STORE_ATTR opcodes.
On a few tests it gives a >90% cache hit rate and greatly improves the
speed of the code.
Disabled by default. Enabled for unix and stmhal ports.
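In outline (simplified; the real code stores the byte inline in the
bytecode and uses the map's hashing), the cached-offset lookup is:

    #include <stdint.h>
    #include <string.h>

    typedef struct { const char *key; int value; } entry_t;
    typedef struct { entry_t table[8]; } map_t;

    // 'cache' points at the 1-byte slot reserved for this opcode; it
    // remembers where the last successful lookup landed.
    int cached_lookup(map_t *map, const char *key, uint8_t *cache, int *value) {
        entry_t *e = &map->table[*cache];
        if (e->key != NULL && strcmp(e->key, key) == 0) {
            *value = e->value; // hit: no hashing or probing needed
            return 1;
        }
        // Miss: normal lookup (a linear scan here, for brevity).
        for (unsigned i = 0; i < 8; i++) {
            e = &map->table[i];
            if (e->key != NULL && strcmp(e->key, key) == 0) {
                *cache = (uint8_t)i; // remember for next time
                *value = e->value;
                return 1;
            }
        }
        return 0;
    }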
This patch consolidates all global variables in py/ core into one place,
in a global structure. Root pointers are all located together to make
GC tracing easier and more efficient.
This patch makes MICROPY_PY_BUILTINS_SET compile-time option fully
disable the builtin set object (when set to 0). This includes removing
set constructor/comprehension from the grammar, the compiler and the
emitters. Now, enabling set costs 8168 bytes on unix x64, and 3576
bytes on stmhal.
system() is the basic function to support automation of tasks, so make it
available as a builtin, for example for bootstrapping the rest of the
micropython environment.
This patch adds a configuration option (MICROPY_CAN_OVERRIDE_BUILTINS)
which, when enabled, allows overriding all names within the builtins
module. A builtins override dict is created the first time the user
assigns to a name in the builtins module, and then that dict is searched
first on subsequent lookups. Note that this implementation doesn't
allow deleting of names.
This patch also does some refactoring of builtins code, creating the
modbuiltins.c file.
Addresses issue #959.
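In outline (simplified stand-ins, not the actual interpreter structures):
the override dict is created lazily on the first store, and loads consult
it before the fixed builtins table:

    #include <stdlib.h>
    #include <string.h>

    typedef struct { const char *name; int value; } binding_t;

    static const binding_t fixed_builtins[] = { {"len", 1}, {"print", 2} };

    static binding_t *override_dict; // created on first assignment
    static size_t override_len;

    void builtins_store(const char *name, int value) {
        override_dict = realloc(override_dict,
                                (override_len + 1) * sizeof(binding_t));
        override_dict[override_len].name = name;
        override_dict[override_len].value = value;
        override_len++; // note: no delete operation, as described above
    }

    const binding_t *builtins_load(const char *name) {
        // Override dict, if any entries exist, is searched first.
        for (size_t i = 0; i < override_len; i++) {
            if (strcmp(override_dict[i].name, name) == 0) {
                return &override_dict[i];
            }
        }
        for (size_t i = 0; i < 2; i++) {
            if (strcmp(fixed_builtins[i].name, name) == 0) {
                return &fixed_builtins[i];
            }
        }
        return NULL;
    }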
The function is modeled after traceback.print_exception(), but unbloated,
and put into an existing module to save the overhead of adding another
module. A compliant traceback.print_exception() is intended to be
implemented in micropython-lib in terms of sys.print_exception().
This change required refactoring mp_obj_print_exception() to take pfenv_t
interface arguments.
Addresses #751.
mp_obj_int_get_truncated is used as a "fast path" int accessor that
doesn't check for overflow and returns the int truncated to the machine
word size, i.e. mp_int_t.
Use mp_obj_int_get_truncated to fix struct.pack when packing maximum word
sized values.
Addresses issues #779 and #998.
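Truncation here just means keeping the low machine-word bits; a stdint
sketch of the effect on a 32-bit build (not the actual mp_obj API):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        // A value that doesn't fit a signed 32-bit word ...
        int64_t big = 0xFFFFFFFF; // 4294967295
        // ... truncated to the machine word size (a 32-bit mp_int_t):
        int32_t truncated = (int32_t)big;
        printf("%" PRId32 "\n", truncated); // prints -1
        return 0;
    }

So the accessor is cheap (no overflow check) and yields exactly the
word-sized bit pattern that struct.pack needs.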
mp_lexer_t type is exposed, mp_token_t type is removed, and simple lexer
functions (like checking current token kind) are now inlined.
This saves 784 bytes ROM on 32-bit unix, 348 bytes on stmhal, and 460
bytes on bare-arm. It also saves a tiny bit of RAM since mp_lexer_t
is a bit smaller. Also will run a bit more efficiently.
The specifier should go after the number and before a size suffix like 'k'
or 'm'. E.g. "-X heapsize=100wk" will use a 100K heap on a 32-bit system
and a 200K heap on a 64-bit one.
This build is primarily intended for benchmarking, and may have random
features enabled/disabled to get high scores in synthetic benchmarks.
The intent is to show/prove that MicroPython codebase can compete with
CPython, when configured appropriately. But the main MicroPython aim
still remains to optimize for memory usage (which inevitably leads to
performance degradation in some areas on some workloads).
gc.enable/disable are now the same as in CPython: they just control whether
automatic garbage collection is enabled or not. If disabled, you can
still allocate heap memory and initiate a manual collection.