Before this patch, (x+y)*z would be parsed to a tree that contained a
redundant identity parse node corresponding to the parentheses. With
this patch such nodes are optimised away, which reduces memory
requirements for expressions with parentheses, and simplifies the
compiler because it doesn't need to handle this identity case.
A parenthesis parse node is still needed for tuples.
Note that even though wrapped in MICROPY_CPYTHON_COMPAT, it is not
fully compatible because the modifications to the dictionary do not
propagate to the actual instance members.
Only types whose iterator instances still fit in 4 machine words have
been changed to use the polymorphic iterator.
Reduces Thumb2 arch code size by 264 bytes.
Previously, mark operations weren't logged at all, while it's quite useful
to see the cascade of marks in case of over-marking (and in other cases too).
Previously, a sweep was logged for each block of an object in memory, but that
doesn't make much sense and just leads to longer output which is harder for a
human to parse. Instead, log sweep only once per object. This is similar to
other memory manager operations, e.g. an object is allocated, then freed; or
an object is allocated, then marked, and otherwise swept (one log entry per
operation, with the same memory address in each case).
Map indices are most commonly a qstr, and adding a fast-path for hashing
of a qstr increases overall performance of the runtime.
On pyboard there is a 4% improvement in the pystone benchmark for a cost
of 20 bytes of code size. It's about a 2% improvement on unix.
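As a rough illustration of the kind of fast path described (simplified, not
the verbatim patch; the helpers follow MicroPython's existing qstr/object
macros):

    mp_uint_t hash;
    if (MP_OBJ_IS_QSTR(index)) {
        // a qstr carries a precomputed hash, so skip the generic object-hash path
        hash = qstr_hash(MP_OBJ_QSTR_VALUE(index));
    } else {
        hash = mp_obj_hash(index);  // generic hashing for all other key types
    }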
When looking up and extracting an attribute of an instance, some
attributes must bind self as the first argument to make a working method
call. Prior to this patch, any attribute that was callable had self
bound as the first argument. But the Python specs require the check to be
more restrictive: only functions, closures and generators should have
self bound as the first argument.
Addresses issue #1675.
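A rough sketch of the more restrictive binding rule (illustrative only; the
actual patch handles the full set of function-like types and the exact
load-method convention):

    if (MP_OBJ_IS_TYPE(member, &mp_type_fun_bc)
        || MP_OBJ_IS_TYPE(member, &mp_type_closure)
        || MP_OBJ_IS_TYPE(member, &mp_type_gen_wrap)) {
        // function, closure or generator: bind self so a later call is a method call
        dest[0] = member;
        dest[1] = self_in;
    } else {
        // any other attribute (even if callable) is returned without binding self
        dest[0] = member;
    }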
POSIX doesn't guarantee that something like this will work, but it does work
on any system with a careful signal implementation. Roughly, the requirement
is that the signal handler is executed in the context of the process (its
main thread, etc.). This is true for Linux. Also tested to work without issues
on MacOSX.
This makes all tests pass again for 64bit windows builds which would
previously fail for anything printing ranges (builtin_range/unpack1)
because they were printed as range( ld, ld ).
This is done by reusing the mp_vprintf implementation for MICROPY_OBJ_REPR_D
for 64bit windows builds (both msvc and mingw-w64) since the format specifier
used for 64bit integers is also %lld, or %llu for the unsigned version.
Note these specifiers used to be fetched from inttypes.h, which is the
C99 way of working with printf/scanf in a portable way, but mingw-w64
wants to be backwards compatible with older MS C runtimes and uses
the non-portable %I64i instead of %lld in inttypes.h, so remove the use
of said header again in mpconfig.h and define the specifiers manually.
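An illustrative form of the manual definitions in mpconfig.h (the exact
preprocessor guards may differ):

    #if defined(_WIN64)
    #define UINT_FMT "%llu"
    #define INT_FMT "%lld"
    #endif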
Ideally we'd use %zu for size_t args, but that's unlikely to be supported
by all runtimes, and we would then need to implement it in mp_printf.
So the simplest and most portable option is to use %u and cast the argument
to uint (= unsigned int).
Note: reason for the change is that UINT_FMT can be %llu (size suitable
for mp_uint_t) which is wider than size_t and prints incorrect results.
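For example, a hypothetical call site following this convention (mp_printf
and mp_plat_print as declared in the core headers):

    size_t nbytes = sizeof(mp_obj_t) * 4;                    // some size_t quantity to print
    mp_printf(&mp_plat_print, "%u bytes\n", (uint)nbytes);   // cast to uint, print with %u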
MICROPY_ENABLE_COMPILER can be used to enable/disable the entire compiler,
which is useful when only loading of pre-compiled bytecode is supported.
It is enabled by default.
MICROPY_PY_BUILTINS_EVAL_EXEC controls support of eval and exec builtin
functions. By default they are only included if MICROPY_ENABLE_COMPILER
is enabled.
Disabling both options saves about 40k of code size on 32-bit x86.
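For example, a port that only loads pre-compiled bytecode could put the
following in its mpconfigport.h (option names as introduced above):

    #define MICROPY_ENABLE_COMPILER (0)
    #define MICROPY_PY_BUILTINS_EVAL_EXEC (0)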
To let the unix port implement "machine" functionality at the Python level, and
keep consistent naming in other ports (baremetal ports will use magic
module "symlinking" to still load it on "import machine").
Fixes #1701.
For builds where mp_uint_t is larger than size_t, it doesn't make
sense to use such a wide type for qstrs. There can only be as many
qstrs as there is address space on the machine, so size_t is the correct
type to use.
Saves about 3000 bytes of code size when building unix/ port with
MICROPY_OBJ_REPR_D.
size_t is the correct type to use to count things related to the size of
the address space. Using size_t (instead of mp_uint_t) is important for
the efficiency of ports that configure mp_uint_t to larger than the
machine word size.
This allows having a single iterator type for various internal iterator
types (saving rodata space by not having repeated, almost-empty type
structures). It works by looking up the "iternext" method stored in the
particular object instance (it should be the first object field after "base").
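A minimal sketch of the layout described (field names follow the description
above; the real struct may carry a little extra state):

    typedef struct _mp_obj_polymorph_iter_t {
        mp_obj_base_t base;    // points at the single shared iterator type
        mp_fun_1_t iternext;   // per-instance iternext, the first field after "base"
        // ... at most two more machine words of iterator state
    } mp_obj_polymorph_iter_t;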
Fixes#1684 and makes "not" match Python semantics. The code is also
simplified (the separate MP_BC_NOT opcode is removed) and the patch saves
68 bytes for bare-arm/ and 52 bytes for minimal/.
Previously "not x" was implemented as !mp_unary_op(x, MP_UNARY_OP_BOOL),
so any given object only needs to implement MP_UNARY_OP_BOOL (and the VM
had a special opcode to do the ! bit).
With this patch "not x" is implemented as mp_unary_op(x, MP_UNARY_OP_NOT),
but this operation is caught at the start of mp_unary_op and dispatched as
!mp_obj_is_true(x). mp_obj_is_true has special logic to test for
truthiness, and is the correct way to handle the not operation.
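A simplified sketch of the dispatch described above (not the verbatim
source), at the start of mp_unary_op:

    if (op == MP_UNARY_OP_NOT) {
        // "not x" reduces to the truthiness test; no per-type handling needed
        return mp_obj_new_bool(!mp_obj_is_true(arg));
    }
    // ... all other unary ops are dispatched to the object's type as before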
Oftentimes, libc, libm, etc. don't come compiled with the CPU's compressed
code option (Thumb, MIPS16, etc.), but we may still want to use such
compressed code for MicroPython itself.
To use, put the following in mpconfigport.h:
#define MICROPY_OBJ_REPR (MICROPY_OBJ_REPR_D)
#define MICROPY_FLOAT_IMPL (MICROPY_FLOAT_IMPL_DOUBLE)
typedef int64_t mp_int_t;
typedef uint64_t mp_uint_t;
#define UINT_FMT "%llu"
#define INT_FMT "%lld"
Currently does not work with native emitter enabled.
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
- add mp_int_t/mp_uint_t typedefs in mpconfigport.h
- fix integer suffixes/formatting in mpconfig.h and mpz.h
- use MICROPY_NLR_SETJMP=1 in Makefile since the current nlrx64.S
implementation causes segfaults in gc_free()
- update README
This takes the previous IEEE-754 single-precision float implementation and
converts it to a fully portable, parametrizable implementation using C99
functions like signbit(), isnan(), isinf(). As long as those functions
are available (they can of course be defined in an ad-hoc manner), and the
compiler can perform standard arithmetic and comparison operations on a
float type, this implementation will work with any underlying float type
(including types whose mantissa is larger than the available integral
type).
This change makes the code behave how it was supposed to work when first
written. The avail_slot variable is set to the first free slot when
looking for a key (which would come from deleting an entry). So it's
more efficient (for subsequent lookups) to insert a new key into such a
slot, rather than the very last slot that was searched.
MICROPY_PERSISTENT_CODE must be enabled, and then enabling
MICROPY_PERSISTENT_CODE_LOAD/SAVE (either or both) will allow loading
and/or saving of code (at the moment just bytecode) from/to a .mpy file.
Main changes when MICROPY_PERSISTENT_CODE is enabled are:
- qstrs are encoded as 2-byte fixed width in the bytecode
- all pointers are removed from bytecode and put in const_table (this
includes const objects and raw code pointers)
Ultimately this option will enable persistence for not just bytecode but
also native code.
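For example, to turn on both loading and saving in a port's mpconfigport.h
(option names as above):

    #define MICROPY_PERSISTENT_CODE_LOAD (1)
    #define MICROPY_PERSISTENT_CODE_SAVE (1)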
Currently, the only place that clears the bit is in gc_collect.
So if a block with a finalizer is allocated, and subsequently
freed, and then the block is reallocated with no finalizer then
the bit remains set.
This could also be fixed by having gc_alloc clear the bit, but
I'm pretty sure that free is called far less often than alloc, so doing
it in free is more efficient.
This patch adds/subtracts a constant from the 30-bit float representation
so that str/qstr representations are favoured: they now have all the high
bits set to zero. This makes encoding/decoding qstr strings more
efficient (and they are used more often than floats, which are now
slightly less efficient to encode/decode).
Saves about 300 bytes of code space on Thumb 2 arch.
py/mphal.h contains declarations for generic mp_hal_XXX functions, such
as stdio and delay/ticks, which ports should provide definitions for. A
port will also provide mphalport.h with further HAL declarations.
This makes format specifiers almost fully compatible with CPython.
Adds 24 bytes to the stmhal port (because previously we had to catch it and
report to the user that it's unsupported).
Scenario: module1 depends on some common file from lib/, so specifies it
in its SRC_MOD, and the same situation with module2, then common file
from lib/ eventually ends up listed twice in $(OBJ), which leads to link
errors.
Make is equipped to deal with such a situation easily; quoting the manual:
"The value of $^ omits duplicate prerequisites, while $+ retains them and
preserves their order." So, just use $^ consistently in all link targets.
This saves around 1000 bytes (Thumb2 arch) because in repr "C" it is
costly to check and extract a qstr. So making such check/extract a
function instead of a macro saves lots of code space.
This new object representation puts floats into the object word instead
of on the heap, at the expense of reducing their precision to 30 bits.
It only makes sense when the word size is 32-bits.
Cortex-M0, M0+ and M1 only have ARMv6-M Thumb/Thumb2 instructions. M3,
M4 and M7 have a superset of these, named ARMv7-M. This patch adds a
config option to enable support of the superset of instructions.
It makes much more sense to do constant folding in the parser while the
parse tree is being built. This eliminates the need to create parse
nodes that will just be folded away. The code is slightly simpler and a
bit smaller as well.
Constant folding now has a configuration option,
MICROPY_COMP_CONST_FOLDING, which is enabled by default.
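For example, a very constrained port could disable the folding pass in its
mpconfigport.h:

    #define MICROPY_COMP_CONST_FOLDING (0)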
With this patch parse nodes are allocated sequentially in chunks. This
reduces fragmentation of the heap and prevents waste at the end of
individually allocated parse nodes.
Saves roughly 20% of RAM during parse stage.
This patch adds more fine grained error message control for errors when
parsing integers (now has terse, normal and detailed). When detailed is
enabled, the error now escapes bytes when printing them so they can be
more easily seen.
When creating constant mpz's, the length of the mpz must be exactly how
many digits are used (not allocated) otherwise these numbers are not
compatible with dynamically allocated numbers.
Addresses issue #1448.
4 spaces are added at the start of a line to match the previous indent, and
if the previous line ended in a colon.
Backspace deletes 4 spaces if the line begins with only spaces.
Configurable via MICROPY_REPL_AUTO_INDENT. Disabled by default.
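For example, to turn it on in a port's mpconfigport.h:

    #define MICROPY_REPL_AUTO_INDENT (1)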
This optimises (in speed and code size) for the common case where the
binary op for the bool object is supported. Unsupported binary ops
still behave the same.
Function annotations are only needed when the native emitter is enabled
and when the current scope is emitted in viper mode. All other times
the annotations can be skipped completely.
Fetch the current usb mode and return a string representation when
pyb.usb_mode() is called with no args. The possible string values are interned
as qstr's. None will be returned if an incorrect mode is set.
Indeed, this flag effectively selects the architecture target, and must be
applied consistently to all compiles and links, including 3rd-party
libraries, unlike CFLAGS, which have MicroPython-specific settings.
unix-cpy was originally written to get semantic equivalence with CPython
without writing functional tests. When writing the initial
implementation of uPy it was a long way between lexer and functional
tests, so the half-way test was to make sure that the bytecode was
correct. The idea was that if the uPy bytecode matched CPython 1-1 then
uPy would be proper Python if the bytecodes acted correctly. And having
matching bytecode meant that it was less likely to miss some deep
subtlety in the Python semantics that would require an architectural
change later on.
But that is all history and it no longer makes sense to retain the
ability to output CPython bytecode, because:
1. It outputs CPython 3.3 compatible bytecode. CPython's bytecode
changes from version to version, and seems to have changed quite a bit
in 3.5. There's no point in changing the bytecode output to match
CPython anymore.
2. uPy and CPy do different optimisations to the bytecode which makes it
harder to match.
3. The bytecode tests are not run. They were never part of Travis and
are not run locally anymore.
4. The EMIT_CPYTHON option needs a lot of extra source code which adds
heaps of noise, especially in compile.c.
5. Now that there is an extensive test suite (which tests functionality)
there is no need to match the bytecode. Some very subtle behaviour is
tested with the test suite and passing these tests is a much better
way to stay Python-language compliant, rather than trying to match
CPy bytecode.