Allows optimisation of cases like:
    import micropython
    _DEBUG = micropython.const(False)
    if _DEBUG:
        print('Debugging info')
Previously the 'if' statement was only optimised out if the const()
argument was an integer.
The change is implemented in a way that makes the compiler slightly smaller
(-16 bytes on PYBV11) but compilation will also be very slightly slower.
As a bonus, if const support is enabled then the compiler can now optimise
const truthy/falsey expressions of other types, like:
    while "something":
        pass
It's unclear whether that is useful, but perhaps it could be.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
Now that constant tuples are supported in the parser, eg (1, True, "str"),
it's a small step to allow anything that is a constant to be used with the
pattern:
    from micropython import const
    X = const(obj)
This commit makes the required changes to allow the following types of
constants:
    from micropython import const
    _INT = const(123)
    _FLOAT = const(1.2)
    _COMPLEX = const(3.4j)
    _STR = const("str")
    _BYTES = const(b"bytes")
    _TUPLE = const((_INT, _STR, _BYTES))
    _TUPLE2 = const((None, False, True, ..., (), _TUPLE))
Prior to this, only integers could be used in const(...).
Signed-off-by: Damien George <damien@micropython.org>
This commit adds support to the parser so that tuples which contain only
constant elements (bool, int, str, bytes, etc) are immediately converted to
a tuple object. This makes it more efficient to use tuples containing
constant data because they no longer need to be created at runtime by the
bytecode (or native code).
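For illustration (a hypothetical snippet, not taken from this commit; the
names are made up), such a tuple is now built once at parse/compile time
rather than being constructed by bytecode each time the line executes:

    # hypothetical example: every element is a constant
    _HEADER = (0x55, 0xAA, "v1", b"\x00")   # tuple object built by the parser

    def make_packet(payload):
        return _HEADER + (payload,)         # _HEADER itself is not rebuilt here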
Furthermore, with this improvement constant tuples that are part of frozen
code can now be stored fully in ROM (this will be implemented in later
commits).
Code size is increased by about 400 bytes on Cortex-M4 platforms.
See related issue #722.
Signed-off-by: Damien George <damien@micropython.org>
This means that all constants for EMIT_ARG(load_const_obj, obj) are created
in the parser (rather than some in the compiler).
Signed-off-by: Damien George <damien@micropython.org>
This commit simplifies and optimises the parse tree in-memory
representation of lists of expressions, for tuples and lists, and when
tuples are used on the left-hand-side of assignments and within del
statements. This reduces memory usage of the parse tree when such code is
compiled, and also reduces the size of the compiler.
For example, (1,) was previously the following parse tree:
    expr_stmt(5) (n=2)
      atom_paren(45) (n=1)
        testlist_comp(146) (n=2)
          int(1)
          testlist_comp_3b(149) (n=1)
            NULL
      NULL
and with this commit is now:
    expr_stmt(5) (n=2)
      atom_paren(45) (n=1)
        testlist_comp(146) (n=1)
          int(1)
      NULL
Similarly, (1, 2, 3) was previously:
    expr_stmt(5) (n=2)
      atom_paren(45) (n=1)
        testlist_comp(146) (n=2)
          int(1)
          testlist_comp_3c(150) (n=2)
            int(2)
            int(3)
      NULL
and is now:
    expr_stmt(5) (n=2)
      atom_paren(45) (n=1)
        testlist_comp(146) (n=3)
          int(1)
          int(2)
          int(3)
      NULL
Signed-off-by: Damien George <damien@micropython.org>
This implements (most of) the PEP-498 spec for f-strings and is based on
https://github.com/micropython/micropython/pull/4998 by @klardotsh.
It is implemented in the lexer as a syntax translation to `str.format`:
    f"{a}" --> "{}".format(a)
It also supports:
    f"{a=}" --> "a={}".format(a)
This is done by extracting the arguments into a temporary vstr buffer,
then after the string has been tokenized, the lexer input queue is saved
and the contents of the temporary vstr buffer are injected into the lexer
instead.
There are four main limitations:
- raw f-strings (`fr` or `rf` prefixes) are not supported and will raise
`SyntaxError: raw f-strings are not supported`.
- literal concatenation of f-strings with adjacent strings will fail
    "{}" f"{a}" --> "{}{}".format(a) (str.format will incorrectly use
      the braces from the non-f-string)
    f"{a}" f"{a}" --> "{}".format(a) "{}".format(a) (cannot concatenate)
- PEP-498 requires the full parser to understand the interpolated
  argument; however, because this runs entirely in the lexer it cannot
  resolve nested braces in expressions like
    f"{'}'}"
- The !r, !s, and !a conversions are not supported.
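One possible workaround sketch (not part of this commit; it simply relies
on the translation rule shown above) is to write the conversion as an
explicit call inside the braces:

    x = [1, 2]
    # f"{x!r}" is not supported, but calling repr() explicitly works because
    # the expression is passed straight through: "{}".format(repr(x))
    print(f"{repr(x)}")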
Includes tests and cpydiffs.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Constant expressions like "2 ** 3" will now be folded, and the special form
"X = const(2 ** 3)" will now compile because the argument to const() is
now a constant.
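A minimal illustration of the newly accepted form (the identifier name is
made up):

    from micropython import const
    _BUF_LEN = const(2 ** 10)   # folded to const(1024) at compile time
    buf = bytearray(_BUF_LEN)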
Fixes issue #5865.
Note: the uncrustify configuration is explicitly set to 'add' instead of
'force' in order not to alter the comments which use extra spaces after //
as a means of indenting text for clarity.
In this part of the code there is no way to get the ** operator, so no need
to check for it.
This commit also adds tests for this, and other related, invalid const
operations.
This string is recognised by uncrustify, to disable formatting in the
region marked by these comments. This is necessary in the qstrdef*.h files
to prevent modification of the strings within the Q(...). In other places
it is used to prevent excessive reformatting that would make the code less
readable.
To make progress towards MicroPython supporting Python 3.5, adding the
matmul operator is important because it's a really "low level" part of the
language, requiring a new token and modifications to the grammar.
It doesn't make sense to make it configurable because 1) it would make the
grammar and lexer complicated/messy; 2) no other operators are
configurable; 3) it's not a feature that can be "dynamically plugged in"
via an import.
And matmul can be useful as a general purpose user-defined operator; it
doesn't have to be just for numpy use.
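As a sketch of such a user-defined use (the class and its method body are
purely illustrative):

    class Vec2:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __matmul__(self, other):
            # use @ as a dot product
            return self.x * other.x + self.y * other.y

    print(Vec2(1, 2) @ Vec2(3, 4))   # prints 11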
Based on work done by Jim Mussared.
These macros could in principle be (inline) functions so it makes sense to
have them lower case, to match the other C API functions.
The remaining macros that are upper case are:
- MP_OBJ_TO_PTR, MP_OBJ_FROM_PTR
- MP_OBJ_NEW_SMALL_INT, MP_OBJ_SMALL_INT_VALUE
- MP_OBJ_NEW_QSTR, MP_OBJ_QSTR_VALUE
- MP_OBJ_FUN_MAKE_SIG
- MP_DECLARE_CONST_xxx
- MP_DEFINE_CONST_xxx
These must remain macros because they are used when defining const data (at
least, MP_OBJ_NEW_SMALL_INT is, so it makes sense for
MP_OBJ_SMALL_INT_VALUE to also be a macro).
For those macros that have been made lower case, compatibility macros are
provided for the old names so that users do not need to change their code
immediately.
Empty __VA_ARGS__ are not allowed in the C preprocessor so adjust the rule
arg offset calculation to not use them. Also, some compilers (eg MSVC)
require an extra layer of macro expansion.
This is the sixth and final patch in a series of patches to the parser that
aims to reduce code size by compressing the data corresponding to the rules
of the grammar.
Prior to this set of patches the rules were stored as rule_t structs with
rule_id, act and arg members. And then there was a big table of pointers
which allowed the address of a rule_t struct to be looked up given the id
of that rule.
The changes that have been made are:
- Breaking up of the rule_t struct into individual components, with each
component in a separate array.
- Removal of the rule_id part of the struct because it's not needed.
- Putting all the rule arg data in a big array.
- Changing the table of pointers to rules to a table of offsets within the
  array of rule arg data.
The last point is what is done in this patch here and brings about the
biggest decreases in code size, because an array of pointers is now an
array of bytes.
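A rough Python model of that last change (purely illustrative; the real
implementation is C code in the py core):

    # args of all rules packed end-to-end in one array
    rule_args = ["a0", "a1", "a2", "b0", "b1", "c0"]
    # per-rule byte-sized offsets into rule_args, replacing word-sized pointers
    rule_arg_offset = [0, 3, 5]

    def args_for_rule(rule_id):
        start = rule_arg_offset[rule_id]
        if rule_id + 1 < len(rule_arg_offset):
            end = rule_arg_offset[rule_id + 1]
        else:
            end = len(rule_args)
        return rule_args[start:end]

    print(args_for_rule(1))   # ['b0', 'b1']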
Code size changes for the six patches combined is:
bare-arm: -644
minimal x86: -1856
unix x64: -5408
unix nanbox: -2080
stm32: -720
esp8266: -812
cc3200: -712
For the change in parser performance: it was measured on pyboard that these
six patches combined gave an increase in script parse time of about 0.4%.
This is due to the slightly more complicated way of looking up the data for
a rule (since the 9th bit of the offset into the rule arg data table is
calculated with an if statement). This is an acceptable increase in parse
time considering that parsing is only done once per script (if compiled on
the target).
Instead of each rule being stored in ROM as a struct with rule_id, act and
arg, the act and arg parts are now in separate arrays and the rule_id part
is removed because it's not needed. This reduces code size by roughly one
byte per grammar rule, around 150 bytes in total.
The rule name is only used for debugging, and this patch makes things a bit
cleaner by completely separating out the rule name from the rest of the
rule data.
The nan-boxing representation has an extra 16 bits of space to store
small-int values, and making use of it allows full 32-bit positive
integers (ie up to 0xffffffff) to be created and manipulated without using
the heap.
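For illustration (behaviour specific to nan-boxing builds):

    x = 0xffffffff   # still a small int: fits in the object word, no heap use
    y = x + 1        # 0x100000000 exceeds the range and becomes a heap bignum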
The function mp_obj_new_str_of_type is a general str object constructor
used in many places in the code to create either a str or bytes object.
When creating a str it should first check if the string data already exists
as an interned qstr, and if so then return the qstr object. This patch
makes the function have such behaviour, which helps to reduce heap usage by
reusing existing interned data where possible.
The old behaviour of mp_obj_new_str_of_type (which didn't check for
existing interned data) is made available through the function
mp_obj_new_str_copy, but should only be used in very special cases.
One consequence of this patch is that the following expression is now True:
    'abc' is ' abc '.split()[0]
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything to use the general runtime support functions
Additional, specific headers (eg py/objlist.h) can be included if needed.
The parser was originally written to work without raising any exceptions,
instead returning an error value to the caller. But it's now required
that a call to the parser be wrapped in an nlr handler, so we may as well
make use of that fact and simplify the parser so that it doesn't need to
keep track of any memory errors. In any case, the parser explicitly
raises an exception at the end if there was an error.
This patch simplifies the parser by letting the underlying memory
allocation functions raise an exception if they fail to allocate any
memory. And if there is an error parsing the "<id> = const(<val>)" pattern
then that also raises an exception right away instead of trying to recover
gracefully and then raising.
Prior to this patch any non-interned str/bytes objects would create a
special parse node that held a copy of the str/bytes data. Then in the
compiler this data would be turned into a str/bytes object. This actually
led to 2 copies of the data, one in the parse node and one in the object.
The parse node's copy of the data would be freed at the end of the compile
stage but nevertheless it meant that the peak memory usage of the
parse/compile stage was higher than it needed to be (by an amount equal to
the number of bytes in all the non-interned str/bytes objects).
This patch changes the behaviour so that str/bytes objects are created
directly in the parser and the object stored in a const-object parse node
(which already exists for bignum, float and complex const objects). This
reduces peak RAM usage of the parse/compile stage, simplifies the parser
and compiler, and reduces code size by about 170 bytes on Thumb2 archs,
and by about 300 bytes on Xtensa archs.
This patch allows uPy consts to be bignums, eg:
    X = const(1 << 100)
The infrastructure for consts to be a bignum (rather than restricted to
small integers) has been in place for a while, ever since constant folding
was upgraded to allow bignums. It just required a small change (in this
patch) to enable it.
Grammar rules have 2 variants: ones that are attached to a specific
compile function which is called to compile that grammar node, and ones
that don't have a compile function and are instead just inspected to see
what form they take.
In the compiler there is a table of all grammar rules, with each entry
having a pointer to the associated compile function. Those rules with no
compile function have a null pointer. There are 120 such rules, so that's
120 words of essentially wasted code space.
By grouping together the compile vs no-compile rules we can put all the
no-compile rules at the end of the list of rules, and then we don't need
to store the null pointers. We just have a truncated table and it's
guaranteed that when indexing this table we only index the first half,
the half with populated pointers.
This patch implements such a grouping by having a specific macro for the
compile vs no-compile grammar rules (DEF_RULE vs DEF_RULE_NC). It saves
around 460 bytes of code on 32-bit archs.
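A rough Python model of the grouping (illustrative only; the real rules are
declared with the DEF_RULE/DEF_RULE_NC macros in C):

    # compile rules are numbered first, no-compile rules after, so the table
    # of compile functions needs no trailing null entries
    def compile_if_stmt(node): print("compile if", node)
    def compile_while_stmt(node): print("compile while", node)

    compile_funcs = [compile_if_stmt, compile_while_stmt]

    def handle_rule(rule_id, node):
        if rule_id < len(compile_funcs):
            compile_funcs[rule_id](node)   # rule with a compile function
        else:
            pass                           # no-compile rule: inspected by its parent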