This is not fully correct re: error handling, because we should check that
types are used consistently (only strs or only bytes), but it magically
makes a lot of functions support bytes.
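For reference, this is the consistency rule as CPython enforces it (an
illustrative example, not part of the patch):

    "hello".find("e")      # OK: both str
    b"hello".find(b"e")    # OK: both bytes
    "hello".find(b"e")     # TypeError: must be str, not bytes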
Two things are handled here. First, allow comparison of native subtypes of
tuple, e.g. namedtuple (TODO: should compare the type too; currently they
are compared duck-typedly, by content). Secondly, allow user subclasses of
tuple (and its subtypes) to be compared as well. The "magic" I did
previously in objtype.c covers only one argument (the lhs), so we're in
trouble when the lhs is a native type - there's no other option besides
handling the rhs in a special manner. Fortunately, this patch outlines an
approach with a fast path for native types.
This was hit when trying to make urlparse.py from stdlib run. Took
quite some time to debug.
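To illustrate the intended semantics (a sketch; the expected behaviour
matches CPython):

    from collections import namedtuple

    Point = namedtuple("Point", ("x", "y"))

    class MyTuple(tuple):
        pass

    print(Point(1, 2) == (1, 2))      # True: namedtuple vs native tuple, by content
    print(MyTuple((1, 2)) == (1, 2))  # True: user subclass on the lhs
    print((1, 2) == MyTuple((1, 2)))  # True: native tuple on the lhs, subclass on the rhs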
TODO: Reconcile the bound method creation process better; maybe callable is
too generic a type to bind at all?
"object" type in MicroPython currently doesn't implement any methods, and
hopefully, we'll try to stay like that for as long as possible. Even if we
have to add something eventually, look up from there might be handled in
adhoc manner, as last resort (that's not compliant with Python3 MRO, but
we're already non-compliant). Hence: 1) no need to spend type trying to
lookup anything in object; 2) no need to allocate subobject when explicitly
inheriting from object; 3) and having multiple bases inheriting from object
is not a case of incompatible multiple inheritance.
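For example, illustrating point 3, the following should not be rejected as
incompatible multiple inheritance:

    class A:            # implicitly inherits from object
        pass

    class B(object):    # explicitly inherits from object
        pass

    class C(A, B):      # both bases derive from object; still a valid class
        pass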
You can now do:
X = const(123)
Y = const(456 + X)
and the compiler will replace X and Y with their values.
See discussion in issue #266 and issue #573.
... and it turns out we have a not-so-bad mapping type after all - lookup
time is about the same as for a one-attribute instance. My namedtuple
implementation, on the other hand, degrades awfully.
So, it needs to be reworked. The first observation is that namedtuple
fields are accessed as attributes, so all names are interned at program
start. Then, the field array really should be stored as a qstr[], and
scanned with a quick 32/64-bit scan.
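A rough Python-level sketch of the idea (the real implementation would be a
C-level qstr[] array; the names here are hypothetical):

    # Field names stored as a flat tuple of interned strings (qstr[] in C).
    _fields = ("x", "y", "z")

    def _field_index(name):
        # Linear scan; with qstrs each comparison is just a small-integer
        # compare, so in C this becomes a quick 32/64-bit scan.
        for i, f in enumerate(_fields):
            if f == name:
                return i
        raise AttributeError(name)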
Need to have a policy as to how far we go in adding keyword support to
built-ins. It's nice to have, and gives better CPython compatibility,
but hurts the micro nature of uPy.
Addresses issue #577.
Motivation is optimizing handling of various constructs as well as
understanding which constructs are more efficient in MicroPython.
More info: http://forum.micropython.org/viewtopic.php?f=3&t=77
Results are wildly unexpected. For example, "optimization" of range
iteration into while loop makes it twice as slow. Generally, the more
bytecodes, the slower the code.
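For illustration, the kind of transformation referred to (a sketch, not the
actual benchmark code):

    # Plain range iteration:
    total = 0
    for i in range(1000):
        total += i

    # "Optimised" into an explicit while loop - which turned out to run
    # about twice as slow:
    total = 0
    i = 0
    while i < 1000:
        total += i
        i += 1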
In tests/pyb there is now a suite of tests that test the pyb module on the
pyboard. They include expected output files because we can't run
CPython on the pyboard to compare against.
The run-tests script has now been updated to allow pyboard tests to be run:
just pass the option --pyboard. This runs all basics, float and pyb
tests. Note that float/math-fun.py currently fails because not all math
functions are implemented in stmhal/.
Tests in basics (which should probably be renamed to core) should not
rely on float, or import any non-built-in files. This way these tests
can be run when those features are not available.
All tests in basics now pass on the pyboard using the stmhal port, except
for string-repr, which has some issues with character hex printing.
This is necessary to catch all cases where locals are referenced before
assignment. We still keep the _0, _1, _2 versions of LOAD_FAST to help
reduce the bytecode size in RAM.
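An example of the kind of case that must be caught (standard Python
semantics):

    def f():
        print(x)    # x is a local (assigned below), referenced before assignment
        x = 1

    f()    # must raise an error (NameError/UnboundLocalError), not read a stale value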
Addresses issue #457.
I'm pretty sure these are never reached, since NOT_EQUAL is always
converted into EQUAL in mp_binary_op. No one should call
type.binary_op directly; they should always go through mp_binary_op
(or mp_obj_is_equal).
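A Python-level sketch of what the conversion means in practice (in CPython
the same behaviour comes from the default __ne__):

    class OnlyEq:
        def __eq__(self, other):
            print("__eq__ called")
            return other == 42

    a = OnlyEq()
    print(a != 42)    # prints "__eq__ called", then False (inverted result of ==)
    print(a != 7)     # prints "__eq__ called", then True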
Only the calcsize() and unpack() functions are provided so far, for
little-endian byte order. Format strings don't support a repetition spec
(like "2b3i").
Unfortunately, dealing with all the various binary type sizes and
alignments will lead to quite bloated "binary" helper functions if we
optimize for speed. Need to think about whether using dynamic parametrized
algorithms makes more sense.
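A minimal usage sketch of the subset described above, assuming standard
struct semantics (little-endian, no repetition counts):

    import struct    # the "struct"/"ustruct" module in MicroPython

    fmt = "<hi"      # one int16 followed by one int32, little-endian
    print(struct.calcsize(fmt))                             # 6
    print(struct.unpack(fmt, b"\x01\x00\x02\x00\x00\x00"))  # (1, 2)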
These two are apparently the most concise and efficient way to convert
ints to/from bytes in Python. The alternatives are the struct and array
modules, but methods using them are more verbose in Python code and less
efficient in memory/cycles.
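Presumably the two methods are int.to_bytes() and int.from_bytes(); a quick
sketch of their use:

    n = 0x1234
    b = n.to_bytes(2, "little")       # b'\x34\x12'
    m = int.from_bytes(b, "little")   # 4660 == 0x1234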