The native emitter can now compile try/except blocks using nlr_push/nlr_pop.
It probably only works for one level of exception handling, and it doesn't
work on Thumb (only x64).
The native emitter can also handle some additional opcodes.
With this patch, 198 tests now pass using the "-X emit=native" option to
micropython.
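For example, a single level of try/except like the following (an illustrative
sketch; run with "micropython -X emit=native file.py") is now compiled by the
native emitter:
def safe_div(a, b):
    try:
        return a // b
    except ZeroDivisionError:
        return 0
print(safe_div(10, 2), safe_div(10, 0))  # prints: 5 0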
- rearrange/add definitions that were not there so it's easier to compare both
- use MICROPY_PY_SYS_PLATFORM in main.c since it's available anyway
- define EWOULDBLOCK, it is missing from mingw32
As stack checking is enabled by default, ports which don't call
stack_ctrl_init() are now broken (they report a RuntimeError on startup). Save
them the trouble and just init the stack control framework in the interpreter init.
Squashed commit of the following:
commit 99dc21b67a895dc10d3c846bc158d27c839cee48
Author: Chris Angelico <rosuav@gmail.com>
Date: Thu Jun 12 02:18:54 2014 +1000
Optimize as per TODO (thanks Damien!)
commit 5bf0153ecad8348443058d449d74504fc458fe51
Author: Chris Angelico <rosuav@gmail.com>
Date: Tue Jun 10 08:42:06 2014 +1000
Test a default (= UTF-8) encode and decode
commit c962057ac340832c4fde60896f656a3fe3ad78a9
Merge: e2c9782 195de32
Author: Chris Angelico <rosuav@gmail.com>
Date: Tue Jun 10 05:23:03 2014 +1000
Merge branch 'master' into unicode, resolving conflict on py/obj.h
commit e2c9782a65eb57f481d441d40161de427e1940ba
Author: Chris Angelico <rosuav@gmail.com>
Date: Tue Jun 10 05:05:57 2014 +1000
More whitespace fixups
commit 086a2a0f57afbc1f731697fd5d3a0cbbb80e5418
Author: Chris Angelico <rosuav@gmail.com>
Date: Tue Jun 10 05:04:20 2014 +1000
Properly implement string slicing
commit 0d339a143e2b6442366145e7f3d64aada293eaa0
Author: Chris Angelico <rosuav@gmail.com>
Date: Tue Jun 10 02:24:11 2014 +1000
Support slicing in str_index_to_ptr, and fix a bounds error
commit 24371c7267d360e77cf5eabc2e8ce9a73d2ee0da
Author: Chris Angelico <rosuav@gmail.com>
Date: Tue Jun 10 02:10:22 2014 +1000
Break out index-to-pointer calculation into a function
commit 616c24ac014c3ca56008428c506034dd1bfff7a8
Author: Chris Angelico <rosuav@gmail.com>
Date: Tue Jun 10 02:03:11 2014 +1000
Add tests of string slicing, which currently fail
commit a24d19f676fe8cc21dad512d91b826892e162a5b
Author: Chris Angelico <rosuav@gmail.com>
Date: Tue Jun 10 01:56:53 2014 +1000
Change string indexing to not precalculate the charlen, and add test for neg indexing
commit 0bcc7ab89eafb2ae53195e94c9bea42a4e886b64
Author: Chris Angelico <rosuav@gmail.com>
Date: Sun Jun 8 22:09:17 2014 +1000
Clean up constant qstr declarations now that charlen isn't needed
commit 5473e1a1dba2124b7b0c207f2964293cfbe80167
Author: Chris Angelico <rosuav@gmail.com>
Date: Sun Jun 8 07:18:42 2014 +1000
Remove the charlen field from strings, calculating it when required
commit 5c1658ec71aefbdc88c261ce2e57dc7670cdc6ef
Author: Chris Angelico <rosuav@gmail.com>
Date: Sun Jun 8 07:11:27 2014 +1000
Get rid of mp_obj_str_get_data_len() which was used in only one place
commit a019ba968b4e8daf7f3674f63c5cc400e304c509
Author: Chris Angelico <rosuav@gmail.com>
Date: Sun Jun 8 06:58:26 2014 +1000
Add a unichar_charlen() function to calculate length-in-characters from length-in-bytes
commit 44b0d5cff846ba487c526ed95be1b3d1cd3d762a
Author: Chris Angelico <rosuav@gmail.com>
Date: Sun Jun 8 06:32:44 2014 +1000
Use utf8_get/next_char in building up a string's repr
commit 30d1bad33f7af90f1971987c39864c8fcf3f5c21
Author: Chris Angelico <rosuav@gmail.com>
Date: Sun Jun 8 06:10:45 2014 +1000
Make utf8_get_char() and utf8_next_char() actually do what their names say
commit bc990dad9afb8ec112f5e7f7f79d5ab415da0e72
Author: Chris Angelico <rosuav@gmail.com>
Date: Sun Jun 8 02:10:59 2014 +1000
Revert "Add PEP 393-flags to strings and stub usage."
This reverts commit c239f509521d1a0f9563bf9c5de0c4fb9a6a33ba.
commit f9bebb28ad52467f2f2d7a752bb033296b6c2f9b
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 15:41:48 2014 +1000
Whitespace fixes
commit 279de0c8eb3cb186914799ccc5ee94ea97f56de4
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 15:28:35 2014 +1000
Formatting/layout improvements - introduce macros for UTF-8 byte detection, add braces. No functional changes.
commit f1911f53d56da809c97b07245f5728a419e8fb30
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 11:56:02 2014 +1000
Make chr() Unicode-aware
commit f51ad737b48ac04c161197a4012821d50885c4c7
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 11:44:07 2014 +1000
Make a string's repr Unicode-aware
commit 01bd68684611585d437982dccdf05b33cbedc630
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 11:33:43 2014 +1000
Expand the Unicode tests
commit 7bc91904f899f8012089fc14a06495680a51e590
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 11:27:30 2014 +1000
Record byte lengths for byte strings
commit bb132120717cf176dcfb26f87fa309378f76ab5f
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 11:25:06 2014 +1000
Make ord() Unicode-aware
commit 03f0cbe9051b62192be97b59f84f63f9216668bf
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 10:24:35 2014 +1000
Retain characters as UTF-8 encoded Unicode
commit e924659b85c001916a5ff7f4d1d8b3ebe2bf0c2f
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 08:37:27 2014 +1000
Add support for \u and \U escapes, but not \N (with explanatory comment)
commit 231031ac5f0346e4ffcf9c4abec2bd33f566232c
Author: Chris Angelico <rosuav@gmail.com>
Date: Sat Jun 7 05:09:35 2014 +1000
Add character length to qstr
commit 6df1b946fb17d8d5df3d91b21cde627c3d4556a8
Author: Chris Angelico <rosuav@gmail.com>
Date: Fri Jun 6 13:48:36 2014 +1000
Add test of UTF-8 encoded source file resulting in properly formed string
commit 16429b81a8483cf25865ed11afd81a7d9c253c26
Author: Chris Angelico <rosuav@gmail.com>
Date: Fri Jun 6 13:44:15 2014 +1000
Make len(s) return character length (even though creation's still buggy)
commit cd2cf6663cc47831dbc97819ad5c50ad33f939d3
Author: Chris Angelico <rosuav@gmail.com>
Date: Fri Jun 6 13:15:36 2014 +1000
HACK - When indexing a qstr, count its charlen. Stupidly inefficient but POC.
All tests pass now, though string creation is still buggy.
commit 47c234584d3358dfa6b4003d5e7264105d17b8f7
Author: Chris Angelico <rosuav@gmail.com>
Date: Fri Jun 6 13:15:32 2014 +1000
objstr: Record character length separately from byte length
CAUTION: Buggy, may crash stuff - qstr needs equivalent functionality too
commit b0f41c72af27d3b361027146025877b3d7e8785c
Author: Chris Angelico <rosuav@gmail.com>
Date: Fri Jun 6 05:37:36 2014 +1000
Beginnings of UTF-8 support - construct strings from that many UTF-8-encoded chars, and subscript bytes the same way
commit 89452be641674601e9bfce86dc71c17c3140a6cf
Author: Chris Angelico <rosuav@gmail.com>
Date: Fri Jun 6 05:28:47 2014 +1000
Update comments - now aiming for UTF-8 rather than PEP 393 strings
commit c239f509521d1a0f9563bf9c5de0c4fb9a6a33ba
Author: Chris Angelico <rosuav@gmail.com>
Date: Wed Jun 4 05:28:12 2014 +1000
Add PEP 393-flags to strings and stub usage.
The test suite all passes, but nothing has actually been changed.
Such a mechanism is important for stable Python operation, because Python
function calls are handled with the C stack. The idea is to sprinkle
STACK_CHECK() calls in places where there can be C recursion.
TODO: Add more STACK_CHECK()'s.
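For example (a sketch of the intended behaviour, assuming the port has the
stack check active), runaway Python recursion now raises RuntimeError instead
of smashing the C stack:
def recurse():
    return recurse()
try:
    recurse()
except RuntimeError:
    print("caught by the stack check before the C stack overflowed")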
Expected to be set on the command line, with the idea being that different
targets have different smartass ABIs which strive to put unneeded
sections into executables, etc., so let people have a flexible way to
strip that.
The option name is similar to the previously introduced CFLAGS_EXTRA &
LDFLAGS_EXTRA.
char can be signed, and using signed types is dangerous - it can
lead to negative offsets when doing table lookups. We apparently should just
ban plain char usage.
This will allow roughly the same behavior as Python 3 for non-ASCII strings;
for example, print("<phrase in non-Latin script>".split()) will print a list
of words, not a weird hex dump (as Python 2 behaves). (Of course, it will
print a list of words only if there are "words" in that phrase at all, separated
by ASCII-compatible whitespace; that surely won't apply to every human
language in existence.)
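For example (a sketch using Russian text as the non-Latin script):
print("привет мир".split())
# with UTF-8 strings this prints ['привет', 'мир'], a readable list of words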
The functionality we provide in the builtin io module is fairly minimal. Some
code, including the CPython stdlib, depends on more functionality. So, there's
a choice to either implement it in C, or rename the builtin module to _io and
implement the rest in Python. The second choice is pursued here. This setup
matches CPython too (_io is builtin, io is Python-level).
Benefits: won't crash baremetal targets, and will provide the Python source
location when a not-implemented feature is used (it will no longer provide the
C source location, but just grep for the error message).
There are special tweaks and paths to be considered. Just provide some
defaults, in case the values are undefined.
- py-version.sh does not need any bash-specific features.
- Use libdl only on Linux for now. FreeBSD provides dl*() calls from libc.
Some small fixes:
- Combine 'x' and 'X' cases in str format code.
- Remove trailing spaces from some lines.
- Make exception messages consistently begin with lower case (this
  required changing those in objarray and objtuple so the same
  constant string data could be used).
- Fix bug with exception message having %c instead of %%c.
Add keyword args to dict.update(), and the ability to take a dictionary as
argument.
The dict() class constructor can now use dict.update() directly.
This patch loses the fast path for dict(other_dict), but is that really
needed? And anyway, this idiom will now re-hash the dictionary, so it is
arguably more memory efficient.
Addresses issue #647.
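For example, both new forms work together (a brief sketch):
d = {"a": 1}
d.update({"b": 2}, c=3)     # dictionary argument plus keyword args
print(sorted(d.items()))    # [('a', 1), ('b', 2), ('c', 3)]
d2 = dict(d)                # dict() constructor now goes via update()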
This may seem a bit of a risky change, in that it may introduce crazy
bugs with respect to volatile variables in the VM loop. But, I think it
should be fine: code_state points to some external memory, so the
compiler should always read/write to that memory when accessing the
ip/sp variables (ie not put them in registers).
Anyway, it passes all tests and improves on all efficiency fronts: about
2-4% faster (64-bit unix), 16 bytes less stack space per call (64-bit
unix) and slightly less executable size (unix and stmhal).
The reason it's more efficient is that save_ip and save_sp were volatile
variables, so they were stored on the stack anyway (in memory, not registers).
Thus converting them to code_state->{ip, sp} doesn't cost an extra
memory dereference (except maybe to get code_state, but that can be put
in a register and then made more efficient for other uses of it).
Conflicts:
py/vm.c
Fixed stack underflow check. Use UINT_FMT/INT_FMT where necessary.
Specify the maximum VM-stack byte size as a multiple of the machine word size,
so that on 64-bit machines it has the same functionality as on 32-bit.
This improves stack usage in callers of mp_execute_bytecode2, and is a step
forward towards unifying the execution interface for functions and generators
(which is important because generators don't even support the full forms
of argument passing (keywords, etc.)).
Needed to pop the iterator object when breaking out of a for loop. Also need
to be careful to unwind the exception handler before popping the iterator.
Addresses issue #635.
This helps the compiler do its optimisation, makes it clear which
variables are local per opcode and which are global, and makes it consistent
when extra variables are needed in an opcode (in addition to the old obj1,
obj2 pair, for example).
Could also make unum local, but that's for another time.
This completes non-automatic interning of strings in the parser, so that
doc strings don't take up RAM. It complicates the parser and compiler,
and bloats stmhal by about 300 bytes. It's complicated because now
there are 2 kinds of parse-nodes that can be strings: interned leaves
and non-interned structs.
io.FileIO is binary I/O, and is actually optional. The default file type is
io.TextIOWrapper, which provides str results. CPython 3 explicitly describes
io.TextIOWrapper as buffered I/O, but we don't have buffering support yet
anyway.
Now the scheme is: for native types, we call the ->make_new() C-level method,
which should perform the actions of __new__ and __init__ (note that this is not
compliant, but is efficient); for user types, __new__ and __init__ are
called as expected.
Also, make sure we convert a scalar attribute value to a bound pair right in
the mp_obj_class_lookup() method, which avoids converting it again and again in
its callers.
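A CPython-flavoured sketch of the user-type side of this scheme (the class and
prints are purely illustrative; whether super().__new__ behaves exactly like
this in uPy is an assumption):
class Point:
    def __new__(cls, x, y):
        print("__new__ called")
        return super().__new__(cls)
    def __init__(self, x, y):
        print("__init__ called")
        self.x, self.y = x, y
p = Point(1, 2)   # user type: __new__ then __init__, as expected
n = int("42")     # native type: a single C-level make_new does both jobs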
__debug__ now resolves to True or False. Its value needs to be set by
mp_set_debug().
TODO: call mp_set_debug in unix/ port.
TODO: optimise away "if False:" statements in compiler.
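For example:
if __debug__:
    print("debug checks enabled")
else:
    print("debug checks disabled")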
Updated functions now do proper checking that n_kw==0, and are simpler
because they don't have to explicitly raise an exception. The downside is
that the error messages no longer include the function name, but that's
acceptable.
Saves on the order of 300 text bytes on x64 and ARM.
This is not fully correct re: error handling, because we should check that
types are used consistently (only str's or only bytes), but it magically
makes a lot of functions support bytes.
Two things are handled here: allow comparing native subtypes of tuple,
e.g. namedtuple (TODO: should compare the type too; currently compared
duck-typedly by content). Secondly, allow user subclasses of tuple
(and its subtypes) to be compared as well. The "magic" I did previously in
objtype.c covers only one argument (the lhs), so we're in trouble
when the lhs is a native type - there's no other option besides handling
the rhs in a special manner. Fortunately, this patch outlines an approach with
a fast path for native types.
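For instance (a sketch, assuming collections.namedtuple is available; note the
TODO above means the type itself is not yet compared, only the content):
from collections import namedtuple
Point = namedtuple("Point", ("x", "y"))
print(Point(1, 2) == (1, 2))        # True - native subtype vs plain tuple
class MyTuple(tuple):
    pass
print(MyTuple((1, 2)) == (1, 2))    # True - user subclass vs plain tuple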
This was hit when trying to make urlparse.py from the stdlib run. Took
quite some time to debug.
TODO: Reconsider the bound method creation process; maybe callable is
too generic a type to bind at all?
Parser shouldn't raise exceptions, so needs to check when memory
allocation fails. This patch does that for the initial set up of the
parser state.
Also, we now put the parser object on the stack. It's small enough to
go there instead of on the heap.
This partially addresses issue #558.
"object" type in MicroPython currently doesn't implement any methods, and
hopefully, we'll try to stay like that for as long as possible. Even if we
have to add something eventually, look up from there might be handled in
adhoc manner, as last resort (that's not compliant with Python3 MRO, but
we're already non-compliant). Hence: 1) no need to spend type trying to
lookup anything in object; 2) no need to allocate subobject when explicitly
inheriting from object; 3) and having multiple bases inheriting from object
is not a case of incompatible multiple inheritance.
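So, for example, all of the following now work without extra cost or a bogus
multiple-inheritance error (a small sketch):
class A:
    pass
class B(object):      # explicit inheritance from object: no subobject allocated
    pass
class C(A, B):        # both bases ultimately derive from object: still compatible
    pass
print(C())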
This patch simplifies the glue between native emitter and runtime,
and handles viper code like inline assembler: return values are
converted to Python objects.
Fixes issue #531.
You can now do:
X = const(123)
Y = const(456 + X)
and the compiler will replace X and Y with their values.
See discussion in issue #266 and issue #573.
In the case of an empty non-blocking read()/write(), both return None. read()
cannot return 0, as that means EOF, so it returns another value, and then
write() just follows. This is still pretty unexpected, and a typical
"if not len:" check would treat this as EOF. Well, non-blocking files
require special handling!
This also kind of makes it depend on POSIX, but well, anything else
should emulate POSIX anyway ;-).
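A sketch of the special handling this implies for callers ("stream" is assumed
to be a file object opened in non-blocking mode):
def read_chunk(stream):
    data = stream.read(64)
    if data is None:
        return None     # no data available right now - not EOF
    if not data:
        return b""      # an empty result really is EOF
    return data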
Need to have a policy as to how far we go adding keyword support to
built-ins. It's nice to have, and gets better CPython compatibility,
but hurts the micro nature of uPy.
Addresses issue #577.
There are 2 locations in parser, and 1 in compiler, where memory
allocation is not precise. In the parser it's the rule stack and result
stack, in the compiler it's the array for the identifiers in the current
scope. All other mallocs are exact (ie they don't allocate more than is
needed).
This patch adds tuning options (MP_ALLOC_*) to mpconfig.h for these 3
inexact allocations.
The inexact allocations in the parser should actually be close to
logarithmic: you need an exponentially larger script (absent pathological
cases) to use up more room on the rule and result stacks. As such, the
default allocation policy for these is now to start with a modest sized
stack, but grow only in small increments.
For the identifier arrays in the compiler, these now start out quite
small (4 entries, since most functions don't have that many ids), and
grow incrementally by 6 (since if you have more ids than 4, you probably
have quite a few more, but it wouldn't be exponentially more).
Partially addresses issue #560.
This will work if MICROPY_DEBUG_PRINTERS is defined, which is only for the
unix/windows ports. This makes it convenient to use uPy normally, but
easily get a bytecode dump on the spot if needed, without constant recompiles
back and forth.
TODO: Add more useful debug output, adjust verbosity level on which
specifically bytecode dump happens.
Applied blanket-wide to all .c and .h files. Some files originating from ST are
difficult to deal with (license-wise) so it was left out of those.
Also merged modpyb.h, modos.h, modstm.h and modtime.h in stmhal/.
By default mingw outputs 3 digits instead of the standard 2, so all float
tests using printf fail. Using setenv at the start of the program fixes this.
To accommodate calling platform-specific initialization, a
MICROPY_MAIN_INIT_FUNC macro is used, which is called in mp_init().
The original parsing would error out on any C declarations that are not typedefs
or extern variables. This limits what can go in mpconfig.h and mpconfigport.h,
as they are included in qstr.h. For instance, even a function declaration would be
rejected, and including system headers is a complete no-go.
That seems too limiting for a global config header, so makeqstrdata now
ignores everything that does not match a qstr definition.
alloca() is declared in alloca.h, which also happens to be included by stdlib.h.
On mingw however it resides in malloc.h only.
So if we include alloca.h directly, and add an alloca.h for mingw in its port
directory, we can get rid of the mingw-specific define to include malloc.h
and the other ports are happy as well.
The biggest part of this support is refactoring mp_obj_class_lookup() to return
a standard "bound member" pair (mp_obj_t[2]). Actual support for inherited
native methods is then 3 lines. Some inherited features may not be supported
yet (e.g. native class methods, native properties, etc.). There may
be opportunities for further optimization too.
This implements checking of base types, allocation and basic initialization,
and optimized support for special method lookups. Other features are not yet
supported.
Of course, keywords are turned into lexer tokens in the lexer, so they will
never need to be interned (unless you do something like x="def").
As it is now, the following on pyboard makes no new qstrs:
import pyb
pyb.info()
The new way uses slightly less ROM and RAM, should be slightly faster, and,
most importantly, allows catching the error "non-keyword arg following
keyword arg".
Addresses issue #466.
Also add some more debugging output to gc_dump_alloc_table().
Now that newly allocated heap is always zero'd, maybe we just make this
a policy for the uPy API to keep it simple (ie any new implementation of
memory allocation must zero all allocations). This follows the D
language philosophy.
Before this patch, a previously used memory block which had pointers in
it may still retain those pointers if the new user of that block does
not actually use the entire block. Eg, if I want 5 blocks worth of
heap, I actually get 8 (round up to nearest 4). Then I never use the
last 3, so they keep their old values, which may be pointers pointing to
the heap, hence preventing GC.
In rare (or maybe not that rare) cases, this leads to long, unintentional
"linked lists" within the GC'd heap, filling it up completely. It's
pretty rare, because you have to reuse exactly that memory which is part
of this "linked list", and reuse it in just the right way.
This should fix issue #522, and might have something to do with
issue #510.
3 emitter functions are needed only for emitcpy, and so we can #if them
out when compiling without emitcpy support.
Also remove unused SETUP_LOOP bytecode.
Closed over variables are now passed on the stack, instead of creating a
tuple and passing that. This way memory for the closed over variables
can be allocated within the closure object itself. See issue #510 for
background.
There were typos, various rounding errors trying to do concurrent counting
in bytes vs blocks, complex conditional paths, superfluous variables, etc.,
etc., all leading to obscure segfaults.
These are to assist in writing native C functions that take positional
and keyword arguments. mp_arg_check_num is just for checking that the
number of arguments is correct. mp_arg_parse_all is for parsing
positional and keyword arguments with default values.
When querying an object that supports the buffer protocol, that object
must now return a typecode (as per binary.[ch]). This does not have to
be honoured by the caller, but can be useful for determining element
size.
The test use case I used is print(time.time()) and print(time.time() - time.time()).
On Linux/glibc they now give the same output as CPython 3.3. Specifically,
time.time() gives non-exponential output with 7 decimal digits, and subtraction
gives exponential output e-06/e-07.
On stmhal, computed gotos make the binary about 1k bigger, but makes it
run faster, and we have the room, so why not. All tests pass on
pyboard using computed gotos.
This follows the pattern already used for objtuple, etc.: objfun.h's content
is not public - not each and every piece of code should have access to it.
It's not private either - with our architecture and implementation language
(C) it doesn't make sense to keep the implementation of each object strictly
private and maintain cumbersome accessors. It's "local" - intended to be
used by a small set of "friend" (in C++ terms) objects.
Things get tricky when using the nlr code to catch exceptions. Need to
ensure that the variables (stack layout) in the exception handler are
the same as in the bit protected by the exception handler.
Prior to this patch there were a few bugs. 1) The constant
mp_const_MemoryError_obj was being preloaded to a specific location on
the stack at the start of the function. But this location on the stack
was being overwritten in the opcode loop (since it didn't think that
variable would ever be referenced again), and so when an exception
occurred, the variable holding the address of MemoryError was corrupt.
2) The FOR_ITER opcode detection in the exception handler used sp, which
may or may not contain the right value coming out of the main opcode
loop.
With this patch there is a clear separation of variables used in the
opcode loop and in the exception handler (should fix issue (2) above).
Furthermore, nlr_raise is no longer used in the opcode loop. Instead,
it jumps directly into the exception handler. This tells the C compiler
more about the possible code flow, and means that it should have the
same stack layout for the exception handler. This should fix issue (1)
above. Indeed, the generated (ARM) assembler has been checked explicitly,
and with 'goto exception_handler', the problem with &MemoryError is
fixed.
This may now fix problems with rge-sm, and probably many other subtle
bugs yet to show themselves. Incidentally, rge-sm now passes on
pyboard (with a reduced range of integration)!
Main lesson: nlr is tricky. Don't use nlr_push unless you know what you
are doing! Luckily, it's not used in many places. Using nlr_raise/jump
is fine.
The autogenerated header files have been moved about, and an extra
include dir has been added, which means you can give a custom
BUILD=newbuilddir option to make, and everything "just works".
Also tidied up the way the different Makefiles build their include-
directory flags.
That was easy - just avoid erroring out on seeing a candidate dir for a namespace
package. That's far from complete though - namespace packages should
support importing portions of a package from different sys.path entries; here
we require the first matching entry to contain all of the namespace package's portions.
And yet, that's a way to put parts of the same Python package into multiple
installable packages - something we really need for *Micro*Python.
The logic appears to be that (at least the beginning of) sys.version is the
version of the reference Python language implemented, not the version of the
particular implementation.
Also, set the version to 3.4.0, based on @dpgeorge's preference.
Attempt to address issue #386. unique_code_id's have been removed and
replaced with a pointer to the "raw code" information. This pointer is
stored in the actual byte code (aligned, so the GC can trace it), so
that raw code (ie byte code, native code and inline assembler) is kept
only for as long as it is needed. In memory it's now like a tree: the
outer module's byte code points directly to its children's raw code. So
when the outer code gets freed, if there are no remaining functions that
need the raw code, then the children's code gets freed as well.
This is pretty much like CPython does it, except that CPython stores
indexes in the byte code rather than machine pointers. These indices
index the per-function constant table in order to find the relevant
code.
Improved the Thumb assembler back end. Added many more Thumb
instructions to the inline assembler. Improved parsing of assembler
instructions and arguments. Assembler functions can now be passed the
address of any object that supports the buffer protocol (to get the
address of the buffer). Added an example of how to sum numbers from
an array in assembler.
This is necessary to catch all cases where locals are referenced before
assignment. We still keep the _0, _1, _2 versions of LOAD_FAST to help
reduce the byte code size in RAM.
Addresses issue #457.
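The classic case this catches (a sketch; uPy now reports the error, much as
CPython does):
def f():
    print(x)   # x is local because of the assignment below,
    x = 1      # so referencing it first must raise an error
try:
    f()
except NameError:
    print("local variable referenced before assignment")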
I'm pretty sure these are never reached, since NOT_EQUAL is always
converted into EQUAL in mp_binary_op. No one should call
type.binary_op directly; they should always go through mp_binary_op
(or mp_obj_is_equal).
Per https://docs.python.org/3.3/reference/import.html , this is the way to
tell a module from a package: "Specifically, any module that contains a __path__
attribute is considered a package." And it will for sure be needed to
implement relative imports.
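A small sketch of that test (sys is just an example of a plain module):
import sys
def is_package(mod):
    # per the import reference, packages carry a __path__ attribute
    return hasattr(mod, "__path__")
print(is_package(sys))   # False - a plain module, not a package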
This simplifies the compiler a little, since now it can do 1 pass over
a function declaration, to determine default arguments. I would have
done this originally, but CPython 3.3 somehow had the default keyword
args compiled before the default position args (even though they appear
in the other order in the text of the script), and I thought it was
important to have the same order of execution when evaluating default
arguments. CPython 3.4 has changed the order to the more obvious one,
so we can also change.
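The evaluation order in question is visible at definition time (a sketch of the
CPython 3.4 ordering; the keyword-only syntax is used purely for illustration):
def make():
    def f(a=print("positional default"), *, b=print("keyword-only default")):
        pass
    return f
make()   # prints "positional default" first, then "keyword-only default"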
It has (again) a fast path for ints, and a simplified "slow" path for
everything else.
Also simplify the way str indexing is done (now matches tuple and list).
A specific target can define either MP_ENDIANNESS_LITTLE or MP_ENDIANNESS_BIG
to 1. Default is MP_ENDIANNESS_LITTLE.
TODO: Autodetect based on compiler predefined macros?
Working towards trying to support compile-time constants (see discussion
in issue #227), this patch allows the compiler to look inside arbitrary
uPy objects at compile time. The objects to search are given by the
macro MICROPY_EXTRA_CONSTANTS (so they must be constant/ROM objects),
and the constant folding occurs on forms base.attr (both base and attr
must be id's).
It works, but it breaks strict CPython compatibility, since the lookup
will succeed even without importing the namespace.
Previously, a failed malloc/realloc would throw an exception, which was
not caught. I think it's better to keep the parser free from NLR
(exception throwing), hence this patch.
Only the calcsize() and unpack() functions are provided so far, for little-endian
byte order. Format strings don't support a repetition spec (like "2b3i").
Unfortunately, dealing with all the various binary type sizes and alignments
will lead to quite bloated "binary" helper functions - if optimizing for
speed. Need to think about whether using dynamic parametrized algos makes more sense.
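For example, with the little-endian subset (a sketch; whether the "<" prefix is
required at this stage is an assumption):
import struct
print(struct.calcsize("<bi"))                          # 5 (no padding)
print(struct.unpack("<bi", b"\x01\x02\x00\x00\x00"))   # (1, 2)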
With the implementation of proper string formatting, code to print a
small int was delegated to mpz_as_str_inpl (after first converting the
small int to an mpz using stack memory). But mpz_as_str_inpl allocates
heap memory to do the conversion, so small ints needed heap memory just
to be printed.
This fix has a separate function to print small ints, which does not
allocate heap, and allocates less stack.
String formatting, printf and pfenv are now large beasts, with some
semi-duplicated code.
These two are apparently the most concise and efficient way to convert an
int to/from bytes in Python. The alternatives are the struct and array modules,
but methods using them are more verbose in Python code and less efficient
in memory/cycles.
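For example (a sketch following the CPython signatures):
n = 0x1234
b = n.to_bytes(2, "little")         # b'\x34\x12'
print(int.from_bytes(b, "little"))  # 4660, i.e. 0x1234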
Full CPython compatibility with this requires actually parsing the
input collected so far, and, if it fails to parse due to a lack of tokens,
continuing to collect input. It's not worth doing it this way. Not
having compatibility at this level does not hurt the goals of Micro
Python.
stmhal relies on pfenv_* to implement its printf. Thus, it needs a
pfenv_print_int which prints a proper 32-bit integer. With the latest
change to pfenv, this function became one that took an mp_obj_t and
extracted the integer value from that object.
To fix this temporarily, pfenv_print_int has been renamed to
pfenv_print_mp_int (to indicate it takes an mp_obj_t for the int), and
pfenv_print_int has been added (which takes a normal C int). Currently,
pfenv_print_int proxies to pfenv_print_mp_int, but this means it loses
the MSB. Need to find a way to fix this, but the only way I can think
of will duplicate lots of code.