This way, the native glue code is only compiled if native code is
enabled (which makes complete sense; thanks to Paul Sokolovsky for
the idea).
Should fix issue #834.
The heap allocation is now exactly as it was before the "faster gc
alloc" patch, but it's still nearly as fast. It is fixed by being
careful to always update the "last free block" pointer whenever the heap
changes (eg free or realloc).
Tested on all tests by enabling EXTENSIVE_HEAP_PROFILING in py/gc.c:
old and new allocator have exactly the same behaviour, just the new one
is much faster.
Recent speed up of GC allocation made the GC have a fragmented heap.
This patch restores "original fragmentation behaviour" whilst still
retaining relatively fast allocation. This patch works because there is
always going to be a single block allocated now and then, which advances
the gc_last_free_atb_index pointer often enough so that the whole heap
doesn't need scanning.
Should address issue #836.
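As a rough illustration of the last-free-pointer idea described in the two
patches above (a Python sketch only; the real allocator is the
allocation-table bitmap in py/gc.c):

```python
# Illustrative sketch only; names and structure are simplified.
class BlockAllocator:
    def __init__(self, num_blocks):
        self.used = [False] * num_blocks
        self.last_free = 0  # analogous to gc_last_free_atb_index

    def alloc(self):
        # Start scanning from the cached index instead of from block 0.
        for i in range(self.last_free, len(self.used)):
            if not self.used[i]:
                self.used[i] = True
                self.last_free = i + 1
                return i
        raise MemoryError("out of blocks")

    def free(self, i):
        self.used[i] = False
        # Keep the cache honest: a block freed earlier in the heap must
        # become visible to the next allocation.
        if i < self.last_free:
            self.last_free = i
```

The point is that alloc never rescans blocks known to be in use, while
free (and, in the real code, realloc) keeps the cached index valid.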
With a file with 1 line (and an error on that line), used to show the
line as number 0. Now shows it correctly as line number 1.
But, when line numbers are disabled, it now prints line number 1 for any
line that has an error (instead of 0 as previously). This might end up
being confusing, but printing something special when there are no line
numbers would require extra RAM and/or hacky logic.
These functions are generally 1 machine instruction, and are used in
critical code, so makes sense to have them inline.
Also leave these functions uninverted (ie 0 means enable, 1 means
disable) and provide macro constants if you really need to distinguish
the states. This makes for smaller code as well (combined with
inlining).
Applied to teensy port as well.
Because (for Thumb) a function pointer has the LSB set, pointers to
dynamic functions in RAM (eg native, viper or asm functions) were not
being traced by the GC. This patch is a comprehensive fix for this.
Addresses issue #820.
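A rough sketch of the problem, in Python purely for exposition (the
addresses and names below are made up): on Thumb, function pointers have
their low bit set, so a naive in-heap range/alignment check rejects
pointers to RAM-resident functions unless that bit is masked off first.

```python
# Purely illustrative; the real fix is in the C garbage collector.
HEAP_START = 0x2000_0000
HEAP_END   = 0x2002_0000

def looks_like_heap_pointer(ptr):
    # Thumb function pointers have bit 0 set; clear it before the range
    # check, otherwise pointers to RAM-resident (native/viper/asm)
    # functions are never traced.
    addr = ptr & ~1
    return HEAP_START <= addr < HEAP_END
```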
This simple patch gives a very significant speed up for memory allocation
with the GC.
Eg, on PYBv1.0:
tests/basics/dict_del.py: 3.55 seconds -> 1.19 seconds
tests/misc/rge_sm.py: 15.3 seconds -> 2.48 seconds
Multiplication of a tuple, list, str or bytes now yields an empty
sequence (instead of crashing). Addresses issue #799.
Also added the ability to multiply bytes on the LHS by an integer.
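As a quick illustration of the expected (CPython-compatible) behaviour:

```python
>>> (1, 2) * 0     # zero or negative multiplier gives an empty sequence
()
>>> [1, 2] * -3
[]
>>> b"ab" * 3      # bytes on the LHS multiplied by an integer
b'ababab'
```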
Can now index ranges with integers and slices, and reverse ranges
(although reversing is not very efficient).
Not sure how useful this stuff is, but gets us closer to having all of
Python's builtins.
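For example (matching CPython semantics):

```python
>>> r = range(1, 10, 2)
>>> r[2]              # integer indexing
5
>>> r[1:4]            # slicing yields another range
range(3, 9, 2)
>>> list(reversed(r)) # reversing iterates, so it's not very efficient
[9, 7, 5, 3, 1]
```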
reversed function now implemented, and works for tuple, list, str, bytes
and user objects with __len__ and __getitem__.
Renamed mp_builtin_len to mp_obj_len to make it publicly available (eg
for reversed).
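A short example of reversed() falling back to the __len__/__getitem__
protocol for a user object:

```python
class Squares:
    def __len__(self):
        return 4
    def __getitem__(self, i):
        return i * i

print(list(reversed(Squares())))   # [9, 4, 1, 0]
print(list(reversed((1, 2, 3))))   # tuples, lists, str, bytes also work
```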
This happens for example for zero-size arrays. As the .get_buffer() method
now has an explicit return value, that is enough to distinguish success
from failure when getting a buffer.
This was a nasty bug to track down. It only had consequences when the
heap size was just the right size to expose the rounding error in the
calculation of the finaliser table size. And, a script had to allocate
a small (1 or 2 cell) object at the very end of the heap. And, this
object must not have a finaliser. And, the initial state of the heap
must have been all bits set to 1. All these conspire on the pyboard,
but only if you run the script fresh (so unused memory is all 1's),
and if your script allocates a lot of small objects (eg 2-char strings
that are not interned).
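As a generic illustration of the kind of rounding that matters for a
per-block table (a hypothetical sketch, not the exact expression from
py/gc.c):

```python
BLOCKS_PER_BYTE = 8

def table_bytes(num_blocks):
    # Floor division (num_blocks // BLOCKS_PER_BYTE) silently drops the
    # bits for up to 7 trailing blocks; ceiling division covers them all,
    # so objects at the very end of the heap get valid (cleared) bits.
    return (num_blocks + BLOCKS_PER_BYTE - 1) // BLOCKS_PER_BYTE
```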
qstr_init is always called exactly before mp_init, so makes sense to
just have mp_init call it. Similarly with
mp_init_emergency_exception_buf. Doing this makes the ports simpler and
less error prone (ie they can no longer forget to call these).
Reduces by about a factor of 10 on average the amount of RAM needed to
store the line-number to bytecode map in the bytecode prelude.
Using CPython3.4's stdlib for statistics: previously, an average of
13 bytes were used per (bytecode offset, line-number offset) pair, and
now with this improvement, that's down to 1.3 bytes on average.
Large RAM usage before was due to some very large steps in line numbers,
both for the first line of a function located far down in the file, and
for functions that have big comments and/or big strings in them (both
cases were significant).
Although the savings are large on average for the CPython stdlib, it
won't have such a big effect for small scripts used in embedded
programming.
Addresses issue #648.
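A hypothetical sketch of the general technique, i.e. a variable-length
encoding of (bytecode offset, line number) deltas; the field widths and
escape byte below are made up and are not MicroPython's actual prelude
format:

```python
def encode_line_info(pairs):
    """pairs: non-negative (bytecode_delta, line_delta) tuples; large
    deltas are assumed to fit in 16 bits for this sketch."""
    out = bytearray()
    for bc_delta, line_delta in pairs:
        if bc_delta < 16 and line_delta < 8:
            # Common case: one byte holds 4 bits of bytecode delta and
            # 3 bits of line delta (high bit clear).
            out.append((bc_delta << 3) | line_delta)
        else:
            # Rare case (big comment/string, or a function far down the
            # file): escape byte, then the two full 16-bit values.
            out.append(0xFF)
            out += bc_delta.to_bytes(2, "little")
            out += line_delta.to_bytes(2, "little")
    return bytes(out)
```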
This removes mpz_as_int, since that was a terrible function (it
implemented saturating conversion).
Use mpz_as_int_checked and mpz_as_uint_checked. These now work
correctly (they previously had wrong overflow checking, eg
print(chr(10000000000000)) on 32-bit machine would incorrectly convert
this large number to a small int).
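Conceptually, "checked" means the conversion reports whether the value
fits instead of saturating or wrapping; a minimal Python model (the range
below assumes a 32-bit build):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1   # machine int range, 32-bit build

def as_int_checked(value):
    """Return (ok, int_value); ok is False when value doesn't fit."""
    if INT_MIN <= value <= INT_MAX:
        return True, value
    return False, 0

ok, v = as_int_checked(10000000000000)
# ok is False, so callers such as chr() can raise a proper error instead
# of silently using a truncated or saturated value.
```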
Many OSes/CPUs tend to put "user" data into the lower half of the address
space. Take advantage of that and remap such addresses into the full small
int range (including the negative part).
If an address is from the upper half, a long int will be used. Previously,
a small int was returned only for the lower quarter and the upper quarter
of the address space; for the 2 middle quarters, a long int was used,
which is a clearly worse scheme than the above.
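One way to picture the remapping (a conceptual Python model only; it
assumes a 32-bit address space with 31-bit small ints and is not
necessarily the exact code path):

```python
# Conceptual model only; widths assume a 32-bit build, 31-bit small ints.
def ptr_to_int(addr):
    if addr < 2**31:
        # Lower half of the address space: reinterpret the low 31 bits
        # as a signed value so the whole small-int range (including the
        # negative part) is usable.
        if addr >= 2**30:
            addr -= 2**31          # maps into the negative small-int range
        return ("small int", addr)
    # Upper half: fall back to a long int.
    return ("long int", addr)
```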
The user code should call micropython.alloc_emergency_exception_buf(size)
where size is the size of the buffer used to print the argument
passed to the exception.
With the test code from #732, and a call to
micropython.alloc_emergency_exception_buf(100) the following error is
now printed:
```python
>>> import heartbeat_irq
Uncaught exception in Timer(4) interrupt handler
Traceback (most recent call last):
File "0://heartbeat_irq.py", line 14, in heartbeat_cb
NameError: name 'led' is not defined
```
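Typically the buffer is allocated once at start-up:

```python
import micropython

# Reserve 100 bytes so exceptions raised inside interrupt handlers can
# still report their argument (normal heap allocation is not allowed there).
micropython.alloc_emergency_exception_buf(100)
```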
With unicode enabled, this patch allows reading a fixed number of
characters from text-mode streams; eg file.read(5) will read 5 unicode
chars, which can be made of more than 5 bytes.
For an ASCII stream (ie no chars > 127) it only needs to do 1 read. If
there are lots of non-ASCII chars in a stream, then it needs multiple
reads of the underlying object.
Adds a new test for this case. Enables unicode support by default on
unix and stmhal ports.
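For example (shown with standard Python text-mode I/O; "data.txt" is just
a scratch file for the demo):

```python
with open("data.txt", "w", encoding="utf-8") as f:
    f.write("héllo wörld")

with open("data.txt", encoding="utf-8") as f:
    s = f.read(5)             # 5 characters, not 5 bytes

print(s)                       # héllo
print(len(s.encode("utf-8")))  # 6 -- those 5 chars occupy 6 bytes in UTF-8
```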
dummy_data field is accessed as uint value (e.g.
in emit_write_bytecode_byte_ptr), but is not aligned as such, which causes
bus errors or incorrect behavior on any arch requiring strictly aligned
data (ARM pre-v7, MIPS, etc, etc).
Conflicts:
stmhal/pin_named_pins.c
stmhal/readline.c
Renamed HAL_H to MICROPY_HAL_H. Made stmhal/mphal.h, which is intended to
define the generic Micro Python HAL, and which in stmhal sits above the ST
HAL.
Native emitter can now compile try/except blocks using nlr_push/nlr_pop.
It probably only works for 1 level of exception handling. It doesn't
work on Thumb (only x64).
Native emitter can also handle some additional op codes.
With this patch, 198 tests now pass using "-X emit=native" option to
micropython.
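For example, a function with a single try/except compiled by the native
emitter (a sketch, assuming the build enables the @micropython.native
decorator; the alternative is running the whole script with
-X emit=native):

```python
import micropython

@micropython.native
def safe_div(a, b):
    try:
        return a // b
    except ZeroDivisionError:   # one level of exception handling
        return 0

print(safe_div(10, 3))   # 3
print(safe_div(1, 0))    # 0
```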
- rearrange/add definitions that were not there so it's easier to compare both
- use MICROPY_PY_SYS_PLATFORM in main.c since it's available anyway
- define EWOULDBLOCK, it is missing from mingw32