Commit Graph

21 Commits

Author SHA1 Message Date
Dan Halbert
0d4bc8c163 initial v1.19.1 merge; not compiled yet 2023-08-01 13:50:05 -04:00
Jeff Epler
87d3740c64 Merge tag 'v1.16' 2021-06-18 10:54:19 -05:00
Bob Abeles
7ceccad4e2 py/nlrx64: Correct the detection of Darwin ABI.
__APPLE__ tests for an Apple OS and __MACH__ tests that it is based on CMU
Mach.  Using both tests ensures that just Darwin is recognized.
2021-05-18 11:52:00 +10:00
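
A minimal sketch of the kind of preprocessor check this commit describes (the macro name NLR_DARWIN_ABI is illustrative, not taken from the source):

    #if defined(__APPLE__) && defined(__MACH__)
    #define NLR_DARWIN_ABI (1)  // Apple OS *and* Mach-based kernel => Darwin
    #else
    #define NLR_DARWIN_ABI (0)  // anything else, including a hypothetical non-Mach Apple system
    #endif
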
Scott Shawcroft
966d25c6a5 Merge MicroPython v1.15 into CircuitPython 2021-05-12 17:51:42 -07:00
microDev
a52eb88031 run code formatting script 2021-03-15 19:27:36 +05:30
Yonatan Goldschmidt
e196cb762e py/nlrx64: Fix typo in comment. 2021-03-11 12:51:10 +11:00
Diego Elio Pettenò
34b4993d63 Add license to some obvious files. 2020-07-06 19:16:25 +01:00
Damien George
69661f3343 all: Reformat C and Python source code with tools/codeformat.py.
This is run with uncrustify 0.70.1, and black 19.10b0.
2020-02-28 10:33:03 +11:00
Damien George
1cf0ce094a py/nlr: Use MP_UNREACHABLE at the end of arch-specific nlr_jump funcs.
Recent versions of gcc perform optimisations which can lead to the
following code from the MP_NLR_JUMP_HEAD macro being omitted:

    top->ret_val = val; \
    MP_NLR_RESTORE_PYSTACK(top); \
    *_top_ptr = top->prev; \

This is noticeable (at least) in the unix coverage on x86-64 built with gcc
9.1.0.  This is because the nlr_jump function is marked as no-return, so
gcc deduces that the above code has no effect.

Adding MP_UNREACHABLE tells the compiler that the asm code may branch
elsewhere, and so it cannot optimise away the code.
2019-12-12 11:01:40 -08:00
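
A hedged sketch of the pattern this commit describes (the asm body is abridged and illustrative; MP_NLR_JUMP_HEAD and MP_UNREACHABLE are the names used in the message above):

    NORETURN void nlr_jump(void *val) {
        MP_NLR_JUMP_HEAD(val, top)      // stores ret_val, restores pystack, pops the NLR list
        __asm volatile (
            "movq  (%0), %%rsp   \n"    // illustrative: restore state saved in the nlr_buf
            "ret                 \n"    // branch to the saved return address; never falls through
            :
            : "r" (top)
            :
        );
        MP_UNREACHABLE  // tells gcc the asm branches elsewhere, so the stores in
                        // MP_NLR_JUMP_HEAD cannot be optimised away
    }
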
Damien George
11ecdf2ec6 py/nlr: Use MP_UNREACHABLE at the end of arch-specific nlr_jump funcs.
Recent versions of gcc perform optimisations which can lead to the
following code from the MP_NLR_JUMP_HEAD macro being omitted:

    top->ret_val = val; \
    MP_NLR_RESTORE_PYSTACK(top); \
    *_top_ptr = top->prev; \

This is noticeable (at least) in the unix coverage on x86-64 built with gcc
9.1.0.  This is because the nlr_jump function is marked as no-return, so
gcc deduces that the above code has no effect.

Adding MP_UNREACHABLE tells the compiler that the asm code may branch
elsewhere, and so it cannot optimise away the code.
2019-08-19 16:14:57 +10:00
Dan Halbert
0d27f4d9a6 continued WIP: almost compiling 2018-07-12 14:13:51 -04:00
stijn
b184b6ae53 py/nlr: Fix nlr functions for 64bit ports built with gcc on Windows
The number of registers used should be 10, not 12, to match the assembly
code in nlrx64.c. With this change the 64bit mingw builds don't need to
use the setjmp implementation, and this fixes miscellaneous crashes and
assertion failures as reported in #1751 for instance.

To avoid mistakes in the future where something gcc-related for Windows
only gets fixed for one particular compiler/environment combination,
make use of a MICROPY_NLR_OS_WINDOWS macro.

To make sure everything nlr-related is now ok when built with gcc this
has been verified with:
- unix port built with gcc on Cygwin (i686-pc-cygwin-gcc and
  x86_64-pc-cygwin-gcc, version 6.4.0)
- windows port built with mingw-w64's gcc from Cygwin
  (i686-w64-mingw32-gcc and x86_64-w64-mingw32-gcc, version 6.4.0)
  and MSYS2 (like the ones on Cygwin but version 7.2.0)
2017-12-29 22:24:46 +11:00
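
A sketch of how such an OS-detection macro might look and be used (illustrative; the exact conditions and register counts live in nlr.h/nlrx64.c):

    #if defined(_WIN32) || defined(__CYGWIN__)
    #define MICROPY_NLR_OS_WINDOWS (1)
    #else
    #define MICROPY_NLR_OS_WINDOWS (0)
    #endif

    #if MICROPY_NLR_OS_WINDOWS
    #define MICROPY_NLR_NUM_REGS (10)   // Windows x64 ABI: more callee-saved registers
    #else
    #define MICROPY_NLR_NUM_REGS (8)    // System V x64 ABI
    #endif
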
Damien George
b25f92160b py/nlr: Factor out common NLR code to macro and generic funcs in nlr.c.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers.  This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c, along with a helper macro in nlr.h.  This eliminates duplicated
code.
2017-12-28 16:46:30 +11:00
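
A minimal sketch of the kind of generic list-management code that such factoring moves into nlr.c (simplified; the real functions also deal with the pystack and thread state):

    unsigned int nlr_push_tail(nlr_buf_t *nlr) {
        nlr_buf_t **top = &MP_STATE_THREAD(nlr_top);
        nlr->prev = *top;       // link the new buffer onto the head of the list
        *top = nlr;
        return 0;               // 0 = normal return from nlr_push
    }

    void nlr_pop(void) {
        nlr_buf_t **top = &MP_STATE_THREAD(nlr_top);
        *top = (*top)->prev;    // unlink the most recently pushed buffer
    }
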
Damien George
5bf8e85fc8 py/nlr: Clean up selection and config of NLR implementation.
If MICROPY_NLR_SETJMP is not enabled and the machine is auto-detected then
nlr.h now defines some convenience macros for the individual NLR
implementations to use (eg MICROPY_NLR_THUMB).  This keeps nlr.h and the
implementation in sync, and also makes the nlr_buf_t struct easier to read.
2017-12-28 16:18:39 +11:00
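
An illustrative sketch of the kind of auto-detection this describes (the exact conditions in nlr.h may differ):

    #if !defined(MICROPY_NLR_SETJMP) || !MICROPY_NLR_SETJMP
        #if defined(__i386__)
            #define MICROPY_NLR_X86 (1)
        #elif defined(__x86_64__)
            #define MICROPY_NLR_X64 (1)
        #elif defined(__thumb2__) || defined(__thumb__) || defined(__arm__)
            #define MICROPY_NLR_THUMB (1)
        #elif defined(__xtensa__)
            #define MICROPY_NLR_XTENSA (1)
        #else
            #define MICROPY_NLR_SETJMP (1)  // fall back to the setjmp implementation
        #endif
    #endif
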
Paul Sokolovsky
096e967aad Revert "py/nlr: Factor out common NLR code to generic functions."
This reverts commit 6a3a742a6c.

The above commit has a number of faults, starting from the motivation down
to the actual implementation.

1. Faulty implementation.

The original code contained functions like:

NORETURN void nlr_jump(void *val) {
    nlr_buf_t **top_ptr = &MP_STATE_THREAD(nlr_top);
    nlr_buf_t *top = *top_ptr;
...
    __asm volatile (
    "mov    %0, %%edx           \n" // %edx points to nlr_buf
    "mov    28(%%edx), %%esi    \n" // load saved %esi
    "mov    24(%%edx), %%edi    \n" // load saved %edi
    "mov    20(%%edx), %%ebx    \n" // load saved %ebx
    "mov    16(%%edx), %%esp    \n" // load saved %esp
    "mov    12(%%edx), %%ebp    \n" // load saved %ebp
    "mov    8(%%edx), %%eax     \n" // load saved %eip
    "mov    %%eax, (%%esp)      \n" // store saved %eip to stack
    "xor    %%eax, %%eax        \n" // clear return register
    "inc    %%al                \n" // increase to make 1, non-local return
    "ret                        \n" // return
    :                               // output operands
    : "r"(top)                      // input operands
    :                               // clobbered registers
    );
}

Which clearly stated that the C-level variable should be a parameter of the
assembly, which then moved it into the correct register.

Whereas now it's:

NORETURN void nlr_jump_tail(nlr_buf_t *top) {
    (void)top;

    __asm volatile (
    "mov    28(%edx), %esi      \n" // load saved %esi
    "mov    24(%edx), %edi      \n" // load saved %edi
    "mov    20(%edx), %ebx      \n" // load saved %ebx
    "mov    16(%edx), %esp      \n" // load saved %esp
    "mov    12(%edx), %ebp      \n" // load saved %ebp
    "mov    8(%edx), %eax       \n" // load saved %eip
    "mov    %eax, (%esp)        \n" // store saved %eip to stack
    "xor    %eax, %eax          \n" // clear return register
    "inc    %al                 \n" // increase to make 1, non-local return
    "ret                        \n" // return
    );

    for (;;); // needed to silence compiler warning
}

Which just tries to perform operations on a completely random register (edx
in this case). The outcome is as expected: barring the pure random luck of
the compiler putting the right value in that register, there's a crash.

2. Non-critical assessment.

The original commit message says "There is a small overhead introduced
(typically 1 machine instruction)". That machine instruction is a call if
the compiler doesn't perform tail-call optimisation (which happens
regularly), and it's 1 instruction only with the broken code shown above;
fixing it requires adding more. With the inefficiencies already present in
the NLR code, the overhead becomes "considerable" (several times more than
1%), not "small".

The commit message also says "This eliminates duplicated code.". An
obvious way to eliminate duplication would be to factor out common code
to macros, not introduce overhead and breakage like above.

3. Faulty motivation.

All this started with a report of warnings/errors happening for a niche
compiler. It could have been solved in one of the direct ways: a) fixing it
just for the affected compiler(s); b) rewriting it in proper assembly (like
it was before, BTW); c) by not doing anything at all, since
MICROPY_NLR_SETJMP exists exactly to address minor-impact cases like that
(where a) or b) are not applicable). Instead, a backwards "solution" was put
forward, leading to all the issues above.

The best action thus appears to be revert and rework, not trying to work
around what went haywire in the first place.
2017-12-26 19:27:58 +02:00
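
A small, self-contained illustration of the inline-asm point made in (1) above: C values must reach the asm through operands (gcc substitutes %0, %1 with whatever registers it chose), rather than the asm assuming a particular register such as %edx already holds them. Names here are generic, not from the MicroPython sources.

    #include <stdio.h>

    struct buf { unsigned long tag; };

    static unsigned long read_tag(struct buf *top) {
        unsigned long out;
        __asm__ volatile (
            "mov (%1), %0   \n"     // load top->tag via the register gcc assigned to 'top'
            : "=r" (out)            // output operand
            : "r" (top)             // input operand: the C-level pointer
            : "memory"
        );
        return out;
    }

    int main(void) {
        struct buf b = { 42 };
        printf("%lu\n", read_tag(&b));  // prints 42
        return 0;
    }
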
Damien George
6a3a742a6c py/nlr: Factor out common NLR code to generic functions.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers.  This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c.  This eliminates duplicated code.

The factoring also allows the machine-specific NLR code to be pure
assembler code, thus allowing nlrthumb.c to use naked function attributes
in the correct way (naked functions can only contain basic inline assembler
code).

There is a small overhead introduced (typically 1 machine instruction)
because now the generic nlr_jump() must call nlr_jump_tail() rather than
them being one combined function.
2017-12-20 15:42:06 +11:00
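
An illustrative sketch of the gcc naked-function pattern mentioned above (not the actual nlrthumb.c; the offset and instructions are simplified): the compiler-generated prologue/epilogue is suppressed, so the body must be nothing but basic inline asm.

    __attribute__((naked)) unsigned int nlr_push(nlr_buf_t *nlr) {
        __asm volatile (
            "str    lr, [r0, #8]    \n"     // save lr into the nlr_buf (offset illustrative)
            "b      nlr_push_tail   \n"     // tail-branch to the generic C helper in nlr.c
        );
    }
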
Damien George
02d830c035 py: Introduce a Python stack for scoped allocation.
This patch introduces the MICROPY_ENABLE_PYSTACK option (disabled by
default) which enables a "Python stack" that allows memory to be allocated
and freed in a scoped, Last-In-First-Out (LIFO) way, similar to alloca().

A new memory allocation API is introduced along with this Py-stack.  It
includes both "local" and "nonlocal" LIFO allocation.  Local allocation is
intended to be equivalent to using alloca(), whereby the same function must
free the memory.  Nonlocal allocation is where another function may free
the memory, so long as it's still LIFO.

Follow-up patches will convert all uses of alloca() and VLA to the new
scoped allocation API.  The old behaviour (using alloca()) will still be
available, but when MICROPY_ENABLE_PYSTACK is enabled then alloca() is no
longer required or used.

The benefits of enabling this option are (or will be once subsequent
patches are made to convert alloca()/VLA):
- Toolchains without alloca() can use this feature to obtain correct and
  efficient scoped memory allocation (compared to using the heap instead
  of alloca(), which is slower).
- Even if alloca() is available, enabling the Py-stack gives slightly more
  efficient use of stack space when calling nested Python functions, due to
  the way that compilers implement alloca().
- Enabling the Py-stack with the stackless mode allows for even more
  efficient stack usage, as well as retaining high performance (because the
  heap is no longer used to build and destroy stackless code states).
- With Py-stack and stackless enabled, Python-calling-Python is no longer
  recursive in the C mp_execute_bytecode function.

The micropython.pystack_use() function is included to measure usage of the
Python stack.
2017-12-11 13:49:09 +11:00
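
A hedged usage sketch of the scoped allocation described above (function names follow the commit message, but treat the exact signatures as assumptions):

    void example(size_t n) {
        // "local" LIFO allocation: behaves like alloca(), but is served from the
        // Py-stack when MICROPY_ENABLE_PYSTACK is enabled
        mp_obj_t *items = mp_local_alloc(n * sizeof(mp_obj_t));
        // ... use items ...
        mp_local_free(items);   // must be freed by the same function, in LIFO order
    }
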
Damien George
a3dc1b1957 all: Remove inclusion of internal py header files.
Header files that are considered internal to the py core and should not
normally be included directly are:
    py/nlr.h - internal nlr configuration and declarations
    py/bc0.h - contains bytecode macro definitions
    py/runtime0.h - contains basic runtime enums

Instead, the top-level header files to include are one of:
    py/obj.h - includes runtime0.h and defines everything to use the
        mp_obj_t type
    py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
        and defines everything to use the general runtime support functions

Additional, specific headers (eg py/objlist.h) can be included if needed.
2017-10-04 12:37:50 +11:00
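
In practice the convention looks like this (a sketch, not a quote from the tree):

    #include "py/runtime.h"     // top-level header: pulls in mpstate.h, nlr.h, obj.h, runtime0.h
    // #include "py/nlr.h"      // internal header: no longer included directly
    #include "py/objlist.h"     // specific extra header, included only when needed
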
Damien George
be8e5744e6 py/nlrx86,x64: Replace #define of defined() with portable macro usage.
Using gcc -Wpedantic warns that a #define of defined() is non-portable;
this patch fixes that.
2017-08-29 12:52:18 +10:00
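
An illustrative before/after (hypothetical macro name): using defined() inside a macro body is undefined behaviour per the C standard, so the test is hoisted into an explicit #if.

    // Non-portable; gcc -Wpedantic warns about it:
    //   #define USE_FOO defined(__FOO__)
    // Portable replacement:
    #if defined(__FOO__)
    #define USE_FOO (1)
    #else
    #define USE_FOO (0)
    #endif
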
Damien George
52b6764894 py/nlrx64: Fixes to support Mac OS.
Two independent fixes:
- need to prefix symbols referenced from asm with underscore;
- need to undo the C-function prelude.
2017-03-08 22:36:02 +11:00
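
A sketch of the first fix, as a fragment from inside the inline asm (the conditional shown is an assumption; the symbol-name difference is the point): Mach-O prefixes C symbols with an underscore, so asm referencing the C helper must spell it accordingly.

    #if defined(__APPLE__) && defined(__MACH__)
        "jmp    _nlr_push_tail      \n"     // Mach-O: leading underscore on C symbols
    #else
        "jmp    nlr_push_tail       \n"     // ELF: no prefix
    #endif
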
Damien George
321848470c py/nlrx64: Convert from assembler to C file with inline asm. 2017-03-06 17:13:35 +11:00