The idea behind the decrease is that bytecode and other static data are also
kept on the heap and can easily take up half of it. In that case, setting the
threshold to half of the heap has no effect: GC still happens only on complete
heap exhaustion, as before. But it is exactly in such a configuration that
keeping the heap defragmented matters most, so lower the threshold to
accommodate that.
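As a rough sketch (gc.threshold(), gc.mem_alloc() and gc.mem_free() are real
MicroPython calls, but the fraction chosen here is just an example), a startup
script could size the threshold relative to the total heap:

    import gc

    # Approximate total heap size = currently used + currently free.
    heap_size = gc.mem_alloc() + gc.mem_free()
    # With bytecode and other static data occupying up to half of the heap,
    # a half-heap threshold never fires, so pick a smaller fraction.
    gc.threshold(heap_size // 4)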
This is a fix for https://github.com/micropython/micropython/issues/2209:
by default, a file created using open() uses text translation mode, so writing
\n to it results in the file containing \r\n. This is obviously problematic
for binary .mpy files, so provide functions for setting the open mode
and use binary mode in mpy-cross' main().
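A hypothetical Python-level illustration of the translation problem (the file
name and data are made up; the actual fix lives in mpy-cross' C code):

    # Text mode: on Windows, "\n" is translated to "\r\n" on write.
    with open("out.mpy", "w") as f:
        f.write("M\x00\n\x1f")      # on-disk bytes: 4d 00 0d 0a 1f -- corrupted

    # Binary mode: bytes are written verbatim.
    with open("out.mpy", "wb") as f:
        f.write(b"M\x00\n\x1f")     # on-disk bytes: 4d 00 0a 1f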
Something like:
if foo == "bar":
will always be false if foo is b"bar". In CPython, a warning is issued if
the interpreter is started as "python3 -b". In MicroPython, the
MICROPY_PY_STR_BYTES_CMP_WARN setting controls whether the warning is issued.
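For example (the warning text quoted is CPython's; MicroPython's wording may
differ):

    foo = b"bar"
    print(foo == "bar")   # False: bytes and str never compare equal
    # python3 -b prints: BytesWarning: Comparison between bytes and string
    # MicroPython emits a comparable warning when MICROPY_PY_STR_BYTES_CMP_WARN
    # is enabled at build time.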
Currently, MicroPython runs GC when it cannot allocate a block of memory,
which happens when the heap is exhausted. However, that policy can't work well
with "infinite" heaps, e.g. ones backed by virtual memory - there will be a
lot of swap thrashing long before the virtual memory is exhausted. Instead, in
such cases an "allocation threshold" policy is used: a GC is run after some
number of allocations has been made. Details vary; for example, the number or
total amount of allocations can be used, the threshold may be self-adjusting
based on the GC outcome, etc.
This change implements a simple variant of such a policy for MicroPython. The
amount of memory allocated so far is used as the threshold measure, to make it
useful for the typical finite-size, and small, heaps used by MicroPython ports.
Such a GC policy is indeed useful for these kinds of heaps too, as it allows
fragmentation to be controlled better. For example, if the threshold is set to
half the heap size, then for an application which usually makes a large number
of small allocations, it will (try to) keep half of the heap memory in a nice
defragmented state for an occasional large allocation.
For an application which doesn't exhibit such behavior, there won't be any
visible effects, except for GC running more frequently, which may affect
performance. To address this, the GC threshold is configurable, and is off by
default for now. It is configured with a gc.threshold(amount_in_bytes) call
(and can be queried by calling it without an argument).
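A minimal usage sketch of the new call (the byte amount here is arbitrary):

    import gc

    gc.threshold(25600)     # run a collection after roughly 25K of allocations
    print(gc.threshold())   # query the current setting (no argument)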
3-arg form:
stream.write(data, offset, length)
2-arg form:
stream.write(data, length)
These allow efficient writing from a buffer without incurring an extra memory
allocation for a slice or a memoryview() object, which is important for
low-memory ports.
All arguments must be positional. It might not be a bad idea to standardize
on the 3-arg form, but the 2-arg case would then need a check and raise an
exception anyway, so instead it was simply made to work.
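A sketch of the intended usage; "stream" below stands for any object whose
write() goes through MicroPython's generic stream layer (e.g. a socket):

    buf = bytearray(1024)
    # ... fill buf with data to send ...

    # Previously, writing a sub-range required an extra allocation:
    #   stream.write(buf[64:64 + 256])              # new bytes object
    #   stream.write(memoryview(buf)[64:64 + 256])  # new memoryview object
    stream.write(buf, 64, 256)  # 3-arg form: 256 bytes starting at offset 64
    stream.write(buf, 256)      # 2-arg form: the first 256 bytes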
The minimum thread stack size is set by pthreads (16k bytes) so we must
use that value for our minimum. The stack limit check is also adjusted
to work correctly for 32-bit builds.
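At the Python level, the stack size is requested via _thread.stack_size(); a
minimal sketch (assuming the unix port):

    import _thread

    # pthreads refuses stacks smaller than its minimum (16k bytes), so the
    # port uses that value as its own minimum for smaller requests.
    _thread.stack_size(16 * 1024)
    _thread.start_new_thread(lambda: print("hello from a thread"), ())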
Since "read-exactly" stream refactor, where stream.read(N) will read
exactly N bytes (unless EOF), http_server* examples can't any longer do
client_socket.read(4096) and expect to get full request (it will block
on HTTP/1.1 client). Instead, read request line by line, as the HTTP
protocol requires.
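The replacement pattern looks roughly like this (a sketch based on the
description above; client_socket is the accepted connection):

    # Request line, e.g. b"GET / HTTP/1.1\r\n"
    req = client_socket.readline()
    # Consume the headers up to the blank line that terminates them.
    while True:
        line = client_socket.readline()
        if not line or line == b"\r\n":
            break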
Threading support is still very new so stay conservative at this point
and enable threading without the GIL. This requires users to protect
concurrent access to mutable Python objects (eg lists) with locks at
the Python level (something you should probably do anyway). The
advantage is that there is less of a performance hit for non-threaded
code, because the VM does not need to constantly release/acquire the GIL.
In the future the GIL will be made more efficient. There is also room to
improve the efficiency of non-GIL code by not using mutexes if there is
only one thread active.
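For example, concurrent mutation of a shared list should be guarded with a
lock (a minimal sketch using the _thread module):

    import _thread

    lock = _thread.allocate_lock()
    data = []

    def worker():
        for i in range(1000):
            lock.acquire()      # without a GIL, list mutation must be guarded
            try:
                data.append(i)
            finally:
                lock.release()

    _thread.start_new_thread(worker, ())
    worker()  # main thread appends too, under the same lock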
Due to the way modern compilers work (allocating space for stack variables
once at the start of a function, and deallocating it once on exit), using a
large intermediate stack buffer caused 4K (PATH_MAX) to be tied up on the
stack for the entire duration of MicroPython execution.
This follows a source code/header file organization similar to a few other
objects, and is intended to be used only in special cases where efficiency/
simplicity matters.