SA_SIGINFO allows the signal handler to access more information about
the signal, which is especially useful in a threaded environment. The extra
information is not currently used, but it may prove useful in the future.
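For illustration, a minimal sketch of installing a handler with SA_SIGINFO
set (generic POSIX code, not the port's actual handler); the three-argument
handler form then receives a siginfo_t with details such as si_pid and
si_code:

    #include <signal.h>

    static void sigint_handler(int sig, siginfo_t *info, void *context) {
        (void)sig; (void)info; (void)context;
        // extra details (info->si_pid, info->si_code, ...) are available here
    }

    static void install_sigint(void) {
        struct sigaction sa = {0};
        sa.sa_sigaction = sigint_handler;   // three-argument handler form
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);
    }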
The linker flag --gc-sections is not available in the linker used on
Mac OS X, which results in an error when linking MicroPython there.
Therefore this option is moved to the LDFLAGS_ARCH variable on non-Darwin
systems. According to http://stackoverflow.com/a/17710056, the equivalent
of --gc-sections is -dead_strip, so that option is used in LDFLAGS_ARCH
on Darwin systems.
When built for Linux, libffi includes a very bloated workaround exec-alloc
implementation, required to work around SELinux and other "sekuritee" features
which real people don't use. MicroPython has its own alloc-exec implementation,
used to allocate memory for @micropython.native code. With this option enabled,
uPy's implementation overrides libffi's. This saves 11K on x86_64 (which
accounts for more than half of the libffi code size).
TODO: Possibly, we want to refactor this option to allow using either uPy's
implementation even for libffi, or libffi's implementation even for uPy.
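For reference, a minimal sketch of an mmap-based exec-alloc of the kind
described above (illustrative only, not the actual MicroPython implementation;
Linux-style flags assumed):

    #include <stddef.h>
    #include <sys/mman.h>

    void *alloc_exec(size_t size) {
        // request a writable + executable anonymous mapping
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    void free_exec(void *p, size_t size) {
        munmap(p, size);
    }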
This actually saves "only" 6K for the x86_64 build, as we're still more or
less careful to #ifdef out unneeded code. But relying on --gc-sections in a
"lazy" manner would allow making the #ifdef'ing less pervasive (not suggested
right away, but an option for the future).
MicroPython's own readline implementation is now superior, providing
automatic indentation and completion (completion for GNU Readline was
never implemented). MICROPY_USE_READLINE=2 also hasn't been built for a long
time and is probably broken.
If GNU Readline is still beneficial in some cases, the same can be achieved
with external wrappers like "rlwrap" (this gives the same level of
functionality, as again, there never was deep integration, like completion
support).
The call to stat() returns a 10-element tuple consistent with the os.stat()
call. At the moment, the only relevant information returned is the file
type and file size.
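A rough sketch (not the exact port code) of building such a 10-element tuple
in os.stat() field order from a POSIX struct stat, with the unused fields
left as zero:

    #include <sys/stat.h>
    #include "py/objtuple.h"

    STATIC mp_obj_t make_stat_tuple(const struct stat *sb) {
        mp_obj_tuple_t *t = MP_OBJ_TO_PTR(mp_obj_new_tuple(10, NULL));
        t->items[0] = MP_OBJ_NEW_SMALL_INT(sb->st_mode);     // file type + mode
        t->items[1] = MP_OBJ_NEW_SMALL_INT(0);               // st_ino
        t->items[2] = MP_OBJ_NEW_SMALL_INT(0);               // st_dev
        t->items[3] = MP_OBJ_NEW_SMALL_INT(0);               // st_nlink
        t->items[4] = MP_OBJ_NEW_SMALL_INT(0);               // st_uid
        t->items[5] = MP_OBJ_NEW_SMALL_INT(0);               // st_gid
        t->items[6] = mp_obj_new_int_from_uint(sb->st_size); // st_size
        t->items[7] = MP_OBJ_NEW_SMALL_INT(0);               // st_atime
        t->items[8] = MP_OBJ_NEW_SMALL_INT(0);               // st_mtime
        t->items[9] = MP_OBJ_NEW_SMALL_INT(0);               // st_ctime
        return MP_OBJ_FROM_PTR(t);
    }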
Avoid using system libraries; use copies bundled with MicroPython as
submodules (currently this affects only libffi; other dependencies are
either already used as bundled-only (axtls) or can't be bundled (so far),
like libjni).
Disabled by default, enabled in the unix port. The need for this method
easily pops up when working with text UI/reporting, and coding a workalike
manually again and again is counterproductive.
To use frozen bytecode, make a subdirectory under the unix/ directory
(e.g. frozen/), put .py files there, then run:
make FROZEN_MPY_DIR=frozen
Be sure to build from scratch. The .py files will then be available for
importing.
The current install command uses the flag -D, which is specific to the
install command from GNU coreutils and isn't available in the BSD
version. This solution uses the -d flag, which should be commonly
available, to create the target directory. Afterwards the target files
are installed to this directory separately.
- add template rule that converts a specified source file into a qstring file
- add special rule for generating a central header that contains all
  extracted/autogenerated strings - defined by the QSTR_DEFS_COLLECTED
  variable. Each platform appends a list of sources that may contain
  qstrings into a new build variable: SRC_QSTR. Any autogenerated
  prerequisites should be appended to the SRC_QSTR_AUTO_DEPS variable.
- remove most qstrings from py/qstrdefs, keep only qstrings that
  contain special characters - these cannot be easily detected in the
  sources without additional annotations
- remove most manual qstrdefs, use qstrdef autogen for: py, cc3200,
  stmhal, teensy, unix, windows, pic16bit:
  - remove all micropython generic qstrdefs except for the special
    strings that contain special characters (e.g. /, +, <, > etc.)
  - remove all port specific qstrdefs except for special strings
  - append sources for qstr generation in platform makefiles (SRC_QSTR)
The config variable MICROPY_MODULE_FROZEN is now made of two separate
parts: MICROPY_MODULE_FROZEN_STR and MICROPY_MODULE_FROZEN_MPY. This
allows having none, either, or both of frozen strings and frozen mpy
files (aka frozen bytecode).
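For example, a port that wants frozen bytecode but no frozen strings could
configure it like this (illustrative mpconfigport.h excerpt):

    #define MICROPY_MODULE_FROZEN_STR (0)
    #define MICROPY_MODULE_FROZEN_MPY (1)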
See https://github.com/micropython/micropython/issues/1736 for the
list of complications. Instead of duplicating the REPL to another stream,
this workaround switches to it, because the read(STDIN) we use otherwise
is a blocking call, so it and a custom REPL stream can't be used together.
Calling it from mp_init() is too late for some ports (like unix), and leads
to an incomplete stack frame being captured, with subsequent GC issues. So now
each port should call mp_stack_ctrl_init() on its own, as soon as possible
after startup, taking special precautions so it really is called before stack
variables get allocated (because if such a variable holding a pointer is
missed, it may lead to over-collecting, whose typical symptom is segfaulting).
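A sketch of what this looks like in a port's entry point (illustrative; the
surrounding initialization is elided):

    #include "py/stackctrl.h"

    int main(int argc, char **argv) {
        // capture the stack top before any locals that may hold heap pointers
        mp_stack_ctrl_init();
        // ... rest of port startup: heap setup, mp_init(), REPL, etc. ...
        return 0;
    }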
When using newer glibc versions, the compiler automatically sets
_FORTIFY_SOURCE when building with -O1, and this causes a special
inlined version of printf to be declared, which then bypasses our
version of printf.
Functions added are:
- randint
- randrange
- choice
- random
- uniform
They are enabled with the configuration variable
MICROPY_PY_URANDOM_EXTRA_FUNCS, which is disabled by default. It is
enabled for the unix coverage build and stmhal.
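For instance, a port wanting these functions would enable the option in its
configuration header (illustrative snippet):

    #define MICROPY_PY_URANDOM_EXTRA_FUNCS (1)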
Seedable and reproducible pseudo-random number generator. Implemented
functions are getrandbits(n) (n <= 32) and seed().
The algorithm used is Yasmarang by Ilya Levin:
http://www.literatecode.com/yasmarang
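A hedged sketch of how an n-bit result (1 <= n <= 32) can be derived from one
32-bit output of such a generator; yasmarang_next() is a placeholder name
here, not necessarily the symbol used in the module:

    #include <stdint.h>

    uint32_t yasmarang_next(void); // one 32-bit step of the PRNG (placeholder)

    uint32_t getrandbits(int n) {
        uint32_t mask = 0xffffffffu >> (32 - n); // keep the n low bits
        return yasmarang_next() & mask;
    }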
The first argument to the type.make_new method is naturally a uPy type,
and all uses of this argument cast it directly to a pointer to a type
structure. So it makes sense to just have it be a pointer to a type from
the very beginning (and a const pointer at that). This patch makes
such a change, and removes all unnecessary casting to/from mp_obj_t.
This patch changes the type signature of the .make_new and .call object
method slots to use size_t for n_args and n_kw (previously mp_uint_t).
This makes code more efficient when mp_uint_t is larger than a machine
word, and doesn't affect ports where size_t and mp_uint_t have the same size.
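An illustrative make_new definition under the new signature (the type and
struct names here are hypothetical):

    typedef struct _myobj_obj_t {
        mp_obj_base_t base;
    } myobj_obj_t;

    STATIC mp_obj_t myobj_make_new(const mp_obj_type_t *type, size_t n_args,
        size_t n_kw, const mp_obj_t *args) {
        (void)args;
        mp_arg_check_num(n_args, n_kw, 0, 0, false);
        myobj_obj_t *o = m_new_obj(myobj_obj_t);
        o->base.type = type; // used directly, no cast from mp_obj_t needed
        return MP_OBJ_FROM_PTR(o);
    }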
POSIX doesn't guarantee something like this to work, but it works on any
system with a careful signal implementation. Roughly, the requirement is
that the signal handler is executed in the context of the process, its main
thread, etc. This is true for Linux. It has also been tested to work without
issues on Mac OS X.
This basically introduces the MICROPY_MACHINE_MEM_GET_READ_ADDR
and MICROPY_MACHINE_MEM_GET_WRITE_ADDR macros. If one of them is
not defined, then a default identity function is provided.
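A hedged sketch of how a port could override these in its mpconfigport.h
(the helper function names are made up; their signatures must match the
default machine_mem address helper they replace):

    #define MICROPY_MACHINE_MEM_GET_READ_ADDR  my_port_read_addr_xlat
    #define MICROPY_MACHINE_MEM_GET_WRITE_ADDR my_port_write_addr_xlat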
This lets the unix port implement "machine" functionality at the Python
level, while keeping naming consistent with other ports (baremetal ports
will use magic module "symlinking" to still load it on "import machine").
Fixes #1701.
This solves a long-standing non-deterministic bug, which manifested itself
on x86 32-bit (at least in the reported cases) as a segfault on Ctrl+C
(i.e. SIGINT).
ilistdir() returns an iterator which yields triples of (name, type, ino),
where name is the name of the file/dir, type is the type of the entry
(file/dir/etc.), and ino is the inode number for the entry's data.
listdir() can be easily implemented in terms of this iterator (which is
otherwise more efficient in terms of memory use and may save an expensive
call to stat() for each returned entry).
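For reference, a plain POSIX illustration (not the MicroPython source) of
where the three fields come from, on systems whose struct dirent provides
d_type and d_ino:

    #include <dirent.h>
    #include <stdio.h>

    int main(void) {
        DIR *d = opendir(".");
        if (d == NULL) {
            return 1;
        }
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            // d_name, d_type (DT_REG/DT_DIR/...) and d_ino correspond to the
            // (name, type, ino) triple yielded by ilistdir() for each entry
            printf("%s %d %lu\n", e->d_name, (int)e->d_type,
                   (unsigned long)e->d_ino);
        }
        closedir(d);
        return 0;
    }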
CPython has os.scandir() which also returns an iterator, but it yields
more complex objects of DirEntry type. scandir() can also be easily
implemented in terms of ilistdir().
After an I/O event is triggered for an fd, its event flags are automatically
reset, so no further events are reported until new event flags are set. This
is an optimization for uasyncio, required to account for coroutine semantics:
each coroutine issues an explicit async read/write call, and once that
triggers, no events should be reported to the coroutine unless it explicitly
requests them again. One-shot mode saves one linear scan over the poll array.