This extra forward slash for the starting-point directory is unnecessary
and leads to additional slashes on Mac OS X, which mean that the frozen
files cannot be imported.
Fixes #2374.
This type was used only for the typedef of mp_obj_t, which is now defined
by the object representation. So we can now remove this unused typedef,
to simplify the mpconfigport.h file.
This includes file and socket objects, backed by Unix file descriptors.
This improves compatibility with stmhal's uselect (and convenience of
use), though not completely: the return value from poll.poll() is still
the raw file descriptor.
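For illustration, here's a hedged usage sketch of polling such fd-backed
objects on the unix port (the socket setup is just an example; names follow
the usocket/uselect modules):

    import usocket
    import uselect

    s = usocket.socket()
    s.connect(usocket.getaddrinfo("micropython.org", 80)[0][-1])

    p = uselect.poll()
    p.register(s, uselect.POLLIN | uselect.POLLOUT)

    # On the unix port each returned entry still carries the raw file
    # descriptor rather than the registered object itself.
    for fd, events in p.poll(1000):
        print(fd, events)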
To filter out even prototypes of mp_stream_posix_*() functions, which
require POSIX types like ssize_t & off_t, which may not be available in
some ports.
The minimum thread stack size is set by pthreads (16k bytes) so we must
use that value for our minimum. The stack limit check is also adjusted
to work correctly for 32-bit builds.
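For example, a minimal sketch using the _thread module (the 16k value just
mirrors the pthreads minimum mentioned above):

    import _thread

    # pthreads imposes a 16k minimum stack size, so this is the smallest
    # value that takes effect on the unix port
    _thread.stack_size(16 * 1024)
    _thread.start_new_thread(lambda: print("hello from thread"), ())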
Threading support is still very new so stay conservative at this point
and enable threading without the GIL. This requires users to protect
concurrent access to mutable Python objects (eg lists) with locks at
the Python level (something you should probably do anyway). The
advantage is that there is less of a performance hit for non-threaded
code, because the VM does not need to constantly release/acquire the GIL.
In the future the GIL will be made more efficient. There is also room to
improve the efficiency of non-GIL code by not using mutexes if there is
only one thread active.
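For example, a minimal sketch of the locking users are expected to do at the
Python level (nothing here is new API, just the standard _thread module):

    import _thread

    data = []                       # shared mutable object
    lock = _thread.allocate_lock()

    def worker(n):
        for i in range(n):
            lock.acquire()          # protect concurrent mutation explicitly
            try:
                data.append(i)
            finally:
                lock.release()

    _thread.start_new_thread(worker, (1000,))

    lock.acquire()
    snapshot = len(data)            # consistent view of the shared list
    lock.release()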
Due to the way modern compilers work (allocating space for stack vars once
at the start of a function, and deallocating it once on exit), using a big
intermediate stack buffer caused 4K (PATH_MAX) to be blocked on the stack
for the entire duration of MicroPython execution.
SA_SIGINFO allows the signal handler to access more information about
the signal, especially useful in a threaded environment. The extra
information is not currently used but it may prove useful in the future.
The linker flag --gc-sections is not available in the linker used on
Mac OS X, which results in an error when linking micropython on Mac OS X.
Therefore move this option to the LDFLAGS_ARCH variable on non-Darwin
systems. According to http://stackoverflow.com/a/17710056 the equivalent
of --gc-sections is -dead_strip, so this option is used for
LDFLAGS_ARCH on Darwin systems.
When built for Linux, libffi includes a very bloated workaround exec-alloc
implementation, required to work around SELinux and other "sekuritee"
features which real people don't use. MicroPython has its own alloc-exec
implementation, used to allocate memory for @micropython.native code. With
this option enabled, uPy's implementation will override libffi's. This saves
11K on x86_64 (and that accounts for more than half of the libffi code size).
TODO: Possibly, we want to refactor this option to allow using either uPy's
implementation even for libffi, or libffi's implementation even for uPy.
This actually saves "only" 6K for the x86_64 build, as we're still more or
less careful to #ifdef unneeded code. But relying on --gc-sections in a
"lazy" manner would allow making #ifdef'ing less pervasive (not suggested
right away, but an option for the future).
MicroPython's own readline implementation is now superior, providing
automatic indentation and completion (completion for GNU Readline was
never implemented). MICROPY_USE_READLINE=2 also wasn't built for a long
time and is probably broken.
If GNU Readline is still beneficial for some cases, it can be used via
external wrappers like "rlwrap" (there will be the same level of
functionality, as again, there never was deep integration, like completion
support).
The call to stat() returns a 10-element tuple consistent with the os.stat()
call. At the moment, the only relevant information returned is the file
type and file size.
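For example, a hedged sketch using the unix port's uos module (indices follow
the standard os.stat() tuple layout: element 0 is the mode/type bits,
element 6 is the size):

    import uos

    st = uos.stat("main.py")
    mode = st[0]                    # file type bits
    size = st[6]                    # file size in bytes
    print("dir" if mode & 0x4000 else "file", size)   # 0x4000 == S_IFDIR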
Avoid using system libraries, use copies bundled with MicroPython as
submodules (currently affects only libffi; other dependencies are either
already used as bundled-only (axtls), or can't be bundled (so far),
like libjni).
Disabled by default, enabled in the unix port. The need for this method
easily pops up when working with text UI/reporting, and coding a workalike
manually again and again is counter-productive.
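A minimal sketch, assuming the method in question is str.center() (the
method isn't named above, so treat this as an illustration):

    # pad a label into a fixed-width column for simple text reports,
    # instead of hand-rolling the padding arithmetic every time
    print("TOTAL".center(20))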
To use frozen bytecode, make a subdirectory under the unix/ directory
(eg frozen/), put .py files there, then run:
make FROZEN_MPY_DIR=frozen
Be sure to build from scratch. The .py files will then be available for
importing.
The current install command uses the flag -D which is specific to the
install command from GNU coreutils, but isn't available for the BSD
version. This solution uses the -d flag which should be commonly
available to create the target directory. Afterwards the target files
are installed to this directory separately.
- add template rule that converts a specified source file into a qstring file
- add special rule for generating a central header that contains all
extracted/autogenerated strings - defined by QSTR_DEFS_COLLECTED
variable. Each platform appends a list of sources that may contain
qstrings into a new build variable: SRC_QSTR. Any autogenerated
prerequisites should be appended to the SRC_QSTR_AUTO_DEPS variable.
- remove most qstrings from py/qstrdefs, keep only qstrings that
contain special characters - these cannot be easily detected in the
sources without additional annotations
- remove most manual qstrdefs, use qstrdef autogen for: py, cc3200,
stmhal, teensy, unix, windows, pic16bit:
- remove all micropython generic qstrdefs except for the special strings that contain special characters (e.g. /,+,<,> etc.)
- remove all port specific qstrdefs except for special strings
- append sources for qstr generation in platform makefiles (SRC_QSTR)
The config variable MICROPY_MODULE_FROZEN is now made of two separate
parts: MICROPY_MODULE_FROZEN_STR and MICROPY_MODULE_FROZEN_MPY. This
allows having none, either, or both of frozen strings and frozen mpy
files (aka frozen bytecode).
See https://github.com/micropython/micropython/issues/1736 for the
list of complications. This workaround, instead of duplicating the REPL
to another stream, switches to it, because the read(STDIN) we use otherwise
is a blocking call, so it and a custom REPL stream can't be used together.
Calling it from mp_init() is too late for some ports (like Unix), and leads
to an incomplete stack frame being captured, with subsequent GC issues. So,
now each port should call mp_stack_ctrl_init() on its own, as soon as
possible after startup, taking special precautions so it really is called
before stack variables get allocated (because if such a variable holding a
pointer is missed, it may lead to over-collecting; the typical symptom is
segfaulting).
When using newer glibc versions the compiler automatically sets
_FORTIFY_SOURCE when building with -O1, and this causes
a special inlined version of printf to be declared which
then bypasses our version of printf.
Functions added are:
- randint
- randrange
- choice
- random
- uniform
They are enabled with the configuration variable
MICROPY_PY_URANDOM_EXTRA_FUNCS, which is disabled by default. It is
enabled for the unix coverage build and stmhal.
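A hedged usage sketch of the functions listed above, via the unix port's
urandom module:

    import urandom

    print(urandom.randint(1, 6))        # integer in [1, 6]
    print(urandom.randrange(10))        # integer in [0, 10)
    print(urandom.choice([1, 2, 3]))    # random element of a sequence
    print(urandom.random())             # float in [0, 1)
    print(urandom.uniform(1.5, 2.5))    # float in [1.5, 2.5]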
Seedable and reproducible pseudo-random number generator. Implemented
functions are getrandbits(n) (n <= 32) and seed().
The algorithm used is Yasmarang by Ilya Levin:
http://www.literatecode.com/yasmarang
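For example (a hedged sketch; re-seeding with the same value reproduces the
same sequence):

    import urandom

    urandom.seed(42)
    a = urandom.getrandbits(32)     # at most 32 bits per call
    urandom.seed(42)
    b = urandom.getrandbits(32)
    assert a == b                   # same seed, same sequence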
The first argument to the type.make_new method is naturally a uPy type,
and all uses of this argument cast it directly to a pointer to a type
structure. So it makes sense to just have it be a pointer to a type from
the very beginning (and a const pointer at that). This patch makes
such a change, and removes all unnecessary casting to/from mp_obj_t.
This patch changes the type signature of the .make_new and .call object
method slots to use size_t for n_args and n_kw (previously mp_uint_t). This
makes the code more efficient when mp_uint_t is larger than a machine word,
and doesn't affect ports where size_t and mp_uint_t have the same size.
POSIX doesn't guarantee something like that to work, but it works on any
system with a careful signal implementation. Roughly, the requirement is
that the signal handler is executed in the context of the process, its main
thread, etc. This is true for Linux. Also tested to work without issues
on Mac OS X.
This basically introduces the MICROPY_MACHINE_MEM_GET_READ_ADDR
and MICROPY_MACHINE_MEM_GET_WRITE_ADDR macros. If one of them is
not defined, then a default identity function is provided.
To let the unix port implement "machine" functionality at the Python level,
and keep consistent naming in other ports (baremetal ports will use magic
module "symlinking" to still load it on "import machine").
Fixes #1701.
This solves a long-standing non-deterministic bug, which manifested itself
on x86 32-bit (at least in reported cases) as a segfault on Ctrl+C (i.e.
SIGINT).
ilistdir() returns an iterator which yields triples of (name, type, ino),
where name is the name of the file/dir, type is the type of entry
(file/dir/etc.), and ino is the inode number for the entry's data.
listdir() can be easily implemented in terms of this iterator (which is
otherwise more efficient in terms of memory use and may save an expensive
call to stat() for each returned entry).
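For example, listdir() can be expressed on top of ilistdir() roughly like
this (a sketch, not the actual implementation):

    import uos

    def listdir(path="."):
        # keep only the entry names, dropping type and inode number
        return [e[0] for e in uos.ilistdir(path)]

    def dirs_only(path="."):
        # 0x4000 is the directory type bit (S_IFDIR)
        return [e[0] for e in uos.ilistdir(path) if e[1] == 0x4000]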
CPython has os.scandir() which also returns an iterator, but it yields
more complex objects of DirEntry type. scandir() can also be easily
implemented in terms of ilistdir().
After an I/O event is triggered for an fd, its event flags are automatically
reset, so no further events are reported until new event flags are set. This
is an optimization for uasyncio, required to account for coroutine semantics:
each coroutine issues an explicit async read/write call, and once that
triggers, no events should be reported to the coroutine unless it explicitly
requests them again. One-shot mode saves one linear scan over the poll array.
Per CPython docs, "Registering a file descriptor that’s already registered
is not an error, and has the same effect as registering the descriptor
exactly once."
https://docs.python.org/3/library/select.html#select.poll.register
That's somewhat ambiguous; what's implemented here is that if the fd is not
yet registered, it is registered. Otherwise, the effect is equivalent to the
modify() method.
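For example (a hedged sketch of the behaviour described):

    import usocket
    import uselect

    s = usocket.socket()
    p = uselect.poll()

    p.register(s, uselect.POLLIN)   # fd not yet registered: registers it
    p.register(s, uselect.POLLIN | uselect.POLLOUT)
    # fd already registered: behaves like
    # p.modify(s, uselect.POLLIN | uselect.POLLOUT)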
Usually this checking is done by the VM on jump instructions, but for linear
sequences of instructions and builtin functions this won't happen. A
particular target of this change is long-running builtin functions like
time.sleep().
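For example, with this change a long-running builtin call can be interrupted
from the keyboard (a usage sketch using the unix port's utime module, not
new API):

    import utime

    try:
        utime.sleep(60)             # long-running builtin call
    except KeyboardInterrupt:
        print("interrupted")        # Ctrl+C now gets through promptly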
As set by the signal handler. This assumes that the exception will be raised
somewhere else, which so far doesn't happen for a single function call.
Still, it makes sense to handle it in some common place.
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
This is required to deal well with signals, signals being the closest
analogue of hardware interrupts for POSIX. This is also CPython 3.5
compliant behavior (PEP 475).
The main problem implementing this is to figure out how much time was
spent waiting so far/how much is remaining. It's a well-known fact that
Linux updates select()'s timeout value when returning with EINTR to the
remaining wait time. Here's what POSIX-based standards say about this:
(http://pubs.opengroup.org/onlinepubs/9699919799/functions/pselect.html):
"Upon successful completion, the select() function may modify the object
pointed to by the timeout argument."
I.e. it allows modifying the timeout value, but doesn't say how exactly it is
modified. And actually, it allows such modification only "upon successful
completion", which returning with an EINTR error hardly is.
POSIX also allows requesting automatic EINTR restart for system calls using
the sigaction call with the SA_RESTART flag, but here's what the same document
says about it:
about it:
"If SA_RESTART has been set for the interrupting signal, it is
implementation-defined whether the function restarts or returns with
[EINTR]."
In other words, POSIX doesn't leave room for both portable and efficient
handling of this matter, so the code just allows manually selecting
Linux-compatible behavior with the MICROPY_SELECT_REMAINING_TIME option,
and otherwise will just raise OSError. When systems with non-Linux behavior
are found, they can be handled separately.
In other words, the unix port now uses the overridden printf() instead of
libc's. This should remove almost all dependency on libc stdio (which
is bloated).