By using a single, global mutex, all memory-related functions (alloc,
free, realloc, collect, etc.) are made thread-safe. This means that only
one thread can be in such a function at any one time.
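As a sketch of the effect (not of the implementation), a build with the
_thread module enabled can now allocate and collect from several threads
concurrently; the worker function and counts below are made up:

    import _thread, time

    def worker(n):
        data = []
        for i in range(n):
            data.append([i] * 10)   # allocates; eventually triggers a GC collect

    for _ in range(4):              # all threads hit the allocator concurrently
        _thread.start_new_thread(worker, (1000,))

    time.sleep(1)                   # give the workers time to finish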
The linker flag --gc-sections is not available in the linker used on
Mac OS X, which results in an error when linking MicroPython on that
platform. Therefore, move this option to the LDFLAGS_ARCH variable on
non-Darwin systems. According to http://stackoverflow.com/a/17710056,
the equivalent of --gc-sections is -dead_strip, so that option is used
in LDFLAGS_ARCH on Darwin systems.
In particular, the WeMOS D1 Mini board comes with a shield that has a
64x48 OLED display. This patch makes it display properly, with the
upper-left pixel being at (0, 0) and not (32, 0).
I tried to do this with the configuration commands, but there doesn't
seem to be a command that would set the column offset (there is one for
the line offset, though).
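For reference, a minimal usage sketch with this patch applied, assuming
the stock ssd1306 driver and the usual D1 Mini shield wiring (SCL=GPIO5,
SDA=GPIO4; the wiring is an assumption):

    from machine import Pin, I2C
    import ssd1306

    i2c = I2C(scl=Pin(5), sda=Pin(4))
    oled = ssd1306.SSD1306_I2C(64, 48, i2c)  # the shield's resolution
    oled.fill(0)
    oled.pixel(0, 0, 1)     # now really the upper-left pixel of the panel
    oled.text('hi', 0, 8)
    oled.show()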
gcc 6.1.1 warns when indentation is misleading, and in this case the
formatting of the code really is misleading. So adjust the formatting
to make the meaning of the code clear.
Storing a chain of pbufs was the original design of @pfalcon's lwIP socket
module. The problem with storing just one, as modlwip does, is that the
"peer closed connection" notification is completely asynchronous and out of
band. So, the following sequence of actions is possible:
1. pbuf #1 arrives and is stored in the socket.
2. pbuf #2 arrives and is rejected, which causes lwIP to put it into a
queue for redelivery later.
3. "Peer closed connection" is signaled, and the socket is set to that status.
4. pbuf #1 is processed.
5. There are no stored pbufs in the socket, and the socket status is "peer
closed connection", so EOF is returned to the client.
6. pbuf #2 gets redelivered.
Apparently, there's no easy workaround for this, except to queue all
incoming pbufs in the socket. This may lead to increased memory pressure,
as the number of pending packets would be regulated only by TCP/IP flow
control, whereas with the previous setup lwIP had a global view of the
number of packets waiting for redelivery and could regulate them centrally.
This allows defining an abstract base class which translates the C-level
protocol to Python method calls, so any subclass inheriting from it will
support this feature. In particular, this enables the recently
introduced machine.PinBase class.
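A minimal sketch of such a subclass on the unix port (the class below is
made up; value() is called with no argument to read the pin and with one
argument to set it):

    import machine

    class MyPin(machine.PinBase):
        def __init__(self):
            self.v = 0

        def value(self, v=None):
            # called from C via the pin protocol
            if v is None:
                return self.v
            self.v = v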
Allows translating the C-level pin API to the Python-level pin API. In
other words, allows implementing a pin class in Python which will be
usable by efficient C-coded algorithms, like bitbanging SPI/I2C,
time_pulse, etc.
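For example, a pin implemented as above (MyPin is hypothetical) can be
passed straight to a C-coded routine such as machine.time_pulse_us(),
assuming that function is enabled in the build; it reads the pin through
the C-level pin API and thus ends up calling MyPin.value():

    import machine

    p = MyPin()
    # measure a high pulse with a 0.5s timeout; with the dummy pin above
    # this simply times out, but a real Python-coded pin would be sampled
    # by the C implementation on every iteration
    t = machine.time_pulse_us(p, 1, 500000)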
When built for Linux, libffi includes a very bloated, workaround-style
exec-alloc implementation, required to work around SELinux and other
"sekuritee" features which real people don't use. MicroPython has its own
alloc-exec implementation, used to allocate memory for @micropython.native
code. With this option enabled, uPy's implementation overrides libffi's.
This saves 11K on x86_64 (and that accounts for more than half of the
libffi code size).
TODO: Possibly, we want to refactor this option to allow either using
uPy's implementation even for libffi, or using libffi's implementation
even for uPy.
This actually saves "only" 6K for the x86_64 build, as we're still more or
less careful to #ifdef out unneeded code. But relying on --gc-sections in a
"lazy" manner would allow making #ifdef'ing less pervasive (not suggested
right away, but an option for the future).
The timestamp is taken from the RTC for all newly created or changed
files. The RTC must be maintained separately.
A dummy timestamp of Jan 1, 2000 is set in vfs.stat() for the root
directory, avoiding invalid time values.
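A hedged sketch of the visible effect (ESP8266-style rtc.datetime() tuple
assumed; the exact field order differs between ports):

    import machine, uos, utime

    machine.RTC().datetime((2017, 1, 1, 6, 12, 0, 0, 0))  # keep the RTC set

    with open('log.txt', 'w') as f:
        f.write('hello\n')

    st = uos.stat('log.txt')
    print(utime.localtime(st[8]))   # mtime of the new file comes from the RTC
    print(uos.stat('/')[8])         # root dir: dummy stamp of Jan 1, 2000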