A signal is like a pin, but can also be inverted (active low). As such, it
abstracts properties of various physical devices, like LEDs, buttons,
relays, buzzers, etc. To instantiate a Signal:
pin = machine.Pin(...)
signal = machine.Signal(pin, inverted=True)
The signal object has the same .value() and __call__() methods as a pin.
This provides mp_vfs_XXX functions (eg mount, open, listdir) which are
agnostic to the underlying filesystem type, and just require an object with
the relevant filesystem-like methods (eg .mount, .open, .listdir) which can
then be mounted.
These mp_vfs_XXX functions would typically be used by a port to implement
the "uos" module, and mp_vfs_open would be the builtin open function.
This feature is controlled by MICROPY_VFS, disabled by default.
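For illustration, a minimal sketch of mounting a duck-typed filesystem
object at the Python level (the method names follow the description above,
but their exact signatures, and the DictFS class itself, are assumptions,
not part of this patch):

import uos, uio

class DictFS:
    def __init__(self):
        self.files = {"hello.txt": b"hello\n"}
    def mount(self, readonly, mkfs):
        pass
    def umount(self):
        pass
    def listdir(self, *path):
        return list(self.files)
    def open(self, path, mode):
        return uio.BytesIO(self.files[path.lstrip("/")])

uos.mount(DictFS(), "/dictfs")
print(uos.listdir("/dictfs"))
print(open("/dictfs/hello.txt").read())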
If MICROPY_VFS_FAT is enabled by a port then the port must switch to using
MICROPY_FATFS_OO. Otherwise a port can continue to use the FatFs code
without any changes.
import utimeq, utime
# Max queue size, the queue allocated statically on creation
q = utimeq.utimeq(10)
q.push(utime.ticks_ms(), data1, data2)
res = [0, 0, 0]
# Items in res are filled up with results
q.pop(res)
So long as a port defines relevant mp_hal_pin_xxx functions (and delay) it
can make use of this software SPI class without the need for additional
code.
These are basic drawing primitives. They work in a generic way on all
framebuf formats by calling the underlying setpixel or fill_rect C-level
primitives.
If you have longish operations on the db (such as logging data) it may
be desirable to periodically sync the database to the disk. The added
btree.sync() method merely exposes the Berkeley __bt_sync function to the
user.
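A sketch of how this might be used (the file name is illustrative; assumes
the btree module built with this patch):

import btree
f = open("log.db", "w+b")
db = btree.open(f)
for i in range(100):
    db[b"rec" + str(i).encode()] = b"some logged data"
    if i % 10 == 0:
        db.sync()   # periodically flush dirty pages to the underlying file
db.close()
f.close()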
The constants MP_IOCTL_POLL_xxx, which were stmhal-specific, are moved
from stmhal/pybioctl.h (now deleted) to py/stream.h. And they are renamed
to MP_STREAM_POLL_xxx to be consistent with other such constants.
All uses of these constants have been updated.
If the destination of os.rename() exists then it will be overwritten if it
is a file. This is the POSIX behaviour, which is also the CPython
behaviour, and so we follow suit.
See issue #2598 for discussion.
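For example (file names are illustrative):

import uos
with open("new.txt", "w") as f:
    f.write("new contents")
with open("old.txt", "w") as f:
    f.write("old contents")
uos.rename("new.txt", "old.txt")   # silently replaces old.txt
print(open("old.txt").read())      # prints "new contents"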
Fill is a very common operation (eg to clear the screen) and it is worth
optimising it, by providing a specialised fill_rect function for each
framebuffer format.
This patch improved the speed of fill by 10 times for a 16-bit display
with 160*128 pixels.
Rename FrameBuffer1 into FrameBuffer and make it handle different bit
depths via a method table that has getpixel and setpixel. Currently
supported formats are MVLSB (monochrome, vertical, LSB) and RGB565.
Also add blit() and fill_rect() methods.
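A small usage sketch, following the current framebuf API (buffer sizes,
coordinates and colours are illustrative):

import framebuf
# 16-bit RGB565 framebuffer, 160x128 pixels
buf = bytearray(160 * 128 * 2)
fb = framebuf.FrameBuffer(buf, 160, 128, framebuf.RGB565)
fb.fill(0)                            # clear to black
fb.fill_rect(10, 10, 50, 20, 0xF800)  # red rectangle
# blit a small sprite of the same format onto the main buffer
sprite = framebuf.FrameBuffer(bytearray(8 * 8 * 2), 8, 8, framebuf.RGB565)
sprite.fill(0x07E0)                   # green
fb.blit(sprite, 100, 60)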
If a port defines MICROPY_READER_POSIX or MICROPY_READER_FATFS then
lexer.c now provides an implementation of mp_lexer_new_from_file using
the mp_reader_new_file function.
Implementations of persistent-code reader are provided for POSIX systems
and systems using FatFS. Macros to use these are MICROPY_READER_POSIX and
MICROPY_READER_FATFS respectively. If an alternative implementation is
needed then a port can define the function mp_reader_new_file.
Its addition was due to an early exploration of how to add a CPython-like
stream interface. It's clear that it's not needed and just takes up
bytes in all ports.
As required for further elaboration of uasyncio, like supporting bare-metal
systems with wraparound time sources. This is not intended to be a public
interface, and will likely be further refactored in the future.
Now the function properly uses ring arithmetic to return a signed value
in the range (inclusive):
[-MICROPY_PY_UTIME_TICKS_PERIOD/2, MICROPY_PY_UTIME_TICKS_PERIOD/2-1].
That means the function can properly compare two time values that are
within MICROPY_PY_UTIME_TICKS_PERIOD/2 ticks of each other, in either
direction. For example, if tick value 'a' predates tick value 'b',
ticks_diff(a, b) will return a negative value, and a positive value
otherwise. But beyond the positive value MICROPY_PY_UTIME_TICKS_PERIOD/2-1
the result wraps around to -MICROPY_PY_UTIME_TICKS_PERIOD/2; in other
words, if a follows b by more than MICROPY_PY_UTIME_TICKS_PERIOD/2 - 1
ticks, the function will "consider" a to actually predate b.
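A worked example of this ring arithmetic (MICROPY_PY_UTIME_TICKS_PERIOD is
port-specific; 2**30 is used here purely for illustration, and the function
below is only a reference model of the behaviour described above):

PERIOD = 2 ** 30   # stand-in for MICROPY_PY_UTIME_TICKS_PERIOD

def ticks_diff(a, b):
    # map a - b into [-PERIOD/2, PERIOD/2-1] using ring arithmetic
    return ((a - b + PERIOD // 2) % PERIOD) - PERIOD // 2

b = PERIOD - 5   # 'b' is 5 ticks before the counter wraps
a = 5            # 'a' is 10 ticks after 'b', past the wrap
print(ticks_diff(a, b))   # 10
print(ticks_diff(b, a))   # -10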
Based on the earlier discussed RFC. Practice showed that the most natural
order for arguments corresponds to mathematical subtraction:
ticks_diff(x, y) <=> x - y
Practice also showed that in real life it's hard to order events by time
of occurrence a priori; events tend to miss deadlines, etc., and the
expected order breaks. There is then a need to detect such cases, and
ticks_diff can be used exactly for this purpose, if it returns a signed,
instead of unsigned, value. E.g. if x is the scheduled time for an event
and y is the current time, then ticks_diff(x, y) < 0 means the event has
missed its deadline (and e.g. needs to be executed ASAP or skipped).
Returning a large unsigned number in this case (as ticks_diff behaved
previously) doesn't make sense, and such a "large unsigned number" can't
be reliably detected per our definition of the ticks_* functions (we
don't expose the maximum value to the user level; it can be anything,
relatively small or relatively large).
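A sketch of the deadline-detection idiom this enables (utime.ticks_add is
assumed to be available alongside the other ticks_* functions):

import utime
deadline = utime.ticks_add(utime.ticks_ms(), 100)   # event due 100ms from now
# ... other processing ...
remaining = utime.ticks_diff(deadline, utime.ticks_ms())
if remaining < 0:
    print("deadline missed by", -remaining, "ms")   # run ASAP or skip
else:
    utime.sleep_ms(remaining)                       # wait out the rest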
In order to have more fine-grained control over how builtin functions are
constructed, the MP_DECLARE_CONST_FUN_OBJ macros are made more specific,
with a suffix of _0, _1, _2, _3, _VAR, _VAR_BETWEEN or _KW. These names now
match the MP_DEFINE_CONST_FUN_OBJ macros.
As long as a port implements the mp_hal_sleep_ms(), mp_hal_ticks_ms(), etc.
functions, it can just use the standard implementations of utime.sleep_ms(),
utime.ticks_ms(), etc. Python-level functions.
This refactors ujson.loads(s) to behave as ujson.load(StringIO(s)).
Increase in code size is: 366 bytes for unix x86-64, 180 bytes for
stmhal, 84 bytes for esp8266.
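In other words (uio.StringIO is used here as the stream class):

import ujson, uio
s = '{"temp": 21.5, "ok": true}'
print(ujson.loads(s))
print(ujson.load(uio.StringIO(s)))   # same result, read from a stream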
As per discussion in #2449, using write requests instead of read requests
for I2C.scan() seems to support a larger number of devices, especially
ones that are write-only. Even a read-only I2C device has to implement
writes in order to be able to receive the address of the register to read.
Adds a check that LZ offsets fall within the sliding dictionary used. This
catches the case when uzlib.DecompIO with a smaller dictionary is used
to decompress data which was compressed with a larger dictionary.
Previously, this would lead to producing invalid data or a crash; now
an exception will be thrown.
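An illustrative sketch (the compressed bytes below are a zlib stream of
b"hello" as produced by CPython's zlib.compress; the wbits semantics follow
the current uzlib docs):

import uzlib, uio
data = b'x\x9c\xcbH\xcd\xc9\xc9\x07\x00\x06,\x02\x15'
# The second argument selects the dictionary (window) size; if it is too
# small for the offsets appearing in the stream, an exception is now
# raised instead of silently producing corrupt output.
d = uzlib.DecompIO(uio.BytesIO(data), 15)
print(d.read())   # b'hello'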
The delay_half parameter must be specified by the port to set up the
timing of the software SPI. This allows the port to adjust the timing
value to better suit its timing characteristics, as well as provide a
more accurate printing of the baudrate.
There is no need to take src_len and dest_len arguments. The case of
reading-only with a single output byte (originally src_len=1, dest_len>1)
is now handled by using the output buffer as the input buffer, and using
memset to fill the output byte into this buffer. This simplifies the
implementations of the spi_transfer protocol function.
The memory read/write I2C functions now take an optional keyword-only
parameter that specifies the number of bits in the memory address.
Only mem-addrs that are a multiple of 8 bits are supported (otherwise
the behaviour is undefined).
Due to the integer type used for the address, for values larger than 32
bits only 32 bits of address will be sent, and the rest will be padded
with 0s. Right now no exception is raised when that happens. For values
smaller than 8 bits, no address is sent; again, no exception is raised.
Tested with a VL6180 sensor, which has 16-bit register addresses.
Due to code refactoring, this patch reduces stmhal and esp8266 builds
by about 50 bytes.
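For example, reading from and writing to a device with 16-bit register
addresses might look like this (the addrsize keyword name, the I2C
constructor arguments and the register numbers are illustrative
assumptions; 0x29 is the VL6180's default I2C address):

import machine
i2c = machine.I2C(scl=machine.Pin(5), sda=machine.Pin(4))  # pins are board-specific
ident = i2c.readfrom_mem(0x29, 0x0000, 1, addrsize=16)     # read 1 byte from reg 0x0000
i2c.writeto_mem(0x29, 0x0000, b'\x00', addrsize=16)        # write 1 byte to a 16-bit reg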
When the clock is too fast for the i2c slave, it can temporarily hold
down the scl line to signal to the master that it needs to wait. The
master should check the scl line when it is releasing it after
transmitting data, and wait for it to be released.
This change has been tested with a logic analyzer and an i2c slave
implemented on an atmega328p using its twi peripheral, clocked at 8MHz.
Without the change, the i2c communication works up to about 150kHz
frequency, and above that results in the slave getting stuck in an unresponsive
state. With this change, communication has been tested to work up to
400kHz.
Adds horizontal scrolling. Right now, I'm just leaving the margins
created by the scrolling as they were -- so they will repeat the
edge of the framebuf. This is fast, and the user can always fill
the margins themselves.
There was a bug in the `framebuf1_fill` function that made it leave a few
lines unfilled at the bottom if the height was not divisible by 8.
A similar bug is fixed in the scroll method.
The idea is that all ports can use these helper methods and only need to
provide initialisation of the SPI bus, as well as a single transfer
function. The coding pattern follows the stream protocol and helper
methods.
This is an object-oriented approach, where uos is only a proxy for the
methods on the vfs object. Some internals had to be exposed (the STATIC
keyword removed) for this to work.
Fixes #2338.
In `btree_seq()`, when `__bt_seq()` gets called with invalid
`flags` argument it will return `RET_ERROR` and it won't
initialize `val`. If field `data` of uninitialized `val`
is passed to `mp_obj_new_bytes()` it causes a segfault.
This goes a bit against the websocket nature (message-based communication),
as it ignores boundaries between messages, but may be very practical
to do simple things with websockets.
In the sense that while GET_FILE transfers its data, REPL still works.
This is done by requiring client to send 1-byte block before WebREPL
server transfers next block of data.
Storing a chain of pbufs was the original design of @pfalcon's lwIP socket
module. The problem with storing just one, as modlwip does, is that the
"peer closed connection" notification is completely asynchronous and out of
band. So, the following sequence of actions may occur:
1. pbuf #1 arrives, and is stored in the socket.
2. pbuf #2 arrives, and is rejected, which causes lwIP to put it into a
queue to re-deliver later.
3. "Peer closed connection" is signaled, and the socket is set to that status.
4. pbuf #1 is processed.
5. There are no stored pbufs in the socket, and the socket status is "peer
closed connection", so EOF is returned to the client.
6. pbuf #2 gets redelivered.
Apparently, there's no easy workaround for this, except to queue all
incoming pbufs in a socket. This may lead to increased memory pressure,
as the number of pending packets would be regulated only by TCP/IP flow
control, whereas with the previous setup lwIP had a global overview of the
number of packets waiting for redelivery and could regulate them centrally.
Allows translating the C-level pin API to the Python-level pin API. In
other words, it allows implementing a pin class in Python which will be
usable for efficient C-coded algorithms, like bitbanging SPI/I2C,
time_pulse, etc.
The time stamp is taken from the RTC for all newly generated
or changed files. RTC must be maintained separately.
The dummy time stamp of Jan 1, 2000 is set in vfs.stat() for the
root directory, avoiding invalid time values.
The call to stat() returns a 10-element tuple consistent with the os.stat()
call. At the moment, the only relevant information returned is the file
type and file size.
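For example:

import uos
st = uos.stat("/flash/main.py")     # path is illustrative
is_dir = (st[0] & 0x4000) != 0      # st[0] is the mode; bit 0x4000 marks a directory
size = st[6]                        # st[6] is the file size in bytes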
Using the usual method of virtual method tables. A single virtual method,
ioctl, is currently defined for all operations. This universal and
extensible vtable-based method is also defined as the default MPHAL
GPIO implementation, but a specific port may override it with its
own implementation (e.g. close-ended, but very efficient, e.g. avoiding
virtual method dispatch).
Make dupterm subsystem close a term stream object when EOF or error occurs.
No party other than dupterm itself is in a better position to do this,
and this is required to properly reclaim stream resources, especially if
multiple dupterm sessions may be established (e.g. as networking
connections).
Both read and write operations support variants where either a) a single
call is made to the underlying stream implementation and the returned
buffer length may be less than requested, or b) calls are repeated until
the requested amount of data is collected, and a shorter amount is
returned only in case of EOF or error.
These operations are available at all levels: from C support functions to
be used by other C modules, to implementations of Python methods to be
used in user-facing objects.
The rationale of these changes is to allow writing concise and robust
code to work with *blocking* streams of types prone to short reads, like
serial interfaces and sockets. Particular object types may select "exact"
vs "once" types of methods depending on their needs. E.g. for sockets,
recv() and send() methods continue to be "once", while read() and write()
are thus converted to "exact" versions.
These changes don't affect non-blocking handling, e.g. trying an "exact"
method on a non-blocking socket will return as much data as is available
without blocking. No data available continues to be signaled by a None
return value from read() and write().
From the point of view of CPython compatibility, this model is a cross
between its io.RawIOBase and io.BufferedIOBase abstract classes. For
blocking streams, it works as the io.BufferedIOBase model (guaranteeing
the lack of short reads/writes), while for non-blocking streams it works
as io.RawIOBase, returning None in case of lack of data (instead of
raising an expensive exception, as required by io.BufferedIOBase). Such a
cross-behaviour should be optimal for MicroPython's needs.
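An illustrative sketch with a blocking TCP socket (the host is arbitrary):

import usocket
ai = usocket.getaddrinfo("micropython.org", 80)
s = usocket.socket()
s.connect(ai[0][-1])
s.write(b"GET / HTTP/1.0\r\nHost: micropython.org\r\n\r\n")
chunk = s.recv(64)   # "once": may return fewer than 64 bytes
rest = s.read(64)    # "exact": returns 64 bytes unless EOF/error comes first
s.close()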
Calling it from the lwIP accept callback will lead to incorrect functioning
and/or packet leaks if the Python callback makes any networking calls, due
to lwIP's non-reentrancy. So, instead schedule the "poll" callback to do
that, which will be called by lwIP when it does not perform networking
activities. The "poll" callback is called infrequently though (docs say
every 0.5s by default), so for better performance lwIP needs to be
patched to call the poll callback soon after the accept callback, once
the current packet has been processed.
While just a websocket is enough for handling the terminal part of WebREPL,
handling file transfer operations requires demultiplexing and acting upon
the data, which is encapsulated in the _webrepl class provided by this
module, which wraps a websocket object.
To use: .setsockopt(SOL_SOCKET, 20, lambda sock: print(sock)). There's a
single underlying callback slot. For normal sockets, it serves as a
data-received callback; for listening sockets, as a connection-arrived
callback.
The idea is that if the dupterm object can handle exceptions, it will handle
them itself. Otherwise, the object state can be compromised and it's better
to terminate the dupterm session. For example, a disconnected socket will
keep throwing exceptions and dumping messages about that.
When lwIP creates an incoming connection socket for a listening socket, it
sets its recv callback to one which discards incoming data. We set the
proper callback only in the accept() call, when we allocate the Python-level
socket where we can queue incoming data. So, in the lwIP accept callback,
be sure to set the recv callback to one which tells lwIP not to discard
incoming data.
This is a strange asymmetry which is sometimes needed, e.g. for WebREPL: we
want to process only the available input and no more; but for output, we
want to get rid of all of it, because there's no other place to buffer/store
it. This asymmetry is akin to CPython's asyncio asymmetry, where reads are
asynchronous but writes are synchronous (asyncio doesn't expect them to
block, instead expecting there to be (unlimited) buffering for any sync
write to complete immediately).
Per POSIX http://pubs.opengroup.org/onlinepubs/9699919799/functions/send.html :
"If space is not available at the sending socket to hold the message to be
transmitted, and the socket file descriptor does not have O_NONBLOCK set,
send() shall block until space is available. If space is not available at the
sending socket to hold the message to be transmitted, and the socket file
descriptor does have O_NONBLOCK set, send() shall fail [with EAGAIN]."
The code is based on Damien George's implementation for esp8266 port,
avoids use of global variables and associated re-entrancy issues, and
fixes returning stale data in some cases.
It can happen that a socket gets closed while the pbuf is not completely
drained by the application. It can also happen that a new pbuf comes in
via the recv callback, and then a "peer closed" event comes via the same
callback (pbuf=NULL) before the previous event has been handled. In both
cases the socket is closed but there is remaining data. This patch makes
sure such data is passed to the application.
This implements an OO interface based on the existing fsusermount code,
with minimal changes to it, to serve as a proof of concept of the OO
interface. Example of usage:
bdev = RAMFS(48)
uos.VfsFat.mkfs(bdev)
vfs = uos.VfsFat(bdev, "/ramdisk")
f = vfs.open("foo", "w")
f.write("hello!")
f.close()
This patch adds support to fsusermount for multiple block devices
(instead of just one). The maximum allowed is fixed at compile time by
the size of the fs_user_mount array accessed via MP_STATE_PORT, which
in turn is set by MICROPY_FATFS_VOLUMES.
With this patch, stmhal (which is still tightly coupled to fsusermount)
is also modified to support mounting multiple devices. The flash and
SD card are now just two block devices that are mounted at start-up if
they exist (and they have special native code to make them more
efficient).
The new block protocol is:
- readblocks(self, n, buf)
- writeblocks(self, n, buf)
- ioctl(self, cmd, arg)
The new ioctl method handles the old sync and count methods, as well as
a new "get sector size" method.
The old protocol is still supported, and used if the device doesn't have
the ioctl method.
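A minimal sketch of a block device implementing the new protocol (the
ioctl command numbers used here - 4 for block count, 5 for block size -
are assumptions based on the current docs):

class RAMBlockDev:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.data = bytearray(block_size * num_blocks)

    def readblocks(self, n, buf):
        start = n * self.block_size
        buf[:] = self.data[start:start + len(buf)]

    def writeblocks(self, n, buf):
        start = n * self.block_size
        self.data[start:start + len(buf)] = buf

    def ioctl(self, cmd, arg):
        if cmd == 4:    # get number of blocks
            return len(self.data) // self.block_size
        if cmd == 5:    # get block (sector) size
            return self.block_size
        # other commands (e.g. sync) need no action here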
Per the previously discussed plan. mount() still stays backward-compatible,
and the new mkfs() is rough and takes more args than needed, but this is a
step in the right direction.
Functions added are:
- randint
- randrange
- choice
- random
- uniform
They are enabled with the configuration variable
MICROPY_PY_URANDOM_EXTRA_FUNCS, which is disabled by default. It is
enabled for the unix coverage build and stmhal.
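Example usage (values differ per run/seed):

import urandom
print(urandom.randint(1, 6))          # inclusive range, like CPython
print(urandom.randrange(0, 100, 5))   # multiples of 5 below 100
print(urandom.choice([b"red", b"green", b"blue"]))
print(urandom.random())               # float in [0, 1)
print(urandom.uniform(1.5, 2.5))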
SHA1 is used in a number of protocols and algorithms that originated 5
years ago or so; in other words, it's in "wide use", and only newer
protocols use SHA2.
The implementation depends on axTLS being enabled. TODO: Make a separate config
option specifically for sha1().
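Assuming it is exposed through uhashlib in the same way as the existing
sha256, usage would look like:

import uhashlib, ubinascii
h = uhashlib.sha1(b"abc")
h.update(b"def")
print(ubinascii.hexlify(h.digest()))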
Seedable and reproducible pseudo-random number generator. Implemented
functions are getrandbits(n) (n <= 32) and seed().
The algorithm used is Yasmarang by Ilya Levin:
http://www.literatecode.com/yasmarang
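Example (module name per the MICROPY_PY_URANDOM option):

import urandom
urandom.seed(42)
a = urandom.getrandbits(32)
urandom.seed(42)
assert urandom.getrandbits(32) == a   # same seed gives the same sequence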
The first argument to the type.make_new method is naturally a uPy type,
and all uses of this argument cast it directly to a pointer to a type
structure. So it makes sense to just have it a pointer to a type from
the very beginning (and a const pointer at that). This patch makes
such a change, and removes all unnecessary casting to/from mp_obj_t.
This patch changes the type signature of .make_new and .call object method
slots to use size_t for n_args and n_kw (was mp_uint_t). This makes code more
efficient when mp_uint_t is larger than a machine word. Doesn't affect
ports when size_t and mp_uint_t have the same size.
Everyone loves to name similar things the same, and then there are conflicts
between different libraries. The namespace prefix used is "CRYAL_", which
is weird, and that's good, as it minimizes the chance of another conflict.
This basically introduces the MICROPY_MACHINE_MEM_GET_READ_ADDR
and MICROPY_MACHINE_MEM_GET_WRITE_ADDR macros. If one of them is
not defined, then a default identity function is provided.
Previously, sizeof() blindly assumed LAYOUT_NATIVE and tried to align the
size even for packed LAYOUT_LITTLE_ENDIAN & LAYOUT_BIG_ENDIAN. As sizeof()
is implemented on a structure descriptor dictionary (not a structure
object), resolving this required passing the layout type around.
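An illustrative consequence at the Python level (assuming sizeof() accepts
a layout argument for descriptor dictionaries, as in the current uctypes
docs; the descriptor is made deliberately unaligned):

import uctypes
desc = {
    "flag": uctypes.UINT8 | 0,
    "count": uctypes.UINT32 | 1,
}
print(uctypes.sizeof(desc, uctypes.LITTLE_ENDIAN))  # 5: packed, no padding
print(uctypes.sizeof(desc, uctypes.NATIVE))         # typically 8: size gets aligned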
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
Contains an implementation of ?: (non-capturing groups) and ?? (non-greedy ?),
as well as much improved robustness, edge-case and error handling, by
Amir Plivatsky (@ampli).
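For example:

import ure
m = ure.match(r"(?:ab)+", "ababab")
print(m.group(0))   # 'ababab' -- '(?:...)' groups without capturing
m = ure.match(r"ab??", "ab")
print(m.group(0))   # 'a' -- '??' is non-greedy, matches as little as possible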
These MPHAL functions are intended to replace previously used HAL_Delay(),
HAL_GetTick() to provide better naming and MPHAL separation (they are
fully equivalent otherwise).
Also, refactor extmod/modlwip to use them.
This requires root access. And on recent Linux kernels, with the
CONFIG_STRICT_DEVMEM option enabled, only address ranges listed in
/proc/iomem can be accessed. The above compile-time option can however be
overridden with the boot-time option "iomem=relaxed".
This also removed the separate read/write paths - there is unlikely to be
a case where they're different.
Now the address comes first, and args related to the struct type are grouped
next. Besides clearer grouping, this should help catch errors eagerly (e.g.
forgetting to pass the address will error out).
Also, improve argument number checking/reporting overall.
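With the new order, instantiating a struct looks like this (the descriptor
is illustrative):

import uctypes
desc = {"ctrl": uctypes.UINT32 | 0, "status": uctypes.UINT32 | 4}
buf = bytearray(8)
# address first, then the args describing the struct type (descriptor, layout)
regs = uctypes.struct(uctypes.addressof(buf), desc, uctypes.LITTLE_ENDIAN)
regs.ctrl = 1
print(regs.status)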
mp_obj_get_int_truncated will raise a TypeError if the argument is not
an integral type. Use mp_obj_int_get_truncated only when you know the
argument is a small or big int.
Previous to this patch the printing mechanism was a bit of a tangled
mess. This patch attempts to consolidate printing into one interface.
All (non-debug) printing now uses the mp_print* family of functions,
mainly mp_printf. All these functions take an mp_print_t structure as
their first argument, and this structure defines the printing backend
through the "print_strn" function of said structure.
Printing from the uPy core can reach the platform-defined print code via
two paths: either through mp_sys_stdout_obj (defined per port) in
conjunction with mp_stream_write; or through the mp_plat_print structure
which uses the MP_PLAT_PRINT_STRN macro to define how strings are printed
on the platform. The former is only used when MICROPY_PY_IO is defined.
With this new scheme printing is generally more efficient (less layers
to go through, less arguments to pass), and, given an mp_print_t*
structure, one can call mp_print_str for efficiency instead of
mp_printf("%s", ...). Code size is also reduced by around 200 bytes on
Thumb2 archs.
This simplifies the API for objects and reduces code size (by around 400
bytes on Thumb2, and around 2k on x86). The performance impact was measured
with the Pystone score, but the change was barely noticeable.
Previous to this patch, a big-int, float or imag constant was interned
(made into a qstr) and then parsed at runtime to create an object each
time it was needed. This is wasteful in RAM and not efficient. Now,
these constants are parsed straight away in the parser and turned into
objects. This allows constants with large numbers of digits (so
addresses issue #1103) and takes us a step closer to #722.
This cleans up vstr so that it's a pure "variable buffer", and the user
can decide whether they need to add a terminating null byte. In most
places where vstr is used, the vstr did not need to be null terminated
and so this patch saves code size, a tiny bit of RAM, and makes vstr
usage more efficient. When null termination is needed it must be
done explicitly using vstr_null_terminate.
With this patch str/bytes construction is streamlined. Always use a
vstr to build a str/bytes object. If the size is known beforehand then
use vstr_init_len to allocate only required memory. Otherwise use
vstr_init and the vstr will grow as needed. Then use
mp_obj_new_str_from_vstr to create a str/bytes object using the vstr
memory.
Saves code ROM: 68 bytes on stmhal, 108 bytes on bare-arm, and 336 bytes
on unix x64.
mp_obj_int_get_truncated is used as a "fast path" int accessor that
doesn't check for overflow and returns the int truncated to the machine
word size, ie mp_int_t.
Use mp_obj_int_get_truncated to fix struct.pack when packing maximum word
sized values.
Addresses issues #779 and #998.
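For example, packing the maximum word-sized value now works on 32-bit
builds (ustruct is MicroPython's struct module):

import ustruct
print(ustruct.pack("<I", 0xffffffff))   # b'\xff\xff\xff\xff'
print(ustruct.pack("<i", -1))           # same bytes, signed form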
Before, sizeof() could be applied to a structure field only if that field
was itself a structure. Now it can be applied to PTR and ARRAY fields too.
It's not possible to apply it to scalar fields though, because as soon as a
scalar field (int or float) is dereferenced, its value is converted into a
Python int/float value, and all original type info is lost. Moreover, we
allow sizeof of type definitions too, and there an int is used to represent
(scalar) types. So we have an ambiguity about what an int may be - either a
dereferenced scalar structure field, or an encoded scalar type. So, rather,
throw an error if the user tries to apply sizeof() to an int.
Teensy doesn't need to worry about overflows since all of
its timers are only 16-bit.
For PWM, the pulse width needs to be able to vary from 0..period+1
(pulse-width == period+1 corresponds to 100% PWM)
I couldn't test the 0xffffffff cases since we can't currently get a
period that big in Python. With a prescaler of 0, that corresponds
to a freq of 0.039 Hz (i.e. a cycle every 25.56 seconds), and we can't
set that using freq or period.
I also tested both stmhal and teensy with floats disabled, which
required a few other code changes to compile.