The new function factory_reset_make_files() populates the given filesystem
with the default factory files. It is defined with weak linkage so it can
be overridden by a board.
This commit also brings some minor user-facing changes:
- boot.py is no longer created unconditionally if it doesn't exist; it is
only created when the filesystem is formatted and the other files are
populated (so, previously, if the user deleted boot.py it would be
recreated at the next boot; now it won't be).
- pybcdc.inf and README.txt are only created if the board has USB, because
they only really make sense if the filesystem is exposed via USB.
It's more common to need non-blocking behaviour when reading from a UART,
rather than a large timeout like 1000ms (the original behaviour). With a
large timeout it's likely that either 1) the function will read forever if
characters keep trickling in; or 2) the function will wait unnecessarily
when characters come sporadically, eg at a REPL prompt.
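As a rough usage sketch (assuming a port whose machine.UART takes a
"timeout" keyword in milliseconds), non-blocking reads look like this:

    from machine import UART

    uart = UART(1, 115200, timeout=0)  # timeout=0: return immediately
    data = uart.read(64)               # returns None if nothing is available
    if data is not None:
        print("got:", data)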
Prior to this commit, building the unix port with `DEBUG=1` and
`-finstrument-functions` would fail with an error like "control reaches
end of non-void function". This change fixes that by removing the
problematic "if (0)" branches. Not all of the branches affect compilation,
but they are all removed for consistency.
- IBK-BLYST-NANO: Breakout board
- IDK-BLYST-NANO: DevKit board with built-in IDAP-M CMSIS-DAP debug JTAG and
an RGB LED
- BLUEIO-TAG-EVIM: Sensor tag board (environmental sensor
(T, H, P, air quality) + 9-axis motion sensor)
Also, the LED module has been updated to support individual base-level
configuration of each LED. If set, this is used instead of the common
configuration, MICROPY_HW_LED_PULLUP. The new configuration option,
MICROPY_HW_LEDX_LEVEL (where X is the LED number), can be used to set
the base level of a specific LED.
The alternate function pin allocations are different to other NUCLEO-144
boards. This is because the STM32F413 has a very high peripheral count:
10x UART, 5x SPI, 3x I2C, 3x CAN. The pinout was chosen to expose all
these devices on separate pins, except for CAN3, which shares a pin with
UART1, and SPI1, which shares pins with the DAC.
Includes:
- Support for CAN3.
- Support for UART9 and UART10.
- stm32f413xg.ld and stm32f413xh.ld linker scripts.
- stm32f413_af.csv alternate function mapping.
- startup_stm32f413xx.s because the F413 has a different interrupt vector table.
- Memory configuration with: 240K filesystem, 240K heap, 16K stack.
This patch makes pllvalues.py generate two tables: one for when HSI is used
and one for when HSE is used. The correct table is then selected at
compile time via the existing MICROPY_HW_CLK_USE_HSI.
With this change, @micropython.asm_thumb functions will work on standard
ARM processors (that are in ARM state by default), in scripts and
precompiled .mpy files.
Addresses issue #4675.
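For reference, a minimal inline-assembler function of the kind affected
looks like this (in the style of the inline-assembler docs):

    @micropython.asm_thumb
    def asm_add(r0, r1):
        add(r0, r0, r1)  # result is returned in r0

    print(asm_add(1, 2))  # prints 3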
When building with link time optimization enabled it is possible both
gc_collect() and gc_collect_regs_and_stack() get inlined into gc_alloc()
which can result in the regs variable being pushed on the stack earlier
than some of the registers. Depending on the calling convention, those
registers might however contain pointers to blocks which have just been
allocated in the caller of gc_alloc(). Then those pointers end up higher on
the stack than regs, aren't marked by gc_collect_root() and hence get
swept, even though they're still in use.
As reported in #4652, this happened in 32-bit msvc release builds:
mp_lexer_new() does two consecutive allocations and the latter triggered a
gc_collect() which would sweep the memory of the first allocation again.
Otherwise mp_interrupt_char will have a value of zero on start up (because
it's in the BSS) and a KeyboardInterrupt may be raised during start up.
For example this can occur if there is a UART attached to the REPL which
sends spurious null bytes when the device turns on.
So that boot.py and/or main.py can be frozen (either as STR or MPY) in the
same way that other scripts are frozen. Frozen scripts take precedence over
scripts in the VFS.
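Purely as an illustration, assuming a build that supports manifest-based
freezing (the exact mechanism depends on the port and build system), the
two scripts could be frozen with something like:

    # manifest.py -- hypothetical board manifest
    include("$(PORT_DIR)/boards/manifest.py")
    freeze("$(BOARD_DIR)", ("boot.py", "main.py"))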
It consists of:
1. "do_handhake" param (default True) to wrap_socket(). If it's False,
handshake won't be performed by wrap_socket(), as it would be done in
blocking way normally. Instead, SSL socket can be set to non-blocking mode,
and handshake would be performed before the first read/write request (by
just returning EAGAIN to these requests, while instead reading/writing/
processing handshake over the connection). Unfortunately, axTLS doesn't
really support non-blocking handshake correctly. So, while framework for
this is implemented on MicroPython's module side, in case of axTLS, it
won't work reliably.
2. Implementation of .setblocking() method. It must be called on SSL socket
for blocking vs non-blocking operation to be handled correctly (for
example, it's not enough to wrap non-blocking socket with wrap_socket()
call - resulting SSL socket won't be itself non-blocking). Note that
.setblocking() propagates call to the underlying socket object, as
expected.
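Roughly, the intended usage pattern is as follows (a sketch only; the host
name is illustrative, error handling is omitted, and as noted above the
axTLS backend is not fully reliable here):

    import usocket, ussl

    s = usocket.socket()
    s.connect(usocket.getaddrinfo("example.com", 443)[0][-1])
    ss = ussl.wrap_socket(s, do_handshake=False)  # no blocking handshake here
    ss.setblocking(False)                         # propagates to the underlying socket
    # Until the handshake completes, read()/write() on ss just return None
    # (EAGAIN) while the handshake is progressed internally.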
For this, add a wrap_socket(do_handshake=False) param. CPython doesn't have
such a param on its module-level wrap_socket() function, and at
SSLContext.wrap_socket() it has a do_handshake_on_connect param, but that
name is uselessly long.
Beyond that, make write() handle not just MBEDTLS_ERR_SSL_WANT_WRITE, but
also MBEDTLS_ERR_SSL_WANT_READ, as during the handshake a write call may
actually be preempted by the need to read the next handshake message from
the peer. Likewise for read(). And even after the initial negotiation,
situations like that may happen, e.g. with renegotiation. Both
MBEDTLS_ERR_SSL_WANT_READ and MBEDTLS_ERR_SSL_WANT_WRITE are however mapped
to the same None return code. The idea is that if the same read()/write()
method is called repeatedly, progress will be made step by step anyway.
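For example (a sketch, reusing the non-blocking SSL socket ss from the
snippet above), a caller that simply retries will eventually make progress:

    req = b"GET / HTTP/1.0\r\n\r\n"
    while req:
        n = ss.write(req)  # None means WANT_READ or WANT_WRITE: just retry
        if n is not None:
            req = req[n:]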
The caveat is if the user wants to add the underlying socket to
uselect.poll(). To be reliable, in this case, the socket should be polled
for both POLL_IN and POLL_OUT, as we don't know the actual expected
direction. But that's actually problematic. Consider for example that
write() ends with MBEDTLS_ERR_SSL_WANT_READ, which gets converted to None.
We put the underlying socket on poll using POLL_IN|POLL_OUT, but that
probably returns immediately with POLL_OUT, as the underlying socket is
writable. We call the same ussl write() again, which again results in
MBEDTLS_ERR_SSL_WANT_READ, etc. We thus go into a busy loop.
So, the handling in this patch is temporary and needs fixing. But the exact
way to fix it is not clear. One way is to provide an explicit function for
the handshake (CPython has do_handshake()), and let *that* return distinct
codes like WANT_READ/WANT_WRITE. But as mentioned above, past the initial
handshake such a situation may happen again with at least renegotiation. So
apparently, the only robust solution is to return "out of band" special
sentinels like WANT_READ/WANT_WRITE from read()/write() directly. CPython
throws exceptions for these, but exceptions are too expensive to adopt for
an efficiency-conscious implementation like MicroPython.
machine.WDT() now accepts the "timeout" keyword argument to select the
WDT interval. And the WDT is changed to panic mode, which means it will
reset the device if the interval expires (instead of just printing an error
message).
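Typical usage (a sketch only; the timeout is assumed to be in milliseconds,
check the port's documentation):

    from machine import WDT

    wdt = WDT(timeout=5000)  # device resets if the WDT isn't fed within ~5 s
    wdt.feed()               # call periodically from the application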
On the STM32F722 (at least, but STM32F767 is not affected) the CK48MSEL bit
must be deselected before PLLSAION is turned off, or else the 48MHz
peripherals (RNG, SDMMC, USB) may get stuck without a clock source.
In such "lock up" cases it seems that these peripherals are still being
clocked from the PLLSAI even though the CK48MSEL bit is turned off. A hard
reset does not get them out of this stuck state. Enabling the PLLSAI and
then disabling it does get them out. A test case to see this is:
    import machine, pyb
    for i in range(100):
        machine.freq(122_000000)
        machine.freq(120_000000)
        print(i, [pyb.rng() for _ in range(4)])
On occasion the RNG will just return 0's, but will get fixed again on the
next loop (when PLLSAI is enabled by the change to a SYSCLK of 122MHz).
Fixes issue #4696.
The stm32 and nrf ports already had the behaviour that they would first
check if the script exists before executing it, and this patch makes all
other ports work the same way. This helps when developing apps because
it's hard to tell (when unconditionally trying to execute the scripts) if
the resulting OSError at boot up comes from a missing boot.py or main.py,
or from some other error. And it's not really an error if these scripts
don't exist.
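In Python terms the new behaviour is roughly equivalent to the following
(the real check is done in C during start up; this is just illustrative):

    import uos

    def run_if_exists(filename):
        try:
            uos.stat(filename)
        except OSError:
            return  # script doesn't exist: not an error
        exec(open(filename).read())

    run_if_exists("boot.py")
    run_if_exists("main.py")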
Prior to this patch, when a lot of data was output by a running script
pyboard.py would try to capture all of this output into the "data"
variable, which would gradually slow down pyboard.py to the point where it
would have large CPU and memory usage (on the host) and potentially lose
data.
This patch fixes the problem by not accumulating the data when it is not
needed, which is the case when "data_consumer" is used.
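For example (a sketch of driving pyboard.py with a data_consumer callback;
the device path is illustrative):

    import pyboard

    pyb = pyboard.Pyboard("/dev/ttyACM0")
    pyb.enter_raw_repl()
    # Output is streamed to the callback instead of being accumulated.
    pyb.exec_raw("for i in range(10000): print(i)",
                 data_consumer=lambda data: print(data.decode(), end=""))
    pyb.exit_raw_repl()
    pyb.close()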
This patch makes the DAC driver simpler and removes the need for the ST
HAL. As part of it, new helper functions are added to the DMA driver,
which also use direct register access instead of the ST HAL.
Main changes to the DAC interface are:
- The DAC uPy object is no longer allocated dynamically on the heap;
rather, it's statically allocated and the same object is retrieved for
subsequent uses of pyb.DAC(<id>). This allows the DAC objects to be
accessed without resetting the DAC peripheral. It also means that the DAC
is only reset if it's explicitly passed initialisation parameters, like
"bits" or "buffering".
- The DAC.noise() and DAC.triangle() methods now output a signal which is
full scale (previously it was a fraction of the full output voltage).
- The DAC.write_timed() method is fixed so that it continues in the
background when another peripheral (eg SPI) uses the DMA (previously the
DAC would stop if another peripheral finished with the DMA and shut the
DMA peripheral off completely).
Based on the above, the following backwards incompatibilities are
introduced:
- pyb.DAC(id) will now only reset the DAC the first time it is called,
whereas previously each call to create a DAC object would reset the DAC.
To get the old behaviour pass the bits parameter like: pyb.DAC(id, bits).
- DAC.noise() and DAC.triangle() are now full scale. To get previous
behaviour (to change the amplitude and offset) write to the DAC_CR (MAMP
bits) and DAC_DHR12Rx registers manually.
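In terms of the Python API the new behaviour looks like this (sketch):

    import pyb

    dac = pyb.DAC(1)           # retrieves the DAC object, doesn't reset the peripheral
    dac = pyb.DAC(1, bits=12)  # passing init params (eg bits) forces a reset, as before
    dac.noise(5000)            # noise output is now full scale
    dac.triangle(2000)         # triangle output is now full scale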
In CPython the random module is seeded differently on each import, and so
this new macro option MICROPY_PY_URANDOM_SEED_INIT_FUNC allows such a
behaviour to be implemented.