I instrumented RTC_Handler and determined that on SAMD51 the interrupt could
be delivered well before the actual overflow of the RTC COUNT register
(e.g., a COUNT value as low as 0xffff_fffd could be observed at the time of
the overflow interrupt).
Rather than depending on the overflow interrupt arriving at exactly the
moment COUNT overflows, rely only on observed values of COUNT in
_get_count, counting an overflow whenever it wraps around from a high value
to a low one.
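A minimal sketch of that approach, written in Python for clarity (the real logic is C in the port's _get_count; the 32-bit COUNT width is the only assumption):
```python
# Illustrative pseudocode only, not the actual port code.
overflow_ticks = 0   # accumulated full wraps, in units of 2**32 counts
old_count = 0        # most recently observed COUNT value

def get_ticks(count):
    """Fold a raw 32-bit COUNT reading into a monotonically increasing value.

    An overflow is recognized only by seeing COUNT wrap from a high value to
    a lower one, never by trusting the timing of the overflow interrupt.
    """
    global overflow_ticks, old_count
    if count < old_count:
        overflow_ticks += 1 << 32
    old_count = count
    return overflow_ticks + count
```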
With this change, PLUS a second change that makes it possible to warp the
RTC counter close to an overflow and test in 20 ms instead of 3 days,
there was no problem detected over 20000+ overflows. Before, a substantial
fraction (much greater than 10%) of overflows failed.
Fixes #5985
Change to common-hal/rtc/RTC.c for time warping (plus make rtc_old_count non-static):
```patch
void common_hal_rtc_set_calibration(int calibration) {
+
+    common_hal_mcu_disable_interrupts();
+
+    RTC->MODE0.COUNT.reg = 0xffffff00;
+    rtc_old_count = 0;
+    do {
+        while ((RTC->MODE0.SYNCBUSY.reg & (RTC_MODE0_SYNCBUSY_COUNTSYNC | RTC_MODE0_SYNCBUSY_COUNT)) != 0) { }
+    }
+    while (RTC->MODE0.COUNT.reg < 0xffffff00);
+    common_hal_mcu_enable_interrupts();
+
+    mp_printf(&mp_plat_print, "Warping RTC in calibration setter count=%08x rtc_old_count=%08x\n", RTC->MODE0.COUNT.reg, rtc_old_count);
```
Test program:
```python
import time
from rtc import RTC
i = 0
while True:
    RTC().calibration = 1  # Warps to ~16ms before overflow, with patch to RTC code
    t0 = time.monotonic_ns()
    et = t0 + 20_000_000  # 20ms
    while (t1 := time.monotonic_ns()) < et: pass
    i += 1
    print(f"{i:6d}: duration {t1-t0}")
    if t1-t0 > 200_000_000: break
print()
```
To me, it made more sense to track which boards go together in a cluster;
when reviewing a request to actually use a duplicate vid/pid, you want
to know what board(s) it is aliasing.
I also revamped the detection of non-USB boards so that a board .mk file
that can't be parsed by the code here raises an error instead of just being
skipped for the purposes of checking.
There were some lines with comments on the end, and some variation in the
capitalization of the IDs. These are all normalized now, and a (sometimes
unfriendly!) error is printed when a value is incorrect.
Before this, here were some ways to trick the duplicate vid/pid checker:
```
USB_PID = 0XABCD
USB_PID = 0xAbCd
USB_PID = 0xABCD # harmless comment?
```
None of these things were ever done on purpose.
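For illustration, the normalization amounts to something like this (hypothetical Python, not the actual checker script):
```python
import re

def normalize_usb_id(line):
    """Return (name, canonical_value) for a `USB_VID = ...` / `USB_PID = ...` line.

    Hex capitalization is normalized so 0XABCD, 0xAbCd and 0xABCD all compare
    equal; anything else on the line (such as a trailing comment) is an error
    instead of silently defeating the duplicate check.
    """
    match = re.fullmatch(r"\s*(USB_VID|USB_PID)\s*=\s*0[xX]([0-9A-Fa-f]{1,4})\s*", line)
    if match is None:
        raise SystemExit(f"Invalid USB VID/PID line: {line!r}")
    name, value = match.groups()
    return name, "0x{:04X}".format(int(value, 16))
```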
Removes:
* AUTORESET_DELAY_MS which never did anything but was introduced
somehow.
* CIRCUITPY_BOOT_BUTTON in all but one ESP board because they all have
them. There is a default based on the strapping pins.
* BOARD_USER_SAFE_MODE_ACTION because it was all the same for boards
with boot buttons. Now the safe mode code manages the message.
- Add reset for autoreload. De-request ticks.
- Separate state a little more in autoreload.c
- Rename some routines.
- Remove redundant settings of CIRCUITPY_AUTORELOAD_DELAY_MS.
This allows you to list and explore connected USB devices. It only stubs
out the methods that communicate with endpoints; those will come in a
follow-up once TinyUSB supports them. (That work is in progress.)
Related to #5986
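A sketch of the kind of enumeration this enables, assuming the PyUSB-style usb.core API (endpoint I/O is stubbed out for now, so only descriptor fields are useful):
```python
import usb.core

# List every connected device by VID:PID.
for device in usb.core.find(find_all=True):
    print(f"{device.idVendor:04x}:{device.idProduct:04x}")
```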
There may be several reasons why we might want to remove the logo from
the REPL: a fork of CircuitPython that doesn't have the right to use the
logo, an especially small display that needs all the room it has to be
useful, displays that are especially vulnerable to burn-in, or maybe even
the smaller chips where we want to save as much flash memory as
possible.
This tweaks the RMT timing to better match the guideline of using 1/3 and
2/3 of the 800 kHz bit period. It also ensures a delay of 300 microseconds
with the line low before reset.
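For reference, the numbers behind that guideline work out as follows (illustrative arithmetic only, not code from the change):
```python
# WS2812-style timing targets derived from the 800 kHz guideline.
bit_period_ns = 1e9 / 800_000     # 1250 ns per bit
t0h_ns = bit_period_ns * 1 / 3    # ~417 ns high time for a "0" bit
t1h_ns = bit_period_ns * 2 / 3    # ~833 ns high time for a "1" bit
reset_low_us = 300                # line held low at least this long before reset
print(t0h_ns, t1h_ns, reset_low_us)
```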
Pin reset is now changed to the IDF default which pulls the pin up
rather than CircuitPython's old behavior of floating the pin.
Fixes #5679
Initially enabled for samd51, this enables reading raw flux data as well
as DOS/MFM formatted media.
This is only the low-level code for reading & decoding flux pulses from a
floppy drive; high-level details will live in a Python library,
adafruit-circuitpython-floppy, which will take care of details like stepping
from track to track, etc.
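A hedged sketch of what low-level flux capture might look like from Python (the pin names are placeholders, and the exact floppyio signatures and return values are assumptions here):
```python
import board
import digitalio
import floppyio

# Placeholder wiring: the read-data and index-pulse lines from the drive.
data = digitalio.DigitalInOut(board.D9)
index = digitalio.DigitalInOut(board.D10)

# Capture raw flux transitions from whatever track the head is currently on;
# seeking, motor control, etc. are left to the Python-level library.
flux = bytearray(100_000)
captured = floppyio.flux_readinto(flux, data, index)
print("captured", captured, "bytes of flux data")
```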
The port is free to return NULL for any or all of these, and the caller has
to check.
This will be used in the floppy code because, aside from getting at the
registers, the rest looks to be independent of the MCU.
This targets the 64-bit Raspberry Pis: the BCM2711 on the Pi 4 and the
BCM2837 on the Pi 3 and Zero 2W. There are 64-bit fixes outside of the
ports directory for it.
There are a couple other cleanups that were incidental:
* Use const mcu_pin_obj_t instead of omitting the const. The structs
themselves are const because they are in ROM.
* Use PTR <-> OBJ conversions in more places. They were found when
mp_obj_t was set to an integer type rather than pointer.
* Optimize submodule checkout because the Pi submodules are heavy
and unnecessary for the vast majority of builds.
Fixes #4314
It's intended that the actual timeout always be at least the requested
timeout. However, due to multiplying by the wrong factor to get from
seconds to cycles, a timeout request of, e.g., 8.1 s (which is less than
8.192 s) would give an actual timeout of 8 s, not 16 s as it should.
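A hypothetical sketch of the intended rounding, assuming a 1.024 kHz WDT clock and power-of-two periods from 8 to 16384 cycles (as on SAMD5x):
```python
WDT_CLOCK_HZ = 1024
PERIOD_CYCLES = [8 << n for n in range(12)]  # 8, 16, ..., 16384 cycles

def actual_timeout(requested_s):
    """Pick the shortest available period that is at least the requested timeout."""
    cycles_needed = requested_s * WDT_CLOCK_HZ  # the bug used the wrong factor here
    for cycles in PERIOD_CYCLES:
        if cycles >= cycles_needed:
            return cycles / WDT_CLOCK_HZ
    raise ValueError("requested timeout too long")

# 8.1 s needs ~8294 cycles, which doesn't fit the 8192-cycle (8 s) period,
# so the 16384-cycle (16 s) period is correctly chosen instead.
print(actual_timeout(8.1))  # 16.0
```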
By having a pair of buffers, the capture hardware can fill one buffer while
Python code (including displayio, etc) operates on the other buffer. This
increases the responsiveness of camera-using code.
On the Kaluga it makes the following improvements:
* 320x240 viewfinder at 30fps instead of 15fps using directio
* 240x240 animated gif capture at 10fps instead of 7.5fps
As discussed at length on Discord, the "usual end user" code will look like
this:
camera = ...
with camera.continuous_capture(buffer1, buffer2) as capture:
    for frame in capture:
        # Do something with frame
However, rather than presenting a context manager, the core code consists of
three new functions to start & stop continuous capture, and to get the next
frame. The reason is twofold. First, it's simply easier to implement the
context manager object in pure Python. Second, for more advanced usage, the
context manager may be too limiting, and it's easier to iterate on the right
design in Python code. In particular, I noticed that adapting the
JPEG-capturing programs to use continuous capture mode needed a change in
program structure.
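As an illustration (not the actual library code), a pure-Python wrapper over those three functions could look roughly like this, assuming core method names along the lines of continuous_capture_start, continuous_capture_get_frame, and continuous_capture_stop:
```python
class ContinuousCapture:
    """Hypothetical pure-Python context manager over the three core functions."""

    def __init__(self, camera, buffer1, buffer2):
        self._camera = camera
        self._buffers = (buffer1, buffer2)

    def __enter__(self):
        # The hardware fills one buffer while Python code works on the other.
        self._camera.continuous_capture_start(*self._buffers)
        return self

    def __iter__(self):
        return self

    def __next__(self):
        # Returns whichever of the two buffers was most recently completed.
        return self._camera.continuous_capture_get_frame()

    def __exit__(self, exc_type, exc_value, traceback):
        self._camera.continuous_capture_stop()
```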
The camera app was structured as
```python
while True:
    if shutter button was just pressed:
        capture a jpeg frame
    else:
        update the viewfinder
```
However, "capture a jpeg frame" needs to (A) switch the camera settings and (B)
capture into a different, larger buffer then (C) return to the earlier
settings. This can't be done during continuous capture mode. So just
restructuring it as follows isn't going to work:
```python
with camera.continuous_capture(buffer1, buffer2) as capture:
    for frame in capture:
        if shutter button was just pressed:
            capture a jpeg frame, without disturbing continuous capture mode
        else:
            update the viewfinder
```
The continuous mode is only implemented in the espressif port; other ports
will throw an exception if the associated methods are invoked. It's not
impossible to implement on them, just not a priority, since those micros
don't have enough RAM for two framebuffer copies at any reasonable size.
The capture code, including single-shot capture, now takes an mp_obj_t in
the common-hal layer, instead of a buffer & length. This was done for
continuous capture mode because it has to identify & return to the user the
proper Python object representing the original buffer. In the Espressif
port, it was convenient to implement single capture in terms of
multi-capture, which is why I changed the single-shot routine's signature too.