By having a pair of buffers, the capture hardware can fill one buffer while
Python code (including displayio, etc.) operates on the other. This
increases the responsiveness of camera-using code.
On the Kaluga it makes the following improvements:
* 320x240 viewfinder at 30fps instead of 15fps using directio
* 240x240 animated gif capture at 10fps instead of 7.5fps
As discussed at length on Discord, the "usual end user" code will look like
this:
```python
camera = ...
with camera.continuous_capture(buffer1, buffer2) as capture:
    for frame in capture:
        ...  # Do something with frame
```
However, rather than presenting a context manager, the core code consists of
three new functions to start & stop continuous capture, and to get the next
frame. The reason is twofold. First, it's simply easier to implement the
context manager object in pure Python. Second, for more advanced usage, the
context manager may be too limiting, and it's easier to iterate on the right
design in Python code. In particular, I noticed that adapting the
JPEG-capturing programs to use continuous capture mode needed a change in
program structure.
The camera app was structured as
```python
while True:
    if shutter_just_pressed():    # hypothetical helper: poll the shutter button
        capture_jpeg_frame()      # hypothetical helper: take a still photo
    else:
        update_viewfinder()       # hypothetical helper: refresh the display
```
However, "capture a jpeg frame" needs to (A) switch the camera settings and (B)
capture into a different, larger buffer then (C) return to the earlier
settings. This can't be done during continuous capture mode. So just
restructuring it as follows isn't going to work:
```python
with camera.continuous_capture(buffer1, buffer2) as capture:
    for frame in capture:
        if shutter_just_pressed():
            capture_jpeg_frame()  # would have to run without disturbing continuous capture
        else:
            update_viewfinder(frame)
```
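One way the structure can change (a sketch only, with hypothetical helper names; the discussion above doesn't prescribe a specific fix) is to leave continuous capture before taking the still, then re-enter it:

```python
while True:
    with camera.continuous_capture(buffer1, buffer2) as capture:
        for frame in capture:
            if shutter_just_pressed():   # hypothetical button poll
                break                    # leave continuous mode first
            update_viewfinder(frame)     # hypothetical display update
    # Outside the with-block it's safe to switch settings, capture into
    # a different, larger buffer, and restore the settings; the outer
    # loop then restarts continuous capture.
    capture_jpeg_frame()                 # hypothetical still-capture helper
```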
The continuous mode is only implemented in the espressif port; other ports
raise an exception if the associated methods are invoked. It's not
impossible to implement elsewhere, just not a priority, since those
microcontrollers don't have enough RAM for two framebuffer copies at any
reasonable size.
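Code that should degrade gracefully on other ports can guard for this; a minimal sketch, assuming the unsupported methods raise NotImplementedError (the exact exception type is not stated here):

```python
try:
    with camera.continuous_capture(buffer1, buffer2) as capture:
        for frame in capture:
            update_viewfinder(frame)  # hypothetical display update
except NotImplementedError:
    # Port without continuous capture: fall back to single-shot mode.
    ...
```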
The capture code, including single-shot capture, now takes an mp_obj_t in the
common-hal layer instead of a buffer & length. This was done for
continuous capture mode because it has to identify & return to the user the
proper Python object representing the original buffer. In the Espressif port,
it was convenient to implement single capture in terms of multi-capture,
which is why I changed the single-shot routine's signature too.
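From the Python side, the practical consequence is that continuous capture hands back the very objects that were passed in; a hypothetical illustration (not from the original text):

```python
with camera.continuous_capture(buffer1, buffer2) as capture:
    for frame in capture:
        # `frame` is buffer1 or buffer2 itself, not a copy, because the
        # common-hal layer kept the original Python object (mp_obj_t).
        filled = 1 if frame is buffer1 else 2
        print("hardware just filled buffer", filled)
```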
New design:
* capture output to a vstr
* compare the complete vstr to boot_out.txt
* rewrite if not a complete match
This is resilient against future changes to the automatic
text written to boot_out.txt.
This also fixes rewriting boot_out.txt in the case where
boot.py prints something.
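The real code works at the C level with a vstr; as a rough Python sketch of the same compare-then-rewrite idea (the helper and its name are illustrative):

```python
def write_if_changed(path, new_text):
    """Rewrite `path` only when its contents differ from `new_text`."""
    try:
        with open(path, "r") as f:
            if f.read() == new_text:
                return False  # complete match: leave the filesystem alone
    except OSError:
        pass  # missing or unreadable file: fall through and (re)write it
    with open(path, "w") as f:
        f.write(new_text)
    return True
```

Skipping the rewrite on a complete match also avoids unnecessary flash writes.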
Perhaps it also saves a bit of code space. Some tricks:
* no need to close a file that was opened in read mode
* no need to switch USB write access on and off; writing at the
oofatfs layer doesn't check it anyway
The easiest thing to implement was to use the i/j numbers, but those were not
directly related to the image's x/y coordinates. Using the real pixel
coordinates may slow things down slightly, but the result looks much better.
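The text above doesn't spell out the details, but if the lookup in question is an ordered-dither table (an assumption on my part), the coordinate-based indexing looks roughly like this:

```python
# A standard 4x4 Bayer ordered-dither matrix (illustrative; the actual
# table and its size in the codebase may differ).
BAYER4 = (
    (0, 8, 2, 10),
    (12, 4, 14, 6),
    (3, 11, 1, 9),
    (15, 7, 13, 5),
)

def threshold(x, y):
    # Index by image x/y so the pattern stays anchored to the picture,
    # instead of by i/j loop counters that aren't tied to pixel
    # positions.
    return BAYER4[y % 4][x % 4]
```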
* Increase colorspace conversion efficiency.
This not only avoids a function call, it avoids the time-consuming
switch statement in convert_pixel (replacing it with a single
conditional on the byteswap flag, plus accounting for BGR/RGB during
palette creation).
* Buffer all the bytes of a single frame together. Reducing the number
of low-level write calls gives a decent speed increase even though
it increases data-shuffling a bit (see the sketch after this list).
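A minimal Python sketch of the frame-batching idea (the helper names are hypothetical; the real code is C):

```python
def write_frame(output, rows):
    # Collect every encoded row of one frame, then write once.
    frame = bytearray()
    for row in rows:
        frame += encode_row(row)  # hypothetical per-row encoder
    # One low-level write per frame instead of one per row: the extra
    # data shuffling is cheaper than many small writes.
    output.write(frame)
```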
Together with some other changes that enable "double buffered" camera
capture, this gets me to 8.8fps capturing QVGA (320x240) gifs and
11fps capturing 240x240 square gifs.