Commit Graph

9 Commits

Author SHA1 Message Date
Dan Halbert
f2ebe6839c Initial MicroPython v1.21.0 merge; not compiled yet 2023-10-18 17:49:14 -04:00
Dan Halbert
2c0fa0f7dc initial merge from v1.20.0; just satisfying conflicts 2023-09-19 11:10:12 -04:00
Jim Mussared
4216bc7d13 tests: Replace umodule with module everywhere.
This work was funded through GitHub Sponsors.

Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
2023-06-08 17:54:24 +10:00
Scott Shawcroft
bdf592089a
Fix .bin, .hex and .uf2 with new linker sections
Also, format perfbench output in a table with reference timing from
the host.
2023-03-20 14:02:57 -07:00
Scott Shawcroft
5bb8a7a7c6
Improve iMX RT performance
* Enable dcache for OCRAM where the VM heap lives.
* Add CIRCUITPY_SWO_TRACE for pushing program counters out over the
  SWO pin via the ITM module in the CPU. Exempt some functions from
  instrumentation to reduce traffic and allow inlining.
* Place more functions in ITCM so errors can be handled from RAM-only
  code and to speed up CP.
* Use SET and CLEAR registers for digitalio instead of the SDK's read,
  mask and write sequence (see the sketch after this entry).
* Switch to 2MiB reserved for CircuitPython code. Up from 1MiB.
* Run USB interrupts during flash erase and write.
* Allow storage writes from CP if the USB drive is disabled.
* Get perf bench tests running on CircuitPython and increase timeouts
  so it works when instrumentation is active.
2023-03-14 12:30:58 -07:00
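
As context for the SET and CLEAR bullet above, here is a small Python
model of the difference; the class and method names are illustrative,
not actual iMX RT SDK or CircuitPython symbols:

    class GpioPort:
        """Toy model of a GPIO data register with a SET alias."""

        def __init__(self):
            self.dr = 0  # data register: one bit per pin

        def set_pin_rmw(self, pin):
            # SDK style: read, mask, write back. Three bus accesses, and
            # an interrupt between the read and the write can clobber a
            # concurrent update to another pin.
            value = self.dr
            value |= 1 << pin
            self.dr = value

        def set_pin_via_set_register(self, pin):
            # SET-register style: a single write. The hardware ORs the
            # mask into the data register atomically, so there is no read
            # and no race window.
            self._write_dr_set(1 << pin)

        def _write_dr_set(self, mask):
            self.dr |= mask  # models the hardware effect of a SET write
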
Angus Gratton
e024a4c59c tests: Fix run-perfbench parsing "no matching params" case.
Signed-off-by: Angus Gratton <gus@projectgus.com>
2022-06-28 14:22:06 +10:00
Scott Shawcroft
b35fa44c8a
Merge MicroPython 1.12 into CircuitPython 2021-05-03 14:01:18 -07:00
David Lechner
3dc324d3f1 tests: Format all Python code with black, except tests in basics subdir.
This adds the Python files in the tests/ directory to be formatted with
./tools/codeformat.py.  The basics/ subdirectory is excluded for now so we
aren't changing too much at once.

In a few places `# fmt: off`/`# fmt: on` was used where the code had
special formatting for readability or where the test was actually testing
the specific formatting.
2020-03-30 13:21:58 +11:00
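
For reference, the # fmt: off and # fmt: on pragmas bracket a region
that black leaves untouched, which is what preserves deliberate layouts
like this illustrative one:

    # fmt: off
    identity = [
        1, 0,
        0, 1,
    ]
    # fmt: on
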
Damien George
e92c9aa9c9 tests: Add performance benchmarking test-suite framework.
This benchmarking test suite is intended to be run on any MicroPython
target.  As such all tests are parameterised with N and M: N is the
approximate CPU frequency (in MHz) of the target and M is the approximate
amount of heap memory (in kbytes) available on the target.  When running
the benchmark suite these parameters must be specified and then each test
is tuned to run on that target in a reasonable time (<1 second).

The test scripts are not standalone: they require adding some extra code at
the end to run the test with the appropriate parameters.  This is done
automatically by the run-perfbench.py script, in such a way that imports
are minimised (so the tests can be run on targets without filesystem
support).

To interface with the benchmarking framework, each test provides a
bm_params dict and a bm_setup function, with the latter taking a set of
parameters (chosen based on N, M) and returning a pair of functions, one to
run the test and one to get the results.
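
As an illustration, a minimal sketch of the shape such a test takes; the
parameter values and the exact result() convention here are assumptions
based on the description above, not copied from a real tests/perf_bench
file:

    # Assumed: keys of bm_params are (N, M) target classes, values are
    # the tuning parameters chosen for that class.
    bm_params = {
        (50, 25): (1000,),       # small target: few iterations
        (1000, 1000): (100000,), # PC-class target: many iterations
    }

    def bm_setup(params):
        (nloop,) = params
        state = {"out": 0}

        def run():
            # The workload that actually gets timed.
            acc = 0
            for i in range(nloop):
                acc += i * i
            state["out"] = acc

        def result():
            # Assumed convention: (normalisation factor, output to
            # compare against the CPython "truth" value).
            return nloop, state["out"]

        return run, result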

When a test runs, the number of microseconds it takes is recorded.  This
is then converted into a benchmark score by inverting it (so a higher
number is faster) and normalising it with an appropriate factor
(based roughly on the amount of work done by the test, eg number of
iterations).
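
A worked example of that scoring; the microseconds-to-seconds scale
factor is an assumption about the units, and the real script may
normalise differently:

    def benchmark_score(elapsed_us, norm):
        # Invert the elapsed time so faster runs score higher, then
        # scale by the normalisation factor (roughly, the work done).
        return norm * 1e6 / elapsed_us

    # 100000 iterations completed in 250000 us -> score of 400000.0,
    # i.e. "units of work per second".
    print(benchmark_score(250000, 100000))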

Test outputs are also compared against a "truth" value, computed by running
the test with CPython.  This provides a basic way of making sure the test
actually ran correctly.

Each test is run multiple times; the results are averaged and the
standard deviation computed.  This is output as a summary of the test.

To make comparisons of performance across different runs the
run-perfbench.py script also includes a diff mode that reads in the output
of two previous runs and computes the difference in performance.  Reports
are given as a percentage change in performance, with a combined standard
deviation indicating whether the benchmarking noise is smaller than the
effect being measured.
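
A hedged sketch of the arithmetic this description implies, as an
assumption about the method rather than run-perfbench.py's actual code:

    import math

    def percent_diff(avg_a, sd_a, avg_b, sd_b):
        # Percentage change from run A to run B, with the two standard
        # deviations combined in quadrature on the same percent scale.
        change = (avg_b - avg_a) / avg_a * 100
        combined_sd = math.sqrt(sd_a ** 2 + sd_b ** 2) / avg_a * 100
        return change, combined_sd

    change, sd = percent_diff(200000, 4000, 210000, 5000)
    print("%+.1f%% +/- %.1f%%" % (change, sd))  # +5.0% +/- 3.2%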

Example invocations for PC, pyboard and esp8266 targets respectively:

    $ ./run-perfbench.py 1000 1000
    $ ./run-perfbench.py --pyboard 100 100
    $ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25
2019-06-28 16:29:23 +10:00