circuitpython/tests/run-perfbench.py

tests: Add performance benchmarking test-suite framework.

This benchmarking test suite is intended to be run on any MicroPython target. As such, all tests are parameterised with N and M: N is the approximate CPU frequency (in MHz) of the target and M is the approximate amount of heap memory (in kbytes) available on the target. When running the benchmark suite these parameters must be specified, and each test is then tuned to run on that target in a reasonable time (<1 second).

The test scripts are not standalone: they require some extra code added at the end to run the test with the appropriate parameters. This is done automatically by the run-perfbench.py script, in such a way that imports are minimised (so the tests can be run on targets without filesystem support).

To interface with the benchmarking framework, each test provides a bm_params dict and a bm_setup function, with the latter taking a set of parameters (chosen based on N, M) and returning a pair of functions: one to run the test and one to get the results.

When a test is run, the number of microseconds it takes is recorded. This is then converted into a benchmark score by inverting it (so a higher number is faster) and normalising it with an appropriate factor (based roughly on the amount of work done by the test, e.g. the number of iterations). Test outputs are also compared against a "truth" value computed by running the test with CPython, which provides a basic check that the test actually ran correctly.

Each test is run multiple times; the results are averaged and the standard deviation is computed, and this is output as a summary of the test.

To allow comparing performance across different runs, run-perfbench.py also includes a diff mode that reads in the output of two previous runs and computes the difference in performance. Reports are given as a percentage change in performance, with a combined standard deviation to indicate whether the noise in the benchmarking is smaller than the effect being measured.

Example invocations for PC, pyboard and esp8266 targets respectively:

$ ./run-perfbench.py 1000 1000
$ ./run-perfbench.py --pyboard 100 100
$ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25
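To make the bm_params / bm_setup convention concrete, here is a minimal sketch of what a benchmark test file could look like. The file name, parameter values and accumulator logic are illustrative assumptions, not taken from the actual suite; only the overall shape (a bm_params dict keyed by target-size settings, and a bm_setup function returning a run function and a results function) comes from the description above.

# perf_bench/bm_example.py -- hypothetical test, shown for illustration only
bm_params = {
    (50, 25): (5_000,),        # smaller targets (N, M) get less work
    (100, 100): (20_000,),
    (1000, 1000): (200_000,),
}


def bm_setup(params):
    (nloop,) = params
    state = {"total": 0}

    def run():
        # The timed body of the benchmark.
        total = 0
        for i in range(nloop):
            total += i
        state["total"] = total

    def get_result():
        # Normalisation factor (roughly "amount of work done") and an output
        # value that can be checked against the CPython "truth" run.
        return nloop, state["total"]

    return run, get_result

run-perfbench.py then appends driver code that selects the parameter set, times run(), and prints the timing, the normalisation factor and the result so they can be parsed and compared.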
#!/usr/bin/env python3
# This file is part of the MicroPython project, http://micropython.org/
# The MIT License (MIT)
# Copyright (c) 2019 Damien P. George
import os
import subprocess
import sys
import time
import argparse
from glob import glob
from rich.live import Live
from rich.console import Console
from rich.table import Table
sys.path.append("../tools")
import pyboard
# Paths for host executables
if os.name == "nt":
    CPYTHON3 = os.getenv("MICROPY_CPYTHON3", "python3.exe")
    MICROPYTHON = os.getenv("MICROPY_MICROPYTHON", "../ports/windows/micropython.exe")
else:
    CPYTHON3 = os.getenv("MICROPY_CPYTHON3", "python3")
    MICROPYTHON = os.getenv("MICROPY_MICROPYTHON", "../ports/unix/micropython")

PYTHON_TRUTH = CPYTHON3
BENCH_SCRIPT_DIR = "perf_bench/"


def compute_stats(lst):
    avg = 0
    var = 0
    for x in lst:
        avg += x
        var += x * x
    avg /= len(lst)
    var = max(0, var / len(lst) - avg**2)
    return avg, var**0.5
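# Example: compute_stats([10, 12, 11]) returns approximately (11.0, 0.816),
# i.e. the mean and the population standard deviation of the sample timings
# that are reported in each test's summary.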


def run_script_on_target(target, script, run_command=None):
    output = b""
    err = None

    if isinstance(target, pyboard.Pyboard):
        # Run via pyboard interface
        try:
            target.enter_raw_repl()
            start_ts = time.monotonic_ns()
            output = target.exec_(script)
            if run_command:
                start_ts = time.monotonic_ns()
                output = target.exec_(run_command)
            end_ts = time.monotonic_ns()
        except pyboard.PyboardError as er:
            end_ts = time.monotonic_ns()
            err = er
        finally:
            target.exit_raw_repl()
    else:
        # Run local executable
        try:
            if run_command:
                script += run_command
            start_ts = time.monotonic_ns()
            p = subprocess.run(
                target, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, input=script
            )
            end_ts = time.monotonic_ns()
            output = p.stdout
        except subprocess.CalledProcessError as er:
            end_ts = time.monotonic_ns()
            err = er

    return str(output.strip(), "ascii"), err, (end_ts - start_ts) // 1000


def run_feature_test(target, test):
    with open("feature_check/" + test + ".py", "rb") as f:
        script = f.read()
    output, err, _ = run_script_on_target(target, script)
    if err is None:
        return output
    else:
        return "CRASH: %r" % err


def run_benchmark_on_target(target, script, run_command=None):
    output, err, runtime_us = run_script_on_target(target, script, run_command)
    if err is None:
        time, norm, result = output.split(None, 2)
        try:
            return int(time), int(norm), result, runtime_us
        except ValueError:
            return -1, -1, "CRASH: %r" % output, runtime_us
    else:
        return -1, -1, "CRASH: %r" % err, runtime_us
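# Note on scoring: as described in the header text, the caller is expected to
# turn the (time, norm) pair returned above into a score by inverting the time
# and scaling by norm, so a higher score means a faster run (for example,
# something along the lines of score = 1e6 * norm / time; the exact scaling
# factor is an assumption here, not taken from this file).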


def run_benchmarks(console, target, param_n, param_m, n_average, test_list):
    skip_complex = run_feature_test(target, "complex") != "complex"
    skip_native = run_feature_test(target, "native_check") != "native"
    table = Table(show_header=True)
    table.add_column("Test")
    table.add_column("Time", justify="right")
    table.add_column("Score", justify="right")
    table.add_column("Ref Time", justify="right")
    live = Live(table, console=console)
    live.start()
for test_file in sorted(test_list):
# print(test_file + ": ", end="")
# Check if test should be skipped
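        # (bm_fft tests use complex numbers; viper_ tests need the native code emitter)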
skip = (
skip_complex
and test_file.find("bm_fft") != -1
or skip_native
and test_file.find("viper_") != -1
)
if skip:
print("skip")
table.add_row(test_file, *(["skip"] * 6))
continue
# Create test script
with open(test_file, "rb") as f:
test_script = f.read()
with open(BENCH_SCRIPT_DIR + "benchrun.py", "rb") as f:
test_script += f.read()
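        # Build the bm_run(N, M) call that starts the benchmark with the chosen parameters.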
bm_run = b"bm_run(%u, %u)\n" % (param_n, param_m)
# Write full test script if needed
if 0:
with open("%s.full" % test_file, "wb") as f:
f.write(test_script)
# Run MicroPython a given number of times
times = []
runtimes = []
scores = []
error = None
result_out = None
for _ in range(n_average):
self_time, norm, result, runtime_us = run_benchmark_on_target(
target, test_script, bm_run
)
if self_time < 0 or norm < 0:
error = result
break
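            # Every repeat run must produce the same result; a mismatch is a self-consistency failure.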
if result_out is None:
result_out = result
elif result != result_out:
error = "FAIL self"
break
times.append(self_time)
runtimes.append(runtime_us)
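            # Score: normalisation factor per second of test time (self_time is in microseconds), so higher is faster.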
scores.append(1e6 * norm / self_time)
# Check result against truth if needed
if error is None and result_out != "None":
_, _, result_exp, _ = run_benchmark_on_target(PYTHON_TRUTH, test_script, bm_run)
if result_out != result_exp:
error = "FAIL truth"
if error is not None:
print(test_file, error)
if error == "no matching params":
table.add_row(test_file, *([None] * 3))
else:
table.add_row(test_file, *(["error"] * 3))
else:
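            # Summarise each metric as mean ± relative standard deviation (in percent).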
t_avg, t_sd = compute_stats(times)
r_avg, r_sd = compute_stats(runtimes)
s_avg, s_sd = compute_stats(scores)
# print(
# "{:.2f} {:.4f} {:.2f} {:.4f} {:.2f} {:.4f}".format(
# t_avg, 100 * t_sd / t_avg, s_avg, 100 * s_sd / s_avg, r_avg, 100 * r_sd / r_avg
# )
# )
table.add_row(
test_file,
f"{t_avg:.2f}±{100 * t_sd / t_avg:.1f}%",
f"{s_avg:.2f}±{100 * s_sd / s_avg:.1f}%",
f"{r_avg:.2f}±{100 * r_sd / r_avg:.1f}%",
)
if 0:
print(" times: ", times)
print(" scores:", scores)
live.update(table, refresh=True)
live.stop()
def parse_output(filename):
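    # Expected format: a header line whose first two fields are N=<n> and M=<m>, followed by
    # one "name: <numbers>" line per test; skipped and crashed tests are ignored.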
with open(filename) as f:
params = f.readline()
n, m, _ = params.strip().split()
n = int(n.split("=")[1])
m = int(m.split("=")[1])
data = []
for l in f:
if l.find(": ") != -1 and l.find(": skip") == -1 and l.find("CRASH: ") == -1:
name, values = l.strip().split(": ")
values = tuple(float(v) for v in values.split())
data.append((name,) + values)
return n, m, data
def compute_diff(file1, file2, diff_score):
# Parse output data from previous runs
n1, m1, d1 = parse_output(file1)
n2, m2, d2 = parse_output(file2)
# Print header
if diff_score:
print("diff of scores (higher is better)")
else:
print("diff of microsecond times (lower is better)")
if n1 == n2 and m1 == m2:
hdr = "N={} M={}".format(n1, m1)
else:
hdr = "N={} M={} vs N={} M={}".format(n1, m1, n2, m2)
print(
"{:24} {:>10} -> {:>10} {:>10} {:>7}% (error%)".format(
hdr, file1, file2, "diff", "diff"
)
)
# Print entries
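    # Both data sets are sorted by test name, so walk them like a list merge;
    # tests that appear in only one of the two files are skipped.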
while d1 and d2:
if d1[0][0] == d2[0][0]:
# Found entries with matching names
entry1 = d1.pop(0)
entry2 = d2.pop(0)
name = entry1[0].rsplit("/")[-1]
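            # Each entry is (name, time_avg, time_sd%, score_avg, score_sd%), so
            # diff_score (used as 0 or 1) offsets the index to pick the time or
            # score columns.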
av1, sd1 = entry1[1 + 2 * diff_score], entry1[2 + 2 * diff_score]
av2, sd2 = entry2[1 + 2 * diff_score], entry2[2 + 2 * diff_score]
sd1 *= av1 / 100 # convert from percent sd to absolute sd
sd2 *= av2 / 100 # convert from percent sd to absolute sd
av_diff = av2 - av1
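            # Combine the two standard deviations in quadrature (treating the
            # runs as independent) to estimate the noise in the difference.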
sd_diff = (sd1**2 + sd2**2) ** 0.5
percent = 100 * av_diff / av1
percent_sd = 100 * sd_diff / av1
print(
"{:24} {:10.2f} -> {:10.2f} : {:+10.2f} = {:+7.3f}% (+/-{:.2f}%)".format(
name, av1, av2, av_diff, percent, percent_sd
)
)
elif d1[0][0] < d2[0][0]:
d1.pop(0)
else:
d2.pop(0)


def main():
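    """Parse command-line arguments, then either diff two previous runs or
    run the benchmark suite against the selected target."""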
cmd_parser = argparse.ArgumentParser(description="Run benchmarks for MicroPython")
cmd_parser.add_argument(
"-t", "--diff-time", action="store_true", help="diff time outputs from a previous run"
)
cmd_parser.add_argument(
"-s", "--diff-score", action="store_true", help="diff score outputs from a previous run"
)
cmd_parser.add_argument(
"-p", "--pyboard", action="store_true", help="run tests via pyboard.py"
)
cmd_parser.add_argument(
"-d", "--device", default="/dev/ttyACM0", help="the device for pyboard.py"
)
cmd_parser.add_argument("-a", "--average", default="8", help="averaging number")
cmd_parser.add_argument(
"--emit", default="bytecode", help="MicroPython emitter to use (bytecode or native)"
)
cmd_parser.add_argument("N", nargs=1, help="N parameter (approximate target CPU frequency)")
cmd_parser.add_argument("M", nargs=1, help="M parameter (approximate target heap in kbytes)")
cmd_parser.add_argument("files", nargs="*", help="input test files")
args = cmd_parser.parse_args()
if args.diff_time or args.diff_score:
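        # In diff mode the N and M positional arguments are reused as the two
        # output files to compare, e.g. (placeholder file names):
        #   ./run-perfbench.py -s perf-old.txt perf-new.txt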
compute_diff(args.N[0], args.M[0], args.diff_score)
sys.exit(0)
# N, M = 50, 25 # esp8266
# N, M = 100, 100 # pyboard, esp32
# N, M = 1000, 1000 # PC
N = int(args.N[0])
M = int(args.M[0])
n_average = int(args.average)
if args.pyboard:
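        # Remote targets are driven over their raw REPL via pyboard.py.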
target = pyboard.Pyboard(args.device)
target.enter_raw_repl()
else:
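        # Local runs invoke the MICROPYTHON executable directly, selecting the
        # requested emitter (bytecode or native) via the -X emit= option.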
target = [MICROPYTHON, "-X", "emit=" + args.emit]
if len(args.files) == 0:
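        # No test files given explicitly: run every .py script in BENCH_SCRIPT_DIR,
        # skipping benchrun.py (assumed to be the shared runner, not a benchmark).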
tests_skip = ("benchrun.py",)
if M <= 25:
# These scripts are too big to be compiled by the target
tests_skip += ("bm_chaos.py", "bm_hexiom.py", "misc_raytrace.py")
tests = sorted(
BENCH_SCRIPT_DIR + test_file
for test_file in os.listdir(BENCH_SCRIPT_DIR)
if test_file.endswith(".py") and test_file not in tests_skip
)
else:
tests = sorted(args.files)
console = Console()
print("N={} M={} n_average={}".format(N, M, n_average))
run_benchmarks(console, target, N, M, n_average, tests)
if isinstance(target, pyboard.Pyboard):
target.exit_raw_repl()
target.close()


if __name__ == "__main__":
main()