py: Support non-boolean results for equality and inequality tests.
This commit implements a more complete replication of CPython's behaviour
for equality and inequality testing of objects.  This addresses the issues
discussed in #5382 and a few other inconsistencies.  Improvements over the
old code (illustrated in the sketch after this list) include:

- Support for returning non-boolean results from comparisons (as used by
  numpy and others).
- Support for non-reflexive equality tests.
- Preferential use of __ne__ methods and MP_BINARY_OP_NOT_EQUAL binary
  operators for inequality tests, when available.
- Fallback to op2 == op1 or op2 != op1 when op1 does not implement the
  (in)equality operators.
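
A sketch of the CPython semantics being replicated (the classes here are
hypothetical, for illustration only):

    class Vector:
        # Comparison returns a non-boolean result, as numpy arrays do.
        def __init__(self, data):
            self.data = data
        def __eq__(self, other):
            return [a == b for a, b in zip(self.data, other.data)]

    print(Vector([1, 2]) == Vector([1, 3]))  # [True, False], not a bool

    class OnlyEq:
        # Defines __eq__ but not __ne__: != is derived by negating the
        # result of __eq__ when no __ne__ is available.
        def __eq__(self, other):
            return isinstance(other, OnlyEq)

    print(OnlyEq() != OnlyEq())  # False, via the derived __ne__
    print(1 == OnlyEq())         # False: int.__eq__ returns NotImplemented,
                                 # so the reflected OnlyEq.__eq__ is tried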

The scheme here makes use of a new flag, MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST,
in the flags word of mp_obj_type_t to indicate if various shortcuts can or
cannot be used when performing equality and inequality tests.  Currently
four built-in classes have the flag set: float and complex are
non-reflexive (since nan != nan), while bytearray and frozenset instances
can compare equal to instances of other built-in classes (bytes and set,
respectively).  The flag is also set for any new class defined by the user.
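
For example, in CPython (and now in MicroPython):

    print(bytearray(b"ab") == b"ab")    # True: bytearray can equal bytes
    print(frozenset({1, 2}) == {1, 2})  # True: frozenset can equal set
    nan = float("nan")
    print(nan == nan)                   # False: float equality is non-reflexive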

This commit also includes a more comprehensive set of tests for the
behaviour of (in)equality operators implemented in special methods.

This directory contains tests for various functionality areas of MicroPython.
To run all stable tests, run the "run-tests" script in this directory.

Tests of capabilities not supported on all platforms should be written
to check for the capability being present. If it is not, the test
should merely output 'SKIP' followed by the line terminator, and call
sys.exit() to raise SystemExit, instead of attempting to test the
missing capability. The testing framework (run-tests in this
directory, test_main.c in qemu_arm) recognizes this as a skipped test.
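
For example, a test that needs an optional module (uctypes is used here
purely as an example) can skip itself like this:

    import sys

    try:
        import uctypes
    except ImportError:
        print("SKIP")
        sys.exit()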

There are a few features for which this mechanism cannot be used to
condition a test. The run-tests script uses small scripts in the
feature_check directory to check whether each such feature is present,
and skips the relevant tests if not.
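
As a rough sketch of the idea (the real scripts in feature_check differ),
such a check is a tiny program whose output tells run-tests whether the
feature exists, for example:

    # Hypothetical feature check: does this build provide the complex type?
    try:
        complex
        print("complex")
    except NameError:
        print("no")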

Tests are generally verified by running the test both in MicroPython and
in CPython and comparing the outputs.  If the outputs differ, the test
fails and they are saved in a .out and a .exp file respectively.
For tests that cannot be run in CPython, for example because they use
the machine module, a .exp file can be provided next to the test's .py
file.  A convenient way to generate one is to run the test, let it fail
(because CPython cannot run it), and then copy the .out file to the .exp
file (but not before checking it manually!).

When creating new tests, anything that relies on float support should go in the
float/ subdirectory.  Anything that relies on import x, where x is not a built-in
module, should go in the import/ subdirectory.