Also do that only for the first word in a line. The idea is that when you
start up the interpreter, there is a high chance that you want to do an import. With
patch, this can be achieved with "i<tab>".
The type is an unsigned 8-bit value, since bytes objects are exactly
that. And it's also sensible for unicode strings to return unsigned
values when accessed in a byte-wise manner (CPython does not allow this).
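A short illustration at the Python level (ordinary indexing semantics, shown only for context):

    data = b"\x00\x7f\xff"
    print(data[0], data[2])   # 0 255: byte-wise access yields unsigned values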
While just a websocket is enough for handling the terminal part of WebREPL,
handling file transfer operations requires demultiplexing the stream and
acting on it; this is encapsulated in the _webrepl class provided by this
module, which wraps a websocket object.
The C standard says that left-shifting a signed value (on the LHS of the
operator) is undefined. So we cast to an unsigned integer before the
shift. gcc does not issue a warning about this, but clang does.
- msvc preprocessor output contains full paths with backslashes, so the
':' and '\' characters need to be erased from the paths as well
- use a regex for extraction of filenames from preprocessor output so it
can handle both gcc and msvc preprocessor output, and spaces in paths
(also thanks to a PR from @travnicekivo for part of that regex)
- os.rename will fail on windows if the destination file already exists,
so simply attempt to delete that file first
Qstr auto-generation is now much faster so this optimisation for start-up
time is no longer needed. And passing "-s -S" breaks some things, like
stmhal's "make deploy".
E.g. for stmhal, accumulated preprocessed output may grow large due to
bloated vendor headers, and then reprocessing tens of megabytes on each
build may take a couple of seconds on fast hardware (=> potentially dozens
of seconds on slow hardware). So instead, split once after each change,
and only cat repetitively (guaranteed to be fast, as there are thousands
of lines involved at most).
If make -B is run, the rule is run with $? empty. Extract from all files in
this case. But this gets fragile; really, "make clean" should be used instead
with such build complexity.
When there are C files to be (re)compiled, they are all passed first to the
preprocessor. QSTR references are extracted from the preprocessed output and
split per original C file. Then all available qstr files (including those
generated previously) are concatenated together. Only if the resulting content
has changed is the output file written (causing an almost global rebuild
to pick up potentially renumbered qstrs). Otherwise, it is not updated,
so as not to cause spurious rebuilds. Related make rules are split to minimize
the number of commands executed in the interim case (when some C files were
updated, but no qstrs were changed).
- any architecture may explicitly disable qstr autogeneration with make
QSTR_AUTOGEN_DISABLE=1 and provide its own list of qstrings by the standard
mechanisms (qstrdefsport.h).
- add template rule that converts a specified source file into a qstring file
- add special rule for generating a central header that contains all
extracted/autogenerated strings - defined by QSTR_DEFS_COLLECTED
variable. Each platform appends a list of sources that may contain
qstrings into a new build variable: SRC_QSTR. Any autogenerated
prerequisites should be appended to the SRC_QSTR_AUTO_DEPS variable.
- remove most qstrings from py/qstrdefs, keep only qstrings that
contain special characters - these cannot be easily detected in the
sources without additional annotations
- remove most manual qstrdefs, use qstrdef autogen for: py, cc3200,
stmhal, teensy, unix, windows, pic16bit:
- remove all micropython generic qstrdefs except for the special strings that contain special characters (e.g. /,+,<,> etc.)
- remove all port specific qstrdefs except for special strings
- append sources for qstr generation in platform makefiles (SRC_QSTR)
This script will search for patterns of the form Q(...) and generate a
list of them.
The original code by Pavel Moravec has been significantly simplified to
remove the part that searched for C preprocessor directives (eg #if).
This is because all source is now run through CPP before being fed into
this script.
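As a rough illustration only (the real makeqstrdefs.py does more bookkeeping,
and the exact regex is an assumption here), the core of the search could look like:

    import re
    import sys

    _Q_PATTERN = re.compile(r"Q\((.+?)\)")

    def find_qstrs(preprocessed_text):
        # collect every identifier referenced as Q(...) in the CPP output
        return sorted(set(_Q_PATTERN.findall(preprocessed_text)))

    if __name__ == "__main__":
        for name in find_qstrs(sys.stdin.read()):
            print("Q({})".format(name))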
Small hash tables (eg those used in user class instances that only have a
few members) now only use the minimum amount of memory necessary to hold
the key/value pairs. This can reduce performance for instances that have
many members (because then there are many reallocations/rehashings of the
table), but helps to conserve memory.
See issue #1760.
Most grammar rules can optimise to the identity if they only have a single
argument, saving a lot of RAM building the parse tree. Previous to this
patch, whether a given grammar rule could be optimised was defined (mostly
implicitly) by a complicated set of logic rules. With this patch the
definition is always specified explicitly by using "and_ident" in the rule
definition in the grammar. This simplifies the logic of the parser,
making it a bit smaller and faster. RAM usage is unaffected.
The config variable MICROPY_MODULE_FROZEN is now made of two separate
parts: MICROPY_MODULE_FROZEN_STR and MICROPY_MODULE_FROZEN_MPY. This
allows having none, either, or both of frozen strings and frozen mpy
files (aka frozen bytecode).
They are sugar for marking a function as a generator, for "yield from",
and for the PEP 492 Python "semantic equivalents", respectively.
@dpgeorge was the original author of this patch, but @pohmelie made
changes to implement `async for` and `async with`.
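For illustration, a minimal sketch of the accepted syntax (the Counter class and
names below are made up, not part of the patch; running main() needs a scheduler
or manual stepping of the coroutine):

    class Counter:
        # helper demonstrating the "async for" and "async with" hooks
        def __init__(self, n):
            self.n = n
        async def __aenter__(self):
            return self
        async def __aexit__(self, *exc):
            return False
        def __aiter__(self):
            return self
        async def __anext__(self):
            if self.n == 0:
                raise StopAsyncIteration
            self.n -= 1
            return self.n

    async def nothing():
        pass

    async def main():
        await nothing()              # "await" is sugar for "yield from"
        async with Counter(3) as c:  # PEP 492 asynchronous context manager
            async for i in c:        # PEP 492 asynchronous iteration
                print(i)             # prints 2, 1, 0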
This will call the underlying C virtual methods of the stream interface. It
isn't intended to be added to every stream object (it's not in CPython), but
is a convenient way to expose extra operations on the Python side without
adding a bunch of Python-level methods.
Features inline get/put operations for the highest performance. Locking is
not part of the implementation; operations should be wrapped with locking
externally as needed.
When taking the logarithm of the float to determine the exponent, there
are some edge cases where the log loop finishes with a value that is too
large. Eg an input value of 1e32-epsilon is actually less than the 1e32
entry in the log-loop table and finishes as 10.0e31 when it should be 1.0e32. It
is thus rendered as :e32 (: comes after 9 in ascii).
There was the same problem with numbers less than 1.
Previous to this patch, the "**b" in "a**b" had its own parse node with
just one item (the "b"). Now, the "b" is just the last element of the
power parse-node. This saves (a tiny bit of) RAM when compiling.
Passing an mp_uint_t to a %d printf format is incorrect for builds where
mp_uint_t is larger than word size (eg a nanboxing build). This patch
adds some simple casting to int in these cases.
If the heap is locked, or memory allocation fails, then calling a bound
method will still succeed by allocating the argument state on the stack.
The new code also allocates less stack than before if less than 4
arguments are passed. It's also a tiny bit smaller in code size.
This was done as part of the ESA project.
This new compile-time option allows the bytecode compiler to be made
configurable at runtime by setting the fields in the mp_dynamic_compiler
structure. By using this feature, the compiler can generate bytecode
that targets any MicroPython runtime/VM, regardless of the host and
target compile-time settings.
Options so far that fall under this dynamic setting are:
- maximum number of bits that a small int can hold;
- whether caching of lookups is used in the bytecode;
- whether to use unicode strings or not (lexer behaviour differs, and
therefore generated string constants differ).
Reduces code size by 112 bytes on Thumb2 arch, and makes assembler faster
because comparison can be a simple equals instead of a string compare.
Not all ops have been converted, only those that were simple to convert
and reduced code size.
The chunks of memory that the parser allocates contain parse nodes and
are pointed to from many places, so these chunks cannot be relocated
by the memory manager. This patch makes it so that when a chunk is
shrunk to fit, it is not relocated.
These can be used to insert arbitrary checks, polling, etc into the VM.
They are left general because the VM is a highly tuned loop and it should
be up to a given port how that port wants to modify the VM internals.
One common use would be to insert a polling check, but only done after
a certain number of opcodes were executed, so as not to slow down the VM
too much. For example:
#define MICROPY_VM_HOOK_COUNT (30)
#define MICROPY_VM_HOOK_INIT static uint vm_hook_divisor = MICROPY_VM_HOOK_COUNT
#define MICROPY_VM_HOOK_POLL if (--vm_hook_divisor == 0) { \
        vm_hook_divisor = MICROPY_VM_HOOK_COUNT; \
        extern void vm_hook_function(void); \
        vm_hook_function(); \
    }
#define MICROPY_VM_HOOK_LOOP MICROPY_VM_HOOK_POLL
#define MICROPY_VM_HOOK_RETURN MICROPY_VM_HOOK_POLL
The new block protocol is:
- readblocks(self, n, buf)
- writeblocks(self, n, buf)
- ioctl(self, cmd, arg)
The new ioctl method handles the old sync and count methods, as well as
a new "get sector size" method.
The old protocol is still supported, and used if the device doesn't have
the ioctl method.
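To make the protocol concrete, here is a sketch of a RAM-backed block device; the
ioctl command numbers used (4 = get block count, 5 = get block/sector size) are
assumptions for illustration, and the fsusermount code is the authority:

    class RAMBlockDev:
        def __init__(self, block_size=512, num_blocks=64):
            self.block_size = block_size
            self.data = bytearray(block_size * num_blocks)

        def readblocks(self, n, buf):
            # copy block n (and following, if buf spans several) into buf
            start = n * self.block_size
            for i in range(len(buf)):
                buf[i] = self.data[start + i]

        def writeblocks(self, n, buf):
            start = n * self.block_size
            for i in range(len(buf)):
                self.data[start + i] = buf[i]

        def ioctl(self, cmd, arg):
            if cmd == 4:    # get number of blocks (replaces the old count method)
                return len(self.data) // self.block_size
            if cmd == 5:    # get block/sector size (the new method)
                return self.block_size
            return 0        # eg sync: nothing to do for RAM storage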
This allows you to pass a number (being an address) to a viper function
that expects a pointer, and also allows casting of integers to pointers
within viper functions.
This was actually the original behaviour, but it regressed due to native
type identifiers being promoted to 4 bits in width.
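A minimal sketch of both cases (the fill function and buffer are made up for
illustration; uctypes.addressof is used only to obtain a raw address):

    import micropython
    import uctypes

    @micropython.viper
    def fill(addr: int, n: int):
        p = ptr8(addr)        # cast an integer to a pointer inside viper code
        for i in range(n):
            p[i] = 0xFF

    buf = bytearray(8)
    fill(uctypes.addressof(buf), len(buf))   # pass an address as a plain number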
This function computes (x**y)%z in an efficient way. For large arguments
this operation is otherwise not computable by doing x**y and then %z.
It's currently not used, but is added in case it's useful one day.
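The operation itself is plain modular exponentiation; at the Python level it
corresponds to the three-argument pow() builtin (the snippet below only
illustrates the math, not this new C function):

    x, y, z = 3, 10 ** 4, 7
    print(pow(x, y, z))       # efficient (x**y) % z, keeps intermediates small
    print((x ** y) % z)       # same result, but builds a huge intermediate value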
For these 3 bitwise operations there are now fast functions for
positive-only arguments, and general functions for arbitrary sign
arguments (the fast functions are the existing implementation).
By default the fast functions are not used (to save space) and instead
the general functions are used for all operations.
Enable MICROPY_OPT_MPZ_BITWISE to use the fast functions for positive
arguments.
Before this patch, the native types for uint and ptr/ptr8/ptr16/ptr32
all overlapped and it was possible to make a mistake in casting. Now,
these types are all separate and any coding mistakes will be raised
as runtime errors.
Eg: '{:{}}'.format(123, '>20')
@pohmelie was the original author of this patch, but @dpgeorge made
significant changes to reduce code size and improve efficiency.
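Concretely, the inner format spec is taken from the arguments; for example
(outputs shown as in CPython):

    print('{:{}}'.format(123, '>20'))          # '                 123'
    print('{:{}.{}f}'.format(3.14159, 8, 2))   # '    3.14'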
For single prec, exponents never get larger than about 37. For double
prec, exponents can be larger than 99 and need 3 bytes to format. This
patch makes the number of bytes needed configurable.
Addresses issue #1772.
Calling it from mp_init() is too late for some ports (like Unix), and leads
to an incomplete stack frame being captured, with consequent GC issues. So, now
each port should call mp_stack_ctrl_init() on its own, ASAP after startup,
taking special precautions so that it is really called before any stack variables
get allocated (because if such a variable holding a pointer is missed, it may
lead to over-collecting; the typical symptom is segfaulting).
MP_BC_NOT was removed and the "not" operation made a proper unary
operator, and the opcode format table needs to be updated to reflect
this change (but actually the change is only cosmetic).
Functions added are:
- randint
- randrange
- choice
- random
- uniform
They are enabled with configuration variable
MICROPY_PY_URANDOM_EXTRA_FUNCS, which is disabled by default. It is
enabled for unix coverage build and stmhal.
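A quick usage sketch (module name urandom, as on the unix and stmhal ports):

    import urandom

    urandom.seed(42)                 # reproducible sequence
    print(urandom.randint(1, 6))     # integer in [1, 6]
    print(urandom.randrange(0, 10))  # integer in [0, 10)
    print(urandom.choice([1, 2, 3])) # element from a sequence
    print(urandom.random())          # float in [0, 1)
    print(urandom.uniform(2.5, 7.5)) # float in [2.5, 7.5]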
SHA1 is used in a number of protocols and algorithms that originated 5 years
ago or so; in other words, it's in "wide use", and only newer protocols use
SHA2.
The implementation depends on axTLS being enabled. TODO: Make a separate
config option specifically for sha1().
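A usage sketch, assuming the port enables the axTLS-backed uhashlib with sha1():

    import uhashlib
    import ubinascii

    h = uhashlib.sha1(b"hello world")
    print(ubinascii.hexlify(h.digest()))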
micropython.stack_use() returns an integer being the number of bytes used
on the stack.
micropython.heap_lock() and heap_unlock() can be used to prevent the
memory manager from allocating anything on the heap. Calls to these are
allowed to be nested.
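A usage sketch (the MemoryError branch shows what happens if an allocation is
attempted while the heap is locked):

    import micropython

    print(micropython.stack_use())   # bytes of C stack used at this point

    failed = False
    micropython.heap_lock()          # calls may be nested
    try:
        data = bytearray(64)         # heap allocation now raises MemoryError
    except MemoryError:
        failed = True
    micropython.heap_unlock()
    print(failed)                    # True: nothing could be allocated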
Seedable and reproducible pseudo-random number generator. Implemented
functions are getrandbits(n) (n <= 32) and seed().
The algorithm used is Yasmarang by Ilya Levin:
http://www.literatecode.com/yasmarang
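Typical use (a sketch; the module is exposed as urandom):

    import urandom

    urandom.seed(1234)               # same seed gives the same sequence
    print(urandom.getrandbits(16))
    print(urandom.getrandbits(32))   # maximum is 32 bits per call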
This allows Python code to use the property(lambda:..., doc=...) idiom.
Named versions for the fget, fset and fdel arguments are left out in the
interest of saving space; they are rarely used and easy to enable when
actually needed.
A test case is included.
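For example, after this change the following works:

    class Foo:
        # read-only property built with a lambda and a docstring
        bar = property(lambda self: 42, doc="the answer")

    print(Foo().bar)        # 42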
The first argument to the type.make_new method is naturally a uPy type,
and all uses of this argument cast it directly to a pointer to a type
structure. So it makes sense to just have it a pointer to a type from
the very beginning (and a const pointer at that). This patch makes
such a change, and removes all unnecessary casting to/from mp_obj_t.
This patch changes the type signature of .make_new and .call object method
slots to use size_t for n_args and n_kw (they were mp_uint_t). This makes the
code more efficient when mp_uint_t is larger than a machine word, and doesn't affect
ports when size_t and mp_uint_t have the same size.