Consider an artificial neuron with one input and many outputs. Activation of the input leads to activation of one of the outputs, where the choice of the activated output is probabilistic. A set of such neurons is a probabilistic mapping—a function that ambiguously maps an argument from a set of possible arguments to a result from a set of possible outcomes. Such a function, invoked twice with the same argument, can return two different results.
If every argument of a probabilistic mapping has a corresponding set of outcomes with fixed probabilities, we call the probabilistic mapping a fixed probabilistic mapping.
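A fixed probabilistic mapping can be sketched as a lookup table from arguments to weighted outcomes. The class and names below are illustrative only and are not part of the QSMM API:

```python
import random

# A minimal sketch of a fixed probabilistic mapping: every argument has
# a fixed set of outcomes with fixed probabilities.
class FixedProbabilisticMapping:
    def __init__(self, table):
        # table: argument -> list of (outcome, probability) pairs
        self.table = table

    def invoke(self, argument):
        outcomes, weights = zip(*self.table[argument])
        # sample one outcome according to the fixed probabilities
        return random.choices(outcomes, weights=weights)[0]

# For argument "a", outcome "x" is returned with probability 0.75.
m = FixedProbabilisticMapping({"a": [("x", 0.75), ("y", 0.25)]})
result = m.invoke("a")   # either "x" or "y"
```

Repeated invocations with the same argument return different results, but the long-run outcome frequencies match the fixed probabilities.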
An important concept, related to the possibility of using a result of a probabilistic mapping (directly or after transformation) as an argument (or part of an argument) of that same probabilistic mapping, is state—a variable that changes its value based on its previous value.
A fixed probabilistic mapping can model a probabilistic finite automaton. An argument of such probabilistic mapping is a superposition of the previous automaton state and an input signal received. A result of the probabilistic mapping is the next automaton state (or a superposition of the next automaton state and an output signal emitted).
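The correspondence can be sketched as a transition table keyed by the (previous state, input signal) superposition. The state and signal names below are made up for illustration:

```python
import random

# Sketch: a probabilistic finite automaton driven by a fixed
# probabilistic mapping whose argument is a (state, input) superposition
# and whose result is the next state.
TRANSITIONS = {
    # (state, input signal): [(next state, probability), ...]
    ("s0", 0): [("s0", 0.5), ("s1", 0.5)],
    ("s0", 1): [("s1", 1.0)],
    ("s1", 0): [("s0", 1.0)],
    ("s1", 1): [("s1", 0.9), ("s0", 0.1)],
}

def step(state, signal):
    states, weights = zip(*TRANSITIONS[(state, signal)])
    return random.choices(states, weights=weights)[0]

state = "s0"
for signal in (1, 0, 1):     # feed a short input sequence
    state = step(state, signal)
```

Extending each table entry to pairs of (next state, output signal) gives the variant that also emits an output signal.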
To establish a uniform state transition model, it is tempting to treat a set of possible states of a probabilistic mapping as a Cartesian product of sets of possible sub-states making up a full state. This approach is sometimes useful, but the resulting set of states can be very large. Because sub-states from different sets usually interrelate, the actual number of possible states is often much smaller. These smaller state sets resemble perceived states—mental model states a researcher realizes when learning or programming the behavior of an entity. When using QSMM, we will tend to work with such perceived states.
A probabilistic finite automaton can have a representation in the form of a probabilistic program. In particular, such probabilistic program may be a probabilistic assembler program—an assembler program containing probabilistic jump instructions. If there are no probabilistic jump instructions in the assembler program, the finite automaton and the assembler program are deterministic.
The instruction set of a probabilistic assembler program may consist of:
- simple (unconditional) jump instructions;
- conditional jump instructions;
- probabilistic jump instructions;
- custom instructions, which return outcomes when invoked.
Locations of custom instructions in the assembler program represent states of the probabilistic finite automaton. An argument of a fixed probabilistic mapping backing up that finite automaton is a superposition of a location of a custom instruction invoked and an outcome returned by the custom instruction. An outcome of this fixed probabilistic mapping is a location of the next custom instruction to invoke. Simple jump instructions, conditional jump instructions, and probabilistic jump instructions in the assembler program map a particular argument of this probabilistic mapping to a set of possible outcomes with fixed probabilities.
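This probabilistic mapping over instruction locations can be sketched as a jump table keyed by (location of the invoked custom instruction, outcome it returned). The locations, outcome names, and comments below are illustrative:

```python
import random

# Sketch of how jump instructions realize a fixed probabilistic mapping
# over custom-instruction locations: the argument is (location, outcome)
# and the result is the location of the next custom instruction to invoke.
JUMPS = {
    # (location, outcome) -> [(next location, probability), ...]
    (0, "ok"):   [(1, 1.0)],              # a simple jump
    (0, "fail"): [(0, 1.0)],              # a conditional jump on the outcome
    (1, "ok"):   [(0, 0.3), (2, 0.7)],    # a probabilistic jump
    (1, "fail"): [(0, 1.0)],
    (2, "ok"):   [(2, 1.0)],
    (2, "fail"): [(0, 1.0)],
}

def next_location(loc, outcome):
    locs, weights = zip(*JUMPS[(loc, outcome)])
    return random.choices(locs, weights=weights)[0]
```

A deterministic program is the special case in which every entry holds a single next location with probability 1.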
A program usually contains a set of subroutines. A probabilistic assembler program may contain custom instructions for calling subroutines and custom instructions for returning control back. A probabilistic finite automaton may represent every subroutine. Calling a subroutine means pushing a reference to a current probabilistic finite automaton and its current state to a stack and transferring control to the initial state of another probabilistic finite automaton. Returning control from the subroutine means popping the reference to the current automaton and its current state from the stack and transferring control to that state.
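The call and return operations described above can be sketched with an explicit stack of (automaton, state) pairs. The automaton and state names are hypothetical:

```python
# Sketch of subroutine call/return between probabilistic finite automata:
# calling pushes a reference to the current automaton and its current
# state; returning pops that pair and resumes it.
stack = []

def call(caller, caller_state, callee, callee_initial_state):
    stack.append((caller, caller_state))   # remember where to resume
    return callee, callee_initial_state    # control moves to the callee

def ret():
    return stack.pop()                     # resume the saved automaton/state

automaton, state = call("main", "s3", "sub", "s0")
# ... the callee automaton "sub" would execute here ...
automaton, state = ret()                   # control returns to "main", "s3"
```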
We can modulate the behavior of a probabilistic mapping by spur. Suppose a result returned by a probabilistic mapping for a specific argument somehow affects the spur. A desired behavior of the probabilistic mapping for this argument is then to return more often an outcome that leads to a greater absolute spur value (a positive value if the goal is to maximize the spur, or a negative value if the goal is to minimize the spur) or to a greater velocity of spur increment or decrement. We call a probabilistic mapping with such desired behavior an adaptive probabilistic mapping.
An adaptive probabilistic mapping helps produce goal-directed adaptive behavior: return more often outcomes that bring a machine closer to a goal where the spur measures a proximity to the goal. An intelligent machine might include various superpositions of adaptive probabilistic mappings, its inputs and outputs, and those superpositions would specify an adaptive generic state model aimed to solve general or specific problems.
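A toy way to sketch this adaptation is to keep a weight per (argument, outcome) pair and raise the weight of a chosen outcome when spur growth follows it. This illustrates the idea only; it is not the probability-update rule actually used by QSMM actors:

```python
import random

# Toy adaptive probabilistic mapping: outcome probabilities are
# proportional to weights, and the weight of a chosen outcome grows with
# the spur increment observed after choosing it.
class AdaptiveProbabilisticMapping:
    def __init__(self, outcomes):
        self.outcomes = outcomes
        self.weights = {}        # (argument, outcome) -> weight

    def invoke(self, argument):
        w = [self.weights.get((argument, o), 1.0) for o in self.outcomes]
        return random.choices(self.outcomes, weights=w)[0]

    def register_spur(self, argument, outcome, spur_increment):
        # reinforce outcomes followed by spur growth
        key = (argument, outcome)
        self.weights[key] = self.weights.get(key, 1.0) + max(spur_increment, 0.0)

m = AdaptiveProbabilisticMapping(["left", "right"])
for _ in range(100):
    choice = m.invoke("crossroads")
    # a hypothetical environment that yields spur only for "right"
    m.register_spur("crossroads", choice, 1.0 if choice == "right" else 0.0)
# the mapping now returns "right" more often than "left"
```

Over repeated invocations the mapping shifts probability toward the outcome that increases the spur, which is the goal-directed behavior described above.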
In QSMM, an actor is an adaptive probabilistic mapping that works like a fixed probabilistic mapping but, instead of fixed outcome probabilities, uses probabilities adjusted adaptively. An actor is a generic building block for intelligent machines.
If a fixed probabilistic mapping that backs up a probabilistic finite automaton becomes an adaptive probabilistic mapping, the automaton becomes an adaptive probabilistic finite automaton. Consequently, a probabilistic assembler program the automaton represents becomes an adaptive probabilistic assembler program.
Using QSMM, a researcher can develop adaptive probabilistic assembler programs producing goal-directed behavior. As one may realize, adaptive probabilistic assembler programs are a step toward the automated synthesis of algorithms.
An assembler program represents a state model, and a subroutine of the assembler program represents a state sub-model. In QSMM, a node is a callable state sub-model. If a state model contains multiple nodes, we call it a multinode model. A node is a probabilistic finite automaton corresponding to a probabilistic assembler program. Node execution is the operation of this finite automaton. The finite automaton and the assembler program turn into their adaptive counterparts during node execution.
A researcher could relate the synthesis and execution of subroutines to setting up processing chains involving the production and use of various work tools.