states in stateless machine

Hi all!

I'm experimenting with constructing a programming language for artificial intelligence programming. I want it to be usable for regular application programming as well. Right now I have thought through the stateless part, meaning that I have a rough definition of the underlying stateless machine.

In the real world there are also constructs that require stateful machines to hold some knowledge, but I'd hate to extend the current language definition with additional stateful machinery that complicates a language that is so simple without it.

So, I'm wondering whether there are known methods, or new ideas, for describing states inside stateless machines?

Tx :)


leveraging non-determinism

Many non-deterministic systems - e.g. constraint systems with solutions, grammars with sentences generated, planners with plans - provide an opportunity for implicit state that is outside the semantics of the model. The non-determinism essentially enables an implementation to select results that are consistent or otherwise stable relative to some history. However, since said state is not a formal part of the model's semantics (it's just a convenience), one can reasonably consider the machine to be stateless.

I find the notion of such stateless machines to be useful with reactive systems. For example, if I use a constraint model or planner to lay out a bunch of windows, and I add a new window, it's useful to minimize how much the existing windows move to accommodate the new one. But I don't usually wish to formalize this state as part of the model, because it tends to interfere with on-the-fly replanning and updates.

I discuss this approach on my blog as 'stateless stable models' or for game development as 'stateless stable arts'. There is a useful connection to machine learning, which offers a potential approach to simultaneously achieving stability, improving solution-discovery speed, and even improving composition (e.g. of art styles).

functions of history, pipelines, and htm

Rather than 'stateless' machines, a related technique is to leverage state explicitly but in a controlled manner to guard against cyclic feedback. If we avoid explicit accumulator functions, we can get many of the benefits of stateless code.

For example, we might model a function as having an output based on a history of inputs, perhaps a windowed history such as the last N elements, or perhaps an exponential decay of history. This technique could be pipelined, such that each function in the pipeline takes a history to generate a single output. Unlike conventional object models that explicitly maintain local state, it would be very easy to update any particular function in such a pipeline, and the system as a whole would be extremely stable.
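To make that concrete, here is a rough sketch (my own illustration, not from any particular system): each stage in the pipeline computes a pure function of a bounded window of its inputs, and the wrapper holds that window as the only state.

```python
# Sketch: each pipeline stage is a pure function of a bounded history
# of its inputs; the only state is the window buffer held by the wrapper.
from collections import deque

def history_stage(fn, window):
    """Wrap a pure function of a history into a streaming stage."""
    buf = deque(maxlen=window)  # the last `window` inputs, nothing more
    def stage(x):
        buf.append(x)
        return fn(list(buf))    # output depends only on recent history
    return stage

# A two-stage pipeline: moving average (window 3) feeding a delta detector.
avg = history_stage(lambda h: sum(h) / len(h), window=3)
delta = history_stage(lambda h: h[-1] - h[0], window=2)

out = [delta(avg(x)) for x in [1, 2, 3, 4, 5]]
print(out)  # [0.0, 0.5, 0.5, 1.0, 1.0]
```

Because each stage's output is fully determined by its recent input history, swapping out one stage cannot corrupt any hidden accumulator elsewhere in the pipeline, which is what makes such systems easy to update live.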

Hierarchical Temporal Memory (HTM) developed by Jeff Hawkins seems to use a similar approach in the specific context of machine learning.

I have come up with a

I have come up with a solution where each atom can contain two fragments:

1. explicit state in the form of mutable data
2. an implicit reactive part (stateless), where data is inferred from the explicit part and from other implicit parts of the same atom

So we change the explicit parts (states) by hand, while the implicit parts are automatically recalculated on the fly.
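In Python-like terms, a tiny sketch of what I mean (the names and the temperature example are purely illustrative, not from my actual language):

```python
# Sketch of an "atom": explicit mutable state plus implicit derived parts.
class Atom:
    def __init__(self):
        self.celsius = 0.0  # explicit state, changed by hand

    @property
    def fahrenheit(self):   # implicit part, inferred on the fly
        return self.celsius * 9 / 5 + 32

a = Atom()
a.celsius = 100.0           # mutate the explicit part...
print(a.fahrenheit)         # ...the implicit part follows: 212.0
```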

Although this solution works as a programming language, somehow I'm not quite satisfied with it.

To me 'state' describes

To me 'state' describes accumulation of information over time. It isn't clear to me that what you're calling state is something that I'd call state.

Are you thinking of something like spreadsheets, where users can manipulate cells by hand, and these cells are thus, in some external sense, stateful?

stateless dual of state

I don't have a particular citation for you but the idea is small enough to sum up in a comment:

Let's agree for now on this abstract model of state:

A store is an entity modified by mutation operations and examined by read operations. For simplicity we'll assume that the order in which mutations and reads are processed does not matter much other than two rules:

1. If a client of a store performs two mutations in some definite order, all reads must be consistent with those two mutations being processed in that order.

2. If a client of a store performs two reads in some definite order, the results of those reads must be consistent with being processed in that order.

To make that concrete, suppose that one client performs (in order):

x = 1;
x = 2;


Another client performs (in order):

print x;
print x;


The output may be "1 1" or "1 2" or "2 2" but, if no other client is active and the initial value of x is 0, the output may not be "2 1".

Notice that the total collection of mutations and reads may or may not fully determine the order in which they must be processed. In the example above, the relative order of mutations and reads is indeterminate, but the relative order of the mutations is determined, as is the relative order of the reads.

Overall, in other words, the store must behave as if it is consistent with some serialization of mutations and reads, but there may be some partial ambiguity as to exactly which serialization occurs.

The store, as you can see, changes over time and that is what gives it the quality of being "stateful".

The challenge is to replace the store with immutable values.

One way to do that is by representing the store, at a given instant in time, as an immutable value. We could write:

Sa      the store at time a
Sb      the store at time b


We could now describe mutation as a pure, albeit indeterminate, function:

  mut ('x, 1, Sa) -> Sb
      means: assign x to 1 sometime after time a;
      the store at time b reflects the mutation


Similarly:

  read ('x, Sc) -> 3, Sd
      means: read x sometime after time c;
      in this case x contained 3 at later time d


Many variations on the exact semantics are possible. A simple rule (to preserve purity) would require that no store-value (Sa) be used for more than one mutation or read:

  mut ('x, 1, Sa) -> Sb
  mut ('x, 1, Sa) -> Sb     idempotency and purity go hand in hand
  mut ('x, 3, Sa) -> ERROR

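Here's a rough executable sketch of that single-use rule (Python, my own illustration; the real semantics could differ). Each store value backs at most one mutation; repeating the identical mutation is idempotent and returns the same successor store, while a conflicting mutation is an error. Reads here return the current store as the "later" store value, a simplification of the Sc/Sd threading above.

```python
# Sketch of a "linear" store: every mutation consumes a store value and
# yields an immutable successor; conflicting reuse of a value is an error.
class Store:
    def __init__(self, bindings=None):
        self._bindings = dict(bindings or {})
        self._consumed_by = None  # the one mutation applied to this value

    def mut(self, key, value):
        op = (key, value)
        if self._consumed_by is not None:
            prev_op, successor = self._consumed_by
            if prev_op != op:
                raise RuntimeError("store value already consumed "
                                   "by a different mutation")
            return successor        # idempotent: same mutation, same result
        successor = Store({**self._bindings, key: value})
        self._consumed_by = (op, successor)
        return successor

    def read(self, key):
        # returns the value plus a (trivially) later store value
        return self._bindings.get(key, 0), self

s_a = Store()
s_b = s_a.mut('x', 1)
s_b2 = s_a.mut('x', 1)       # same mutation again: same successor store
print(s_b is s_b2)           # True
print(s_b.read('x')[0])      # 1
try:
    s_a.mut('x', 3)          # conflicting mutation on a consumed value
except RuntimeError as e:
    print("ERROR:", e)
```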

I think we are thinking of

Hi dmbarbour :)

I think we are thinking of the same thing. If not cells, then we can think of properties of an OOP object as a system of states.

By the way, do U have some details about Ur programming language, I believe U call it ABC?

Ur programming language is not ABC

Ur programming language is Adam Chlipala's work. :)

The languages I'm developing to bootstrap the Awelon project are Awelon Bytecode (ABC) and Awelon Object (AO). ABC isn't suitable for direct human use (at least not at scale), but AO is a relatively usable thin macro language above it. You can find the most detail on my languages at my github repo. There are many documents; see AboutABC and AboutAO to start.

monotonic state

Thomas's comment reminded me of Lindsey Kuper's work on LVars (lattice variables) as another approach to 'stateless' state access. LVars in turn are essentially a generalization of logic/unification variables (cf. Oz, deterministic concurrency).

These approaches can hinder reasoning about progress, latency, distributed dataflow, and disruption tolerance. I've generally rejected them as unsuitable for my own use-cases. However, I can see why some people like them, and I've periodically entertained their use.

Both LVars and logic variables are examples of monotonic state, i.e. state that only gains information over time. There are perhaps other monotonic states that can be considered, including useful specializations like time-varying signals.
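A toy illustration of monotonic state (my own sketch over the max lattice on integers; real LVars generalize this to arbitrary lattices, and their threshold reads block rather than returning nothing):

```python
# Toy LVar over the max lattice: writes join upward, so the variable's
# information content only ever grows.
class MaxLVar:
    def __init__(self):
        self._val = 0

    def put(self, x):
        self._val = max(self._val, x)  # monotonic: join with the new value

    def get_at_least(self, threshold):
        # threshold read: only answers once the value has passed the bound
        if self._val >= threshold:
            return self._val
        return None  # a real LVar would block here instead

v = MaxLVar()
v.put(3)
v.put(1)                     # a "smaller" write changes nothing
print(v.get_at_least(2))     # 3
print(v.get_at_least(5))     # None: not enough information yet
```

Because writes commute (max is associative and commutative), concurrent writers cannot race to produce different final values, which is what makes this style of state deterministic.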

I like the idea of reactive

I like the idea of reactive programming very much. I don't know why it is not implemented in the most popular programming languages. Maybe it is because we are still in the "stone age" of programming.

Isn't the essence of reactive programming code somewhat like a stateless machine? I mean that reactive code doesn't change, so it is a stateless function. Or maybe it can be seen as a function that heavily relies on a bunch of outer variables as input and produces a result as output.

reactive ≠ stateless

Most reactive models - including synchronous reactive programming (Lustre, Esterel, etc.) and functional reactive programming (Elm, Yampa, etc.) - are stateful. They can, for example, express event accumulators that count the number of times a user presses a button. Some functional reactive systems can even express continuous integrals, accumulating the area under a curve. This can be useful for modeling physics behaviors - e.g. the position of a pendulum hanging from the mouse by an elastic wire. But this expressiveness comes with a cost for reasoning about performance.
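For a flavor of such an event accumulator, here's a rough sketch in the style of the foldp combinator from (pre-0.17) Elm's signals, rendered in Python for illustration (not any particular FRP library):

```python
# Sketch of foldp ("fold over the past"): each event folds into an
# accumulated state, so the output stream depends on the whole history.
def foldp(step, initial, events):
    state = initial
    for event in events:
        state = step(event, state)
        yield state  # the signal's value after this event

# Count button presses: the accumulator is what makes this stateful.
clicks = ['press', 'press', 'press']
counts = list(foldp(lambda _event, n: n + 1, 0, clicks))
print(counts)  # [1, 2, 3]
```

The accumulator is exactly the feature that a stateless reactive model (like a spreadsheet) cannot express: a spreadsheet cell recomputes from current inputs, while `counts` here depends on how many events have ever occurred.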

Not all reactive programming is stateful, of course. We can certainly model stateless systems reactively, or develop languages for reactive programming that eschew state. My 'reactive demand programming' (RDP) is stateful, but externalizes state in order to simplify live programming, modeling of open and distributed systems, orthogonal persistence, etc. Spreadsheets are an example of stateless reactivity, but are unsuitable for many problem domains.

The properties we call 'reactive' and 'stateful' should be considered entirely orthogonal.

No de facto standard approach

I think a very good reason that reactive programming isn't built into popular languages is that there really isn't (yet?) any single clear general-purpose design. Perhaps eventually that will change, but I think even if so, we're probably still far off from there.