OOP language extension

Here is a simple idea intended to plug a few semantic holes in the imperative programming style: a foolproof way to handle mutable state in imperative programs. Basically we want to introduce a class called an ASTB, or applicative state transition block. The definition is simple. We will need four data structures: a read-only state, a write-only temporary state, a structure to hold the input, and one for the output. There seem to be six external methods: a driver that supplies an input, starts the computation of the new state, and returns the output on completion; a constructor and a destructor; methods to read the state and the output at any time; and one more to initialize the state. This last one is a little more tricky, so we will provide some sort of security and only allow the update when a transition is not in progress (some sort of mutex lock).

Internally the object does the following: when an input is supplied, a series of functions compute new state values and output values and put them in the temporary state and output data structures. There is an internal method that has permission to transfer the temporary state to the state when the output is returned. That seems to be enough to do the job, although we might also want a temporary output as well as an output, and transfer that on completion under the same lock. This wouldn't be needed if we output through the driver's return value.
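
To make the shape concrete, here is a minimal Python sketch of how such an ASTB might look. Everything in it is illustrative: the class name, the (state, input) -> (new_state, output) transition signature, and the counter example are assumptions, not an existing API.

    import threading
    from copy import deepcopy

    class ASTB:
        """Applicative state transition block (illustrative sketch)."""

        def __init__(self, initial_state, transition):
            # transition: (state, input) -> (new_state, output)
            self._state = initial_state
            self._transition = transition
            self._lock = threading.Lock()

        def drive(self, inp):
            """Driver: supply an input, compute the new state, return the output."""
            with self._lock:                     # no update while a transition runs
                temp, output = self._transition(deepcopy(self._state), inp)
                self._state = temp               # commit the temporary state
                return output

        def read_state(self):
            with self._lock:
                return deepcopy(self._state)

        def initialize(self, state):
            """Allowed only when no transition is in progress (the lock)."""
            with self._lock:
                self._state = deepcopy(state)

    # Usage: a counter whose transition also returns the running total.
    counter = ASTB(0, lambda s, x: (s + x, s + x))
    print(counter.drive(5))   # 5
    print(counter.drive(2))   # 7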

There is certainly nothing new about this, and we suspect that there are patterns for doing it. The point is to make the case that there are sound ways to handle state, to gather comments, and perhaps to encourage a little more attention to these issues.

I'm confused

What is the issue here? I'm not sure that I completely understand what you are talking about, but I'll take a stab.

This seems similar to a state monad:

  • we have a 'driver' to execute a computation
  • we have a 'constructor' that injects values into the appropriate domain (the 'unit' function)
  • we can access state variables
  • at each step along the computation, mutation is actually just constructing a temporary state (with mutated elements differing from the prior state) and supplying this to the remainder of the computation
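
For concreteness, here is a minimal Python rendering of that reading; State, unit, bind, get, and put are spelled out by hand rather than taken from any library:

    class State:
        """State monad: wraps a function state -> (value, new_state)."""
        def __init__(self, run):
            self.run = run

        @staticmethod
        def unit(value):                  # inject a value (the 'unit' function)
            return State(lambda s: (value, s))

        def bind(self, f):                # sequence the remaining computation
            def run(s):
                value, s2 = self.run(s)
                return f(value).run(s2)
            return State(run)

    get = State(lambda s: (s, s))         # read the state
    def put(s2):                          # 'mutation' builds a new state
        return State(lambda s: (None, s2))

    # Driver: run a computation against an initial state.
    prog = get.bind(lambda n: put(n + 1).bind(lambda _: State.unit(n)))
    print(prog.run(41))                   # (41, 42)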

Different Languages?

Thankfully, Wikipedia has an explanation of practically everything.
See, for example, the entry on synchronous logic. A quote from about halfway down:

"The main advantage of synchronous logic is its simplicity. Every operation in the circuit must be completed inside a fixed interval of time between two clock pulses, called a 'clock cycle'. As long as this condition is met (ignoring certain other details), the circuit is guaranteed to be reliable"

To understand a little of why this is true, you might look at the entry for ladder logic. Notice that ladder logic is subject to "races" if the "rungs" are not arranged properly, a condition that also occurs in software systems. The solution is to use synchronous logic. In software we don't have "clocks" as such, but we can simulate the effect of clocking by triggering a "transition" based on an event. This is the basic idea of the ASTB.

Edit: An easy way to see the relation between ladder logic and programming is to think of the rungs of the ladder as separate lines of code.
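
Taking the rungs as lines of code, here is a hypothetical sketch of "clocking" in software: evaluate every rung against a frozen snapshot of the state, then commit all the results at once, so rung order cannot race. The helper names are invented for illustration.

    def clocked_step(state, rungs):
        """Evaluate every rung against one snapshot, then commit together."""
        snapshot = dict(state)            # frozen 'before the clock pulse'
        updates = {}
        for rung in rungs:                # each rung: old state -> {var: value}
            updates.update(rung(snapshot))
        new_state = dict(snapshot)
        new_state.update(updates)         # the 'clock edge': commit all at once
        return new_state

    # Two rungs that would race if run sequentially against live state:
    rungs = [lambda s: {"a": s["b"]},
             lambda s: {"b": s["a"]}]
    print(clocked_step({"a": 1, "b": 2}, rungs))   # {'a': 2, 'b': 1}, a clean swap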

There is a great deal of overlap between computer science and engineering, but we often seem to speak different languages. For instance, the monad is probably the alternative to an AST in the functional world. But then I don't know much about that.

State with atomic commits?

Backus' AST system (Applicative State Transition -- great, another clash in the TLA namespace) really looks like what we get with a state monad. However, it seems to me that programming in a system where one *has* to specify transitions in terms of the complete state is far from ideal. I would probably try to add a sublanguage (monad) to manipulate the elements of the state. Thus, the state monad would manipulate and compose functions of (State -> State), while the sublanguage would compose functions of (State -> Part_of_State) into (State -> State) functions. One could see the computations in the sublanguage as atomic blocks.
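
As an illustrative sketch of that sublanguage idea (the helpers field_update and atomic_block are invented, and plain dicts stand in for the state): lift functions that touch one element of the state into full State -> State transitions, then compose them into an atomic block.

    def field_update(key, f):
        """Lift a function on one element into a State -> State function."""
        def step(state):
            new_state = dict(state)
            new_state[key] = f(state[key])
            return new_state
        return step

    def atomic_block(*steps):
        """Compose part-of-state updates into one State -> State transition."""
        def step(state):
            for s in steps:
                state = s(state)
            return state
        return step

    commit = atomic_block(field_update("count", lambda n: n + 1),
                          field_update("log", lambda xs: xs + ["tick"]))
    print(commit({"count": 0, "log": []}))   # {'count': 1, 'log': ['tick']}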

Monad Connection

I usually think of the AST as an automaton, which means we can break it up using the serial and parallel decomposition theorems. Sometimes it makes sense to group state together; other times the state can be decomposed into several ASTs with inputs connected to outputs. The connection to monads is something I am still working on.
Edit: Atomic commits.
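
A toy sketch of the serial and parallel compositions, with each machine modeled as a hypothetical (state, input) -> (new_state, output) function:

    def serial(m1, m2):
        """Serial composition: m1's output feeds m2's input."""
        def step(states, inp):
            s1, s2 = states
            s1, out1 = m1(s1, inp)
            s2, out2 = m2(s2, out1)
            return (s1, s2), out2
        return step

    def parallel(m1, m2):
        """Parallel composition: both machines see the same input."""
        def step(states, inp):
            s1, s2 = states
            s1, out1 = m1(s1, inp)
            s2, out2 = m2(s2, inp)
            return (s1, s2), (out1, out2)
        return step

    adder   = lambda s, x: (s + x, s + x)   # running sum
    counter = lambda s, _: (s + 1, s + 1)   # input count
    machine = parallel(adder, counter)
    print(machine((0, 0), 5))               # ((5, 1), (5, 1))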

Petri nets?

One particular way to describe decomposed state machines is Petri nets. Look at coloured nets (e.g. CPN Tools from the University of Aarhus) for all the machinery you need.
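
For reference, the basic (uncoloured) firing rule is small enough to sketch; the encoding below, with markings as dicts and transitions as (inputs, outputs) pairs, is just one illustrative choice:

    def fire(marking, transition):
        """Fire a transition if every input place holds enough tokens."""
        inputs, outputs = transition
        if any(marking.get(p, 0) < n for p, n in inputs.items()):
            return None                    # not enabled
        new = dict(marking)
        for p, n in inputs.items():
            new[p] -= n                    # consume input tokens
        for p, n in outputs.items():
            new[p] = new.get(p, 0) + n     # produce output tokens
        return new

    # Two machines sharing a resource place: decomposed state as a net.
    start = ({"idle": 1, "resource": 1}, {"busy": 1})
    print(fire({"idle": 2, "resource": 1, "busy": 0}, start))
    # {'idle': 1, 'resource': 0, 'busy': 1}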

example?

Perhaps you could present an example to clarify your description?

Edit

Does the edit I just did help?

Another example

Here is an example I used a few days ago. It is almost too simple. In that example I used a switch statement instead of an ASTB.

The switch you wrote is a simple function.

The switch you wrote is a simple function which inverts a boolean variable; in other words, the operator 'logical not'.

The problem is that with this style of programming, usage of the variable 'paren' is hardcoded inside the code, with no obvious way to properly control its values, whether you use a switch statement or the 'logical not' operator.

A better approach would be to decouple the parsing code from the parenthesis-control code and provide two parsing functions, one for when parenthesis is true and one for when it is false, that accept the rest of the parser code as a parameter; by using function composition, a better result can be achieved.
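
Something like the following sketch, where the parser itself is hypothetical and only the shape matters: the parenthesis flag becomes a choice of function, and the rest of the parser is passed in as a parameter.

    def parse_with_paren(rest):
        """Parse a parenthesized expression; 'rest' is the remaining parser."""
        def parser(tokens):
            assert tokens[0] == "("
            inner, tokens = rest(tokens[1:])
            assert tokens[0] == ")"
            return inner, tokens[1:]
        return parser

    def parse_without_paren(rest):
        """Same shape, no parenthesis handling."""
        return rest

    def parse_atom(tokens):                # the 'rest of the parser' here
        return tokens[0], tokens[1:]

    # The flag is now a choice of function, not a mutable variable:
    print(parse_with_paren(parse_atom)(["(", "x", ")"]))   # ('x', [])
    print(parse_without_paren(parse_atom)(["x"]))          # ('x', [])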

The problem of state is a difficult one to solve. Given uncontrolled access to state, many people use it to solve computational problems that are easily solved with a functional approach, thus making programs more difficult than they should be. But, in my opinion, shutting state manipulation out of a program is also not the way to go; many applications have a great deal of state which is used not for controlling the flow of computation but as persistent storage.

I think the solution lies in reactive programming: since state transition cannot be avoided, a better way would be to connect reactions to actions, so that we are sure a state change always invokes the appropriate piece of code; we might not be able to predict when the reaction will take place, but at least we can be certain that the appropriate reaction will be invoked at a particular state.

Events

a better way would be to connect reactions to actions,

This is an approach I like to use myself. It is a simple matter to separate events and functions by not allowing side effects in functions, defining an "event" to be a function with a side effect, and using the event keyword to be clear about it. Only events can write to state, while functions are allowed to read it. We also use a structure something like the AST block and only update the state at the end of the event if it "worked". The question I often wonder about is: does it break the functional model?
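
A small sketch of that separation in Python, with an invented `event` decorator standing in for the keyword: functions only read, while events stage their writes in a temporary state and commit at the end only if the body "worked".

    from copy import deepcopy

    def event(f):
        """Mark a state-writing procedure; commit only if the body succeeds."""
        def run(state, *args):
            temp = deepcopy(state)        # write-only temporary state
            f(temp, *args)                # may raise -> nothing is committed
            state.clear()
            state.update(temp)            # commit at the end of the event
        return run

    def balance(state):                   # a plain function: reads only
        return state["balance"]

    @event
    def withdraw(state, amount):
        if state["balance"] < amount:
            raise ValueError("insufficient funds")
        state["balance"] -= amount

    account = {"balance": 100}
    withdraw(account, 30)
    print(balance(account))               # 70
    try:
        withdraw(account, 1000)
    except ValueError:
        pass
    print(balance(account))               # still 70: the failed event left no trace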

Perhaps too complicated?

I cannot say I understand you 100%, but if you check how nature works, for example, you will see that the signal-reaction model prevails.

In a reactive programming language, signals should be the end-points of computations: when a computation ends, it emits a signal with a value. One or more computations can be attached to those signals, allowing the formation of a state transition model that is natural and requires no explicit support from the language.

The action-reaction model can also be used for handling exceptions: for example, the operation A / B can yield two signals: a) the result, b) a division-by-zero signal.

Sequencing computations means chaining the end-points of computations to other computations; with this model, parallelism comes out in a natural way: two computations that are attached to independent signals (where there is no data dependency) can be executed in parallel.
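
A toy version of the divide example (the Signal class here is invented for illustration; a real reactive language would do this wiring for you):

    class Signal:
        """End-point of a computation: listeners run when a value is emitted."""
        def __init__(self):
            self._listeners = []
        def attach(self, f):
            self._listeners.append(f)
        def emit(self, value):
            for f in self._listeners:
                f(value)

    def divide(a, b, result, div_by_zero):
        """A / B with two outcome signals instead of an exception."""
        if b == 0:
            div_by_zero.emit((a, b))
        else:
            result.emit(a / b)

    result, div_by_zero = Signal(), Signal()
    result.attach(lambda v: print("result:", v))
    div_by_zero.attach(lambda ab: print("division by zero:", ab))
    divide(6, 3, result, div_by_zero)     # result: 2.0
    divide(6, 0, result, div_by_zero)     # division by zero: (6, 0)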

A related way of thinking.

A related way to think about this in the world of predicates and backtracking is the "sign" concept, derived from pragmatic philosophy. A sign is a predicate that comes into existence when an event succeeds. For example, (compressor-motor on) is a predicate that is put into the database when an EVENT rule succeeds in starting the motor. To do this there is always an action and a test for success. I usually think of facts like (compressor-motor ?) as making up a situation. Formal rules can reason about a situation but can't change it. In this system there are no exceptions: failure causes backtracking, and the knowledge base will redirect the system.
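
Roughly, as a made-up miniature (the fact names follow the comment above; the motor action is stubbed): an event rule performs the action, tests for success, and only then asserts the sign into the database.

    facts = set()                         # the situation: the current signs

    def start_motor():                    # the action, stubbed for illustration
        return True                       # pretend the hardware responded

    def event_rule():
        """Act, test for success, and only then assert the sign."""
        if start_motor():
            facts.add(("compressor-motor", "on"))
            return True
        return False                      # failure -> the caller backtracks

    def situation_holds(pred, value):
        """Formal rules read the situation but cannot change it."""
        return (pred, value) in facts

    if not event_rule():
        print("backtrack: try another rule")
    print(situation_holds("compressor-motor", "on"))   # True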

Many people are "afraid" of backtracking, but in a system with modus ponens rules and event rules the forward and backward systems are equivalent, and one can simply choose the direction.