## Reversing operations

If one wants to iterate through all states s1 reachable from an initial state s0 in an imperative language (think of a chess board and a set of possible moves), one could do something like:

    for i in 1..#changes {
        s1 := copy(s0)
        change(s1, i)
        // process s1
    }


It is usually faster and less memory-intensive to do the following instead:

    for i in 1..#changes {
        change(s0, i)   // pushing undo data onto the stack
        // process s0
        undo(s0)        // popping undo data from the stack
    }
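Concretely, the second pattern can be sketched in Python. Here the state is a plain list and each move overwrites one cell; `change`, `undo`, and the `(index, value)` move encoding are all illustrative assumptions, not part of the question:

```python
# Hypothetical sketch: each "change" overwrites one cell of a list-based
# state, pushing the overwritten value so it can be restored afterwards.

def change(state, i, new_value, undo_stack):
    """Apply a destructive update, saving the old value for undo."""
    undo_stack.append((i, state[i]))  # record what we are about to destroy
    state[i] = new_value

def undo(state, undo_stack):
    """Reverse the most recent change by popping the saved value."""
    i, old_value = undo_stack.pop()
    state[i] = old_value

s0 = [0, 0, 0]
undo_stack = []
for i, v in enumerate([10, 20, 30]):
    change(s0, i, v, undo_stack)
    # ... process s0 here ...
    undo(s0, undo_stack)

# After each undo, s0 is back to its initial value.
```

The undo data here is just the overwritten value; the question is whether that data (and `undo` itself) can be derived automatically from `change`.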


Are there general techniques for inferring the "undo data" that an operation should save, and possibly the corresponding undo operation?

### You may enjoy the Galois

You may enjoy the Galois system. Traditional speculative parallelism does exactly what you don't want: it speculatively runs on a copy, and to go back it scraps the copied state and uses the cached state. Galois does what you want: when it speculates ahead, it records a log of undo operations; to go back, it can just play the log.

To provide a more solid linguistic basis, it may be possible to adapt bidirectional languages such as lenses. I've been thinking more in terms of incremental computation however, but won't say much more there as it's active research :)

### Persistent Data Structures

Most computations are not conservative, and your ability to infer state change is severely limited across any modular composition. So you'll probably need to constrain your vocabulary if you want reversible computing.

Keep it simple and sufficient. I think one simple and sufficient answer is to just select a data structure with O(1) copy costs and a storage cost proportional to the changes applied (or at least well below linear), a.k.a. a persistent data structure. Clojure explicitly uses this technique to support MVCC.
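As an illustration of that idea, here is a minimal persistent binary tree in Python with path copying; the `Node`/`build`/`update` names are invented for the sketch, and real implementations (e.g. Clojure's vectors) are considerably more elaborate:

```python
# Illustrative persistent (immutable) binary tree: an update rebuilds only
# the O(log n) path to the changed leaf and shares the rest of the old
# tree. "Copying" the whole tree is just keeping a reference, i.e. O(1).

from collections import namedtuple

Node = namedtuple("Node", "left right")

def build(values):
    """Build a complete tree over a power-of-two-length list of values."""
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return Node(build(values[:mid]), build(values[mid:]))

def update(tree, size, index, value):
    """Return a new tree with values[index] replaced; the old tree is unchanged."""
    if size == 1:
        return value
    half = size // 2
    if index < half:
        return Node(update(tree.left, half, index, value), tree.right)
    return Node(tree.left, update(tree.right, half, index - half, value))

old = build([1, 2, 3, 4])
new = update(old, 4, 2, 99)
# The untouched left subtree is shared, not copied:
assert new.left is old.left
```

With this representation the "undo" of the question disappears: keep the old root and you have the old state.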

### Reversible computations are

Reversible computations are very interesting and useful. If you have an aggregate operation on a set and an element is removed or its value changes, you can "reverse" its effect on the aggregate to efficiently update the value of the aggregate operation. Likewise, if you have a bi-directional binding, you can reverse the bound-to operation to determine its effects on the LHS; I do this a lot in Bling.
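The aggregate case can be sketched with a running sum, one of the few aggregates whose per-element effect is exactly invertible (max, for instance, is not); the class and method names here are illustrative:

```python
# Sketch: maintaining an aggregate (here, a sum) incrementally by
# "reversing" an element's old contribution instead of recomputing.

class RunningSum:
    def __init__(self, values):
        self.values = list(values)
        self.total = sum(self.values)

    def set(self, i, new_value):
        # Reverse the old element's effect, then apply the new one:
        # total' = total - old + new, an O(1) update.
        self.total += new_value - self.values[i]
        self.values[i] = new_value

s = RunningSum([1, 2, 3])
s.set(1, 10)   # total becomes 1 + 10 + 3 = 14 without a rescan
```

This works because addition forms a group; for aggregates without inverses one has to fall back on recomputation or more structure (e.g. segment trees).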

Unfortunately, David is right, most computations are not conservative and in general it is difficult to compute reversals. I'm guessing most quantum-computing programming languages are based on reversible operations because they have to be, but such languages are very theoretical and probably don't make sense in your context.

### As dmbarbour says, most

As dmbarbour says, most everyday computations are not reversible, i.e. they are many:1 mappings. This is the same as saying information is lost during the computation, and that lost information is exactly the "undo data" you are asking about.

There is a simple way to make any computation reversible, and that is to change information-destroying operations into information-moving operations, where the information is moved to a junk area that's never read. In general we can take any Turing machine and give it a second tape, on which we write any information which would otherwise be lost during an update of the first tape. This doesn't affect our computation, which only cares about the first tape, but it does make our computations 1:1, and importantly the second tape is the "undo data" which lets us get back to our starting configuration.
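The two-tape idea can be mimicked directly, with a Python list standing in for the second tape (the names are illustrative):

```python
# Sketch: making a destructive update 1:1 by moving, not destroying,
# information. The overwritten symbol goes to a second "tape" that the
# computation never reads, but which makes every step invertible.

def write(tape, junk, pos, symbol):
    junk.append(tape[pos])   # move the old symbol to the junk tape
    tape[pos] = symbol

def unwrite(tape, junk, pos):
    tape[pos] = junk.pop()   # restore it from the junk tape

tape, junk = list("abc"), []
write(tape, junk, 1, "X")    # tape is now ['a', 'X', 'c']
unwrite(tape, junk, 1)       # tape is back to ['a', 'b', 'c']
```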

Moving more high-level than Turing Machines, if you can enumerate the domain of every destructive (many:1) operation then instead of every output value having a set of possible inputs, it has an ordered list of possible inputs. For example, reversing square(x) for an output of 25 will give the set {5, -5}. If we enumerate the integers as 0, 1, -1, 2, ... then we can put the set {5, -5} into their enumerated order as the list [5, -5].

Each time a destructive operation (like square) is called, it should look up its argument in the list of possible inputs and log the index to the "undo data". For example, "square(-5)" should enumerate the integers and their squares until it gets to -5: (0,0), (1,1), (-1,1), (2,4), (-2,4), (3,9), (-3,9), (4,16), (-4,16), (5,25), (-5,25). It then logs the number of values it found with squares equal to square(-5), which in this case is 1. Thus the undo data for square(x) is 1 bit, which specifies whether it's positive or negative.

To undo a computation, given its undo data, we just reverse each operation, popping an index off the undo data if the operation is destructive.
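Specialised to the square example above, the scheme might look like this in Python (the function names are invented for the sketch):

```python
# The undo log records which preimage, in the enumeration order
# 0, 1, -1, 2, -2, ..., produced the output. For square that is one
# bit: index 0 for the non-negative root, index 1 for the negative one.

from math import isqrt

def square_logged(x, log):
    """Square x, logging the 1-bit index of x among square's preimages."""
    log.append(0 if x >= 0 else 1)
    return x * x

def unsquare(y, log):
    """Invert square using the logged undo datum."""
    root = isqrt(y)                  # the non-negative integer square root
    return root if log.pop() == 0 else -root

log = []
y = square_logged(-5, log)           # y == 25, log == [1]
x = unsquare(y, log)                 # recovers -5
```

This is the "arbitrary optimisation" mentioned below: logging the sign directly instead of running the full enumeration.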

Such a scheme is probably vastly inefficient and requires enumerable values, but it's as generic as I can think of at the moment. Note that it is near-optimal space-wise, it generalises the Turing Machine case (everything on a Turing Machine is enumerable, since we can enumerate tapes), and is amenable to arbitrary optimisations (like having "square" log 0/1 based on the sign, rather than doing the full enumeration).

Also, a fun fact: for any real life computer, the undo log is the heat it gives off during the computation!

### Also, a fun fact: for any

> Also, a fun fact: for any real life computer, the undo log is the heat it gives off during the computation!

You mean for any "perfect" theoretically thermally optimal computer. I'm sure we are nowhere near that point yet in the computers we actually build.

### Darcs

While not answering your immediate question, I wonder if the Darcs theory of patches might be relevant?

### J

The J language has first-class support for this (for pure expressions) in terms of Under (&.). Many built-in operations have pre-defined inverses, and it's possible to define them yourself for your own functions.
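For readers unfamiliar with J, the shape of Under can be approximated in Python by supplying the inverse explicitly (J itself knows the inverses of many primitives; this three-argument `under` is an invented stand-in, not J's actual mechanism):

```python
# Rough analogue of J's Under (&.): transform in with f, act with g,
# transform back out with f's inverse.

import math

def under(f, f_inv, g):
    """Return a function computing f_inv(g(f(x)))."""
    return lambda x: f_inv(g(f(x)))

# Example: adding 1 "under" log is multiplication by e.
times_e = under(math.log, math.exp, lambda y: y + 1)
```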

### Learning things like this is

Learning things like this is why I still love LtU.

Bling had something similar to J. Still, if you defined a method out of only reversible operations, the method itself was also reversible. Bling also applied this concept to derivatives (inverse of op, derivative of op); i.e., ops went beyond being just computations.

It is really hard for me to grok J's under ops. How do they handle multi-value cases? E.g., a = b + c: when we go in reverse, we have values for a and c and wish to compute b (b = a - c). Bling always took the left-most value, so to speak: given right = left + width, inverting an assignment would always shove values into left (left' = right' - width). Getting values into width required using the abstraction right0 = width + left instead. Since a + b was not the same as b + a, this was a weird semantic corner of the language.
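A toy Python rendering of that left-most rule, with invented names rather than Bling's actual API:

```python
# Sketch of the left-most inversion rule described above: for a binding
# target = left + right, assigning to the target shoves the change into
# the left operand (left = target - right), leaving right fixed.

class Plus:
    """A binding value() == left + right that inverts into `left`."""
    def __init__(self, left, right):
        self.left, self.right = left, right

    def value(self):
        return self.left + self.right

    def assign(self, target):
        # Invert the addition into the left-most operand only.
        self.left = target - self.right

b = Plus(left=3, right=4)   # `right` plays the role of a fixed width
b.assign(10)                # left becomes 10 - 4 = 6
```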

### Immutable structures

I've written game AIs using both your proposed methods.

There's also a third approach: use immutable data structures, and instead of change() use a function that constructs a new board from the old one with the appropriate move applied. E.g. a quadtree-type structure, or one of many similar structures. The new tree is largely built from pieces of the old tree, so you don't need to make a complete new copy. You don't need to undo: you can keep the old and new trees in memory (efficient because they share data) and just free the new tree when you're done.
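A minimal version of this third approach in Python, using a tuple of row tuples instead of a quadtree (all names are illustrative):

```python
# Sketch: an immutable board as a tuple of row tuples. "Moving" builds a
# new board that shares every untouched row with the old one, so old and
# new boards coexist cheaply and no undo step is needed.

def make_board(n):
    return tuple(tuple(0 for _ in range(n)) for _ in range(n))

def place(board, row, col, piece):
    """Return a new board with `piece` at (row, col); old board unchanged."""
    new_row = board[row][:col] + (piece,) + board[row][col + 1:]
    return board[:row] + (new_row,) + board[row + 1:]

old = make_board(4)
new = place(old, 1, 2, "N")
# Untouched rows are shared, not copied:
assert new[0] is old[0]
```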

Easy undo is one of those things that can come for free with immutable FP. (Though these data structures all work fine outside of the context of FP.) Of course there's a trade-off, you may find that the performance you get from a flat array is much better than anything you could get from a tree structure.

### Immutability is basically the

Immutability is basically the problem he is avoiding. Instead of paying a copy cost on every operation, his insight is that it may be cheaper to be destructive and then recover.

I agree that copy can be cheap with FP tricks. To help prime the search: zippers and Chris Okasaki's book on functional data structures.
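A tiny list-zipper sketch in Python, invented for illustration (Huet's zipper paper and Okasaki's book are the real references):

```python
# A list zipper: a cursor into a sequence, represented as
# (reversed prefix, suffix). Moving and editing at the cursor are O(1),
# and earlier versions stay valid, which gives undo for free.

def zipper(xs):
    return ([], list(xs))             # (reversed left part, right part)

def right(z):
    left, rest = z
    return ([rest[0]] + left, rest[1:])

def set_focus(z, value):
    left, rest = z
    return (left, [value] + rest[1:])

def to_list(z):
    left, rest = z
    return list(reversed(left)) + rest

z = right(zipper([1, 2, 3]))          # cursor on the 2
z2 = set_focus(z, 99)
# to_list(z2) == [1, 99, 3], while z still sees [1, 2, 3]
```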

Or, better to say, assignments whose scopes are bound dynamically. Compare

    dlet (x = 4) in { ... }


to

    dlimit (change(s1, i)) in { ... }


An implementation could intercept all assignments in the limited method call, push the old values on a stack (or just trap them in a closure), and re-assign the old values when the body is finished.
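That interception could be sketched in Python with a context manager that snapshots an object's attribute dictionary on entry and re-assigns the old values on exit; this is an illustrative approximation, not a real `dlimit`:

```python
# Sketch: roll back any destructive attribute updates made inside the
# `with` block by saving the old values first and restoring them after.

from contextlib import contextmanager

@contextmanager
def dlimit(obj):
    saved = dict(vars(obj))          # "push" the old attribute values
    try:
        yield obj
    finally:
        vars(obj).clear()            # "pop" them back when the body ends
        vars(obj).update(saved)

class State:
    def __init__(self):
        self.x = 4

s = State()
with dlimit(s):
    s.x = 99                         # destructive change inside the scope
# s.x is 4 again here
```

A real implementation would have to intercept assignments to nested structures too, not just the object's own attributes.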

### Program Inversion

There is some work by Torben Mogensen on semi-inversion in functional languages (GPCE 2005 and PEPM 2008). There is also a paper in PLDI'11 that looks relevant (I haven't read it yet, but the abstract sounds interesting):

*Path-based Inductive Synthesis for Program Inversion*,
Saurabh Srivastava, Sumit Gulwani, Jeffrey S. Foster, Swarat Chaudhuri