## Connections between Transactions and Promises/Futures?

I've been spinning my wheels on this for a while, so I'm hoping LtU has an answer: has any connection been established between transactions and promises/logic variables? I'm considering promises as implemented in Alice ML, and transactions with at least the ACI properties (minus the D). The connection is more apparent in a personal project where this came up, so apologies if this seems ridiculous at first.

If you view the store as an immutable pair:

```
type 'a Store = { current: 'a; future: 'a Store Future }
```

beginning a transaction consists of starting a computation with the promise for that future:

```
val begin_trans: 'a Store -> ('a Store Promise -> unit) -> unit
```

and committing a transaction consists of resolving the promise for that future:

```
val commit_trans: 'a Store Promise -> 'a Store -> unit
```

Of course, transactions can nest stack-like, and one could recreate this by creating more promises down the call chain which get resolved as calls return. Of course, promises also permit non-nested resolution if needed (though I'm sure that sounds ridiculous for transactions).
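Under these assumptions (Alice ML-style single-assignment promises, the store as a (current, future) pair), the scheme might be sketched in Python; the `Promise` and `Store` classes here are hypothetical stand-ins for illustration, not any real library's API:

```python
class Promise:
    """Single-assignment cell, a stand-in for an Alice ML-style promise."""
    def __init__(self):
        self.resolved = False
        self.value = None

    def resolve(self, value):
        # A second resolution fails: this is where a commit conflict shows up.
        if self.resolved:
            raise RuntimeError("promise already resolved")
        self.resolved = True
        self.value = value

class Store:
    """The store as an immutable pair: a current value plus a promise for the next Store."""
    def __init__(self, current):
        self.current = current
        self.future = Promise()

def begin_trans(store, body):
    """Begin a transaction: run `body` with the promise for the store's future."""
    body(store, store.future)

def commit_trans(promise, new_store):
    """Commit by resolving the promise for the future; raises if already committed."""
    promise.resolve(new_store)

# A trivial transaction: replace the current value with its double.
s = Store(21)
begin_trans(s, lambda st, p: commit_trans(p, Store(2 * st.current)))
print(s.future.value.current)  # → 42
```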

It seems a little tenuous, as I've glossed over many details and LtU isn't the place for detailed design discussions, but hopefully it's understandable enough that someone can point out where this has already been discussed, or tell me how ludicrous the entire idea is.


### Example?

I'm having some difficulty placing the sparse notation above in context.

The classic, simple example is the highly contrived (and somewhat insecure) "banking" operation, where two parties each grab a current value, subtract (withdraw) some, then put the remainder back.

How would you express this example as a pair of promises on 'a Store?

Importantly, where is contention being detected, resolved, reversed, and possibly retried?

The above 'commit_trans' interface doesn't seem to offer any feedback in case of failure, so I would suspect the transaction is retried automatically until it succeeds, but I'm not quite grokking how this might occur. Or are you modifying the Promise to report failure?

### Yes, the promise fails when

Yes, the promise fails when the second client also tries to set it (promises are single-assignment). This failure could propagate up the call chain and a retry is possible from there with a new promise.

Re: contention detection, fulfilled by promise single-assignment.
Re: resolved, failure propagation up to begin_trans where it could be retried, or some other policy could be enforced.
Re: reversed, not sure what you mean here.

Re: banking operation, only one client will successfully set the promise they share, so the second withdrawal would fail.
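To make the banking conflict concrete, here is a hedged sketch with a hypothetical single-assignment `Promise` class (invented for illustration): both clients read the same balance and race to resolve the shared promise, and single assignment guarantees exactly one commit succeeds.

```python
class Promise:
    """Hypothetical single-assignment cell."""
    def __init__(self):
        self.resolved = False
        self.value = None

    def resolve(self, value):
        if self.resolved:
            raise RuntimeError("promise already resolved")
        self.resolved = True
        self.value = value

balance = 100
future_balance = Promise()       # the promise both withdrawals share

def withdraw(amount):
    """Grab the current value, subtract some, try to put the remainder back."""
    remainder = balance - amount
    future_balance.resolve(remainder)   # single assignment: first committer wins

withdraw(30)                     # first client commits a balance of 70
second_failed = False
try:
    withdraw(50)                 # second client conflicts...
except RuntimeError:
    second_failed = True         # ...and must retry against the new store

print(future_balance.value, second_failed)  # → 70 True
```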

### A typical transaction

A typical transaction will read and write to a wide set of storage elements. A conflict between two transactions can only occur where the sets intersect. When this happens, however, at least one transaction must be aborted, and this requires reversing (or abandoning) changes to all stores touched by the aborted transaction.

The single-store atomic promise is very much a trivial case - as is the banking example. Attempting to represent all program state as a single Store will never scale, not even to a meager two concurrent threads, since it would always require abandoning all but one concurrent read-write transaction. The only place I can see the trivial single-store "transactions" you propose as being useful is if the vast majority of operations on the Store are read-only and you don't know in advance whether they will be read-only.

For practical transactions, you'd need either to detect and resolve read/write/read-write conflicts on sub-structure of 'a Store (i.e. "merge" anything that isn't truly a conflict between different Future values for a single Store), or to support transactions that touch more than one Store. Of those, the latter is more flexible, more powerful, better defined, and therefore more realistic. Some transactional systems support both of these approaches.

Once you adopt either of these practical options, the 'Future' concept becomes less relevant and meaningful. In the multi-store approach, it becomes quite unclear what a dozen different Futures would mean to the users... and in the "merged futures" approach, one generally loses the atomicity and isolation features in any case.

Thus, unless you have a design in mind to scale the system up to multiple Stores (or to sub-structural conflict resolution), the relationship you're seeing between promises and transactions is at most a trivial one. Indeed, I'd not bother associating it with transactions at all: you will find a far closer comparison with RCU (read-copy-update).

### The single-store atomic

> The single-store atomic promise is very much a trivial case - as is the banking example.

Yes, the single-store closely reflects the application where this came up, which got me thinking about whether or not there is an underlying connection between the two and whether non-nested transaction commit might be a useful/usable pattern at all.

> The only place I can see the trivial single-store "transactions" you propose as being useful is if the vast majority of operations on the Store are read-only and you don't know in advance whether they will be read-only.

It's more the case that transactions/promises are used to prune a tree of computations persisted in a single-value store. After noticing I could express transactions as a special case of (nested) promise resolution in this domain, I was wondering whether there was really any deeper connection between the two so that I could base my primitives on a more rigorous formal foundation.

Edit: and yes, the computation tree is intentionally read-only, and consists of suspended continuations; these suspended continuations are released via pruning by transaction commit/promise resolution as mentioned above. Just a little more background if that helps.

### Very specific application

Single assignment corresponds to a first-set-value-wins policy. There are other choices that are useful in general, but I suppose that is the only choice you need in your application.

When you build "computations persisted in a single-value store", I suppose that each nested computation gets its own store? And parallel computations share the same store?

### When you build "computations

> When you build "computations persisted in a single-value store", I suppose that each nested computation gets its own store? And parallel computations share the same store?

Yes, each nested computation can write to its own store, and does so with the default transactional semantics. This store only eventually folds back into the top-level store. But, using promises to implement transactions also leaves open the possibility of non-nested resolution, where you don't have to pass down a new store, but an existing one from a higher-level, thus enabling you to short-circuit the cascade. I'm not sure whether this will turn out to be useful.
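The difference between cascading and short-circuit commit can be sketched with a hypothetical single-assignment `Promise` class (invented for illustration): nested commits fold upward one level at a time, while non-nested resolution hands an ancestor's promise down so a child can commit straight to the top.

```python
class Promise:
    """Hypothetical single-assignment cell."""
    def __init__(self):
        self.resolved = False
        self.value = None

    def resolve(self, value):
        if self.resolved:
            raise RuntimeError("promise already resolved")
        self.resolved = True
        self.value = value

# Nested commit: each level resolves its own promise as the call returns,
# so the child's result folds back into the top-level store step by step.
top, child = Promise(), Promise()
child.resolve("child result")    # inner transaction commits
top.resolve(child.value)         # outer transaction commits on return

# Non-nested resolution: pass the top-level promise down instead, so a
# deeply nested computation resolves it directly, short-circuiting the cascade.
top2 = Promise()

def deeply_nested(p):
    p.resolve("short-circuit")   # commits straight past the intermediate levels

deeply_nested(top2)
print(top.value, top2.value)  # → child result short-circuit
```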

### Conflict resolution

What happens if a different transaction already fulfilled that promise? Is that what you detect to deliver ACI properties?

My idea of transactions is that a variable starts a transaction as a local copy and a shared value. At the end of the transaction I tell the handler what is the old value and the new value of the variable. I could also provide a (current, future) pair to the handler at the beginning of the transaction instead.

According to the conflict resolution policy for the variable, the handler decides what to do about my request to update the shared variable with a new (or identical) value.

### Coherent Reactions

Under the synchrony hypothesis, a bunch of computations (e.g., functions) occur in one step in response to some external stimulus, and then the process repeats. The outputs, in this sense, are data flow / logic variables per time-step.

A question becomes, what if the computations to compute a time-step are wonky? E.g., what about FRP when there are effectful functions: under some topological schedule it works, but what if we're optimistic and try to speculatively parallelize?

I had been thinking about how to use STM-like techniques to deal with such scenarios... and then saw Jonathan Edwards' coherent reactions paper. Under my initial interpretation of his original draft, you can now view the problem of finding the right schedule as search. When/how to proceed with computation is a little orthogonal -- the cool thing for me was how to think about these together. His work is interesting for other reasons too, but, for this context, he is the only other person I've seen who has started to put these murky ideas together.

Perhaps not a direct answer, but food for thought.

### No substitute

Promise-based streams like you describe are no substitute for TM. The most important property of TM is that atomic transactions can be composed freely. This is not possible with simple synchronisation on promises and futures alone. As others have pointed out, you crucially need the ability to roll back failed transactions.

### Assuming we're dealing with

Assuming we're dealing with multiple types of values or multiple locations, I agree. But as I explained in other comments, only a single type of value is being manipulated here so synchronization on multiple "stores" or multiple locations in a store is not an issue.

Consider a tree of suspended continuations each of which are marked with a flag indicating whether they are a transaction entry point. Starting a transaction consists of copying an existing continuation, setting the flag, executing it and storing the resulting suspended continuation in the tree as a child of the original continuation. You can rollback to arbitrary points in the computation, even within a transaction, and continue along another branch. Full rollback is simply backtracking to the entry point and releasing all child suspensions. Commit writes the resulting continuation over the entry point and releases all children.
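The tree-of-suspensions scheme might be sketched like this in Python, with a dict snapshot standing in for a suspended continuation (the `Node`/`begin`/`step`/`commit` names are invented for this sketch, not from any real system):

```python
class Node:
    """A point in the computation tree; `state` stands in for a suspended continuation."""
    def __init__(self, state, is_entry=False):
        self.state = state
        self.is_entry = is_entry   # flag: is this a transaction entry point?
        self.children = []

def begin(node):
    """Copy the existing continuation, set the flag, store it as a child."""
    entry = Node(dict(node.state), is_entry=True)
    node.children.append(entry)
    return entry

def step(node, update):
    """Execute one step; store the resulting suspension as a child."""
    child = Node({**node.state, **update})
    node.children.append(child)
    return child

def rollback(entry):
    """Backtrack to the entry point, releasing all child suspensions."""
    entry.children.clear()

def commit(entry, leaf):
    """Write the resulting continuation over the entry point; release all children."""
    entry.state = leaf.state
    entry.children.clear()
    entry.is_entry = False

root = Node({"x": 1})
t = begin(root)              # start a transaction
leaf = step(t, {"x": 2})     # compute inside it
commit(t, leaf)              # fold the result over the entry point
print(t.state["x"])  # → 2
```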

Now consider the description in my post where the transaction start point is treated like a promise that you can set explicitly in a child continuation, or implicitly via cascading commit/rollback. It seems to me that the set of expressible transaction-based programs is a strict subset of the set of expressible promise-based programs, since with promises you could prune the tree in a non-nested fashion. So in this domain, it looks like transactions can be implemented in terms of promises.

Hopefully this clarifies the issue.

### Kay's "Worlds"

> You can rollback to arbitrary points in the computation, even within a transaction, and continue along another branch. Full rollback is simply backtracking to the entry point and releasing all child suspensions. Commit writes the resulting continuation over the entry point and releases all children.

I am reminded of Alan Kay's paper on Worlds. The basic idea there was to explore many futures before intelligently choosing just one of them.

> the set of expressible transaction-based programs is a strict subset of the set of expressible promise-based programs, since with promises you could prune the tree in a non-nested fashion.

In transaction-based composition, it is traditional that one can modify some 'data'-set atomically. Somewhat less traditional is the ability to extend the 'computations' set atomically, i.e. by specifying post-commit behaviors. Without first-class support for atomically specifying future behaviors, one often ends up emulating it (i.e. by placing activation records in a database...).

In any case, use of post-commit behaviors allows something of an "out" when it comes to transactions, but is rather painfully restrictive on the implementation (i.e. one cannot make any intelligent performance decisions, and it restricts automatic inlining).

A design I've been aiming to support provides "decaying" transactions that provide a similar out. The decay option is specified by the programmer, but leaves to the optimizers the decision of whether to delay behavior to post-commit vs. execute all or part of it within the transaction. The only behavior that is necessarily part of a decaying transaction is any behavior that contributes towards a "reply" one needs prior to committing the transaction. (My use of the word "decay" was a reference to "atomic decay" - a situation where the once indivisible 'atom'-concept starts breaking down.)

In any case, I'd just like to say that there are approaches involving transactional programs that allow one to escape the hierarchical restrictions. These may not be useful in your domain, but I'd hate to see transactional designs maligned inappropriately (esp. since I've boarded that bandwagon...)

### Post-commit

I am curious about this notion of "post-commit" in a transaction. I found this paper, which gives an example of malloc-based memory that may need to be freed, but only in case of commit. I don't see why specifying that would have to be atomic.

Developing Libraries Using Software Transactional Memory

Do you have other examples?

### Post-commit

My use of the term 'post-commit' meant nothing more than "after the commit". As in, any time after. If you want a post-commit behavior to be atomic, then have it start a new transaction...

The important feature of post-commit behaviors is that they don't occur if you don't commit. And, for the 'decay' variation, those also don't occur if you don't commit - but they might start occurring before you commit (the more flexible 'start' time supporting a wider range of optimizations).

There isn't anything special about the term, though there is some challenge in deciding appropriate semantics for exactly how to handle post-commit behaviors (continuations, messages) emitted by 'nested' transactions. Does one wait until the global commit, or does one only wait until popping one message back?

I'm thinking to make decaying transactions the default. That would keep the semantics simple: nested transactions simply continue to decay. Of course, one would need to wait on an acknowledgement after each 'write' in order to ensure that further reads inside the transaction would actually see the updated value, but that isn't a huge deal (one needs to do the same outside a transaction, anyway, when dealing with distributed storage).
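The core contract here — post-commit behaviors fire only after a successful commit, and never fire on abort — can be modeled with a simple deferred-action queue (a sketch; the `Transaction` API below is invented for illustration):

```python
class Transaction:
    """Toy model: post-commit behaviors run only after a successful commit."""
    def __init__(self):
        self.post_commit = []    # deferred behaviors (messages, continuations)
        self.log = []

    def send_post_commit(self, action):
        self.post_commit.append(action)

    def commit(self):
        self.log.append("committed")
        for action in self.post_commit:   # only now do the behaviors occur
            action(self)

    def abort(self):
        self.post_commit.clear()          # behaviors never occur without a commit
        self.log.append("aborted")

t = Transaction()
t.send_post_commit(lambda tx: tx.log.append("M2X delivered"))
t.commit()
print(t.log)   # → ['committed', 'M2X delivered']

t2 = Transaction()
t2.send_post_commit(lambda tx: tx.log.append("never happens"))
t2.abort()
print(t2.log)  # → ['aborted']
```

A "decaying" variant would additionally be free to start running queued actions before the commit, undoing them on abort — the flexible start time described above.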

Do you need more clarification?

### Looks like your use of

Looks like your use of "atomic" was not the one I understood.

If I understand correctly now, the "decay" starts when the decision to commit is taken, but before the changes are actually committed to main memory (in an undo-log-based transactional system, that would be immediate). As long as there is no access to vars being committed, the semantics should be the same as post-commit.

Not sure what the concern is in the last paragraph. For the question of intermediate or global commit, maybe you could name the transactions?

### Examples of Post-Commit and Decay

Alice starts a transaction T, requests a value X from memory, doubles the value to 2X and assigns it back to memory, sends a message M2X to Charlie, does a few more actions Z, then commits the transaction T.

Case A: Alice sends M2X to Charlie as part of the transaction T.

Outcome A: Charlie reacts to this message within the transaction T. That is, any updates Charlie makes to the state of the program will also be committed as part of Alice's transaction. Alice will use a distributed commit protocol (such as two-phase commit) to ask Charlie to prepare to commit the transaction, which (of course) won't happen until Charlie has finished reacting to the message.

If Charlie defaults to sending messages as part of the transaction T, then Charlie might end up including other resources, such as Bob, in the transaction. Thus, this approach has a tendency to make transactions very 'deep' and 'wide'. By nature, big transactions are more likely to conflict with another transaction, which means a high risk of being aborted.

Case B: Alice sends M2X as a true Post-Commit behavior.

Outcome B: Charlie doesn't receive M2X unless Alice successfully commits. Charlie never hears about transaction T, but rather processes M2X as though it were a new message.

Case C: Alice sends M2X with Post-Commit Semantics. Note: This is how you interpreted 'decay', above, but is not the same.

Outcome C: (2 sub-cases): (1) M2X is sent as a true Post-Commit message, in which case this devolves to Outcome B. (2) Charlie starts processing M2X immediately. Charlie will be prepared to undo any reactions from M2X should Alice fail to commit her action or if a conflict occurs between reacting to M2X and Alice's additional behaviors Z. In case of conflict, M2X will be delayed and re-executed later (to ensure that M2X occurs strictly after transaction T). After T commits, the reactions to M2X up to that point can also be committed and all further reactions to M2X can continue outside of any transaction - i.e. one is rid of the transactional overhead. If the commit for the M2X reaction fails due to other conflicts, the reaction to M2X may be restarted entirely - again, outside of a transaction.

These post-commit semantics are convenient in that they allow one to start processing more immediately, reducing latencies, and in that they are very well-defined. However, they have a few disadvantages... for example, if using the post-commit 'send' operation, Alice cannot receive any replies from Charlie (since that would require receiving a message from the future...). Additionally, a variety of optimizations remain unavailable due to the hard 'edge' where Charlie must detect conflicts between his reaction to the M2X message and Alice's further actions Z in T.

Case D: Alice sends M2X with the atomic-decay property.

Outcome D: (2 sub-cases): (1) Charlie receives M2X as a true post-commit message, in which case this devolves to Outcome B. (2) Charlie starts processing M2X while T is still ongoing. This case is very similar to Outcome C(2), but with a significant difference: Charlie does not need to detect conflicts between his reaction to M2X and T; instead, "decay" behaviors may freely occur within OR after a commit. The serialization is not wholly post-commit. Indeed, this is why I favored the word "decay": it brings to me an image of the hard 'serialized' edges from transactional overhead sort of rotting away into spongy, fuzzy edges before falling away entirely.

In comparison to post-commit semantics, there are two major advantages for decay: (a) a wider range of inline-ing optimizations can be supported by that fuzzy edge without needing to re-do anything. (b) if Alice needs to process a reply from Charlie, she can capture that reply in a promise and eventually wait until it is fulfilled. By doing so, Alice will promote the potentially decaying message M2X into a true part of the transaction. Alice does not need to know in advance of sending M2X whether or not the reply from Charlie will be needed.

Post-commit semantics make a painful default since that also means one cannot ever support replies as a default (and replies are needed for any memory access).

By comparison, support for decay-semantics is a very feasible 'default' because it need not be explicit in the code. Even better, in the case of deep composition of transactions - i.e. where Charlie sends some message off to Bob as part of a transaction - a default for 'decaying' transactions tends to isolate the atomic aspects up through the dynamic requirement for replies. Transactions will tend towards being as 'shallow' and 'narrow' as feasible based on the program composition... rather than 'deep' and 'wide'. Accordingly, the risk of transaction conflict is much reduced.

Another advantage of decaying transactions: at some point in any programming system, you'll start to touch non-transactional resources (such as speakers, and guns) at the edge of the system. Decay makes for common, predictable, and controllable semantics in the interactions between transactions and APIs.

For the question of intermediate or global commit, maybe you could name the transactions?

Not ideal.

Ideally, you want to program objects (like Alice and Charlie and Bob) to act ignorant of whether they happen to be inside a transaction. If objects are transaction aware, then you cannot verify that the behavior inside a transaction will be consistent with behavior in isolation outside of a transaction. We want our objects to follow the exact same code paths, and make the same observations, between reacting to a message outside a transaction but in isolation vs. reacting to a message in a transaction in a concurrency scenario.

That is the consistency relationship needed to reason about transactional behaviors under arbitrary composition. And achieving it requires appropriate decisions on program semantics and expression. Naming transactions in any meaningful way must be rejected - except, of course, at the implementation layer.

### These post-commit semantics

> These post-commit semantics are convenient in that they allow one to start processing more immediately, reducing latencies, and in that they are very well-defined. However, they have a few disadvantages... for example, if using the post-commit 'send' operation, Alice cannot receive any replies from Charlie (since that would require receiving a message from the future...). Additionally, a variety of optimizations remain unavailable due to the hard 'edge' where Charlie must detect conflicts between his reaction to the M2X message and Alice's further actions Z in T.
>
> Case D: Alice sends M2X with the atomic-decay property.
>
> Outcome D: (2 sub-cases): (1) Charlie receives M2X as a true post-commit message, in which case this devolves to Outcome B. (2) Charlie starts processing M2X while T is still ongoing. This case is very similar to Outcome C(2), but with a significant difference: Charlie does not need to detect conflicts between his reaction to M2X and T; instead, "decay" behaviors may freely occur within OR after a commit. The serialization is not wholly post-commit. Indeed, this is why I favored the word "decay": it brings to me an image of the hard 'serialized' edges from transactional overhead sort of rotting away into spongy, fuzzy edges before falling away entirely.

In your example, Z is Alice's post-commit actions?
Are the decay semantics well defined? E.g., if the commit is protected by a global mutex for atomicity, is there a risk of deadlock? (I still need to study the commit phase of STM more... ;-))

### In your example, Z is

> In your example, Z is Alice's post-commit actions?

No. In the example, the message from Alice to Charlie - named 'M2X' - is possibly in-transaction (Case A), possibly post-commit (Case B), possibly post-commit semantics (Case C), possibly atomic decay (Case D).

The set of actions Z is similarly almost anything at all, though you are expected to assume that Z includes in-transaction behaviors (since Z is listed to occur before commit). If Alice were truly intent on performing Z post-commit, I suspect she'd have listed them after the 'commit' step - that's much easier to do as the initiator of the transaction rather than buried deep inside one.

Indeed, the main reason for mentioning Z is to allow that Alice's continued behavior in transaction T might still conflict with something Charlie does in reaction to M2X. How these interactions are handled distinguishes 'decay' from 'post-commit semantics'.

> Are the decay semantics well defined

Decay semantics guarantee, in addition to atomicity, that observations you make will have consistency and isolation semantics (durability optional). This includes observations after behavior with side-effects.

This is a weaker guarantee than full transactional semantics because there are no guarantees about isolation of your behavior if you cannot or do not observe the outcome. For example, if inside a decaying transaction you send a message to Bob, and Bob responds "ok", you likely can't tell whether Bob has actually done anything useful with your message - much less done so in isolation.

It is, nonetheless, well-defined.

> If the commit is protected by a global mutex for atomicity is there a risk of deadlock? (I still need to study the commit phase of STM more... ;-))

The use of post-commit behavior semantics or atomic decay semantics has no significant impact on the issues surrounding progress in transactional system implementations. They do, however, influence the meta-data carried by messages and continuations.

In general, STM implementations aim to achieve a property called "obstruction freedom", if not something stronger. Obstruction freedom means that any thread running long enough on its own will make progress. The STM implementation might lock some resources pessimistically, but one can always safely achieve progress by aborting the troublesome transaction holding these locks (after perhaps a short wait, if the contention manager says the other transaction has higher priority). In case of deadlock, that too will result in at least one aborted transaction, followed by progress by the remaining transactions.

Of course, obstruction freedom still allows for live-lock. But a smart contention manager, especially if interacting with a scheduler, can attempt to re-schedule conflicted transactions to ensure progress.
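The abort-and-retry discipline behind obstruction freedom can be sketched with a version-checked cell — this is generic optimistic concurrency control, invented for illustration, not any particular STM's implementation:

```python
class VersionedCell:
    """A cell with optimistic commit: writes succeed only against a fresh read."""
    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        return self.value, self.version

    def try_commit(self, seen_version, new_value):
        """Commit only if nobody else committed since our read; else abort."""
        if self.version != seen_version:
            return False              # conflict: this transaction aborts
        self.value = new_value
        self.version += 1
        return True

def run_transaction(cell, fn, retries=10):
    """An aborted transaction simply re-reads and re-executes."""
    for _ in range(retries):
        v, ver = cell.read()
        if cell.try_commit(ver, fn(v)):
            return True
    return False

x = VersionedCell(10)
v, ver = x.read()                    # transaction A reads (10, version 0)
assert x.try_commit(ver, v + 1)      # transaction B commits first
assert not x.try_commit(ver, v * 2)  # A's commit conflicts, so A aborts...
run_transaction(x, lambda v: v * 2)  # ...re-executes, and makes progress
print(x.value)  # → 22
```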

Other than pessimistic locks, there is another class of locks - at the "prepare" phase. Prepare locks cannot cause deadlock upon 'acquire', since if you cannot acquire the lock you may simply vote to abort. However, once you prepare, you cannot readily abandon the lock or abort the transaction holding the prepare-phase lock. This is because the transaction may become partially committed while the prepare-lock is held.

In the case of wholly local transactions, a partial-commit is not a problem: a partially committed transaction will either become a wholly committed transaction (given enough time) OR the whole system will fail (crash!) and nobody will care about a partial-commit to volatile memory anyway. If you need persistence, you'd need to add logging to continue the partial-commit after failure, but that can be done too.

But, with distributed systems, the problem becomes stickier... the system might only partially crash or become partitioned. There are approaches to trade some small risk of inconsistency for a guarantee of progress... such as "presumed abort" and "presumed commit" transactions, 3PC, or voting a new TM into existence if the TM crashes (risking a partitioned network of having each partition make its own decision on commit vs. abort). Distributed transactions remain an ongoing research problem.

That said, even with the inability to guarantee both progress and consistency in distributed transactions, they remain a heck of a lot safer and more secure (especially against malign denial-of-service behaviors) than using locks in a distributed system...

### Non-deterministic?

> Are the decay semantics well defined
>
> Decay semantics guarantee, in addition to atomicity, that observations you make will have consistency and isolation semantics (durability optional). This includes observations after behavior with side-effects.
>
> This is a weaker guarantee than full transactional semantics because there are no guarantees about isolation of your behavior if you cannot or do not observe the outcome. For example, if inside a decaying transaction you send a message to Bob, and Bob responds "ok", you likely can't tell whether Bob has actually done anything useful with your message - much less done so in isolation.
>
> It is, nonetheless, well-defined.

From your description it looks non-deterministic.

Suppose both Alice and Bob read and write variable X, and suppose both commit successfully.
Depending on when Bob starts his transaction, he may see the same X as Alice. Then, depending on when Bob commits, the final value of X will be either Alice's or Bob's.

Neither corresponds to a sequential run, as Alice's and Bob's outcomes are based on X before commit.

### Transactions don't guarantee Determinism

Don't confuse Isolation or Consistency with Determinism! Those are different properties. If a behavior would be non-deterministic in isolation, then it had darn well better be non-deterministic when performed inside a transaction. Anything else would be inconsistent. And the actual serialization order for transactions tends to be non-deterministic, too.

As far as your example goes: if Alice and Bob are inside separate transactions, then those transactions will be serializable relative to one another. That is, Alice reads and writes X before Bob reads and writes X, or vice versa. If anything else happens, as in the case you describe, then one or both would fail to commit successfully. Importantly, your "suppose both commit successfully" assumption manages to beg the question.

Now, with 'decay' semantics, Bob and Alice would each need to wait for a reply from the 'write' operation - i.e. 'observe' the write - prior to committing. One would need to assume that the reply is issued after success or failure rather than before it. By observing the success or failure of the write, Alice and Bob force the attempted write to be part of the full transaction.

### Yeah I wrote something

> Yeah I wrote something stupid, that is not a question of determinism. I see that you don't give up on consistency in the decay semantics, though I still can't wrap my head around it.

If you were wondering whether one 'can' decay such that the behavior is not isolated, the answer is: yes. Transactions with atomic decay semantics don't guarantee consistency with isolated behavior in general; they only guarantee to isolate observations made inside the transaction, which also means isolating at least the behavior that leads to those observations.

I'll try to detail your example with both the isolated and non-isolated case.

Assume that a simplistic object/cell named X accepts 'access' and 'update(T)' messages. To the 'access' message X replies with the current value held by the cell, and upon receiving 'update(T)', X modifies the value in the cell to hold the parameter then replies with success|failure.

Further, assume that the language uses local futures for each message send. In practice, it's really nice to have primitive support for futures (it can simplify syntax and allow the compiler to make intelligent pipelining and continuation-passing decisions) but I'll keep the syntax in these examples explicit. One can wait on a future by calling a primitive 'realize' operation. ('realize' cannot be a message if all messages return futures, lest we face infinite regress.)

Alice and Bob each have a reference to X. They each run the same concurrent program, which operates on X. I'll describe two variations on the program:

```
;; Variation F
atomic {
  var V = X <- access                      ;; V is future value
  var S = X <- update( 2 * {realize V} )
  return ()
}

;; Variation G
atomic {
  var V = X <- access
  var S = X <- update( 2 * {realize V} )
  return {realize S}
}
```

Variation F is close to what you describe: one reads the value from the shared cell X, then writes it with a new value (one depending on the old), but one does not attempt to observe the result of the write.

Decay semantics under variation F would fail to guarantee that the update occurs as part of the transaction - i.e. the 'update' message could decay and fall out of the transaction (and happen post-commit). If it decays for both Alice and Bob, the final value stored in X could be 2*V or 4*V.
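As a toy illustration of that 2*V outcome (a Python simulation of one possible interleaving, not the actual message-passing semantics): if both 'update' messages decay past their commits, both clients can read the original value V before either write lands, and the writes then collapse to a single doubling.

```python
X = 10   # the shared cell, holding V = 10

def run_decayed_updates(clients):
    """Variation F with decay: reads happen in-transaction, writes land post-commit."""
    global X
    reads = [X for _ in clients]   # every client observes the original value V
    for v in reads:
        X = 2 * v                  # decayed writes interleave: the last one wins
    return X

result = run_decayed_updates(["alice", "bob"])
print(result)  # → 20 (2*V); a serialized run would yield 40 (4*V)
```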

Variation G is a very slight tweak from F, adding {realize S} to obtain the success|failure observation for the update. Decay-semantics guarantee that this observation will be consistent with isolation. Since the success|failure observation depends on side-effects, those side-effects will also occur in isolation.

Thus, Variation G offers what one would expect out of an ACID transaction that updates a simple cell.

If one were using full transactional semantics, then even Variation F would offer the guarantees one would expect of a read followed by a write on a shared cell inside a transaction.
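To make the contrast between the two variations concrete, here is a toy, single-threaded simulation in Python. It is only a sketch under loud assumptions: every name is hypothetical, a forced interleaving stands in for real concurrency, and "decay" is modeled simply as buffering updates whose results are never realized, then applying them post-commit without validation.

```python
class Cell:
    def __init__(self, value):
        self.value = value
        self.version = 0   # bumped on every write, used for validation

class Txn:
    """Toy optimistic transaction with 'decay': updates whose results are
    never realized fall out of the transaction and run after commit."""
    def __init__(self, cell):
        self.cell = cell
        self.read_version = None
        self.pending = []   # decayed updates: applied post-commit, unchecked
        self.in_txn = []    # updates whose success|failure was realized

    def access(self):
        self.read_version = self.cell.version
        return self.cell.value

    def update(self, new_value, realize_result):
        if realize_result:
            self.in_txn.append(new_value)   # observation pulls the send in
        else:
            self.pending.append(new_value)  # may decay out of the txn

    def commit(self):
        """True on success; decayed writes still apply, without validation."""
        if self.in_txn and self.cell.version != self.read_version:
            return False                    # conflict: caller must retry
        for v in self.in_txn + self.pending:
            self.cell.value = v
            self.cell.version += 1
        return True

def run(cell, realize_result):
    # Forced interleaving: Alice and Bob both read before either commits.
    a, b = Txn(cell), Txn(cell)
    va, vb = a.access(), b.access()
    a.update(2 * va, realize_result)
    b.update(2 * vb, realize_result)
    a.commit()
    if not b.commit():                      # conflict detected: retry Bob
        b = Txn(cell)
        b.update(2 * b.access(), realize_result)
        assert b.commit()

# Variation F: the write decays out, so this interleaving loses an update
# and X ends at 2*V rather than 4*V.
cf = Cell(1)
run(cf, realize_result=False)
# Variation G: realizing the result keeps the write in the transaction,
# so the conflict is detected, Bob retries, and X ends at 4*V.
cg = Cell(1)
run(cg, realize_result=True)
print(cf.value, cg.value)   # 2 4
```

The point is only that the observation (`realize S`) is what anchors the write inside the isolated region; without it, the validate-and-retry machinery never engages.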

However, having the 'send message' operation default to occurring inside the transaction can make composition more difficult - i.e. even programmers of modules that don't use transactions will need to include explicit transaction barriers to help keep transactions shallow, narrow, and therefore more robust (less likely to encounter conflicts). The default under full transaction semantics is to err on the side of making the 'critical section' bigger, which can be a problem when composing third-party services.

The decay semantics require more explicit interactions to ensure isolation, but still make a useful guarantee that programmers should be able to quickly learn and grok - that all observations made within the transaction will be ACID-compliant. The default is to err on the side of keeping the atomic code to a minimum. As an additional advantage: use of futures allows one to easily issue a command in advance of deciding whether the command must be completed inside the transaction... that is, you issue the command in advance of 'observing' the future by use of the 'realize' operator.

I think decay semantics will make for simpler open-systems programming, especially if the language allows one to avoid most explicit use of shared mutable state. As far as minimizing explicit use of state, it is worth noting that a simple variation on the cell, using 'update(T->T)' with a pure function, reduces the need to explicitly read a cell before writing it, and makes practical the facet-separation of read and write capabilities. (Even more interesting is the ability to update a cell to contain different data-flows, a constant being only one of them.)

### Limits of out-of-transaction update

Nice API to transaction semantics. I would like to observe that if you add Cher into the mix, with the update falling out of the transaction you could have X taking the values 2*V, 4*V, then 2*V again (time reversal). In addition, since the updates look like they act on individual variables - if the transaction were also to update Y to value V using decay semantics, you could end with X ≠ 2*Y at a point in time (non-coherent updates).

### Not a Limit

I may be wrong, but a context discussing problems of time reversal makes me feel there is a little sarcasm in the "Nice API" comment.

As with any synchronization semantics, programmers would need to learn decay semantics and its quirks.

Decay semantics, like full transaction semantics, are predictable and composable. Importantly: One can tell how code interacts within a decaying transaction by observing the code locally, and programmers can atomically negotiate a contract between two independently developed black-box services to create or provide a new service.

Unlike with full transactions, programmers using decay semantics may also assert a useful robustness property: that once they reach the end of the atomic block, there is no ongoing activity within the transaction upon which one must wait prior to commit. Essentially, the transaction is only as wide and deep as one explicitly makes it.

The above variation F vs. variation G case would make a useful didactic example on the differences. But that's what it is: a difference, not a limit.

If the language encourages or requires a great deal of mutable state to achieve even simple tasks, then I can see how the decay semantics might present a problem more often than a solution. However, if a language and its common libraries eliminate or isolate some of the most common needs for mutable state - such as explicit caching, object graph construction, observer pattern, configuration variables, modes, task queues, registration and injection and runtime-upgrade of factories and services, and data management - then explicit interaction with mutable state becomes rare.

The reduced need for explicit programmer interaction with mutable state, however, doesn't avoid the need for atomic transactions... it simply means there are plenty more service layers between the programmer and the state - i.e. that the state is further removed from the programmer.

This is the context for which 'decay' semantics was designed to work. One should assume there are arbitrary layers of third-party services between you and the unnamed state at the edges, and these services are meanwhile issuing events to observers, spinning off new tasks, and these new tasks are starting their own transactions with yet more services.

In such a context, that robustness property becomes important.

In such a context, the fact that lazy programming leads to narrower, shallower, smaller transactions can greatly aid confidence in the performance-viability of a composition with third-party services.

In such a context, decay semantics are indeed "nice".

### Upgrading the Cell to support Transactions

With regards to the didactic example above, I would note that I called the above a 'simplistic' cell design. Changing the cell design can vastly change the "limits" of what can be usefully achieved both within transactions, and outside of transactions or upon escaping them.

In a transactional system, there is some advantage of changing the update facet to support a T->T update - i.e. via use of a pure transform function as opposed to specifying the result directly.

The reasons for this variation? Manifold: (a) one can locally detect a wide variety of commutative writes - i.e. in case of a potential write-write conflict, one can simply check whether the writes commute before declaring it a true conflict (which makes a huge difference when dealing with integer ops, set operations, etc.); (b) it supports common atomic read-update patterns without requiring one to invoke transactional overhead; (c) it makes it feasible for a wide variety of programs to completely separate the update and access facets for cells, such that it's rare for the same object to hold both facets (which has some really nice composition, optimization, and capability-security properties - the optimization occurring when just the update facet is GC'd or linearized-then-destroyed).
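Point (a) is easy to sketch. In this hypothetical Python fragment, a cell takes pure transforms rather than plain values, and an apparent write-write conflict is downgraded when the two transforms happen to commute on the current value:

```python
class TCell:
    """A cell updated via pure transforms (T -> T) rather than plain values."""
    def __init__(self, value):
        self.value = value

    def update(self, f):
        self.value = f(self.value)

def writes_commute(f, g, value):
    """A write-write 'conflict' between transforms f and g is harmless
    if applying them in either order yields the same result."""
    return f(g(value)) == g(f(value))

c = TCell(10)
add5 = lambda v: v + 5
mul2 = lambda v: v * 2
incr = lambda v: v + 1

# Two additive updates commute, so concurrent transactions issuing
# them need not be treated as a true conflict:
assert writes_commute(incr, add5, c.value)      # (10+1)+5 == (10+5)+1
# Addition and multiplication do not commute -> a true conflict:
assert not writes_commute(add5, mul2, c.value)  # (10*2)+5 != (10+5)*2

c.update(mul2)
assert c.value == 20
```

A real system would test commutativity algebraically or over the relevant value range rather than at a single point, but the local check captures the idea.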

Similarly, in a transactional system it is very useful to provide 'access' to cells via a pure, composable mechanism such as functional reactive programming.

The reason? Once again, manifold: (a) one gets all the normal advantages of FRP (low-latency updates, automatic implicit caching for disruption tolerance and performance, potential implicit multi-cast optimizations if one can distribute code, nice composition properties), (b) if a pure observation-transform is associated directly with the read operation, one can systematically detect cases where a read-write conflict on a resource may exist but is actually not a problem due to the reduction in information by the observation performed.

The single-resource case is simple enough. One could send a pure function to the cell and ask for the result of applying that function to the value in the cell. For example: you read a cell containing a set to determine that it is non-empty - a simple function reducing a set to a simple boolean. If a potentially conflicting transaction (i.e. removing an element from the set in another transaction) is detected, the function is re-computed on the new value. If the set is still non-empty after removal, for example, there is no actual conflict.
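The single-resource case above can be sketched directly. In this hypothetical Python fragment, a read records its pure observation function, and an apparent read-write conflict is re-validated by recomputing the observation on the new value:

```python
class ObservedRead:
    """Records a read made through a pure observation function, so an
    apparent read-write conflict can be re-validated against the new value."""
    def __init__(self, cell_value, observe):
        self.observe = observe
        self.observation = observe(cell_value)

    def still_valid(self, new_cell_value):
        # No true conflict if the observation itself is unchanged.
        return self.observe(new_cell_value) == self.observation

# A transaction reads a set only to ask whether it is non-empty:
s = {1, 2, 3}
read = ObservedRead(s, lambda xs: len(xs) > 0)

# Another transaction removes an element - a read-write 'conflict':
s = {1, 3}
assert read.still_valid(s)      # set is still non-empty: false conflict

# Only emptying the set actually invalidates the observation:
assert not read.still_valid(set())
```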

But, despite simplicity, languages without 'pure' functions distinct from procedures can't manage even that much - not without that mythical 'sufficiently smart' optimizer. Similarly, there are limits to what the above approach can achieve with regards to false conflicts involving reads on multiple resources.

FRP scales this false-conflict elimination much higher, allowing multiple-resource false-conflict elimination to be performed in the framework of a pure function. For example, if an FRP-expression performs an intersection between the sets contained in two different cells, it may be that the intersection doesn't change even if one or both of the sets changes. First-class FRP makes this easy to test systematically without appeal to a 'sufficiently smart' optimizer.
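The multi-resource case amounts to the same re-validation applied to a derived expression. A tiny sketch, with a plain pure function standing in for a first-class FRP expression:

```python
def derived(a, b):
    """Pure expression over two cells' values (stand-in for an FRP node)."""
    return a & b

a, b = {1, 2, 3}, {2, 3, 4}
old = derived(a, b)             # {2, 3}

# Cell 'a' changes, an apparent read-write conflict on the expression...
a = {1, 2, 3, 5}
assert derived(a, b) == old     # ...but the intersection is unchanged:
                                # no true conflict, no retry needed
```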

Using this upgraded cell, I'd simply use the following program for Alice, Bob, and Cher (assuming that 'X' is just an update facet and that there is a separate access facet):

X <- mul 2

That doesn't avoid more complex races and conflicts, of course. But one would certainly need a more complex program to motivate use of transactions, and therefore could go much further whilst escaping transactions.