general patterns in PL persistence schemes?

How many different basic strategies in programming language persistence seem like good ideas to support? The sort of perspective I want is a summary overview, as opposed to a detailed survey of every known tactic touched in literature. A gloss in a hundred words is better than a paper listing everything ever done in persistence.

(In chess a desire to "control the center" is the sort of basic pattern I have in mind, not an encyclopedia of all chess opening variations. In the context of PL tech, the sort of basic patterns I have in mind include image, document, database, file system, etc. But I'm interested in an analytical assessment of the effects of choosing a pattern.)

Some products featuring a PL as the main offering come coupled with a normative scheme for managing persistent storage. For example, Smalltalk was traditionally associated with image-based storage, though the language per se does not require it. Similarly, HyperTalk came bundled with stack-document storage as files in HyperCard. Other languages might come coupled with a database, or assume cloud-based resources.

I'm mainly interested in "local" storage schemes, like in desktop software, guiding a developer using a language in managing persistent state in session or project form. Yes, this assumes desktop software isn't dead yet. If a user creates content in documents, how does a language organize support for this? That kind of thing. Preventing users from making meaningful documents filled with self-contained data sets would be weird.

This line of inquiry comes from thinking about a language with a builtin virtual file system that unifies the interface for local and distributed data streams. How does one avoid antagonizing a user's need for documents? A document might be mounted as a file system, so saving is modeled as transacting changes on that file system, and two-phase-commit can be used to arrange consistent collections of changes across disparate stores. But this strikes me as a random data point without much context. So I'm interested in what kind of context is provided by other PL approaches to persistent services. Maybe the normal thing to do is say a language defers to an operating system or storage product.
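
To make the mounting-plus-two-phase-commit idea concrete, here is the rough shape I have in mind, as a minimal Python sketch with invented names (prepare/commit/abort participants and a save_document coordinator); it is not tied to any particular product or API:

    # Hypothetical participant: each mounted store stages changes and can
    # prepare (vote), commit, or abort them. All names are illustrative.
    class Store:
        def __init__(self, name):
            self.name = name
            self.staged = []
        def stage(self, change):
            self.staged.append(change)
        def prepare(self):
            # Persist staged changes somewhere recoverable, then vote yes/no.
            return True
        def commit(self):
            print(self.name, "commit", self.staged)
            self.staged.clear()
        def abort(self):
            print(self.name, "abort")
            self.staged.clear()

    def save_document(stores):
        # Two-phase commit across disparate stores: all commit or none do.
        if all(s.prepare() for s in stores):   # phase 1: voting
            for s in stores:
                s.commit()                     # phase 2: commit everywhere
            return True
        for s in stores:
            s.abort()                          # phase 2: abort everywhere
        return False

    local, remote = Store("local-file"), Store("remote-db")
    local.stage("page 2 edits")
    remote.stage("index update")
    save_document([local, remote])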


opinions

I like the idea of virtual filesystems, where state flows from a central locus but can 'chroot' different subprograms onto different subtrees to provide state that is global in some senses yet securely partitioned for modular reasoning. In comparison, I dislike images and hibernate-style models where state can be scattered across many objects, mostly because those models do a very bad job with the whole code-maintenance concern, and in part due to semantic accessibility. Confer the persistence and upgrade problem, or my article local state is poison.

I am currently developing VCache for Haskell as part of a larger effort to model a virtual filesystem for a language (I'm implementing the initial runtime in Haskell). My goal with VCache is to support the common TVar/IORef models used in Haskell, except above a persistent virtual memory, with all persistence founded upon a few named global variables. The named globals can be securely partitioned among different subprograms using a filesystem-like metaphor.
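
A toy sketch of that partitioning metaphor, with invented names (this is not VCache's actual API): each subprogram gets a view rooted at its own subtree and cannot name anything outside it.

    # A tree-structured store with 'chroot'-style partitioning.
    class TreeStore:
        def __init__(self):
            self.cells = {}                      # path -> value, e.g. "/app/cfg"
        def chroot(self, prefix):
            return SubtreeView(self, prefix.rstrip("/"))

    class SubtreeView:
        def __init__(self, store, prefix):
            self.store, self.prefix = store, prefix
        def _resolve(self, path):
            if ".." in path.split("/"):
                raise ValueError("cannot escape the subtree")
            return self.prefix + "/" + path.lstrip("/")
        def put(self, path, value):
            self.store.cells[self._resolve(path)] = value
        def get(self, path):
            return self.store.cells[self._resolve(path)]

    root = TreeStore()
    alice = root.chroot("/subprog/alice")        # partitioned views for two
    bob = root.chroot("/subprog/bob")            # independent subprograms
    alice.put("counter", 1)
    bob.put("counter", 99)
    print(alice.get("counter"), bob.get("counter"))   # 1 99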

Regarding your question: How does one avoid antagonizing a user's need for documents?

I think that documents are a separate question from persistence. For example, if you develop a persistent web server, then every web page and web application and 'put' and 'get' and import/export APIs and so on would utilize document-like contents, with varying degrees of persistence. If you're developing a desktop-like application, you'd support some functionality to export or import content to and from the user's filesystem.

A point to consider is that persistence at the language layer greatly simplifies interaction with external persistence models, such as filesystems or databases. This is because developers can easily model long-running, incremental interactions with said filesystem or database. We avoid a lot of accidental complexity that arises from interactions between the persistent and the ephemeral, especially regarding partial failure. With language layer persistence, after disruption, the program recovers then continues where it left off.

Trying to mount external files or support two-phase commit and so on would greatly complicate your virtual filesystem, and would not be generically applicable (e.g. wouldn't much benefit a web app server). I wouldn't bother. Leave any abstraction of mounting files to your programmers.

thanks for the good on-topic comments

(The local-state-is-poison link is missing href="url" in the a-tag.) I skimmed the VCache page, which looks interesting. I find the D in ACID, for durable, hard to achieve at high transaction rates that exceed the rate at which physical media can transact. Folks using file systems usually quote ACID transaction rates greatly exceeding the number of file system syncs that can occur, so the D part is not strictly true.

Sometimes a high frequency of non-durable nested transactions is good enough, with a lower frequency of durable global transactions containing them. Maybe the important thing is that transactions look correct inside a scope coordinated this way, with durability as a separate quality.

Trying to mount external files or support two-phase commit and so on would greatly complicate your virtual filesystem, and would not be generically applicable (e.g. wouldn't much benefit a web app server). I wouldn't bother. Leave any abstraction of mounting files to your programmers.

If a virtual file system (vfs) has an abstract interface, then mounting an external file as a vfs ought to hide any complexity involved, or there's not much point in pretending a file is a file system if it can't obey the semantics at least as far as the API is concerned.

If it's sometimes useful to mount a file as a virtual file hierarchy, the value would be undermined if a dev could not work out any way to guarantee absence of corruption for a user. If a user sometimes wants a document to maintain rational coherency, it should also do so when viewed as a vfs. I thought maybe the ability to transact or abort changes in a vfs was a useful optional feature in an API: a vfs might be able to transact. That it's not generally applicable is okay when a vfs need not support transactions.

One or more libraries associated with a PL might come in layers, with extra optional features put into separate layers. The simplest core layer might say storage is totally your problem, and give nothing to programmers. But if another layer adds a vfs interface, with several implementations able to mount a variety of things including files, then it would be broken if a programmer could not offer durable, persistent data safety to users because the vfs was in the way but incomplete.

The point I have in mind is simple: if OS semantics end up in PL offerings, there needs to be a way for a programmer to get the contractual guarantees that were available via direct use. If a PL has an optional layer that imitates processes or file systems, that layer isn't useful for production unless quality similar to not using those features can be achieved. If you can't grow a prototype into a final high quality version, it's a blind alley, and more diversion than help in making progress.

durability

(meta: thanks, fixed the link)

Durability is a domain requirement, and rather ad-hoc. E.g. updates to a shopping cart might not need to be durable, but committing to a purchase should be a durable transaction. Both transactions operate on the same domain model and variables, and no nesting occurs.

Your point about filesystem syncs is only partially valid. It is possible to achieve durability in excess of fsync rates so long as you can use a single fsync to achieve durability for multiple transactions, amortizing the synchronization costs due to batching. This is possible if you have concurrent transactions OR an ad-hoc mix of durable and non-durable transactions.

VCache has these properties. A hundred STM-layer transactions may batch into one LMDB-layer transaction, thus potentially allowing a couple orders of magnitude more transaction throughput than one might assume from fsync rates. If a transaction is not marked durable, it does not wait on the LMDB-layer commit, hence freeing the thread to commit more transactions into the same batch.

Though, realistically, large batches aren't limited by sync rate. As the batch gets bigger (more variables), serialization becomes the more significant fraction of each transaction. I'm not sure where the threshold is with VCache. But if you're doing a lot of transactions on a small number of variables, i.e. such that batch size is roughly constant, then batches are a big win.
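
A toy sketch of the batching idea, with invented names (this is not VCache's or LMDB's actual code): non-durable commits only append to the log, and the single fsync issued for a durable commit covers the whole accumulated batch.

    import os

    # Toy group commit: many logical transactions share one fsync.
    class Log:
        def __init__(self, path):
            self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
            self.pending = []                    # transactions awaiting a sync
        def commit(self, record, durable=False):
            os.write(self.fd, record + b"\n")    # visible, but not yet durable
            self.pending.append(record)
            if durable:
                self.sync()                      # this fsync covers the batch
        def sync(self):
            os.fsync(self.fd)                    # one device sync, many commits
            print("made durable:", len(self.pending), "transactions")
            self.pending.clear()

    log = Log("/tmp/toy.log")
    for i in range(99):
        log.commit(b"non-durable update %d" % i)   # no fsync yet
    log.commit(b"durable purchase", durable=True)  # one fsync covers all 100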

vfs

There is a lot of utility in having tree-structured state within an application. It maps nicely to securely partitioning state to different subprograms while preserving enough structure for debugging or code maintenance or even live programming.

But it isn't clear to me that mounting adds much utility. I don't see a need for a vfs designed for language-layer persistence to be a full-featured filesystem. The language already addresses many features that most filesystems lack, such as references and aliasing between values, interfaces and polymorphism. We shouldn't need to replicate those features in terms of mounting within a virtual filesystem.

what was the ill-fitting part of mounting as abstract FS?

Batching in durable transactions sounds great. Mainly I want to address full-featured because the metric is not specified. The OS should not be the standard for measure. Instead it's whatever interface you choose to support. So there's never any need to implement a vfs that can serve as an OS file system.

Say you think up a model that is "like a file system" for your purposes, then define an abstract vfs api from whole cloth, which may not resemble an OS vfs at all. If necessary, because a reader refuses to believe a vfs need not refer to an OS, then we can use the term afs instead, for abstract FS, which is definitely independent and not defined by any OS standard unless you feel like it.

Since the nature of an afs is whatever you want, including optional features if desired, the idea of full-featured becomes vague unless you mean the afs interface. If you never intend to plug an afs implementation into something using an OS interface, it will never matter. If you implement an afs instance in terms of an OS file system, that OS entity just looks like another afs instance.

I'm not sure which part of the mounting idea you don't like. If an afs intends to mimic parts of a Plan9-style design, and lightweight processes can edit their local file system view (unshared with other processes), then mounting is an elegant way to express local FS re-arrangement.

A lightweight process might want to open a connection to a server and view it like a file system, or edit one or more documents and view those as file systems too. Inability to mount would force a less generic approach to solving problems. I don't want to persuade you, rather I want to grasp the idea you have in mind that doesn't fit well, so I can mull it over.

mounting and aliasing

A difficulty with mounting is that it is implicitly a form of aliasing, i.e. of sharing state between subprograms. Aliasing is a source of much complexity, enough that I'd prefer it be explicit, e.g. in terms of first-class parameters. It's very convenient if a subprogram can expect and assume exclusive access to its little corner of the filesystem up to the extent it chooses to explicitly share access.

Mounting may seem elegant in Plan9 with its 'everything is a file' philosophy. But in a programming system, if you want to create a different view of some component, you create a function, object, or whatever your chosen paradigm favors. There is no need to treat the 'view' as a feature of the abstract file system. If you want to view part of the filesystem as a stack, the view's interface will just be push and pop.

So, I see mounting as something that creates complexity for both the users and implementation, and whose role is subsumed by the PL.
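
A minimal sketch of that "view is just an ordinary interface" point, with invented names: the stack view lives outside the abstract file system, and the store never learns about push or pop.

    # A 'view' of part of a store is just an ordinary object with the
    # interface you want (here push/pop), not a mount point in the FS.
    class StackView:
        def __init__(self, cells, prefix):
            self.cells, self.prefix = cells, prefix     # cells: path -> value
            self.cells.setdefault(prefix + "/top", 0)
        def push(self, value):
            top = self.cells[self.prefix + "/top"]
            self.cells["%s/%d" % (self.prefix, top)] = value
            self.cells[self.prefix + "/top"] = top + 1
        def pop(self):
            top = self.cells[self.prefix + "/top"] - 1
            self.cells[self.prefix + "/top"] = top
            return self.cells.pop("%s/%d" % (self.prefix, top))

    cells = {}                          # stands in for tree-structured state
    undo = StackView(cells, "/editor/undo")
    undo.push("insert 'x'")
    undo.push("delete line 3")
    print(undo.pop())                   # delete line 3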

Isolation

As long as you have the Isolation property from ACID then its okay to share :-)

if only

Isolation is certainly useful. But it doesn't protect ad-hoc domain layer invariants, or provide any form of security, or address the complexities of reasoning with aliasing.

Partly

Consistency addresses the domain level invariants though, and nowhere do we require an AFS to support aliasing... not all filesystems support links.

less aliasing of mutable state is better

Thanks for describing the aliasing issue; addressing it probably makes docs better, so I appreciate it. In exchange I should probably offer comments on aliasing for a system I have in mind, but it seems to veer off topic with respect to durable persistent storage semantics. I can change the focus to coupling though, which keeps it slightly on topic.

As Thomas Lord noted in another discussion, sharing state increases coupling (which we don't like) and aliasing is a form of sharing that couples (so it makes sense not to like aliasing either). When content is immutable, sharing is not observable via effects, so aliasing seems to cause no harm. Being able to see where something is published can induce some coupling, in the sense it becomes hard to change to some other form of visibility instead. So the scary case is sharing mutable state, and here aliasing can cause more harm.

I like to distinguish between can and should, where the robustness principle (Postel's law) recommends being conservative in what you do (should) and liberal in what you accept (can). Accepting more enables porting more old code, even if it irritatingly aliases content more than it should in a file system. When writing new code or improving old code, it's better to arrange that as little mutable state is shared as you can manage in a visible space.

Anyway, communicating through a file system, however the rendezvous is arranged, seems like bad form unless done very locally between components written by devs in close agreement. It's a lot like public data members in objects: state sharing causes coupling that induces fragility. Direct state access is for friends and close relations whose code is maintained together. (Meaning: what looks like separate components is really a larger component, wearing different hats to signify roles.) When third parties become numerous or anonymous, visibility should occur through a server interface instead of file system namespace. When you want to know something, ask instead of poking around directly, unless you're already in it up to your eyeballs.

Regarding Plan9 style, I'm only interested in that things can look like a file, not that they should, to grease transitions in prototyping, where maybe you never optimize later when measurement seems to say it doesn't matter. For debugging purposes, the more you use an abstract file system interface, the more fire breaks you have in place for tracing dataflow, so you can figure out what happened.

If you have lightweight processes, which might be mapped to real native OS processes in some cases without this being clear to code involved, then you cannot structure all alternative views as an object or function, since the code might not be present in one of the parties in a dataflow between nodes. Code visibility is state sharing and causes coupling. This just means I might be slightly more interested in a dataflow view of things, when aiming at apps whose growth path has a lot of distributed parts to it, that might get re-mapped often to local graphs when debugging or simulating typical load patterns.

Immutable data on its own

Immutable data on its own can't be aliased, I hope that is obvious, since it doesn't have its own identity. Only identities can be aliased, and it doesn't matter if the identity is in the form of a pointer or GUID wrapped in an immutable object.

When people say "immutable," they actually usually mean "anonymous" (lacking an identity). It is trivial to build mutation from an immutable world dictionary and a bunch of identities to use as keys.

maybe I'm using another sense of aliasing

The essence of aliasing is giving more than one name to the same thing, right? Or equivalently, having alternative means of accessing the same state. If two variables both bind to the same state, there's aliasing. If something can be reached from an array, or by name via some variable, there's aliasing. I suspect you have an exotic definition of aliasing in mind.

You're claiming an immutable object cannot have identity? Say in some PL a pointer is identity, then you map memory containing object X so it's readonly and can no longer be mutated. Nothing stops you from putting that pointer into multiple arrays, or using multiple variable names to reach it. As near as I can tell that's all aliasing an immutable object.

Sorry if I'm misunderstanding you. My point was that if something cannot mutate, you cannot tell whether it can be reached in varying ways because it is visible in more than one narrow scope.

Immutable copying

I think the point is that with immutable data a copy is indistinguishable from a reference. The difference only shows up when you mutate: some of the names see the change (references) and others don't (copies). So aliasing is not a problem with immutable data.
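
A small Python illustration of that point (using list vs tuple as stand-ins for mutable vs immutable data):

    # With mutable data, a reference and a copy diverge once you mutate;
    # with immutable data the difference is unobservable.
    xs = [1, 2, 3]
    ref, dup = xs, list(xs)
    xs.append(4)
    print(ref == xs, dup == xs)      # True False: the alias saw the change

    t = (1, 2, 3)
    ref_t, dup_t = t, tuple(t)       # tuple(t) may even return t itself;
    print(ref_t == t, dup_t == t)    # True True: no way to tell them apart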

Aliasing allows multiple

Aliasing allows multiple names to refer to the same thing, with an emphasis on "same thing"

If we had an immutable value "a" with immutable contents PyUE4E+JEdOaDAMF6CwzAQ, then "b" with immutable contents PyUE4E+JEdOaDAMF6CwzAQ is basically an alias of "a" if "a"/"b" are used to index an immutable property dictionary.

Immutable values don't have identity on their own; only their contents might have meaningful identity. The identity isn't intrinsic in the value (unless it has a consistent address).

The problem is identity, not mutability. An immutable value that contains an ID can act as an alias, while an immutable value without an ID cannot.

So just to be explicit...

Another way to put this is: two things are the same thing exactly if they are indistinguishable. Immutable stuff is often, but not always, indistinguishable. For example, if I can somehow capture the locations of two read-only cells that store the value 3, the respective values "3" are indistinguishable, but the locations of the cells are distinguishable.

figure vs ground

Let me situate this in the context of dmbarbour's remark about mounting vs aliasing, which was short enough that (perhaps) I read more into it than he meant. Further, the discussion focus is durable storage, and I was talking about an afs (user-space abstract file system) whose interface unifies volatile in-memory stores with durable stores, including an OS file system, or database, or network service. I also brought up (OS) processes and lightweight processes more than once, so concurrency is directly relevant too. I interpreted dmbarbour's concern to be about problems induced by concurrency, when it's possible to write the same space in multiple ways concurrently because access to the space is aliased specifically by mounting. Mounting the same logically mutable storage space under different names when access can be concurrent is an invitation to have a problem when writers don't coordinate.

When I said "immutable content" I meant lvalue, not rvalue, because I was talking about space that can be written concurrently. I'm interested in whether a storage location is immutable, and not whether the value stored there is immutable, to a PL. McDirmid's comment made me realize some folks are thinking about value and not where it's stored. Then I didn't reply because I don't want to explain my concern is more getting correct results under concurrency, with a focus on when content can change (either in physical locations, or logical ones like "the second page of file F").

To me this discussion failed because I didn't learn anything, when I was hoping to hear about ideas outside my current horizon. While discussions do have other purposes than learning, that's what I had in mind. Getting folks aligned with my views is too costly and has little utility. Concern about aliasing was good to hear about, but argument about what words mean is discouraging, though typical.

In so far as I have been talking about storage cells, they are always potentially distinguishable as different locations, so it matters whether they are readonly. For example, if I implement pipes between lightweight processes, octet streams (or datagrams or objects) flowing through might be copied or they might be shared with the sender in readonly buffers. So even when the logical focus is rvalues (content flowing through), the runtime still has lvalue issues with immutable cell invariants.

I don't mind if folks keep talking PL features related to immutable semantics, but I may bow out when it doesn't give me insights into concurrent space management issues.

re: code visibility

mapped to real native OS processes in some cases without this being clear to code involved, then you cannot structure all alternative views as an object or function, since the code might not be present in one of the parties in a dataflow between nodes. Code visibility is state sharing and causes coupling.

You can always go the Erlang-like route, and make it just as easy to serialize code for a function between different processes as it is to serialize other values. If this is done with a securable PL or bytecode (i.e. one where we can easily reason about confinement) then it's even possible for open systems. Separate compilation is readily supported by using secure hashes to name larger blocks of code and keeping a cache of precompiled blocks.

How code is managed in modern OS's is a source of much accidental complexity. If passing objects or functions between lightweight processes is a good fit for your model, then barriers to doing so require you either use heavy-weight processes or work outside your language (scripts, etc.). I observe this has already happened.

The New Hotness and Ancient History

For some reason, this puts me in mind both of A persistent key/value server in 40 lines and a sad fact ("the new hotness," or "why your managed language still needs rocking support for contiguous blocks of bytes") and MetaKit ("ancient history," or "why choosing the right fundamental data structures enables both extreme performance and extreme generality.")

I guess it also puts me in mind of this ancient post about Oleg Kiselyov's ZipperFS.

It seems to me that some sort of union (that I've put literally no thought into beyond "it seems like it could exist") of these three ideas (fast serialization is an enabling technology; column-oriented structures + mmap are a great foundation; delimited continuations + zippers can give you FS-like systems with very desirable properties atop a very broad category of underlying structures) could possibly arrive at something resembling the nexus you seem to be asking about.

Thanks for the MetaKit link,

Thanks for the MetaKit link, hadn't heard of it before. Looks like LMDB has better documentation though, and it's pretty widely used.

persistence in imperative languages

Abstract storage locations are finite pieces of state, typically mutable.

References are values associated with abstract storage locations in a many to one or one to one relation. A reference is a kind of capability token that a program can use to read or write the abstract location.

Identity is an equivalence relation among abstract storage locations and therefore identity is also an equivalence relation among references.

Persistence is a language feature in which identical references may be shared by multiple programs runs, separated in time. Thus: persistent abstract locations are abstract locations that outlive a program run and can be used by a later program run. (Persistence may but does not have to also permit sharing by concurrent program runs. This is orthogonal.)
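
A small illustration of the reference-shared-across-runs idea, using Python's shelve library as a stand-in store (a library, not a language feature, and the path is invented): the name "counter" denotes the same abstract location in every run of the script.

    import shelve

    # Each run reopens the same abstract location by name; the reference
    # ("counter") outlives any single program run.
    with shelve.open("/tmp/persistent-demo") as store:
        store["counter"] = store.get("counter", 0) + 1
        print("this is run number", store["counter"])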

Object oriented databases are a (the?) classic form persistence takes.

Persistence does not imply the use of tertiary storage. For example, imagine a hypothetical version of FORTRAN in which certain COMMON blocks are retained in main memory after the program halts, so that a second program, using the same COMMON block definitions, can be run later. That is a version of FORTRAN with persistence.

Confining attention to systems which do use tertiary storage for persistence:

The physical process of moving values between real main memory and real tertiary storage weakens, and can easily complicate, the semantics and performance model of abstract storage locations.

For example, reading from or writing to a location in an object oriented database may be unpredictably fast or slow. If the data is on disk a read or write can fail in non-trivial ways that a program should handle, but that wouldn't come up in the absence of persistence. In practice, this seems to lead to awkwardness when persistent locations are treated "invisibly" like ordinary locations in a language designed without persistence in mind. Designing a language in which, from the start, all locations are persistent is also awkward in that it gives up the simpler store semantics of non-persistent locations.

Image-based persistence attempts to contain those practical awkwardnesses by confining them to the instants of program start-up, checkpointing, and suspension.

Languages in which some locations are explicitly persistent and others not, each having distinct semantics and performance characteristics, are a different attempt to contain the awkwardness.

Persistence as defined here has been practiced for at least three decades, probably longer, without ever managing to become much more than a niche technique. (To me, this is a hint that there are probably no better approaches to it than the problematic ones already well understood.)

persistent as non-ephemeral

(I should assume no more replies are coming.) Thanks for the really general comments. By persistent I had meant generally non-ephemeral from an environment perspective, corresponding to what a user expects is true about data non-loss.

Persistence is a language feature in which identical references may be shared by multiple programs runs, separated in time.

That narrow sense (of language feature for ref lifetime spanning sessions) makes it harder to talk about a general English sense of non-ephemeral. Although devs understand overloading in code, they seldom disambiguate overloading easily in natural language.

Is there another common tech word that means "stored in non-ephemeral memory stable across reboot or power loss"? I hear devs use verb persist a lot, to discuss content that persists across sessions or reboot, without reference to a language feature.

I don't think a language can own that feature without usurping an environment's prerogative to manage resources, so a PL should cooperate instead of drive. It becomes doubly important when OS and PL semantics start to overlap, or else a dev can't easily debug when leaky abstractions interfere with each other. The mapping across api boundaries must be clear.

"Is there another common tech word that..."

By "ephemeral" I think you actually might mean "volatile".

Is there another common tech word that means "stored in non-ephemeral memory stable across reboot or power loss"?

"saved"

I hear devs use verb persist a lot, to discuss content that persists across sessions or reboot, without reference to a language feature.

They are trying to sound smart and they mean "save".

some free association about terms related to saving content

Developers I know say ephemeral, for several reasons, to mean temporary or transient in a given context, with respect to something related that lasts longer and is more permanent. For example, cache entries are typically ephemeral with respect to the things cached, taken from somewhere else. Similarly, things negotiated dynamically from static configuration are ephemeral when their lifetime is transient in the context of negotiation. Volatile doesn't work as well in those cases.

You can go ten years in industry without hearing a developer refer to volatile memory, because memory is assumed volatile by default. Thus an in-memory database is volatile, but that word is unlikely to be used. Among C devs, saying that word is likely to launch a pedantic side discussion on the volatile keyword in C, and will make other devs assume you are talking about race conditions. However, yes, I'm using ephemeral here to mean volatile when talking about storage, but without any nuance of concurrency being relevant.

You can write, store, or save something to a database without implying durability or persistence, because an in-memory database can save things. In effect, save is synonymous with write, and doesn't imply commit until a flush occurs. And even then longevity depends on storage semantics. Even if it ends up on disk, it might be an ephemeral memory-mapped file scrubbed for re-use on daemon restart. (Edit: often because, without the in-memory state lost in the restart, what's on disk cannot be judged valid.)
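
In concrete terms the ladder looks something like this (path invented); each step hands the bytes to something longer-lived, and only the last step asks the device for durability:

    import os

    f = open("/tmp/demo.txt", "w")
    f.write("saved?")        # bytes sit in the process's user-space buffer
    f.flush()                # handed to the OS page cache: survives a crash
                             # of the process, but not a power loss
    os.fsync(f.fileno())     # asks the OS to push it to the device: this is
                             # the step people usually mean by "durable"
    f.close()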

A lot of tech vocabulary has a local perspective as default: relative to the code interface in use. It's harder to refer to global effects. If a dev can weasel out of a constraint because it wasn't in the explicit contract, they will not only do so, it's considered good form. The phrase "safely on secondary storage preserved for posterity according to contract" is not an easy thing to get into a dev agreement; it's easier to get a djinn to interpret a magic wish correctly the way you intended. The word save usually has air quotes around it.

Some exceptions

Thomas is right for the most part, but there do exist systems such as KeyKOS and EROS in which persistence is entirely transparent. We use the term "persistent" rather than "saved" in those systems to emphasize that the system state is durable without any action or intent by the program.

Alternative terminology

I don't think there is, really. You probably have to accept that the entire concept of persistence is overloaded, and that developers usually get hung up on the technical, rather than the more interesting aspects.

For me, the word persistence conjures up 4 separate aspects, and there may well be others.

  1. What to persist.
    This is (or should be) decided at the level of the data model, and is trivially understood.
  2. How to persist.
    Again, this is relatively easy to understand, and technically lies (or should do) somewhere between the OS (for built-in types) and the data model implementation (for composite and/or custom types, and the overall structure). In the absence of a strong system-wide idea of how persistence works across all aspects, this is about the only place programming languages get involved.
  3. When to persist.
    I feel most systems get this wrong by leaving it as an application level decision, usually down to "when the user activates the 'save now' option".
  4. Why to persist.
    This is usually (and, again, IMO, wrongly) tied into the previous option. Persist "because the user just told me to"

My feeling is that most systems designers also get far too tied up on the what and the how, ignoring the far more important when and why.

I'm strongly of the opinion that volatile memory should merely be considered a cache onto persistent memory, and that all non-trivially derived state (including application and UI state) should remain persistent. So turning your computer off and on again should leave you exactly where you were (with, perhaps, some unimportant UI elements such as mouse position and button state left "defaulted"), and moving that persistent storage (or a part of it) between machines should do the same. This approach means "when" becomes "whenever something important changes" and "why" becomes "so nothing is lost". What and how are, at that point, largely system-defined, and a lot of developer effort is removed.

I'm probably wrong, though. After all, Unix has been doing almost the exact opposite of this since the 1970s, and it still works :)

Persistence Overrated

Anecdotally, early operating systems tried the no-load/save, no-reboot style of persistence (Monads-II / Multics). The problem was that any bug would cause corruption of the memory image, and it was almost impossible to get rid of. A workaround was to have all the files in a human readable/editable format stored externally, and when the persistent core memory became corrupted, the machine would be rebooted and the files loaded back from the external copy. When Multics was abandoned, Unix tried to implement a very small subset of the stuff that worked well from Multics without all the bits that caused the project to not deliver (de-scoping all the risk), and so did away with core persistence and recreated the volatile memory image on each reboot from external persistent storage.

I think if you really wanted persistence, I would have a (specially designed) virtual machine on top of a database. You still want to make backup images in case of creeping corruption, but transactional processing seems to have had the most success at maintaining persistent data-sets over extended periods of time.

Transparent orthogonal

Transparent orthogonal persistence worked fine for KeyKOS and EROS.

What Happened To Them?

I very much liked the capability architecture that KeyKOS and EROS used, but they seem to be discontinued projects. What happened to them?

I am not sure you can claim it worked fine? How did it work fine? How did it deal with programming bugs corrupting the persistent store?

KeyKOS was commercially

KeyKOS was commercially successful for a number of years before it was acquired. EROS evolved into CapROS which is being used commercially last I heard.

The "persistent store" is just a system-wide snapshot of all process address spaces. "Corruption" in this setup really only takes the form of dangling pointers within a process, which yield segfaults, at which point you have to restart the process. Certainly the core checkpointing code could use some formal methods to verify that it doesn't change the data as it's saved, but this is a fairly small piece of code that writes back to disk incrementally, with an atomic commit point.

What sort of corruption are you asking about?

Creeping Corruption

Creeping corruption is when the state is inconsistent, and reading the inconsistent record leads to the garbage spreading to other records. An example may be a resource counter that is not incremented due to a bug. When some other process reads the resource counter and thinks a resource is available, it tries to claim a non-existent resource, and the resource configuration space may become inconsistent. The next time something tries to use the badly configured resource it may trigger an unexpected path in the code, leaving some other state inconsistent. The only way to recover is to re-boot and re-initialise from an image that predates the start of the corruption.

The minimal persistent state

The minimal persistent state is some subset of the full process state, some of which may include caches, etc. In a typical OS, your program will have (1) the persistent state on the durable store, (2) some in-memory representation of the durable state you parsed from the store, and (3) any derived state.

With transparent orthogonal persistence, you only have (2) and (3). It's clear that this reduces the probability of bugs since you remove whole swaths of serialization/deserialization code, but it certainly doesn't eliminate all bugs.

Either a bug is recoverable, in which case you can update the program code and send a recovery message, or it's not, in which case the whole program needs to be torn down and restarted.

For instance, a relational database running on KeyKOS might have two processes sharing part of the address space containing the durable state, so that any failure of the main service sends an upcall to the simpler recovery service which can inspect the durable structures for corruptions, then resume or restart the failed process.

Kernel Bugs

And how do you recover from a Kernel bug that causes creeping corruption of the device configuration space, or process management tables, that then consequentially corrupts the user data in all processes?

KeyKOS and EROS were

KeyKOS and EROS were microkernels, so all devices are in user space and can't corrupt other processes. As microkernels, the amount of code that needs to be verified is quite small.

All persistent storage is reducible to nodes for capabilities, or pages for data. A process' address space is a tree of nodes with pages as leaves. The hardware page tables are derived from this tree on-demand, ie. as the addresses are referenced via loads and stores. The process metadata is persisted in a node as well, where a process node contains a schedule capability, an address space capability, etc. Once again, the in-memory representation is derived from this persistent structure when it's cached in memory.

Certainly there could be bugs in any of the above code, but the code that derives the in-memory cached representations from the persistent representation is very well tested, and amenable to verification. If the code that manages the cached representations themselves has a bug, then a system restart might certainly be needed to rebuild the cached structures from the persistent versions.

Given the above, perhaps you have a specific example of the type of corruption you think might be problematic?

Type of Corruption

It will always be the type of corruption you least expect, as you will have tested and verified your code for all the failure modes you can predict; therefore it will be something you didn't predict... unless you knew about the bug, couldn't fix it, and released anyway.

In any case a verified microkernel is the best way if you want to do this stuff, and you then kill-and-restart the process instead of the machine. If the process has corrupted its persistent data though, you will still need a checkpoint from before the corruption started. If we call this checkpoint a "save-file" then we can see that it's not really so different from keeping a volatile version in memory and saving to external persistent storage. The difference only becomes relevant if the system is rebooted, but with a 'certified' kernel, why would you ever want to re-boot anyway?

If the process has

If the process has corrupted its persistent data though, you will still need a checkpoint from before the corruption started.

I take it by "checkpoint" here you don't mean a system wide checkpoint, you mean an application-specific "checkpoint", like serializing a process' essential state. If that's not what you mean, why would you roll back a whole system just because one program has corrupted its own data?

However, your argument seems predicated on an implicit assumption that your application checkpointing code is somehow less buggy than your main program: how can you make your "checkpoint/restore" logic reliable enough that you feel confident in arbitrarily aborting and resuming from previous checkpoints without error?

It seems the time reimplementing checkpoint/restore would be better spent ensuring the integrity of your core state. If I were truly concerned about integrity on EROS, I'd probably do as I previously described: a separate monitor process that could perform integrity checks and upgrades of the relevant core data structures, and the core service process that runs from a copy-on-write shared address space with the monitor for the core data.

The monitor periodically swaps to the new space after performing any necessary checks. There's no serialization step involved, and there's no need to reference external storage devices.

Production Systems

I mean a per-process checkpoint/restore provided by the kernel. I would never run any production database without backups, would you? What about user error, the application user accidentally deleting vital information, etc.? There's also migrating the data to new versions of data structures when software updates happen, and all sorts of other production issues.

ACID databases have answers for all these issues, so it's not impossible, but you would still need to make data checkpoints to external storage.

ACID databases have answers

ACID databases have answers for all these issues, so it's not impossible, but you would still need to make data checkpoints to external storage.

Not necessarily: each "backup" is just a fork which marks all pages CoW, then suspend one of the processes as the "backup". A "restore" is simply suspending the active instance and activating one of the previous "backups". Orthogonal persistence handles writing it to a durable medium.

It took me awhile to start thinking in terms of cooperating processes, but it's a powerful model. With address space sharing it's not really more wasteful in terms of space either.

All my documents at once?

So you have all these processes with CoW backups, which might consist, say, of every document I have ever edited. How do I find the document I am looking for? How do I transfer data from one application to another? How do I transfer data from one computer to another?

Does it not make sense to have a single unified store where data from different applications can be put, and then found later by some metadata? Let's name each one with something we could call a "filename". Would it also not make sense to have checkpoints at precise places where we know the data is consistent (not in the middle of a transaction or process that is changing the data) and where the interactive user is also happy with the state of the data? The problem with continuous checkpointing is that it is very hard for a human to keep track of the versions. Humans prefer Contract-v1.2 rather than a sprawling CoW tree of all modifications made to the contract from start to end. As if by magic, we have re-invented the filesystem...

Now we have this, we no longer require applications to be so careful with volatile memory, and we don't need transactions for RAM any more. We don't have to worry that the document may be left in an inconsistent state if a mutating process fails halfway through, because in the worst case the user can reload from a known version ("save"). Of course tracking changes with an undo function is still worth having, but we can cope with bugs.

Historically, data stores with real persistent data have required ACID semantics; this is not academic, it is from industry experience.

Thinking about KeyKOS and EROS, are you saying that the operating systems never crashed, even during upgrades? If they were to crash (due to a power glitch), how would you restart them? Persisting all data must be slow? What if I want temporary data that is not persisted (because it's quicker to recalculate than to write out and read back)?

Edit: I should point out we use persistent state (using PostgreSQL, an open source database) in all our applications, but we still provide the ability to import/export/backup parts of the database, and this reads and writes to the filesystem (although not necessarily on the same machine). I think my point is that reliable persistent state needs something like ACID semantics, and I would still want volatile/ephemeral state available separately. I would also like access to something like a filesystem for storing backups and transferring files between applications. Even the kernel can write to syslog, where I can grep/awk it, whatever.

Modern kernels persist in 'standby' mode so you have to reboot much less often, but it still seems sensible to keep a static image for reboot if necessary. The ability to reboot is like a safety net - why take it away when you don't have to? By all means increase persistence in the kernel, but I think it would be foolhardy to rely on it.

I'm obviously not going to

I'm obviously not going to solve 20 years of incremental progress in a few LtU posts. Some of the issues you have raised have been encountered and solved in the domains where orthogonal persistence has been deployed; some haven't yet been encountered and so don't have ready solutions. I hope you can see that a persistent object store running on Turing-complete hardware could have solutions, even if they aren't immediately apparent.

Does it not make sense to have a single unified store where data from different applications can be put, and then found later by some metadata? Let's name each one with something we could call a "filename".

Sure, and an orthogonal persistent object store is itself a unified store; it's just not indexed by "filename", but by process. Arguably, users don't interact with "files", they interact with processes, some of which may implement file semantics. Of course any process can use an external store like a set of processes implementing a file system, and KeyKOS had a Unix "guest OS" of sorts, providing all the usual Unix APIs (virtualization in the 80s!), so you can still use files when that's convenient.

Would it also not make sense to have checkpoints at precise places where we know the data is consistent (not in the middle of a transaction or process that is changing the data) and where the interactive user is also happy with the state of the data?

I think the user would also be quite happy to have transparent save semantics, so they don't lose their work in the event of a system crash, and to have this be a universal property provided by the system instead of relying on a buggy, slow implementation of half of proper checkpointing in every individual program. Internal process consistency isn't important, since the program will resume from a global state and settle on consistency or inconsistency, just like it would have prior to the checkpoint.

The only common case that seems to be an exception to "applications don't need to worry about their own checkpointing" is actual database systems. But should we design for the common case with an escape hatch for the rare exceptions, or should we design a more awkward approach that permits all behaviour natively?

Thinking about KeyKOS and EROS, are you saying that the operating systems never crashed, even during upgrades. If they were to crash (due to a power glitch) how would you restart them? To persist all data must be slow? What if I want temporary data that is not persisted (because its quicker to recalculate than to write out and read back)?

There were effectively immutable processes that could be exempted from checkpointing as well. EROS had a lot of literature about its transparent persistence, so I'll point you to a couple of overviews that discuss the points you have raised:

Databases Again

I am happy using SQL databases, and some NoSQL databases, for persistence. I could imagine writing an OS with an embedded database to provide the required persistence. I think the persistence is orthogonal to kernel requirements, so if the kind of CoW persistence you are talking about were an effective way of managing data, I would expect to see NoSQL databases already using it. Which database provides the closest to the kind of persistence you are talking about?

In any case I don't think all memory pages should be tied to persistence. Read-only data does not need write-back, for example. I would expect to find three kinds of memory page: read-only, temporary read/write, and persistent read/write. Unix has all three of these, where persistent read/write pages are mmapped files. So doesn't Unix already do all this?

By hunting down and killing the developer responsible

A kernel bug as you describe is the same as a kernel bug that causes creeping (or, indeed, sudden but massive) corruption of a filesystem on a "normal" system.

Such a system is simply faulty; it's not an indication of any conceptual problem.

All production OSs have kernel bugs

There have been many kernel bugs in Linux and Windows that cause memory corruption. In Windows this often results in the blue-screen-of-death, and in Linux a kernel dump into syslog and a hung machine.

The practicality is these things do happen and you need a way to deal with them. The kernel obviously cannot recover because the in-memory data structures are corrupt enough that it has crashed.

The solution is to reboot from secondary storage which stores a 'checkpoint' of application data, but generally no process runtime data (on a reboot all the kernel's process and device tables are reinitialised).

Yes, there have been filesystem corruption bugs too, and sometimes you have to restore the filesystem from a backup. The thing with a filesystem is that the data changes much less often, so the chance of corruption is lower. Let's say a bug corrupts memory in 1 in a million operations. On a modern CPU that would happen in much less than 1 second, but on a filesystem that would represent 1 million 'save' operations from your word-processor.

If you want to build a secure and robust system, you have to start from the premise that anything that can go wrong, will go wrong, and it will happen at the worst possible time.

bugs all the way down

Hear, hear. So the solution is that for any given layer of the system, there has to be a (history of) backup(s) of that layer, for rollback. Ad nauseam.

I'm not so sure about databases

I'd probably go with raw, transactional storage rather than the dumb model we have at the moment, i.e. a transactional block-based store at the device level, rather than something operating at a "file" level. This may require dedicated, specialised hardware.

For a system with ubiquitous persistence, I'm unconvinced that bugs, apart from those at the very lowest heap / persistence layer level, should be able to corrupt the entire store. Assuming, of course, we program in something sane, rather than persisting (again, forgive the pun) in doing the same old thing over and over again based on unsafe languages like C. If we can state with certainty (and I'm not sure we can, regardless of language) that process x can't possibly stomp over process y's data, and we have something in place to kill off processes that go wild, I don't see how the situation can be any worse than the current situation of process x unrecoverably shitting all over its currently open file.

On the other hand, maybe it's me not learning from the errors of the past, as opposed to an entire industry based on blindly doing the same damn thing over and over, and making the user pay, over and over again, for the "most secure version ever" of x.

ACID

I think you need full ACID semantics to have the best chance of minimising data corruption.

I also think that corruption need not be low-level pointers; it can be any bad data that gets into the persistent store. This can then spread as the data gets used by other processes; see Creeping Corruption above.

normative schemes for persistent storage

As I understand it the Smalltalk folks (and probably by extension, Hypercard) were concerned with persistent storage as it appeared to program users.

I'm sorry I can't give you ready cites for this but as I recall the thinking went something like this:

The job of a user interface is to present the operation of a program through a series of metaphors. For example, the value of a parameter in a program might be presented as a form to be filled out and displayed or as a "slider" that can be moved along some scale. The idea is to accurately describe a program (including its possible inputs) as a kind of composition of cognitively simple, usually familiar concepts.

Generally, metaphors to physical concepts were preferred. People know how to grab a piece of paper and write something on it in the real world. Thus they might prefer a UI in which documents are like "paper" and you start with a blank piece and start adding stuff to it.

In that context the question arises: "What is the natural metaphor for saving and loading documents to and from disk?" Nobody ever came up with a good metaphor that works ubiquitously. Save and load still stand as abstract concepts you just have to learn to use some programs. "Sync" between storage devices is a related concept that seems to be irreducible to any usefully familiar metaphor.

You can find some of that crowd sometimes waxing eloquent about the vision of ubiquitous computing as a seamless global hypertext. No more save, restore, copy, or anything like that. Just a big sea of "live documents" with independent existence, doing their groovy hypertext thing.

Once again, the decades of non-success for that vision are, to me, a hint that it cannot be achieved. I think that is probably because of physical constraints on computing, communication, and storage.

off the shelf?

I wish somebody was full enough of folly that they would burn a hackathon on gluing together some crazy stuff like swarm and symas type stuff.

Decades of non-success ...

… of safe programming practices in the wider world are not an indicator that safe programming practices cannot be achieved; similarly, failure of "different" storage schemes to achieve traction in the wider world is merely yet another indication of the level of institutionalised inertia in the computing world.

For other examples, look at the failure to add any sort of metadata other than "name", "hierarchical location" and "type" to datasets, and that only through unholy hacks to the file name. Yes, 3 letter file name extensions, I'm looking at you.

Apple's Newton managed exactly what you describe above, which was, IMO, a far bigger breakthrough than the handwriting recognition side of things it's remembered for. I personally would have been far happier with Apple's MacOS replacement being a desktop / distributed NewtonOS rather than yet another goddamned *n*x.

yet another goddamned *n*x

Yeah, well, aren't there lots of things that could have stood in as The Lesser Evil in that equation, hardy har har.

...Not so much?

Oh. Drat.

"another goddamned *n*x."

another goddamned *n*x.

These things persist because they actually work.

Smarty pants

You mean these things save because they work.

persist because they work

Indeed. It doesn't mean that they make things any better, though. "Just barely good enough" is pretty much the watchword of modern computing.

I'll try to keep this on topic as far as possible - moving to OSX was very much "two steps forwards, one step back" as far as I am concerned - Apple got themselves a fairly resilient multitasking kernel, which was a step forwards from the heavily patched over, early '80s design of MacOS, but, amongst other things, and focussing on this to keep things on topic, lost the inherent (if somewhat incoherent and arguably underreaching) metadata concept that MacOS had - a step back from the '80s to the '60s. I'm almost surprised HFS+ still has resource forks.

As we're talking about persistence, metadata is perhaps slightly off topic - it's more to do with what you persist and how you find / view / process it afterwards rather than how you go about persisting it in the first place, and why you persist it.

Certain aspects of the OSX frameworks do help with persistence - serialising and unserialising of objects is pretty much baked into Core Data, but generally speaking we're still stuck with the (again, ancient) idea that saving data is something the user should trigger rather than something that should happen seamlessly and without effort on the user's part. The "how" is covered in terms of the mechanics of persisting and restoring data, but surely the OS could be (especially given the existence of versioned storage) dealing with persisting data on an ongoing basis, relying only on the user to tag points in the data's lifecycle that are of interest to him/her.

As for the "why" of persistence, OSX, in common with pretty much every other system out there, persists (if you'll forgive the pun) with the idea that what's in the random access memory of the machine is something other than merely a volatile window on a persistent data store.

engineering, not aesthetics

[Widespread adoption and successful use] doesn't mean that they make things any better,

Then I don't understand what you mean by "better".

lost the inherent (if somewhat incoherent and arguably underreaching) metadata concept that MacOS had - a step back from the '80s to the '60s.

You sound similar to a disgruntled lisper. In this context, "step back", like "better" above, is just a kind of unsupported assertion of aesthetic value.

we're still stuck with the (again, ancient) idea that saving data is something the user should trigger rather than something that should happen seamlessly and without effort on the user's part.

I don't know about stuck. It seems to me that the user ought to understand when data is secure and when it is not. Where it is stored. When it is put there. And the user should have some control over these things.

How is this bad?

The hypertext utopians described future computers as magic boxes that never distracted creative users from their thoughts with such base, crass features as saving and loading files. (Do you think they may have been fantasizing about their own personal heavens?)

The problem is that such a vision runs into trouble when you start trying to realize it in the physical real world. It is not particularly clear that matter can be configured to do what they envisioned. That's why I get grumpy when people start talking about "better" and "backwards, not forwards" without a very specific meaning for such terms.

the idea that what's in the random access memory of the machine is something other than merely a volatile window on a persistent data store.

At the very least know that people have tried to change that for decades.

If you want to take a stab at it, probably a good place to start would be to try to understand, deeply, why it is a very hard technical problem. Attributing the situation to "institutional momentum" as another comment did is just lazy.

What do I mean by "better"?

Merriam-Webster says

  1. more advantageous or effective <a better solution>
  2. improved in accuracy or performance <building a better engine>

Consider Microsoft's Excel, which is certainly widespread and successfully used, vs Quantrix and Improv, which were, and still are, significantly better technical solutions.

Or, in the case I was using, throwing away "Classic" MacOS's (limited but still effective) metadata usage, instead stepping back to the horrible hack of "file type encoded as an ad-hoc part of the file name" and then marching blindly onwards to doubly horrible hacks like magic(5).

Yes, I am lacking in gruntle. I used to believe developers were motivated, at least partially, by something other than bottom line.

cooperation frameworks

I'll respond more later as thanks for helpful posts, but I want to avoid shaping responses I get, because being surprised is much better than confirming old ideas. (I try to prove myself wrong a lot, to fix blindness caused by confirmation bias, but it's tricky to ask questions without telegraphing a preference in answers.) To elaborate on my first post would make it less open-ended, and narrow what seems relevant.

I can phrase it partly in terms of a use case involving developers, but I also want replies that ignore devs in description. Say one or more devs are cooperating, but not in lockstep, so they integrate efforts after making progress. Some outputs live across sessions and may need to be published remotely, and losing or corrupting data is not okay. What kind of rule-of-thumb are they following to avoid stepping on one another? Why do they think resolving conflict will be minor tweaks instead of a big re-org? What archetypal story do they keep in mind that predicts cooperation will work? How does the story help resolve conflict when it occurs?

The idea of a basic storage strategy implies there is a shorthand description of why independent efforts will converge once integrated, after some problems are resolved, where the model itself has a way to talk about resolution. But instead of being long, like a bible, maybe it's possible to state it in a few principles that get elaborated only as needed. One dev might want to say: wait for my signal and then we'll all go together. It's still necessary to define what is meant by signal and together, but there's a framework.

If that made you think of fewer things instead of more, then I did it wrong. :-)

Databases

The only languages with persistent data I can immediately think of, as opposed to those using the underlying OS I/O, are databases (or database languages). The model for avoiding problems with shared, distributed data and multiple users is the Transaction. What you are looking for is something like ACID: Atomicity, Consistency, Isolation, Durability.

A: A transaction either succeeds or fails; no half-written stuff.
C: The database is always left in a consistent state after a transaction, whether it succeeds or fails.
I: Parallel transactions behave as if they were serialised before being applied to the database.
D: Once a transaction is completed, its effects will remain even if software crashes, or there is power loss etc.
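
As a concrete illustration of A and D in ordinary application code, here is a minimal sketch using Python's built-in sqlite3 module (my choice of example store, not something implied above); using the connection as a context manager gives commit-on-success and rollback-on-exception:

    import sqlite3

    conn = sqlite3.connect("ledger.db")
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT OR IGNORE INTO accounts VALUES (?, ?)",
                     [("alice", 100), ("bob", 0)])
    conn.commit()

    try:
        with conn:  # atomicity: both updates commit together, or neither does
            conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
            conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
    except sqlite3.Error:
        pass  # the transfer rolled back; no half-written state survives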

There is also eventual consistency, which can be used in specific cases where you know the data will converge. An example is a distributed multi-master database, where you guarantee that writes will eventually propagate and be visible to all nodes, but you don't know when. Another example is the Amazon shopping basket: when you add an item, you are guaranteed to get at least one of them (but in some circumstances you may end up with multiples in your basket due to data races). Here the consistency is based on application-specific needs, so it is probably not suitable as a built-in mechanism in a general-purpose language.

What would be nice is a framework where you could choose the level of ACID as required. If you want atomicity, you use a transaction, but you don't have to. If you want consistency, you provide a roll-back for the transaction and you monitor sync to make sure all nodes have completed the update (consistency seems to be overloaded in two senses: local consistency of the data store, and consistency between nodes in a distributed store). If you want isolation, you serialise transactions; otherwise you just let them run in parallel. If you want durability, you wait for the 'committed' signal from the OS, and so on.
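
Purely as a hypothetical sketch of that idea (none of these names are a real library API; they are invented for illustration), the knobs might look like a per-transaction options record:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Callable, Optional

    class Durability(Enum):
        NONE = auto()         # fire and forget; a crash may lose the write
        LOCAL_FSYNC = auto()  # wait for the OS 'committed' signal
        ALL_NODES = auto()    # wait until every replica reports the update

    @dataclass
    class TxnOptions:
        atomic: bool = True                     # A: wrap writes in a transaction
        rollback: Optional[Callable] = None     # C: compensation to run on failure
        serialized: bool = True                 # I: queue behind other writers
        durability: Durability = Durability.LOCAL_FSYNC  # D

    # e.g. a cache write that tolerates loss but must not be half-applied:
    opts = TxnOptions(atomic=True, serialized=False, durability=Durability.NONE)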

Edit: A specially designed VM with runtime/kernel and bespoke language built on top of an SQL or distributed persistent database would seem an interesting project. You almost get this with CouchDB where you can write map/reduce in JavaScript, and it handles all the syncing.

maybe pursue ACID still when not using databases

That transaction model is one I'm most familiar with, starting with a DB class in college in the late 80's, followed by some client document storage stuff at Apple and Netscape. Even when not using a database as such, it seems a good idea to analyze whether each ACID property is present in a solution, or why one is avoided or skipped for some reason. Users tend to care a lot about whether A and C (atomicity and consistency) are true for durable storage after a system failure. User data loss was a big sin at Apple.

When rolling your own tools, getting good results is fairly easy using a log-structured design, which aims to keep everything old until everything new is durable as well, so preparation is complete before commit. You can turn it into a boring turn-the-crank solution that requires only elbow grease, but of course recovering from byzantine failures in failure-handling code is as nightmarish as usual. When enough resources exist to keep multiple versions of old things, rollback is easier to do. Deciding when you can actually get rid of things can be awkward, especially if you want to offer a secure-erase feature for discarded content.
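
A minimal sketch of the log-structured idea, using only Python's standard library and a record format invented for illustration (JSON lines plus a COMMIT marker): nothing old is overwritten, and a record only counts once its commit marker is durable.

    import json
    import os

    def append_record(log_path, record):
        """Append a record, then a commit marker, fsyncing between the two,
        so a crash leaves either the old state or the new state intact."""
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
            log.flush()
            os.fsync(log.fileno())      # the data is durable before we commit
            log.write("COMMIT\n")
            log.flush()
            os.fsync(log.fileno())      # the commit marker itself is durable

    def recover(log_path):
        """Replay only records that are followed by a COMMIT marker."""
        state, pending = [], None
        if not os.path.exists(log_path):
            return state
        with open(log_path) as log:
            for line in log:
                line = line.rstrip("\n")
                if line == "COMMIT":
                    if pending is not None:
                        state.append(pending)
                        pending = None
                else:
                    try:
                        pending = json.loads(line)
                    except ValueError:
                        pending = None  # torn write at the tail; ignore it
        return state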

Distributed consistency and consensus is hard, of course, but that's definitely something I categorize as not my problem, resting squarely on the shoulders of app developers instead.

One useful PL feature might be some kind of help with responding to transaction failure, which has a high probability of leaving behind invalid runtime state that a process cannot survive when its invariants depend on agreement with the last known checkpoint. Code typically doesn't track the way derivative state diverges from an original durable copy, beyond enough state to write diffs, one time only, which can't be repeated after failure. (Maybe having a really good undo/redo command history would make recovery more stable.) Having lightweight processes you can kill more casually might help too, when you can abandon a confused process.
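
A minimal sketch of that undo/redo-history idea, with names invented for illustration: track the inverse of every change made since the last durable checkpoint, so a failed commit can walk the in-memory state back instead of killing the process.

    class CommandHistory:
        """Track undo actions accumulated since the last durable checkpoint."""

        def __init__(self):
            self._since_checkpoint = []

        def execute(self, do, undo):
            do()                                 # apply the change in memory
            self._since_checkpoint.append(undo)  # remember how to reverse it

        def commit_succeeded(self):
            self._since_checkpoint.clear()       # durable copy matches memory now

        def commit_failed(self):
            for undo in reversed(self._since_checkpoint):
                undo()                           # walk memory back to the checkpoint
            self._since_checkpoint.clear()

A runtime that maintained something like this automatically, per transaction, is roughly the kind of help the paragraph above wishes for.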

Indexing

So you write data blobs to an ACID store. Now another program wants to rendezvous with the data, so it's handy to have some sort of human-readable handle the user can give the other program to tell it where to find the data saved by the first program (or to find the end of a pipe). Given these handles, it might seem sensible to allow some way to quickly search and find them, so you index them. You might want to attach more metadata, like creation or modification date, which you also may want to index for range queries. Very quickly you realise this is a database table with one row per file and one column per metadata type. Maybe you need auxiliary tables for things like timezones. Then you realise you want to search for all files in a given timezone, so you need a join operation. Slowly you re-invent a relational database.
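
Here is the end of that chain of reasoning made concrete, as a minimal sketch using Python's built-in sqlite3 standing in for a real database (the schema is invented for illustration):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE timezones (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE files (
            id      INTEGER PRIMARY KEY,
            handle  TEXT UNIQUE,      -- human-readable name for rendezvous
            created TEXT,             -- ISO-8601 timestamp
            tz      INTEGER REFERENCES timezones(id)
        );
        CREATE INDEX files_created ON files(created);   -- for range queries
    """)
    db.execute("INSERT INTO timezones VALUES (1, 'UTC'), (2, 'US/Pacific')")
    db.execute("INSERT INTO files (handle, created, tz) VALUES ('report.blob', '2015-02-27', 2)")

    # "all files in a given timezone" is the join that re-invents the relational DB
    print(db.execute("""
        SELECT files.handle FROM files
        JOIN timezones ON files.tz = timezones.id
        WHERE timezones.name = 'US/Pacific'
    """).fetchall())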

So just pick an open source database (PostgreSQL, for example), embed a domain-specific sub-language for relational algebra in your programming environment (like LINQ in C#), and you are all set.

We write all our applications on top of a database like PostgreSQL. We use it for all persistence, and then provide import and export functions (usually as CSV files).

some system software not especially driven by indexes

Thanks, that was an entertaining chain of reasoning. Indexing has played a minor role in my work over the last ten years, mostly involving hashmaps, with the occasional really irritating best-fit search over piles of network address range rule sets. In a network load balancer, files would typically be ephemeral fictions in an afs, which disappear when connections or transient sub-tasks end. Caches bigger than memory, shared by many processes, need to end up on disk or SSD just to satisfy capacity requirements.

Here and below are you suggesting all software categories should be written using a relational database? :-) For exceptions where that is not true, does your post offer helpful insight?

Persistent Data Only

Glad it amused someone :-)

I am suggesting persistent data makes sense in a relational database. I could imagine a load balancer that stores configuration and logging data into a DB, and provides a web interface for administration.

I was never suggesting ephemeral data should go there; PLs have plenty of data structures like arrays, hash-maps, etc. for this.

Web Applications

Applications can be written like this: a persistent daemon process (say Python/Django) uses an SQL database (PostgreSQL) for persistence and communicates with multiple clients via a local or network socket. It can be installed as a background service that launches at boot, or on demand. A graphical client (written as an HTML5 JavaScript app) handles the UI and communicates with the persistent service for each session, whether for the same user or different users.

I don't really see why all applications could not be written like this. For high performance, native code can be linked in via Python modules. Note, I am not specifically advocating these particular technologies - they just happen to be what we use - but the general architecture. We could call this pattern "persistent multi-user server, with ephemeral client UI" if you want something more abstract.
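
To make the pattern concrete, here is a minimal sketch using only Python's standard library (SQLite and http.server standing in for PostgreSQL and Django; the note-storing endpoints are invented for illustration): the daemon owns all persistence, and any browser or curl session is the ephemeral client.

    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DB = sqlite3.connect("notes.db")
    DB.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    DB.commit()

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):                # client fetches all persisted notes
            rows = DB.execute("SELECT id, body FROM notes").fetchall()
            payload = json.dumps([{"id": i, "body": b} for i, b in rows]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

        def do_POST(self):               # client submits a new note
            body = self.rfile.read(int(self.headers["Content-Length"])).decode()
            with DB:                     # transaction: commit or roll back
                DB.execute("INSERT INTO notes (body) VALUES (?)", (body,))
            self.send_response(201)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()

Running it and then doing curl -d 'hello' http://localhost:8000/ followed by curl http://localhost:8000/ exercises the persistence round trip.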

Agreed

Agreed. I write many applications at work in this manner, and both the company as well as customers seem to like the approach.

It could work well for open source applications too, where the end user runs the server.

Reality is persistent

Persistence is very philosophical and involves the issue of reality itself. What one takes to be real determines one's view of persistence and the independently real. Real is a good property for any software object. So how do we achieve reality in a software artifact? First of all, logic doesn't work, because logic is about truth. Truth is about particulars and identity, and we all know where that leads. Think about names that start with 'G'. What we need is a more robust and flexible way of thinking. Many people believe that Algebra is a better response to the reality issue. But the algebraic approach doesn't do away with logic; it includes it as a consistent sub-algebra: see for example "The Algebra of Logic Tradition".

is that questioning the reality of durable storage?

The following statement strikes me as weird, which I like, because strange is often better than boring. (I recently told a son he should watch more than the first ten minutes of Fargo because it's weird, as opposed to it having exciting action. I said he was too young to appreciate weird the first time he gave it a shot.)

Many people believe that Algebra is a better response to the reality issue.

How so? Seems a bit platonic, as opposed to real. Yes, I appreciate the difference is philosophical.

Persistence is very philosophical and involves the issue of reality itself.

Devs typically have a pragmatic attitude toward testing whether data is "really" durable, though it does involve a long chain of parts with nonzero failure rates. One can always check whether the next layer down has the goods, but it's practical to have faith that other folks checked already too, and that sparse sampling audits are enough for progress. As long as systems act like data is durable, that's good enough.

Usually tech stories are assumed real, like science is assumed real: no reason to suppose otherwise. If products say data is stored durably, we believe it unless we see otherwise in tests, or we hear about bad quality reports. I'd be interested to hear how algebra affects a dev's grasp of data reality.

Durability of Real Storage

Don't we also calculate checksums, and store multiple copies in different locations? We replicate databases and have multiple stateless servers to guard against failure. The larger something is, the more likely the failure. The classic example is RAID5 with 4TB disks: the disks are so large that, even with a reasonable bit error rate, the expected number of unrecoverable read errors while rebuilding a degraded multi-disk array exceeds one once you have more than N disks (where N is a small integer).

I guess I consider M-Discs durable (DVDs with a claimed 1000 year lifetime), but I have seen everything else fail. Tape fails (a lot), discs fail, DVDRs fail.

Algebra can be used in something like a ring-hash to calculate storage nodes for making sure copies are evenly distributed over a multi-node storage cluster, and probabilities of data loss minimised, whilst maximising read/write throughput. It can be used in calculating checksums, and calculating the required read refresh rate, where files are checked to make sure they are still readable and probably correct.
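
A minimal sketch of the ring-hash idea, using only Python's standard library: keys and (virtual) nodes are hashed onto the same ring, and each key's copies go to the next few distinct nodes clockwise, which spreads replicas evenly and limits reshuffling when nodes join or leave.

    import bisect
    import hashlib

    def _h(value):
        return int(hashlib.sha256(value.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, nodes, vnodes=64):
            # virtual nodes smooth the distribution over the physical nodes
            self._ring = sorted((_h("%s#%d" % (n, i)), n)
                                for n in nodes for i in range(vnodes))
            self._points = [p for p, _ in self._ring]

        def nodes_for(self, key, replicas=3):
            """Return the distinct nodes that should hold copies of `key`."""
            idx = bisect.bisect(self._points, _h(key))
            chosen = []
            while len(chosen) < replicas:
                node = self._ring[idx % len(self._ring)][1]
                if node not in chosen:
                    chosen.append(node)
                idx += 1
            return chosen

    ring = HashRing(["node-a", "node-b", "node-c", "node-d"])
    print(ring.nodes_for("some/file/handle"))   # three distinct storage nodes

The same sha256 digest doubles as a content checksum if you store it alongside each copy for the read-refresh checks mentioned above.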

The software vision

You don't use the term "durable storage" in your opening paragraph. Instead you use the word "persistence" in a very general and philosophical way. Persistence - what persists and how it persists - is a unifying theme in all of software. Reality and persistence are related like the two sides of a coin, especially when it comes to software.

Understanding this issue is the key to understanding the software vision itself. The word "software" was coined in 1958 by John Tukey in a New York Times article. At that time we already had the words "program" and "computer". We really didn't need a new word. But the "software vision" takes over and becomes the basis of a new era. Tukey doesn't say much about his vision except that software is of the mind and just as real as computer hardware.

How is software just as real as hardware? At the time Tukey was at the AT&T laboratories and was surely familiar with Cybernetics. In fact Tukey also coined the word "bit". The key idea of Cybernetics is independent reality. A mathematical model maps into any realization. The idea is pervasive in the 40's and 50's. The kind of modeling done in Cybernetics is Algebra.

But this is only half the story. Another big deal at the time is "logic". In the 60's Logic and Cybernetics collide in a kind of intellectual train wreck and logic seems to take over the software vision. Fifty years later we are still trying to pick up the pieces. It is well understood now that logic and algebra work together to form a unity. But this is another long story.

Tukey

Coinventor of the Cooley-Tukey FFT algorithm among other things. One might say he was the original bit-coiner :-) But he didn't claim he coined the term "software". A bit nitpicky, sorry.

I thought Rys was using the term in a much narrower sense than what you're talking about: data that persists even after the creating program is dead. But I confess I don't quite get why one would build a virtual file system into a PL.

The point of a filesystem (to me) is to have a persistent storage structure that is portable across, and independent of, languages. If the purpose is to represent any real-life documents, you pretty much have to either grok all existing formats or rely on external programs or the OS to help you.

If you are talking about a single-language system, you can do what you want and a "filesystem" is not very relevant - you really want an "object" storage. But for persistence, all "references" need to be relative to the storage scheme as opposed to memory addresses. A standard trick, if you want to save/restore some structured data across multiple invocations of a program, is to replace ptrs with relative offsets before writing out and do the inverse op after reading in - sort of like mark-and-copy GC (a sketch of this follows below). If you want to save most of the state, you'd play similar tricks but also save an edit log for possible undo/redo.

But if you want to share live data across processes, possibly residing on different nodes, that's a tall order. In addition to a storage scheme you'd also need a naming scheme, and you'd have to work out the semantics of what it means when a process executes a function defined in another process on another node. But likely I don't understand what Rys wants.
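
A minimal sketch of that pointer-to-offset trick, in Python purely for concreteness (the linked-list shape and JSON encoding are my own choices): references become list indices on the way out and are turned back into object references on the way in.

    import json

    class Node:
        def __init__(self, value, next=None):
            self.value, self.next = value, next

    def save(head, path):
        """Write a linked structure, replacing references with indices."""
        nodes, index = [], {}
        n = head
        while n is not None and id(n) not in index:
            index[id(n)] = len(nodes)
            nodes.append(n)
            n = n.next
        flat = [{"value": n.value,
                 "next": None if n.next is None else index[id(n.next)]}
                for n in nodes]
        with open(path, "w") as f:
            json.dump(flat, f)

    def load(path):
        """Rebuild the structure, turning indices back into references."""
        with open(path) as f:
            flat = json.load(f)
        nodes = [Node(rec["value"]) for rec in flat]
        for rec, node in zip(flat, nodes):
            node.next = None if rec["next"] is None else nodes[rec["next"]]
        return nodes[0] if nodes else None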

software vision was fun before social networking narcissism

Thanks for the amplification. I was interested in generalizing, though not in an intentionally philosophical way, unless models are philosophy. The idea of models as maps of territory doesn't require much thought. Ordinary people are expected to grasp what "the map is not the territory" is supposed to mean without struggling. Representation is a primitive concept in industrialized culture, after learning to read, and consuming media. Everyone learns photographs are not necessarily true, when manipulation is so easy, for example.

I'm inclined to treat model as "platonic ideal" and territory as "real" to a first approximation. I include maths under models, and of course the way human beings think and apply models is part of reality, so there's some incestuous circular layering involved in picking a frame of discourse for labeling things model and territory. (I'm not really into discussing the morass of meta-models in philosophy -- too much like wankery.)

You don't use the term "durable storage" in your opening paragraph.

Right, which was an error on my part, as I had to explain I'd meant non-ephemeral by persistent in my explanation to Thomas Lord. So durable is roughly a synonym for what I meant by persistent, although that glosses over translation, serialization, relationship mapping, and means of validating consistency as practical issues.

As I talk more about something, I like to mix up words I use, swapping synonyms in and out to refine edges of what I mean. There's at least two good reasons. First, you find out folks don't agree which words are synonyms. :-) You can be in perfect agreement until you use another word, and then it all falls apart, showing agreement was fragile. Second, the more times you use exactly the same word, the less information a reader gets each time, because it's progressively less surprising. Variation helps prevent cognitive fatigue and gives a listener a chance to test their grasp of a model for consistency.

Your points are worth thinking about, but it does seem more abstract than I had in mind. In my dealings with other developers, they seem much more likely to treat software as real when I tend to see it as models instead. To some folks one's current operating system is real, where I see it as a temporary model subject to change. I suspect this has something to do with your first paragraph.

Semantics.

Thanks for the critique. Your objection or skepticism is a normal response to an argument that puts forth ideals or theories as solutions.

The only thing I would say is that a Theory, or model, is like a formal language. It is completely true in its domain of discourse; otherwise it says nothing or produces a parse error. This is how I think about the reality of universals. They are always true because the semantics that makes them true is part of the language. We must be aware of the semantics and stay within its bounds.

Simulations

I had not meant to be critical, but my normal speech is almost devoid of praise. In code reviews I make a point of saying what I like, in code by others, because getting praise out of me otherwise is unlikely. I had a coworker act like I was being effusive once when I said something was good, bringing it to my attention. When I say something is great, this is like getting "absolutely fabulous" out of some folks.

I like models as plans, not as truth, and I want plans to include a contingency for when a plan is wrong. (Some things can be handled, others not. Sometimes crashing is a good course.) I have met a lot of folks inclined to assume plans cannot fail, by definition, and this always seems at best confused. I once had a funny conversation with an architect, who said unicode strings were unable (by definition) to contain certain byte patterns that were illegal, even when received by i/o where corruption could occur. He didn't want to vet received content, because invalid state was defined impossible. Seemed kinda crazy to me.

In a less extreme way, some folks seem to hope code can be proved correct to the point where it's impossible for it to contain invalid bit patterns with respect to the model as designed, even when the process address space contains code with a nonzero chance of whacking memory locations at random, though with low probability. A refusal to evaluate the cost of being wrong always seems irrational, like closing your eyes to ignore danger. In the worst case, there's a risk of committing garbage to durable stores if no code is trying to find bad dataflow, so the incidence rate is unmeasured. I think engineering should always pay lip service to handling entropy, even when the abstract solution says there won't be any according to theory. (Every theory is wrong that says there will be no entropy in the form of creeping disorder.)

Sorry that meandered a bit. I think I was trying to make up for sounding negative.

Theory

OK, maybe we agree. Theories that are true by theoretical standards can certainly fail in practice. But theories can be very important. The whole of electrical engineering is based on circuit theory, but engineers know the limits of assuming that everything consists of discrete inductors, capacitors, and resistors. For example, radiation is not taken into account. That is what I meant by staying within the semantics.

The lack of any overall theory in Computer Science is conspicuous. I can't help but think a little theory of the right kind would be useful.

Tarski

The theory I am describing may sound Platonic because of the words "reality" or "ideal", but it is actually pragmatic because of the necessary theory of truth. The theory used in this kind of thinking is "Tarski's Truth Definitions". Tarski's truth is fundamental in algebra and Model Theory. It is also used throughout the book "Algebraic Structure Theory of Sequential Machines" by Hartmanis and Stearns. Another interesting read is "The Semantic Tradition from Kant to Carnap" by Alberto Coffa; Chapter 15 "The Road to Syntax" and Chapter 16 "Syntax and Truth". This is roughly a comparison of Carnap and Tarski.

maze of twisty passages all akin due to arcane labels

[Sorry, I'll restrain myself.]

Self control

my pre-internet cohort, who were interested mainly in weed, beer, cars, and dates. [etc, etc]

This sort of writing is really more suitable for your own blog or some other venue. The quote I picked is just an example, but you're doing a lot of this. Please consider restraining yourself.