Practical Principled FRP: Forget the past, change the future, FRPNow!

A new ICFP 15 paper by Ploeg and Claessen; abstract:

We present a new interface for practical Functional Reactive Programming (FRP) that (1) is close in spirit to the original FRP ideas,
(2) does not have the original space-leak problems, without using arrows or advanced types, and (3) provides a simple and expressive way for performing I/O actions from FRP code. We also provide a denotational semantics for this new interface, and a technique (using Kripke logical relations) for reasoning about which FRP functions may “forget their past”, i.e. which functions do not have an inherent space-leak. Finally, we show how we have implemented this interface as a Haskell library called FRPNow.

Fixed link.


Here is a link.

Here is a link.


Oops. Fixed.

LtU could really use some markdown.


Link is broken. You can obtain it on GitHub. And here's the GitHub repo for the library.

More seems to be broken

There are various ways of doing IO in a purely functional manner. One archaic approach is to use stream-processing functions: the whole program is a function/process of type I* -> O*, from a stream of inputs to a stream of outputs. Drawbacks are mentioned in the paper, and the approach is also inherently leaky.

Another 'archaic' approach is to explicitly pass the world state around; Clean probably still does this. I.e., you have functions of type I * World -> O * World. A problem there is that you can again retain references to the old world, so you need some form of uniqueness typing to make sure all old references are lost.

Then Wadler came along and proposed monadic IO (well, I think it was Wadler). The trick there is that you use an interface to something which can set up an 'action' and subsequently chain these actions together. By never explicitly mentioning that the monad runs on a world, I always found this cheats a bit, but hey, it works great. But fundamental to the IO monad is that you cannot observe the monad, otherwise you could peel out (parts of) the world state and remember them. Stated otherwise, you can see a top-level Haskell program of type IO a as a specification of a program which can be run on a stream, or equivalently on a world state; the run function is simply never mentioned.
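To make that reading concrete, here is a toy sketch (World is a hypothetical stand-in; this is not how GHC actually implements IO): the world is threaded through bind, but as long as the constructor and the run function are kept out of reach, user code can never extract it.

```haskell
-- A toy sketch of the "hidden world-passing" reading of monadic IO.
-- World is a hypothetical stand-in; this is NOT how GHC implements IO.
data World = World

newtype ToyIO a = ToyIO (World -> (a, World))

-- The "never mentioned" run function: conceptually only the runtime applies it.
runToyIO :: ToyIO a -> World -> (a, World)
runToyIO (ToyIO g) = g

instance Functor ToyIO where
  fmap f m = ToyIO (\w -> case runToyIO m w of (x, w') -> (f x, w'))

instance Applicative ToyIO where
  pure x = ToyIO (\w -> (x, w))
  mf <*> mx = ToyIO (\w ->
    case runToyIO mf w of
      (f, w1) -> case runToyIO mx w1 of
        (x, w2) -> (f x, w2))

instance Monad ToyIO where
  m >>= k = ToyIO (\w ->
    case runToyIO m w of
      (x, w1) -> runToyIO (k x) w1)
```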

CENTRAL TO MONADIC IO IS THAT YOU CANNOT OBSERVE THE IO MONAD.

However, and here stuff becomes fishy to me, the author states directly in the introduction that everything is implemented in three steps: (1) sampling behaviors, (2) a plan to execute stuff in the future, and (3) a function to schedule stuff in the future:

async :: IO a -> Now (Event a)

THE AUTHOR OBSERVES THE IO MONAD!

This solution smells. Unless some really fundamental stuff changed, or I never got it in the first place, I am certain the author left pure functional programming and proposes an impure solution.

EDIT: A likely manner of breaking this scheme is to simply schedule something twice. That shouldn't matter in a pure functional setting since you didn't manipulate the world state.

Short elaboration

Most USians here will not be accustomed to the concept of pure functional programming, but this problem is akin to the following old one.

Why can't/shouldn't I have a function 'readKey' with the following type in a pure functional program?

readKey: World -> Key a [1]

Makes sense in an impure functional language, right? You probably would even leave the World type out. But in a pure language you want to be explicit so you pass it the world state and it reads a key from that when you press it. Problem: in a pure/mathematical language the function readKey should always return the same result when given the same world state. So when you pass it the same world a second time it should give back exactly the same key. Or, stated otherwise, reading the key changed the world state and therefore you should have somehow chained those consecutive world states. (Which is what the IO monad does.)
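Spelled out in code (World and Key are hypothetical stand-ins, for illustration only):

```haskell
-- Hypothetical stand-in types, for illustration only.
data World = World
newtype Key = Key Char deriving (Eq, Show)

readKey :: World -> Key
readKey _ = Key 'x'              -- purity forces the result to depend only on the world

pressTwice :: World -> (Key, Key)
pressTwice w = (readKey w, readKey w)   -- the two components are necessarily equal

-- The fix: return the changed world and thread it explicitly,
-- which is what monadic IO ends up doing for you behind the scenes.
readKey' :: World -> (Key, World)
readKey' w = (Key 'x', w)        -- stand-in; a real version would consume input

twoKeys :: World -> ((Key, Key), World)
twoKeys w0 =
  let (k1, w1) = readKey' w0
      (k2, w2) = readKey' w1     -- the second read sees the changed world
  in ((k1, k2), w2)
```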

The proposed solution smells. Whenever you schedule something it should return _exactly_ the same event and it shouldn't matter whether you schedule something once, twice, or a trillion times.

It smells a lot.

[1] Edit: Of course, with the proposed solution Haskell likely becomes an impure lazy language, and you might as well scrap monadic IO and introduce readKey and friends on an impure World state. If you're gonna be impure anyway...

Anyway

I don't buy all the arguments, but it has been pointed out that the above definition of async isn't a primitive, as I assumed from the type; rather, the result type 'wraps' the IO monad again. Which might solve it, or not, but probably does.

Observing IO monad

If IO were represented as a free or operational monad, you could observe the monad, e.g. by transparently evaluating the IO program in a simulated file system. The reason IO is opaque and unobservable in Haskell is for compiled performance and optimization. Can't get bare metal performance from an operational monad.

Anyhow, 'Now' is just a wrapper for IO (via `type M = ReaderT Env IO` and `newtype Now a = Now { getNow :: M a }`). And even the `Event` type may contain IO. The `run` function translates from a Now plan back to IO. So the authors aren't observing IO from a pure computation. They are simply wrapping it in a framework.
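For illustration, here is a minimal sketch (not the paper's actual code; Event and Env are heavily simplified stand-ins) of how async can wrap an IO action rather than observe it:

```haskell
-- A minimal sketch of async-as-wrapping. Event is simplified here to
-- "an IO action that blocks until the value is available"; Env is empty.
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Reader (ReaderT)

data Env = Env

newtype Now a = Now { getNow :: ReaderT Env IO a }

newtype Event a = Event (IO a)   -- simplification: "wait for the value"

async :: IO a -> Now (Event a)
async act = Now . liftIO $ do
  box <- newEmptyMVar
  _   <- forkIO (act >>= putMVar box)   -- run the action concurrently
  pure (Event (readMVar box))           -- the event 'fires' once it's done
```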

Hogwash

The whole reason the IO monad is unobservable is that otherwise it would need to remember the world state.

I Agree

One of the fundamental properties of a monad is that you can put things in, but never get them out.

Something you can get stuff out from, but never put them in is a Co-monad.

So if you combine a monad and a Co-monad, so that you have something you can put things in and get them out of, what do you get? Well, if the Co-monad is the dual of the monad, I can only think of two sensible answers: "nothing at all", or the identity.
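For reference, the two shapes being contrasted (standard definitions; primed names used here only to avoid clashing with the Prelude):

```haskell
class Functor m => Monad' m where
  return' :: a -> m a                    -- put things in
  bind'   :: m a -> (a -> m b) -> m b

class Functor w => Comonad' w where
  extract' :: w a -> a                   -- get things out
  extend'  :: (w a -> b) -> w a -> w b
```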

Nonsensical beasts

Yeah, well. We don't know whether duality actually exists in category theory; it is simply assumed. Neither has anyone ever given any formal underpinning for category theory, so to me that's just another load of ****.

But feel free to disagree. I'll just wait until a) someone proves me wrong, or b) someone comes along and shows that duality is an ill-conceived subject. Not that that will matter either way, I guess.

Monad properties

If all you know is that you have a monad, then you can't get anything out of it.

But if you know instead that you have a list or maybe or identity type, all of which are monads, then you can make various observations on these things. Operational monads and free monads are generally observable.

IO is a specific type. Like a list or maybe. There is no fundamental reason it cannot both be a monad and be observable. The choice between opaque and observable is thus not fundamental. If IO were implemented as an operational monad, it would still run. It would just run one or two orders of magnitude slower.
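For example, here is a hedged sketch of IO as an operational monad with a made-up two-instruction effect set. The program is ordinary data, and both a real interpreter and a pure, observing interpreter can be written for it:

```haskell
{-# LANGUAGE GADTs #-}
-- A sketch of IO as an operational monad (this is not GHC's IO): the
-- program is an observable data structure, and running it (against the
-- real world or against a pure simulation) is a separate step.
data OpIO a where
  Pure    :: a -> OpIO a
  Bind    :: OpIO a -> (a -> OpIO b) -> OpIO b
  GetLine :: OpIO String
  PutLine :: String -> OpIO ()

instance Functor OpIO where
  fmap f m = Bind m (Pure . f)

instance Applicative OpIO where
  pure = Pure
  mf <*> mx = Bind mf (\f -> Bind mx (Pure . f))

instance Monad OpIO where
  (>>=) = Bind

-- One interpreter runs the program against the real world...
runReal :: OpIO a -> IO a
runReal (Pure x)    = return x
runReal (Bind m k)  = runReal m >>= runReal . k
runReal GetLine     = getLine
runReal (PutLine s) = putStrLn s

-- ...another observes it purely: (result, leftover input, produced output).
runPure :: [String] -> OpIO a -> (a, [String], [String])
runPure ins (Pure x)     = (x, ins, [])
runPure ins (Bind m k)   =
  case runPure ins m of
    (x, ins', out1) -> case runPure ins' (k x) of
      (y, ins'', out2) -> (y, ins'', out1 ++ out2)
runPure (i:ins) GetLine  = (i, ins, [])
runPure []      GetLine  = ("", [], [])          -- arbitrary when input runs out
runPure ins (PutLine s)  = ((), ins, [s])

-- e.g. runPure ["hello"] (GetLine >>= PutLine)  ==  ((), [], ["hello"])
```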

Not buying

I find the stuff they put into kids' heads these days annoying. No, you mathematically cannot have objects which lack either constructors or deconstructors. It simply violates basic mathematical assumptions such as Leibniz equality.

Unless you got taught by a category theorist. And he was either a) unaware of that, b) sloppy, or c) just another fool.

Feel free to buy into that theory but I don't.

Wtf?

I'm not selling "objects which lack either constructors or deconstructors".

I feel like you just attacked a straw man, but I can't even find a resemblance to my assertions.

Well then don't buy into everything somebody tells you

I am sorry, but a mathematical object without observers, or deconstructors, or whatever you would like to call them, cannot exist, for it would not allow simple Leibniz equality.

Maybe you are unaware of it but the mathematical underpinning of category theory is fishy, to say the least.

I once asked a category theorist what equality he assumed (since it apparently wasn't Leibniz equality). I only got a blank stare back. I am not buying it. It seems useful to people; that I agree with. But that people actually know what they are doing, as in the case of the IO monad? Nah.

??

All the types I've mentioned (list, maybe, identity, free monads, operational monads) have concrete realizations with both observers and constructors. As far as I can tell, your objection above has nothing to do with me.

Misunderstanding?

There is no fundamental reason it cannot both be a monad and be observable.

There is. If it were observable, it would need to remember the world state.

Not accurate

There are multiple ways to implement a monadic IO type. Some of them do involve world passing (and this is how Haskell does it, passing a logical '#RealWorld' token). But some other implementations (such as operational or free monads, or coding IO against generic interfaces) don't require the monad ever receive, wrap, or pass the world, not even as a token. It's the latter approach that allows for observable IO.

Put another way: you're right that we must not "remember" the world, you're wrong that IO being observable requires "remembering" the world.

If you can observe it once, you can observe it twice

Once you get to the ability to observe the world you can do it again.

I am thoroughly unconvinced by your free monad idea and will only believe you if you provide an example.

Not buying.

Irrelevant

Operational IO doesn't even observe the world once. That's what I mean by the "don't ever receive the world" sentence. Direct observation of the real world is separated entirely from the monad, moved to the monad's interpreter, the run function. (It's analogous to operating in a sandbox.)

It is possible to observe any operational monad by providing alternative run functions for simulated worlds. And your earlier claim, the one I rejected, specifically regards observing the IO monad, the `IO a` value. The fact that you've conflated observing the program with observing the world is where your error lies.

I'm not going to educate you on free or operational monads. There are plenty of good tutorials out there. I've pointed you more than once to Oleg's free and freer monads, but afaict you've never even tried to grok them.

Agreed

Well. I am not going to invest more time into it, that's for sure.

But, regarding that last remark: this can come back to haunt you. From a programmer's perspective, it makes sense to differentiate between types which have an arrow coming out with an observable value, and types which have an arrow coming out because the value passed in is being manipulated.

Whereas, in math, the 'set-theoretical' kind, it's all the same, right? Any arrow out states 'this value gives an observable value back.'

Looks to me the coin could fall either way, the programmer's view is correct, the math view is correct, or both are just different views.

But, as I stated, category theory has a loose base. This could come back to haunt you, eventually. But, admittedly, chances are low.

Actually

Has there ever been anything written about the assumed equality within category theory? I think most category theorists will shy away from the question, since 'nothing is bloody wrong with it' and 'I earned my PhD doing fundamental crisp math!', but if you really want to do any formal work with it, it needs to be approached head-on.

Needs an answer.

Using and understanding

Using and understanding monads doesn't require any knowledge of category theory. I certainly won't pretend much knowledge of the subject. Monads by another name might have smelled sweeter.

This is not about monads

This is about category theory. Like I said, category theory doesn't have a fundament; it's not called abstract nonsense for nothing.

That's not necessarily a bad thing; ordinary math doesn't have a fundament either. Most people simply assume some set-theoretical footing and it just-works^tm. Honestly, it even works so well that I am not sure what is going on there, if you understand the most fundamental objections (which I don't).

But like I said, category theory simply assumes some stuff, some of which I think goes straight against ordinary math. Like the assumption that objects can exist without observers, which is odd because that goes straight against Leibniz equality, I think.

I've been out of the game for too long to really understand it, but my best guess is that things like these are why that latest Coq endeavour to formalize all math failed (if I got that right). Lots of hidden assumptions in an informal model people thought was robust enough to base math upon. I don't think it is.

Context and venue

My meaning was this: even if category theory has some flaws (I don't know nearly enough to form an opinion), that doesn't imply those flaws are part of every abstraction that comes out of or was inspired by category theory. Even if you somehow prove category theory sucks, you can't assert much about monads or formal duality or whatever. So it isn't clear to me what you hope to gain by mentioning your doubts in this context.

If your goal truly is an informed discussion on category theory and its potential flaws, I suggest seeking an active mailing list on the subject. There is certainly some relationship to homotopy type theory, which should still be a hott topic in mathematics today.

Not that interested

Oh god, no. I am not that interested. I have some opinions on the subject from which I concluded I don't want to know too much about it and that's about it. I'll gladly leave the hard work of figuring out 'what it all means' to others. They're prone to be better at it than me anyway.

By the way

I fully agree with your notion that possible inherent flaws need not make inspired abstractions flawed. That's exactly what I mean with it-just-works^tm.

If we know anything from math it probably is that loose and sloppy reasoning usually works, I guess.

But I have a background in formal reasoning and software engineering. And there it's often the opposite. Any sloppiness comes back to wreak havoc in terms of bugs and security flaws.

So well. There's that too.

Equality

I struggle with category theory, it's too abstract for me to get a handle on, but with bursts of reading separated by long periods of ignoring it, some things start to make sense.

I don't think equality exists in category theory. Take a category like "addition over natural numbers", what does it mean for two sums to be equal? "1 + 3" is not the same sum as "2 + 2". We have no good equality for functions.

I think what you have in category theory is identity, we can say a thing maps to itself, but there is no equality (or notion of equivalence). What are two equivalent categories? They are either the same (like an endofunctor maps a category to itself) or they are not.

It's like if in programming you can only compare the location of two variables. You cannot ask if one variable represents the same number as another variable (2 == 2), but we can ask if it is the same "2", because we can compare the location. So we have identity without equality.

Of course there is a notion of equality

No, there is a notion of equality. If you postulate that the composition of two arrows maps into the same object, you need to have a means of deciding what 'the same' is.

But the notion is informal, and I don't think people think a lot about it. They compose charts and when they feel stuff should be the same they simply assume it is.

But I believe they use another notion than what is standard. But maybe I am a fool.*

*That's of course always true.

(Actually, from what I remember from twenty years ago, there seemed to be a number of competing notions: standard Leibniz-like equality, equivalence, and even a hint that things may be 'unobservable'.)

EDIT: I guesstimate that a category theorist will mumble something about that they only use a notion of observable equivalence and that it's just 'turtles all the way down'.

I read it a bit

I read it a bit. But it isn't that it's too abstract. I find it an informal mess, and I can't get beyond the point where I start picking it apart and all the 'learning' stops. It's for handwavy people, and apparently I am not one.

Intensional vs extensional equality

The "1+3 vs "2+2" difference is just the difference between intensional and extensional equality. I am pretty sure different people will assume either one in different settings, whatever is necessary.

It's just an informalism so you can do whatever you want, right? And then that happens to-work-out^tm most of the time.

Okay, apart from the above. An oddity

Isn't it a bit odd to differentiate between stuff we agree are observers and stuff we agree are 'wrappers'?

List is not a Monad

A monad is an abstraction of operations (functions over objects), not of the objects themselves.

So a monadic operation is one which never looks at what is in the box. In computer science terms it is an interface not an object.

So it does not make sense to say a list is a monad. It's simply looking at it the wrong way.

So a list can be observable, but it is not a monad. The bind and unit operators defined on a list are a monad. As such any function using _only_ the monad interface cannot look inside the box.

The type signature "IO a" provides some interfaces, one of which is a monad, it could also provide a comonad interface.

The magic of IO in Haskell is that we observe a strict ordering of constructors and destructors even though function application is lazy. I think this is the real encoding of IO; Monads are in the end just a distraction.

This is why I prefer the following type signature for IO:

f : (Monad m, MonadIO m) => m a

Where Monad provides unit and bind, and MonadIO provides the IO functions.

I guess this is agreeing with you in the end, even though the application of terminology seems a bit off.

Define "isa"

Meh. A list is an instance of the monad abstraction. In this sense, it has the same status as IO. And State. And every other monad instance.

I don't think anyone would confuse "list is a monad" with "list is the monad abstraction or interface" unless they were attempting to win an argument through sophistry. :)

In practice, we never just use the return and bind interfaces of any monad instance.

List is not the monad abstraction

List is not the monad abstraction; a monad is an abstraction of an interface and has nothing to do with lists. A list is a list, and it provides all the operations lists do. This is not just sophistry, but an important difference between:

List a

and

Monad m => m a

The difference is a function signature that operates on something like:

f : Monad m => m a -> m a

Such a function can only access the monad operators, unit and bind. All algorithms should be written in terms of their requirements (that is, they require a monad, monad plus, or whatever). There should not really be any occurrences of concrete types like List outside of definitions.

This makes it clear a monad is an interface specified by an algorithm. A list provides a monad interface, but it is not what we mean when we say a function is in the 'X' monad. Note it is the function that is in the monad, not the list. A monad is a classification of functions, not of objects. In the above, it is the function 'f' that is in the monad 'm' because its return type is 'm a'; however, it could be passed a List or many other types of object.

Sigh.

Your opinion about how code "should" be written (generically, with minimal requirements) is noted. It's even justifiable, not a bad way to write code. But it's also irrelevant. It is entirely possible to write code in concrete monads, like IO or list, even if that's not your preference. Many programmers do so. And despite your assertion, it is normal to say the monad abstracts the object (list or IO or generic 'm'), not the function using the object.

Endofunctors

A monad is an endofunctor, that is, something that maps a category to itself. A category is functions over objects. So given a function on Lists (f : List a), this is the object of the monad category (it is a functor from a category to the same category), so it would be an operation like (List a -> List a). The members of 'Monad' are functions on Lists.

The explanation using functions with requirements makes the category-theoretic definition easier to understand in a programming context: it is the functions on an object that are the members of the monad category, not the object itself. So List is not abstracted by monad; functions on lists are abstracted by monad.

A list is not a monad.
Composition of lists is a monad.

There can be many 'monads' for a List, composition is just one. In your analysis you assume composition is the only operation on lists.
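For the record, the 'composition' in question is just the standard list monad written out: return and bind for lists, built from concat and map.

```haskell
-- The standard list monad, spelling out the "composition" in question.
returnList :: a -> [a]
returnList x = [x]

bindList :: [a] -> (a -> [b]) -> [b]
bindList xs f = concat (map f xs)

-- e.g. bindList [1, 2, 3] (\x -> [x, 10 * x])  ==  [1, 10, 2, 20, 3, 30]
```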

Meh.

There is more than one way to abstract lists as monads, just as there is more than one way to view integers as rings. Yet, there is only one common way and most people easily say that integers are a ring, list is a monad. While you might be miffed by the imprecision in natural language, I believe you still understand the meaning. Selectively pretending otherwise for the sake of arguing definitions isn't worth my time or respect.

Now, you're bending your argument towards what "abstracted" means. In programming, when we view or act upon an object through an interface, we usually say the interface abstracts the object. Not the function using the interface. Perhaps a category theorist might say it differently, but you aren't talking to category theorists on this site. Accepting the monad abstraction into programming is already a difficult step. I don't expect endofunctors and other category theory jargon to make much headway.

It isn't unusual to say a car "is" red, even though the concept of red has little or nothing to do with cars. When you earlier argued monads have "nothing to do with lists", the logic you used would also reject that the car is red. "A car is a car and it provides all the operations cars do," you'd argue, as if "is red" means programmers might need to drive the car via some ridiculous `(Red thing) =>` abstraction. Yet, unless you're sincerely confused, that argument just intentionally misconstrues meaning. It's just sophistry.

Do you have an actual point beyond arguing definitions? Because I'm happy to say that a list is a monad, using a common way of saying such things. And AFAICT, everyone who understands monads in programming (even you) will grok my meaning. Thus accomplishing the point of using a few sloppy natural language words instead of code to communicate.

A monad is not a concrete type.

So my point was, what you were saying is "if you have a list and not a monad, you can use all the functions available on lists", which whilst true, is trivial.

I agree IO is a specific type, it is not a monad, but sequencing IO is a monad. I think this also makes it clear that the 'magic' side effects happen in constructors not in Monads. So I agree with what you said, I just found the explanation confusing. This is either because I had some misunderstanding, or you did. The result of the clarification seems to be that I did not have a misunderstanding, but by your own admission you were just being sloppy with language, which is fine, and a good conclusion.

A final thought is, is it a category error to conflate a list (an object) with a monad (an interface)?

Concrete monads

Consider: We could model "list" as an interface (a class providing cons, empty, etc.). Doing so wouldn't change any properties of the list, but might allow us to add more properties to the object. And support ad-hoc, efficient representations.

There is conversely a concrete form for monads. It's the operational monad. A data structure that by construction provides exactly what is needed for a monad, and nothing more. Oleg's paper on free and freer monads describes this. The operational monad is the freer monad.
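Concretely, that shape looks something like this (a compact sketch with illustrative names, following the construction in Oleg's freer-monads work):

```haskell
{-# LANGUAGE GADTs #-}
-- A sketch of the freer/operational monad as a concrete data structure,
-- parameterised by an instruction set f.
data Freer f a where
  Ret :: a -> Freer f a
  Op  :: f x -> (x -> Freer f a) -> Freer f a

instance Functor (Freer f) where
  fmap g (Ret a)  = Ret (g a)
  fmap g (Op i k) = Op i (fmap g . k)

instance Applicative (Freer f) where
  pure = Ret
  Ret g  <*> m = fmap g m
  Op i k <*> m = Op i (\x -> k x <*> m)

instance Monad (Freer f) where
  Ret a  >>= g = g a
  Op i k >>= g = Op i (\x -> k x >>= g)
```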

It has been argued that, if monads were provided in their concrete form rather than as a type class interface, they'd be a lot easier for humans to grok, test, use and reuse. They would hinder some optimizations, however, much like working with a concrete list hinders transparent use of array fragments or logical reversal and append under the hood. The convention of representing monads as an interface and lists as a concrete data structure is ultimately just that, a convention that must be judged by various utility and fitness metrics.

Further, if you start working with lenses and such, concrete types clearly have concrete interfaces. Similarly, if you work with existential types in Haskell or boxed traits in Rust, the type class or trait will have a concrete runtime representation - a vtable. It is not wrong or unreasonable to consider class or trait 'interfaces' to be concrete parameters that are subject to constant propagation optimization where feasible.

Ultimately, I feel the line between "concrete type" and "interface" is rather blurry. Not something to make a big deal over or found an argument upon. It's more an artifact of the often arbitrary lines or boundaries upon which we discretely reify abstractions or bundle data. Not just in this context, but also for things like object models, relational tables and normalization, distribution of data resources across services, gathering and refinement of data into a report, representation versus data type versus presentation. It's all very fractal, and from ten steps back it's all very fuzzy.

Aside: Regarding your interpretation of my words as "if you have a list and not a monad", to me that's nonsense. The `instance Monad List` is formal proof that, if you have a list, you also have a monad. Programs are constructive proofs. There may be alternative constructions, but you clearly have at least one. You logically cannot "have a list and not a monad". Even if list were an interface, we would have a monad via some variant of `instance (List o) => Monad o`.

Concrete Monad

I meant if you have a concrete list, and not a concrete monad. A concrete monad is not a list.

cracked concrete

My meaning is that the entire concrete/interface distinction you're founding your argument upon is not very solid.

If your argument doesn't work just as well for concrete lists as it does for operating upon an object through a list interface (and it isn't just a point regarding flexible representation) it's almost certainly not a valid argument. Similar for arguments regarding a concrete operational monad vs. a monad interface. The essential properties of a list or monad can be achieved either way.

Related to your earlier question: it's not a category error to compare properties of objects expressed concretely vs. properties of objects required/implied by an interface. It's ultimately the same set of properties.

All backwards

You are still thinking backwards: if you have an object like a list, you do not have an interface, because you can call any method on the object. Interfaces are useful because they provide abstraction. If you have knowledge about the type of the object providing the interface, you have broken the abstraction.

My view is your conflation of objects and interfaces leads to a muddied understanding and you are missing some important things, but I get that you don't care and want to conflate them so perhaps we just won't agree.

I agree with David here

No, sorry, Keean. You're trying to make precise something no mathematician really bothers about. A monad is just anything with a return and a bind; it's not more difficult than that.

Haskell makes that explicit so you call that an interface and make an explicit split between that and a concrete type but no mathematician will.

Stepanov

I think you will find Stepanov is a mathematician, and he makes exactly the distinction between concrete and interface.


That looks like a cherry-picked argument to me.

No true Scotsman.

A single counter example is all it takes to prove something false. Your argument has now entered "no true Scotsman" territory :-)

Brits like tea

If I say Brits like tea, are you going to nitpick too?

No true Brit.

No true Brit doesn't like tea. See "No true Scotsman" fallacy above.

Have it your way

Haskell makes that explicit so you call that an interface and make an explicit split between that and a concrete type but, IN GENERAL, no mathematician will.

What is a general mathematician?

I already pointed out that statement is false. I might accept "most mathematicians will not differentiate"; however, I don't think we actually have the statistics to back up even that statement. All we are left with is your anecdotal evidence.

I am not even sure that you don't have it backwards: every mathematician uses interfaces as functions are not object properties in mathematics. So in category theory we talk about objects and morphisms. If mathematicians do not differentiate, is it simply because they don't have objects that contain methods?

Further, number theory and abstract algebra are all about interfaces. A group, a ring, a monoid, etc., are all "mathematical" interfaces. In Stepanov's book "From Mathematics to Generic Programming" he charts the history of interfaces, starting with the Rhind Papyrus and Egyptian multiplication in 3000BC, through number theory, and ending up with iterators.

Sigh.

You can't call "any" method on a list. Even a concrete list has a limited interface - a very few constructors and deconstructors. It's exactly the minimal 'interface' for a list. Any generic list interface would necessarily provide at least that. Same holds for concrete operational monads vs. programming against generic monad interfaces - i.e. we can get a minimal monadic interface from a concrete object.

As far as "backwards thinking" goes, you got involved in this discussion by "agreeing" you can't get things out of IO specifically because you can't get things out of a monad in general. You attempted to turn the argument backwards from the start, reframing it in terms of operating on generic interfaces instead of observing concrete IO values. My analogy to pulling things out of a list - a specific concrete object having a monadic interface, just like IO - is vastly more faithful to the original argument (though it unfortunately didn't explain interaction with a world model, which led to some confusion). Ultimately, it has nothing to do with the nature of monads that Haskell IO is opaque.

Even if we were writing using interfaces as you suggest - e.g. if our IO program were expressed using `(Monad m, FilesystemIO m, NetworkIO m, Threading m, etc...) => m ()`, we could still "observe IO" by instantiating the interfaces for a simulated world or sandbox then running the IO program and simulating or intercepting the IO operations. We could just as easily instantiate these interfaces to a concrete operational monad. Thus, using an interface instead of a concrete representation doesn't do anything whatsoever to reduce observability of IO.

In some sense true.

What you say is true in some sense because we can observe the side effects. It is also in some sense false, because the functional program cannot observe these side effects.

If we really look at the mechanics in detail, what we have is a functional program that composes imperative program fragments into a complete imperative program that is then run when the IO monad is deconstructed. In other words the functional program cannot observe anything about the IO because it has already finished beta reduction before the first side-effect occurs.

This is in effect what is meant when we say you can put stuff in but never take it out. The functional program can put imperative program fragments into the IO monad, but there is no way within the language to see what fragments have been put in. This is fundamental because beta-reduction has no deterministic order, so if side effects were observable to the functional program we would have some kind of unsoundness, as the order of beta reduction would change the order of observed side-effects.

Irrelevant side effects

The original question is whether we can observe an 'IO a' value. Not whether we can observe side effects. The two conditions aren't inherently coupled. Effects are a consequence of running the IO program in the 'real world', but we don't need to assume every runIO function is strongly coupled to the real world.

If IO is modeled as a free or operational monad, or as a set of type class interfaces like I described last post, we gain the ability to transparently run IO and simulate the effects in a purely functional virtual world (e.g. where a Filesystem might be represented by a simple map of names to binary values). That would allow us to observe the behavior of an 'IO a' value in multiple initial world states, analogous to how we observe a function by providing a variety of arguments. No effects needed.
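As a sketch of that idea (class and function names here are made up for illustration), the same program can be run against the real filesystem or against a purely functional map from names to contents:

```haskell
{-# LANGUAGE FlexibleInstances #-}
-- IO against an interface: one program, two worlds.
import qualified Data.Map as Map
import Control.Monad.State (State, gets, modify, runState)

class Monad m => FilesystemIO m where
  readFileFS  :: FilePath -> m String
  writeFileFS :: FilePath -> String -> m ()

-- The real-world instance: actual effects.
instance FilesystemIO IO where
  readFileFS  = readFile
  writeFileFS = writeFile

-- The simulated instance: a purely functional "filesystem".
type SimFS = Map.Map FilePath String

instance FilesystemIO (State SimFS) where
  readFileFS path    = gets (Map.findWithDefault "" path)
  writeFileFS path s = modify (Map.insert path s)

copy :: FilesystemIO m => FilePath -> FilePath -> m ()
copy from to = readFileFS from >>= writeFileFS to

-- Observing the program's behavior in a simulated initial world state.
observeCopy :: ((), SimFS)
observeCopy = runState (copy "a.txt" "b.txt")
                       (Map.fromList [("a.txt", "hello")])
```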

Not IO

What you are saying appears to be that if we replace the world with a simulator that records its entire history with a replay facility, then it becomes observable. I concede this is true, but the program cannot assume it is running on such a simulator, or it becomes useless (except for debugging).

Consider a rocket guidance control system that observes IO as part of its operation. It would probably fail spectacularly as its world model diverged from reality.

So I cannot write any useful program if I make that assumption, because my program can never run in the real world, it can only run in the simulator.

In effect what you have described is like the State Monad which models state purely functionally, not the IO monad which interacts with the outside world. You can do a lot of things if you just redefine what they are :-)

Utility

It is true that a program cannot assume itself to be running in a virtual world or sandbox. It doesn't know. But what a program can do is shove an arbitrary subprogram into a virtual world or sandbox, see how it behaves. Similarly, the program might be transparently subordinated to another. There is a lot of potential utility for security, testing, debugging, program search (e.g. programming by example, genetic programming), program composition (e.g. applets, embeddings, virtual machines, running three versions of a program to get a consistent answer), and so on.

Regarding a relationship to a State monad: for obvious reasons, we cannot "get" the whole world state as a value, much less "put" an updated value and overwrite whole world state. An IO monad must operate upon an effects API that can be implemented in the real world. (And afaict that's the only strong requirement for an IO monad. There certainly is no "must not be virtualizable" requirement.)

Subprograms

I think I can live with those use cases. However, I would point out that IO devices like disks have concrete properties (storage capacity, seek time, transfer rate) that the simulator cannot always match. Programs save files to disk to survive reboot and because data cannot fit in RAM. You can't just simulate that without spending a lot of money. So in practical terms my program that does file IO cannot have that IO replaced with a functional simulator. I concede it could for testing and debugging purposes, but it would not be able to cope with the same size datasets, nor give the same performance or permanence. You could probably get away with any two (simulate permanence and size but not performance, etc.).

HW simulation, performance

There are certainly use cases for hardware layer simulation. Game system emulators are probably my favorite. Testing software behavior on various phones is something I've heard mention of. It's also feasible to rigorously and reproducibly test behavior under worst-case hardware races, crash recovery, overheating, or other low level conditions that are rare in practice. Etc..

I agree that performance suffers with virtualization. That's why I said, in my first comment of this discussion, that "The reason IO is opaque and unobservable in Haskell is for compiled performance and optimization. Can't get bare metal performance from an operational monad." Performance. Nothing to do with the nature of monads or interfaces.

So it seems you've finally reached the conclusion I provided at the start. It's a bit funny to see you use it as a "however" counterpoint. :D

Agreement

I did say back at the start that I agreed with your conclusion, but not the use of terminology. So the end is not that surprising: we agreed to differ on terminology, so all that remained is the agreement on the conclusion. Perhaps that 'however' was a bit unnecessary, but it's easy to forget where you started when a thread is spread over several days :-)

That's the origin of 'monadic' programming

Before you could make certain functional patterns explicit through interfaces or type classes in Haskell, people were already programming monadically. I did it too in the bootstrap of the defunct Hi compiler. It just naturally arises if you spend some time manipulating data structures (though I forget why). I actually assume Wadler once just recognized it as a functional pattern and restated it in terms of category theory.

Not accurate

The *monad* doesn't *need* to remember state. That responsibility can easily be shunted to a `run` function. Like the operational variant of State, where Get and Put are actions to be 'handled' rather than a state value being carried along via the bind function.
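A minimal sketch of that operational State (illustrative names): Get and Put are mere instructions, bind never touches the state, and only the run function threads it.

```haskell
{-# LANGUAGE GADTs #-}
data StateOp s a where
  Get :: StateOp s s
  Put :: s -> StateOp s ()

data Prog s a where
  Done :: a -> Prog s a
  Step :: StateOp s x -> (x -> Prog s a) -> Prog s a

bindP :: Prog s a -> (a -> Prog s b) -> Prog s b
bindP (Done a)   g = g a
bindP (Step i k) g = Step i (\x -> bindP (k x) g)   -- no state in sight

-- The state value only ever appears here, in the run function.
runProg :: Prog s a -> s -> (a, s)
runProg (Done a)          s = (a, s)
runProg (Step Get k)      s = runProg (k s) s
runProg (Step (Put s') k) _ = runProg (k ()) s'
```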

I also agree

Yes, if Now is a monad transformer stack that is based on IO, then nothing is really leaving IO, and everything is okay.

Yah, agree with that

So it wraps, right? You could do that, I agree.

An approach...

... I plan to use to accomplish FRP is:

    +---------------------+
    |                     |
    |                     ▼
  output                input
+-------+           +------------+
| world |           | FRP system |
+-------+           +------------+
  input                 output
    ▲                     |
    |                     |
    +---------------------+

The world changes in cycles, causing the FRP system to propagate the next world change, and the process repeats circularly. That way the FRP system remains pure, while the world maintains a complex state system, reflecting side effects to itself through each cyclic FRP system pass.
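In code, the cycle might look roughly like this (types are hypothetical stand-ins; the FRP system is the pure function, the world step is the impure part):

```haskell
-- A minimal sketch of the cyclic driver described above.
newtype Input  = Input  String
newtype Output = Output String

frpSystem :: Input -> Output            -- the pure part
frpSystem (Input s) = Output ("echo: " ++ s)

worldStep :: Output -> IO Input         -- the impure part: apply effects, observe
worldStep (Output s) = putStrLn s >> fmap Input getLine

loop :: Input -> IO ()
loop i = worldStep (frpSystem i) >>= loop
```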