Nesting imperative into functional code

Have you ever tried to do that? Nesting functional code into imperative code is easy: you just do the FP magic when you call the function. But how do you do it the other way around?

Assume that you have functional code as a base and some imperative code here and there, maybe even stored as strings in variables. Now you have to call something like JavaScript's "eval" when you want to run the imperative code. But when should you do it? Note that imperative code doesn't have to be in the form of a function that returns some value. It can set up event actions or mess around with some shared memory state.

When should such imperative processes be called? I guess some event would be the trigger, but which one?

In fact I'm working on a functional language that can easily translate any abstract syntax tree (AST) into another AST. The language is interpreted inside JavaScript (maybe I'll try to do a native version in the future), and while interpreting it at runtime I want to occasionally translate an arbitrary imperative language's AST into a JavaScript AST (or an assembler AST in the future) and execute it. The problem is how and when to trigger this execution. Maybe someone could help me with this problem?

Where's the problem?

Is the problem that you're going for a lazy reduction strategy? Eager functional languages (LISP, ML) really don't have any problem calling imperative code.

It's an eager strategy

It's an eager strategy.

I got the problem explained in chapter XY of a Haskell tutorial. It can be done by IO actions stitched, via do syntax, into the "main" action.
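
For reference, a minimal illustration of that approach (a toy example of mine, not taken from the tutorial):

main :: IO ()
main = do
  line <- getLine                   -- one imperative step
  putStrLn ("you said: " ++ line)   -- the next step, sequenced after it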

I wonder if someone has a better way to do it? Somehow I don't like this solution.

Yeah well

Haskell has a lazy reduction strategy. Therefore, for humans, it is unpredictable. To solve this unpredictability, they linearize the interaction with the imperative code through use of the IO monad.

Lisp/ML are eager and have predictable behavior.

You read the wrong manual.

Shell pipelines

Shell pipelines are functional (though the shell language as a whole has imperative features), and they really don't care whether the pipeline components are functional, imperative, logical, or what.

How so?

Shell pipelines are modelled well by Kahn Process Networks. Are KPNs functional, or are they a distinct model of control flow?

The pipeline is watching for events on file handles and performing some callback in response, writing data into the next program. The system is pumped by the OS scheduler, which runs parts of each program concurrently.

Yes, that's how it's implemented

But a pipeline seen through an FP lens (not in the technical sense of "lens") is just a series of nested function calls whose type is Byte Stream -> Byte Stream (or String -> Byte Stream -> Byte Stream if the component takes one argument, and so on). So "who | wc" is just wc $ who.
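
For instance, a toy Haskell rendering (who and wc here are made-up stand-ins for the real programs, with String standing in for Byte Stream):

-- pretend output of `who`
who :: String
who = "alice tty1\nbob tty2\n"

-- stand-in for `wc -l`: count the lines
wc :: String -> String
wc = show . length . lines

main :: IO ()
main = putStrLn (wc who)   -- "who | wc" as plain function application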

Purity

Well, roughly... if both commands are pure. But of course those are arbitrary event loops that could interact with one another outside of the pipe.

Effectful Pipelines

Interestingly, most 'pipeline' models in FP (conduits, iteratees, etc.) have a slot for awaiting external requests (monadic effects). Rúnar Bjarnason and Edward Kmett provide a nice summary of pipeline models while describing an even nicer model for networks of stream transducers called machines.

Preemptive vs Cooperative

The model of a machine looks like a cooperative form of multi-tasking (i.e. the choice to yield during an incremental computation is explicit). I cannot recall any specific results, but would this model the same set of computations as a model that preemptively interrupts the computation inside the machine?

Imperative code is easily

Imperative code is easily described by a function or datatype that occasionally makes a request and awaits feedback. For example (in GADT syntax, so each Await's result type x stays existential):

data Step a where
  Return :: a -> Step a
  Await  :: Request x -> (x -> Step a) -> Step a

We can generalize for different types of 'Requests'. Your "something like JavaScript eval" would just be one type of request. :)

data Step rq a where
  Return :: a -> Step rq a
  Await  :: rq x -> (x -> Step rq a) -> Step rq a

We can add features like failure, or yielding intermediate outputs like a Python generator.

data Step rq o a where
  Return :: a -> Step rq o a
  Await  :: rq x -> (x -> Step rq o a) -> Step rq o a
  Yield  :: o -> Step rq o a -> Step rq o a
  Fail   :: Step rq o a

The pattern is common enough that functional programmers regularly use imperative syntactic sugar to describe programs that desugar into some variant of 'Step' and compositions thereof. For example, the `do` notation in Haskell. A program represented this way can be run in a pure simulation environment, or against real-world inputs and outputs. Haskell's IO is run this way by a standard compiler-provided program.
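
To make that concrete, here is one way the desugaring target and a pure runner might look (a sketch of my own; Eval, demo, and runPure are illustrative names, and the two-parameter Step is repeated so the snippet stands alone):

{-# LANGUAGE GADTs #-}
import Control.Monad (ap, liftM)

data Step rq a where
  Return :: a -> Step rq a
  Await  :: rq x -> (x -> Step rq a) -> Step rq a

instance Functor (Step rq) where fmap = liftM
instance Applicative (Step rq) where { pure = Return; (<*>) = ap }
instance Monad (Step rq) where
  Return a  >>= f = f a
  Await q k >>= f = Await q (\x -> k x >>= f)

-- one kind of request: evaluate a code string, receive a string back
data Eval x where
  EvalJS :: String -> Eval String

-- `do` notation reads imperatively but only builds a pure value
demo :: Step Eval String
demo = do
  r <- Await (EvalJS "6*7") Return
  pure ("result: " ++ r)

-- a pure "simulation environment": answer every request with a stub
runPure :: Step Eval a -> a
runPure (Return a)           = a
runPure (Await (EvalJS s) k) = runPure (k ("<evaluated " ++ s ++ ">"))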

The programmer experience depends primarily on program interface (syntax, in this case) and local reasoning abilities. So a Haskell programmer using `do` notation in a scenario where results aren't locally predictable - whether it be real-world IO or a pure simulation of a message-passing system - is doing imperative programming. The imperative style has been embedded in a purely functional model. Without a decent syntax, it wouldn't work out. For an authentic imperative experience, our programs need to read to the human as a series of steps with occasional requests. "Programming" is a human activity. "Imperative programming" simply describes a common paradigm by which we communicate, view, comprehend, and implement programs.

I disagree with marco's position. Simply calling imperative code during function evaluation is not embedding imperative in functional. It's poisoning functional with imperative. You lose the ability to reason functionally about modules you don't understand very well. FP isn't half as useful without purity of reasoning. FP couldn't even mature as a paradigm while held back by this poison in its original implementations (Lisp, ML, etc.).

Does "do notation" executes

Does "do notation" executes only if it is directly or indirectly called from "main"?

If so, then I'd prefer a combination where a regular imperative language (like C++, C#, JavaScript, ...) calls and deals with functional code when needed. Anyway, I'm not a fan of emulating imperative code in functional languages.

re: do notation and emulating imperative

Haskell's 'do notation' is generic syntactic sugar for any 'monad', which includes the Step functions I described earlier. The do notation can be used for parsing (cf. parsec) or modeling stateful computations (cf. State and ST). Many monads have purely functional interpreters.
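
As a tiny illustration (assuming the standard Control.Monad.State from mtl), the same `do` syntax can be interpreted purely, with no IO anywhere:

import Control.Monad.State (State, get, put, execState)

counter :: State Int ()
counter = do
  n <- get        -- read the threaded state
  put (n + 1)     -- write it back, purely

-- execState counter 41  ==>  42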

I'm a big fan of imperative programming within a purely functional context. It's far nicer than imperative programming in conventional languages and then running and testing code directly in an opaque open system like the machine or network. With FP:

  • I get deterministic behavior and replay, convenient for both comprehension and testing.
  • I can easily tweak parameters and run many computations in parallel, observing different progressions without interference.
  • I can easily abstract behaviors and control structures as libraries instead of relying on a few built-in loops and conditionals.
  • I still benefit from modeling (or emulating) concurrent systems: I can step through computations, model interference (e.g. injecting messages, twiddling states), accept inputs with flexible ordering, extend the system with new computations, model races, integrate clocks, implement progress bars.
  • Secure sandboxing or constraining a subprogram to specific effects is easily supported. The program is easy to run in a simulated or mockup environment.
  • When I lift the program to the 'real world', I can be confident about its behavior, or I can easily record all the inputs for replay and debugging (sketched just below).
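
A self-contained sketch of that last point about replay (all names are mine, for illustration):

-- a program that occasionally asks a question, and a runner that
-- answers from a recorded transcript instead of the real world
data Replay a = Done a | Ask String (String -> Replay a)

runRecorded :: [String] -> Replay a -> Maybe a
runRecorded _      (Done a)  = Just a
runRecorded (r:rs) (Ask _ k) = runRecorded rs (k r)
runRecorded []     (Ask _ _) = Nothing   -- transcript exhausted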

All this, and performance doesn't even need to suffer. I can achieve high levels of parallelism (par/seq) because I know that evaluations don't interfere with each other. I can leverage a GPGPU (e.g. via some equivalent to Haskell's accelerate). I can have O(1) updates on arrays (via substructural types to prevent copies). I just need an FP language with the right set of features. (I'm developing one.)

I do need to eventually lift my program's logic into the real world, which has organically developed around imperative languages. Fortunately, this integration is frequently achievable by a generic program: an interpreter or compiler that can be fed a set of callbacks to model specific effects. LLVM has greatly lowered barriers for the compilation option.

Conventional imperative languages are terribly ad-hoc, entangled with the machine, difficult to comprehend or reason about. Concurrency, parallelism, and non-determinism are conflated into one big ball of conceptual mud. Local reasoning about interference and security is difficult. With FP, I model an imperative language with just the effects I need for a given subprogram.
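
To sketch that last idea (types of my own invention, reusing the Step type from above): give a subprogram a request type offering only the effects it needs, and the sandbox is enforced by type checking.

-- a request type that permits reading a clock and nothing else
data ClockOnly x where
  GetTime :: ClockOnly Integer

-- a Step ClockOnly program can ask for the time, but it provably
-- cannot touch files, the network, or eval
timestamp :: Step ClockOnly String
timestamp = Await GetTime (\t -> Return ("at " ++ show t))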

Why wouldn't you be a fan? You've probably never seriously tried it.

Yes, it makes sense from the

Yes, it makes sense from the standpoint of those positive modelling properties.

Still, someone could be concerned about performance. It would be running imperative code on top of FP code, which in turn runs on top of the machine's (imperative) code.

But you say it is not a big deal and it suffers no slowdowns?

Fast enough

I thought about it and I think that a good FP compiler should do the job. One could write fast, specialized asm code for special constructs like monads. It would be like asm accelerators for specific, most-used functions.

performance

I imagine that imperative style in FP is actually easier to compile efficiently to imperative code compared to, say, deep higher-order programming. (GHC, for example, knows enough about the IO monad to write specialized code.)

Most FP languages and implementations do have many issues that hinder them from achieving optimal performance. For that matter, so do most imperative languages. FP languages are easily performance-competitive with Java, C#, Python, and JavaScript, for example.

I don't consider purely functional programming itself to be the source of performance problems. But a lack of control over representation, data locality, etc. common to most FP languages does tend to hurt. And to the extent we use higher order programming driven by runtime data (resulting in massive compositions of functions) we would seriously benefit from runtime optimization or JIT recompilation.

A solution

I think I've found a solution. Nothing spectacular, but I think it might work. In fact, it is rather obvious: it's about DOM events.

Let's say we have a pure function called "Constructive" that is capable of taking code fragments written in different languages. This function returns the code fragments compiled to JavaScript. But when do we invoke this function and execute its results?

What we know generally about browser functionality is that it executes code upon DOM events. The answer is to stitch execution of the code fragments to these DOM events.

Our function "Constructive" could have the following type:

Constructive :: BunchOfPrograms -> DOMEvent -> JavascriptImperativeFunction

and we start with the following JavaScript code:

// metacode: parse and compile some code base
// and pass it to Constructive as the first argument

DOM.onLoad = function () { // our starting point
    var x = Constructive('onLoad');
    x();
};

This way the function "Constructive" returns the starting code, which is then executed. What is special about this starting code is that the return value of "Constructive" is completely responsible for what will be done next. For example, "Constructive('onLoad')" can return the following:

function () {
    // ...do stuff...
    alert('Doing stuff');
    // ...
    DOM.onMouseClick = function () {
        var x = Constructive('onMouseClick');
        x();
    };
    // ...do stuff...
}

and it will do stuff and wait for the "onMouseClick" event. Upon that event we just splice execution of another code block, returned by "Constructive('onMouseClick')", into the regular control flow, which in turn can set up execution upon other events, and so on recursively.

This execution philosophy is completely analogous to Haskell's "main = do..." block, except that the flow is controlled by DOM (or possibly other) events. Somehow it makes sense because of the processor's imperative nature. It all starts from imperative assembler that periodically executes code blocks from here and there. And we can construct those blocks using pure functions, and say in those blocks what to do next.

Sounds like a simple little operating system, doesn't it?

Useful background

Perhaps the wiki page on CPS would be a useful resource for you? Event-based programming does generally look like a simple OS, as it was the original inspiration for many of the primitives that we see when programming an OS. Event loops provide the inspiration for callbacks, which in turn suggest the need for coroutines. If you walk down that road then CPS is the elegant destination you will arrive at, in combination with an external process to handle dispatch.

If I understood correctly,

If I understood correctly, this is something like transforming functional code into an imperative code sequence.

Yes

Yes, I believe it may be part of what you are searching for - continuations encode the future processing as part of the return value to allow sequencing of computations.
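
In code, the core of continuation-passing style is tiny. A generic sketch, not tied to any particular library:

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- wrap a value: hand it straight to whoever is waiting
ret :: a -> Cont r a
ret a = Cont (\k -> k a)

-- sequencing: the first step's continuation is told to run the second
bind :: Cont r a -> (a -> Cont r b) -> Cont r b
bind m f = Cont (\k -> runCont m (\a -> runCont (f a) k))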

the confusion of knowing too much about too little

Obviously imperative code is meaningful with respect to some abstract machine model, and just as obviously the functional reification of any and all such models has the form of values that represent virtual machine states (with, as the situation dictates, greater or lesser degrees of ambiguity). Within functional programming languages there is a kind of design catalog of idioms for describing machine-state values (streams, monads, whatever).

In fact I'm working on a functional language that can easily translate any abstract syntax tree (AST) into another AST.

Is this, like, LtU's Sokal moment?

?

Is this, like, LtU's Sokal moment?

Sokal moment? I don't understand, I don't come here often, sorry.

re ?

Sokal moment? I don't understand, I don't come here often, sorry.

Wikipedia on "The Sokal affair".

Offended.

I found myself offended by your statement.

I have been working on this language since 2002. It started as an AI project (and still is one). I needed a way to describe knowledge about the Universe. It turned first into a universal data structure. Then it turned into a programming language. Then the idea about translating between different data structures was born, be they data, languages, or conclusions. Then I learned a bit about existing functional languages and concluded that I had reinvented hot water.

But it turned out to be hot water in the form of plasma, which actually sounded pretty cool to me. I'm still working on it, with fresh ideas, month after month, year after year.

A preview PDF is here. It is a little bit outdated, but you can get the vibe from it. Chapter 1 talks about parsing to ASTs and the unfinished chapter 2 mentions functions.

Apologies

Yes, it's hard not to read Tom as mocking you. Remember that in a matter of months he may be lampooning the rest of us for not recognizing your genius. :)

I have to say though, skimming your link, I don't see much that I can recognize as new. If and when you're wanting to discuss it here, you might try to write it up in a way that compares and contrasts it with existing functional languages. Good luck with your project.

Tx for support

Tx for support, Matt M, but I'm far away from being a genius. :)

As for discussing the project, it's still a work in progress and I don't want to bother you with details that change from day to day. From what I have written already, and also from what I haven't written about yet, I know I prefer some of the properties I've found so far, and that's enough for me to continue the investigation.

Meanwhile, I could try to explain a few interesting things if anyone is interested...

For now, one of the newly revealed things relative to Haskell is that I can inline-name any function parameter (for later access), which (when parameters are used as tuples instead of repetitive currying) can be used for parsing texts in a BNF style just from function definitions. In short, we could parse texts from function definitions alone.

This inline naming reduces the need for the "data" and "type" keywords (from Haskell) down to just the new type annotations. There are indications that the "class" and "instance" keywords (Haskell) can also be reduced to those same type annotations, but I have yet to check that out.

Another interesting thing would be making any function parameter a permanent placeholder for data which can be accessed later from another arbitrary function. If some other function's parameters are linked to these placeholders, then this can be a base for reactive values, meaning that changing one symbol's value in user interaction automatically updates the other linked placeholders.

Another thing comes naturally when a function result (which also acts as a value placeholder) is reactively linked to itself. Then it can recursively yield more and more results, which is needed for e.g. deducing implicit knowledge from a self-growing set of assumptions. You can also solve math equations and do other interesting stuff this way.

It is pretty hard to explain everything in a few sentences; I'd need a couple dozen pages minimum to make it clear enough. Maybe I'll try to do it soon, at least to clear up my head if nothing else.

re offended

Of course, a programming language to transform any AST into another AST is, more or less, a programming language for the purpose of transforming input into output.

I've been thinking about

I've been thinking about you, Thomas Lord. Maybe I should thank you for pointing out a very attractive aspect of my project (among other aspects). I've learned what could be important: transforming ASTs...

Transforming input to output? It is a very broad concept.
* A simple function is a transformer of its parameters (input) to its result (output).
* A program is a transformer from user input to hardware output.
* A compiler is a transformer from some language (input) to another language (output).
* Any closed system in the Universe could be generalized as a transformer from input to output (chemical reactions; stars - matter to heat; human creations like automobiles - commands to motion; physical systems that change state as interactions between their elements happen - from one moment (input) to another moment (output)).

I'm sure one could find numerous other examples of the input/output scheme. It seems to be some kind of universal essence of the world we live in.

And if you consider humans, for example, we serve as little compilers of Universe parts - we transform some Universe systems into other Universe systems. Again, input to output...

Well, thank you.

For commenting on my digestive tract.

sad

That is not funny. The subject you are talking about is sad. Very sad. What if our children were on the dining menu of some monsters?

Such an odd comment

Such an odd comment. You must have been thinking about something other than what I was.

Oh, leave it

Oh, leave it. I'm just frustrated by the situation on Earth.

But I have a plan.

But I have a plan.
Ok, ok, I'll shut up now.

Too romantic

I don't think an AI is necessarily benevolent. But it could be better than humans in so many ways that I suggest we create a robot world and simply sterilize the earth after that.

different subgenre of romance

A robot world lacking sapience would have no purpose. A robot world with sapience would have the faults of the preceding human world plus any flaws introduced during its creation. Not that I buy into the "rich people will voluntarily share the bounty" theory, either; power corrupts. (Hey! I found squirrels!)

Onward!

Hey, it's not nihilism. It's a necessary next step in evolution on this planet!

robot step?

A robot world, you mean? As an evolutionary step, it's fairly certain to be just as much of a dead end as a nuclear apocalypse would be. If we don't figure out the difference between computing and thinking, and find a way to organize society that favors thinking, we're headed to a very dark place.

Intelligent Evolution

I am thinking more along the lines that it should somehow be natural to start replacing chaotic/massively parallel/brute-force evolution with intelligent evolution.

Like, life starts off with chaotic evolution, right? Survival of the fittest. And from that it created intelligent beings. But now that we have intelligence, it can be figured out how to evolve in an intelligent manner. And as biochemical robots we humans are deeply flawed. Our bodies are a mess: they have a limited life span, they break down all the time, spare parts cannot be easily inserted, and as far as intelligence goes we are highly irrational apes.

So, it makes a lot of sense to simply replace all the flawed systems with something better.

Methinks you underestimate

Methinks you underestimate the problem of intelligence, consequently underestimate the problem of evolution, overestimate our ability to bring about an improvement in either, and also (into the bargain) underestimate our intelligence.

I don't think humans are particularly intelligent

I don't think people are particularly intelligent, I agree. I also don't think something like super intelligence will emerge.

But I think we'll crack the nut of 'intelligence', or create an artificial one, this century. Maybe far sooner.

But once you've created one, you can do it again, and improve on it, and it won't be hindered by human emotion, and it will not need to die. That will already make it far superior.

What if you could live a thousand years, or ten thousand, or a million? Constantly improving. It will be better.

smart enough to qualify as stupid

I, on the other hand, actually do think humans are... "smart". (I'm talking big picture, here; there's plenty of room for individual variation, from both nature and nurture.) What I mean by "smart" does not preclude stupidity — in fact, it's pretty much prerequisite to stupidity. A computer (current technology) can't be stupid; it does what it's told, with no contextual understanding to which to apply a judgement of relative stupidity. We are (as I remarked in a blog post a while back) specialists in dealing with unanticipated situations, situations calling for an improvisational rather than algorithmic approach. That doesn't mean we get it right; but we also don't fall off the edge of our programming, which is what an algorithm would do. Our ability to cope is likely to be underestimated because, when compared to the performance of an algorithmic device on a situation it's been fully prepared to cope with, our performance on expected situations can't be as good and our performance on unexpected situations looks dismal. The first of these comparisons doesn't play to our strengths, and the second isn't a valid comparison.

But once you've created one, you can do it again, and improve on it, and it won't be hindered by human emotion, and it will not need to die. That will already make it far superior.

It's probably true that once you've created one you can do it again. Whether you can improve on it, and who gets to decide what qualifies as an improvement, are more problematic. (Likely the people who decide what's an improvement are described by this Onion article.) Maybe it wouldn't need to die; but I see no reason to suppose that not needing to die, or not being hindered by human emotion, would be superior.

I don't believe it

I think the mental process is more or less one where, in a fuzzy manner, you elaborate hypotheses through natural language, a means of generation, and you then weed out, or modify, utterances through reflection on them. Those utterances which ring a bell with some fuzzy understanding of the world simply pass through; 99% of the time that means you accept the utterances which feel right.

I believe that's a simplistic and error-prone process. We'll crack this process. It'll likely be cracked through fuzzy approaches, neural nets, not the logical approach through inference machines.

Probably, your belief in intelligence is similar to your belief in a self. There is a self, but that strong feeling you have about it is likely only a biochemical illusion. At some point you'll drop dead somewhere, and then the biochemical illusion dies along with you.

I think we'll turn out to be entirely stupid with respect to machine intelligence. Not that a machine intelligence will not be irrational - it'll misclassify lots of utterances too - but it will be vastly superior at utterance creation, the mental models backing it will be better and larger, the 'intelligence' part will interface much better with calculation or inference machines, and it will not die and will have more or less immediate access to storing and retrieving vast amounts of data.

Your belief in intelligence is a biochemical illusion. We'll turn out to be what we are: highly irrational apes.

Who, me?

Heh. What makes you think I believe in a self? (Actually a better question would probably be what you mean by "self"; a philosophical can of worms, that. Gives me sympathy for noncognitivism.)

The idea of intelligence is often linked to a self

Well, most people do. But have it your way. You don't have a self. ;P

But most people link their romantic idea of a self to a romantic idea of intelligence. I think that's mostly an illusion.

Unromantic

I don't think 'romantic' describes my understanding of the nature of sapience (I generally avoid the term "intelligence", which tends to refer to something like IQ which I don't think measures anything all that interesting/significant). My favored theory of sapience casts it as an information-processing technique involving what I call a "non-Cartesian theater"; discussed in a couple of my blog posts. Just to be clear, a few comments back when I mentioned another subgenre of romance, I was referring to the robot-world scenario.

I read your last post

I am not entirely sure I get why you think the term 'sapience' is better suited to a discussion of what intelligence is. But I don't think your musings on that are relevant to the discussion we're having.

I see we agree on a lot of things, though you sometimes seem to imply robot AI would be in the form of programs, which I disagree with.

The non-Cartesian theater is a reference to Dennett, I assume?

The robot world proposal is a bit of a tongue-in-cheek philosophical extreme given a number of assumptions. I don't like Skynet discussions.

small clarifications

Dennett, yes.

For AI, I see programs as the only game currently in town; I wouldn't apply the term "AI" to any other current technology. I also don't see current AI as even remotely in contention for the adjective "sapient". Whether program-style AI can ever aspire to sapience, I tender no opinion. I perceive lots of folks have their hopes for future AI pinned on non-program-based technologies, but I don't think artificial sapience, via whatever technology (excepting maybe biotechnology if you "cheat"), is anywhere near as straightforwardly doable as the hopefuls anticipate.

That's nice

I think I understand what you mean with sapience but I still prefer the term AI. Let's just define them to mean the same thing.

Why wouldn't it be in reach? What would be so special about intelligence that it isn't in reach?

What would be so special

What would be so special about intelligence that it isn't in reach?

I see it being treated as a matter of scaling up, as if making our systems big enough would qualify them as sapient. For purposes of passing an IQ test, that technique would likely work; it's been demonstrated recently that it works for Jeopardy!. But I think there's more to it, a "trick"; Deacon (The Symbolic Species) used the term "symbolic thought", I focus on the non-Cartesian theater which I believe is, more or less, human short-term memory (with room for seven plus or minus two players on the stage). I suspect the size of the theater, seven plus or minus two, may be roughly optimal for the desired outcome, sapience, rather than being a human "limitation". And I suspect the size of the audience for the theater may have an optimal range, where too small an audience doesn't sustain sapience but too big an audience also degrades system performance (note that as people age they have more trouble remembering things even without any form of dementia, simply because they're trying to retrieve things from a larger database).

I see it being treated as a

I see it being treated as a matter of scaling up

I see no evidence for that. There are lots of hurdles to take, I imagine. But we know that somehow, through localized rules, you can grow something complex out of 'simple' components, and we do have a rough idea of how our intelligence works. I think it's feasible.

I want to see this! But that's all I got on the subject so let's move on to other ones.