inter-language PL theory patterns relevant to IPC?

I'd like some PL theory perspective on the idea outlined below, if you have relevant insight. Let me know if admin believes this is off-topic. But my interest is the PL angle, not the "let's write a cool app" angle. You can always hack away at getting a desired result without following any disciplined PL scheme, which isn't on-topic here.

As context, I want to start from a use-case, about processes interacting in a certain way. This is in contrast to how tech is sometimes discussed, first thinking of a neat formal idea, then trying to find a use. I want to start instead with a particular use and ask about any relevant neat formal ideas. My payoff is hearing ideas I would not have considered.

Instead of a boil-the-ocean scenario, where you pretend you can rewrite all software in a shiny new language, this use-case assumes things are not rewritten, that we operate in terms of whatever languages they use. So sometimes you talk to them in their own language, perhaps converting a new language to an old one at the boundary between new and old. Next is a concrete example scenario.

Suppose your new stuff goes in a new process that talks to others, like a browser and an IDE. If you have no UI frontend yourself, it looks like a daemon with one or more protocols to interact with other things. For example, a port serving http would let a browser interact easily with your process. This is especially easy because browsers are designed to work like that, but other standalone apps might also cooperate nicely, provided they have an API to interact with other processes. I think this is true of emacs, to the extent you could use emacs as a frontend with a better IDE focus than a browser. Presumably other editors would also work as peers of your own process.

A browser likes Javascript, while emacs likes elisp, so presumably you would generate code to send them, if neither of those was a native focus in your process. (Altering a browser and/or an editor to understand your new paradigm is possible too, but it's interesting to work with existing tools that know nothing about you, and require no development changes at all.) Other tools will have their own preferred languages.

How do you go about doing that in a disciplined way with PL theory rigor? What PL concepts apply at different points and times? What general idea persists across different tools and languages? The new process speaking to each tool in its own language has some theory-of-mind about other tools, but that's not a PL concept as far as I know. Instead of winging each case in a one-off, specific, concrete way, what approach preserves some generality?

[I have little to add in discussion unless I think of questions about ideas suggested. It was recently suggested some things I say resemble a con artist's patter in some way, but I haven't been able to think of something I'm conning anyone about. I've had the same job for nine years, and it's boring. I don't look for a new job because web apps are not interesting to me, nor is commerce generally. I can't buff my reputation if I ignore social venues outside this one. I figure I'm pretty anonymous. I just crave interesting ideas. Coworkers want to talk about whether a local code style guide needs tuning; or if adventurous, they suggest rewriting tools in Erlang. So it's a creative desert. Being interested in PL topics is like being a Martian in places grinding away at the next incremental release.]

Edit: If you want code to run in a browser, you send it JS; if you want code to run in emacs, you send it elisp. (For some other executable, you send another language it wants.) The PL question is about being able to translate your model of code into another model, having a reason to believe qualities you want are present, and having something consistent in translation schemes.
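To make that translation question concrete, here is a minimal sketch (in Python, with invented names) of one disciplined shape the answer can take: a single core model of code, plus one emitter per target language, so whatever consistency you want lives in the shared core rather than in each one-off translation.

```python
# A tiny shared "core model" of code, with one emitter per target language.
# The core and the emitter names here are hypothetical, not from any real system.
from dataclasses import dataclass

@dataclass
class Lit:
    value: int

@dataclass
class Call:
    fn: str
    args: tuple  # tuple of core expressions

def emit_js(expr):
    """Render the core expression as JavaScript source (for a browser)."""
    if isinstance(expr, Lit):
        return str(expr.value)
    return f"{expr.fn}({', '.join(emit_js(a) for a in expr.args)})"

def emit_elisp(expr):
    """Render the same core expression as Emacs Lisp source (for emacs)."""
    if isinstance(expr, Lit):
        return str(expr.value)
    return f"({expr.fn} {' '.join(emit_elisp(a) for a in expr.args)})"

e = Call("add", (Lit(1), Call("mul", (Lit(2), Lit(3)))))
print(emit_js(e))     # add(1, mul(2, 3))
print(emit_elisp(e))  # (add 1 (mul 2 3))
```

The generality the post asks about then reduces to a claim about the core: any property you can establish for core expressions, and preserve in each emitter, holds in every target.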


conciser version?

I'm unclear why the Thing has to generate source code in different languages. I guess to me it sounds like it should just be a REST service.

I have written such an app.

In hindsight it is not clear that it was a good idea. It seemed like a good idea at the time: a server written in python that needed to talk to many javascript clients. It seemed reasonable to ask "why not just generate the javascript from the python server?".

The answer is long and complex, but it basically boils down to a lack of reasonable gain. The server and clients generally speak to one another about their problem domain rather than their implementation language. There is limited benefit from access to the client cogen in the server. There is a huge downside in terms of complexity. Intuitively it should not work like this: the work on staging shows that language embedding does not have to be cumbersome. But experience shows there should be a corollary to Greenspun's tenth rule.

This year I am finishing ripping out the javascript cogen and writing separate modules that communicate in json. It may be "ugly" from one perspective, but from another it is far simpler and thus more elegant overall.

If you have a desire to write a rich communication protocol between modules in different languages: over time they will evolve to a common language. You can either do this explicitly at the beginning (your "boil the ocean scenario"), or you can slowly implement interpreters that cover a common sub-set of functionality over time in the separate modules. Eventually it seems inevitable that you will want some sort of static guarantees about the agreement in the two different modules. Those static guarantees are going to look a lot like types, and sooner or later you will stumble on a central core problem that looks like this:

* Need to communicate typed messages between independent computing agents.
* Both livelock and deadlock need to be avoided.
* Static guarantees about the runtime behaviour within the agents become dependent on static guarantees about the progression of the protocol.

I think that most of us know where that path leads, and I don't see any silver bullets down that road.
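As a toy rendering of that last bullet (assuming nothing beyond the comment itself), the "progression of the protocol" can be modeled as a state machine over message types, with traces checked against it; real session-type systems do this statically rather than over traces:

```python
# Protocol as a finite state machine over typed messages.
# A toy sketch of session-type-style checking, not a real session-type system.
PROTOCOL = {
    ("start", "Request"): "awaiting_reply",
    ("awaiting_reply", "Reply"): "start",
    ("start", "Close"): "done",
}

def check_trace(trace, state="start"):
    """Return the final state if the message trace follows the protocol,
    otherwise raise: the analogue of a session-type error at compile time."""
    for msg_type in trace:
        key = (state, msg_type)
        if key not in PROTOCOL:
            raise TypeError(f"message {msg_type!r} not allowed in state {state!r}")
        state = PROTOCOL[key]
    return state

print(check_trace(["Request", "Reply", "Close"]))  # done
```

The deadlock/livelock bullets are exactly what gets hard: this toy checks one side's trace, while the real problem is guaranteeing that two independently written agents agree.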

There is actually a third option that you do not discuss explicitly, but it seems to get traction in the real world: pretend that any software written in a language outside of your particular development bubble just doesn't exist. It may not sound reasonable, but at least this silo-based approach to language development has yielded many results. The process of reinvention even yields real fruit sometimes.

This year I am finishing

This year I am finishing ripping out the javascript cogen and writing separate modules that communicate in json. It may be "ugly" from one perspective, but from another it is far simpler and thus more elegant overall.

I think the E language showed what could be achieved here: messaging with promise pipelining can obtain most of the benefits of "transpilation" without the complexity. By which I mean that the amount of data transmitted and the number of roundtrips are comparable in both, but messaging and promise pipelining retain full encapsulation.


Do you have any particular (small-ish) examples in mind? E looked quite interesting last time I looked it over but I was missing a motivating use-case that time around. What did you have in mind as a piece of source that showed "the benefits of transpilation without the complexity"? Would be interesting to see some code.

But still the focus (in E)

But still the focus (in E) is on what can be expressed in a language vs. how the code got there (was developed) in the first place?

Do you have any particular

Do you have any particular (small-ish) examples in mind? E looked quite interesting last time I looked it over but I was missing a motivating use-case that time around.

E's page on promise pipelining discusses this. As for replacing "transpilation" specifically, take the example where you yourself replaced JS codegen with JSON RPC calls, then consider relaxing the synchronous RPC semantics so clients don't have to wait for previous calls to complete before proceeding with subsequent calls, such that multiple calls and replies may be batched.
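A rough sketch of what that buys, with all names invented: calls return promises immediately, later calls may use earlier promises as arguments, and the whole chain ships to the server in one batch, so dependent calls cost one round trip instead of one each. E's actual semantics are much richer than this.

```python
# Toy promise pipelining: a batch of calls where later calls can reference
# earlier promises positionally, resolved server-side in one round trip.
class Promise:
    def __init__(self, batch, index):
        self.batch, self.index = batch, index

class Batch:
    def __init__(self):
        self.calls = []   # list of (fn_name, args); args may contain Promises

    def call(self, fn, *args):
        """Record a call and return a promise for its eventual result."""
        self.calls.append((fn, args))
        return Promise(self, len(self.calls) - 1)

    def run(self, env):
        """Stand-in for the server side: execute the batch, substituting
        each Promise argument with the already-computed result it names."""
        results = []
        for fn, args in self.calls:
            concrete = [results[a.index] if isinstance(a, Promise) else a
                        for a in args]
            results.append(env[fn](*concrete))
        return results

b = Batch()
x = b.call("lookup", "account")   # promise for a remote lookup
y = b.call("balance", x)          # pipelined: uses x before it resolves
print(b.run({"lookup": lambda k: 42, "balance": lambda acct: acct + 1})[-1])  # 43
```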

Finally, I find it odd that you consider E to lack motivating use cases. At the time it was the only language that provided first-class support for simple, capability-secure distributed programming (first language to provide transparent persistent event loops, first language to seriously adopt promises and promise pipelining, first language to seriously tackle the principle of least authority via capabilities). Other languages have since adopted some of its principles.

Not what I said

Finally, I find it odd that you consider E to lack motivating use cases.

That is not what I wrote. I wrote "E looked quite interesting last time I looked it over but I was missing a motivating use-case that time around." (emphasis added). I'm sure that E does have motivating use-cases, that is one of the reasons that I found it interesting. I did not (personally) have a motivating use-case for trying it out last time I read about it. This context would be that motivating use-case for me.


In any case, I found naasking's elaboration quite interesting and valuable, so that's a productive misunderstanding.

E is sadly not as well-known as it should be. Another reference I found interesting recently was the slides of the Strangeloop 2014 talk "How to make error handling less error-prone" by Daira Hopwood (slides, video). The various slides have a weird distribution of technical depth, and the between-the-lines story on the interesting interaction between for-convenience language features and GC implementation details may be of interest to other LtU members -- if you like to ponder idly the relations between operating systems and language runtimes, this talk may scratch the same itch.

sounds cool

That's fun to hear about, thanks. The third option at the end occurs often: ignoring software in other languages. (Silos based on language striping are a great point to add here.) It would be hard to ignore JS in a browser, and JS won't be the basis of a daemon I write, so I can't do a JS silo. Similarly with emacs: I won't write a daemon in elisp either, and if I did, a browser won't run elisp. But your point is relevant. I basically ruled out a silo plan by the way I put the problem, but it's perfectly valid to solve a different problem and favor a silo. Maybe most PL discussion is silo-oriented by default, where other languages are implicitly irrelevant or of fringe scope.

If you have a desire to write a rich communication protocol between modules in different languages: over time they will evolve to a common language.

That makes sense. I had framed alien processes as second-class citizens, able to act as limited frontends that see a subset of a server rather than being peers. If you want peers, you can just deploy multiple instances of something in the same silo that talk to each other. The point of non-peers is to let them keep doing whatever they do best, in the debugged mature form they have now, but able to interact.

A design angle I had in mind was opening some existing client app, then connecting to new software like a daemon, which coughs up a way of interacting remotely, in a limited way natural to the client. This could be a static thing you develop once and merely export, which isn't much of a PL exercise. Or you could generate it anew, dynamically, with an ongoing PL problem to solve, where the client is a transpilation target. The more you want the client to do, the hairier it seems to get, with borderless distributed code soup as a worst case. Instead of trying to export the entire overall daemon model M to a client, you could share only a narrow mini-model m targeted to a specific task, with a new one per task. That way of saying it helps me comment on your domain point:

The server and clients generally speak to one another about their problem domain rather than their implementation language. There is limited benefit from access to the client cogen in the server.

I assumed a server would talk to a client, in the client's language, only about one m mini-model domain specific to what a user tried to do from the client. It's hard for a client to tell the server has a different or larger model, unless the interface purports to unfold like an origami pandora's box (which is maybe what the web app model is intended to resemble).

But experience shows there should be a corollary to Greenspun's tenth rule.

I tend to assume a Greenspun-friendly runtime on both client and server sides. It would be hard to do some things without a lisp at hand to do gnarly stuff, so I think you should go there up front.

As I write I'm sitting

As I write I'm sitting redesigning an intro to unix course that I teach, so I have a ton of connections floating around that are tangentially relevant. Unix is of course the ultimate language-silo design. In many ways it can be seen as a reaction to the complexity that arises from trying to tackle this problem.

The unix philosophy (as espoused by Thompson, Raymond, etc.) is a pragmatic approach to ensuring maximum encapsulation around pieces of functionality. The various rules that Raymond presents in The Art of Unix Programming are all variations on ways to prevent implementation details in one program from leaking into another.

Crossing language boundaries by injecting code is the ultimate dangling dependency. For some small part of the problem domain it is attempting to construct an isomorphism between the expression of that problem in two different sets of language semantics. Even if you slice that isomorphism into smaller pieces (the mini-model m that you refer to), the dependency will pollute everything in both the server and the client that touches it. "Touching" in this sense could be seen literally as taking a forward and backward slice from the point in the client where you eval the injected code.

The silo-answer, as it is expressed in unix, is to make the transport as dumb as possible: not just to avoid the complexity of code in the transport and restrict it to data, but also to pick the simplest representation of the data rather than the richest. Then, of course, encode it in the least-common-denominator transport: plain text.
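A minimal sketch of that dumb-transport idea, with made-up record contents: one record per line, tab-separated, so either side can be replaced by anything that reads and writes the same lines, and nothing about either implementation leaks across.

```python
# Least-common-denominator transport: plain text, one record per line.
# The record fields here are invented for illustration.
def encode(record):
    """Tab-separated fields, newline-terminated: dumb by design."""
    return "\t".join(str(v) for v in record) + "\n"

def decode(line):
    """The inverse: anything that splits on tabs can be the other side."""
    return line.rstrip("\n").split("\t")

wire = encode(["GET", "/status", 200])
print(decode(wire))  # ['GET', '/status', '200']
```

Note the deliberate loss: the 200 came back as the string '200'. That flattening is the price of the robustness the unix style buys, and it is exactly what a richer (code-carrying) transport refuses to pay.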

Given that this approach is designed explicitly to increase the robustness of software by reducing dependency, it makes me wonder what is the effect of increasing dependency by sharing a much richer representation such as code? My own experience is just the single data-point that I mentioned above, so it is possible that I am just a really poor programmer: and on days when I dig through my own older work this seems like the most probable explanation :)

I would hazard a guess, based on limited data, that the richer the transport of domain information between the client and the server (where generated code is the richest transport), the tighter the inter-dependency between both programs and the less robust both will be.

unix is way cool

(My points are barely PL relevant, so my response here may be off-topic for LtU. But I like the unix model, and don't like mobile code, so I'm going to agree with you, trying to relate this to the top post.)

Above I had taken 'silo' to mean a code partition oriented toward one language ignoring others. But silo seems to mean something else in your unix comments: maybe dependency partition instead of language partition. I like isolating dependencies. It's one reason I greatly dislike distributed code in most forms, including JS in browsers. But letting dislike paralyze you is not an advantage. Some solutions are compromises, coping under protest, in a direction best minimized because off-kilter in a way that would cause software cancer if nurtured.

The kind of daemon I have in mind would run essentially unix style tools inside, as lightweight processes, as a kind of simplified pidgin unix focusing on cheap cost. (My background is mostly backend dataplane optimization, especially for latency, rather than languages; only a few times have I gotten to write a compiler for this, like an html templatizing engine for web acceleration in a reverse proxy server to minimize origin web server traffic.) My interest is mainly dataflow optimization in that context, and dumb data is ideal for many things. A cooperative daemon can be embedded, and would work in anything, even a single-threaded runtime as long as there was a way to provide service to blocking calls issued. It would be like having a benign user-space OS embedded in your app, that was able to deal cleanly with anything particularly horrible in the way of async multiplexed code and data.

I just don't want to write a user interface. If you don't care how long it takes you to get it done, it would be feasible to take open source apps and embed a daemon, so it talked to other instances as peers but had direct access to things in memory with suitable interfaces. This would chew up a lot of time, and I'm not young anymore. Worse yet, suddenly you're managing what amounts to forks of open source apps, or at least add-on extensions for existing forks. Instead, even if you don't like how other apps do things, you can cooperate, even if this means cooperating via injecting code because that's all they permit. In the kind of daemon I had in mind, it would amount to specialized lightweight unixy coroutine tools specifically targeting interaction, via code, with other apps. And if you didn't want to interact with those apps, you would leave those tools out, or limit access via path permissions or whatnot. Code generation for any given interface would be a fringe plugin feature, rather than a central feature of the architecture, which is instead a flexible mini-OS and unix-flavor tools.

Instead of embedding an instance of yourself as a peer, you might translate some small subset of things you want to occur, to be injected as code, provided the other process permits and the user directs it to do exactly this. I don't like the idea of minimizing what can happen to a closed set of static features. I hate "it cannot be done" as a limit in design. Better would be a framework for generating new things as needed, that can be injected, as a developer figures out what seems a good idea while experimenting.

I'm mostly interested in driving the daemon, usually to write traffic load simulations and analysis tools. So in my case browser access is reflection for inspection, as well as remote management as needed. But someone else interested in browser development might look at it the other way around, as a way of using a daemon to create web app tools, in which I have no interest. I'd be more interested in creating emacs modes for distributed IDE stuff across editor and daemon where I'm more interested in what happens in the daemon.

It has always seemed weird to me that apps themselves are often developed as silos, interacting as little as possible with everything else. Cooperating replaceable parts would better fit what I want to happen in development over time. So I don't want things to depend on each other any further than interfaces designed for loose coupling will allow. I'm totally on board with your interest in minimizing dependencies. But I want to permit them in an open model, instead of denying them with a closed silo.

Pulling back on a tangent

I think that I know what you are describing - because I've used something very like it once. It used to be called a machine-machine interface to distinguish it from HCI. Once upon a time I had a rather beautiful home computer whose design borrowed quite a lot from the unix world. The Amiga had many fantastic features, AmigaShell was particularly well designed for a home computer. Buried inside it was AREXX, a port of the IBM scripting language REXX.

The "daemon" portion was not as automatic as what you are describing. I think that they shipped arexx as a library that each application linked against. The application designer made a schema that described the data-model of their application, and a set of actions that could be triggered. AREXX provided a very simple scripting host that could open the arexx port inside any running application. Common uses were reflection/introspection of data-structures at run-time, pulling the values out into a scripting environment. This could be combined with logic to provide behaviour-triggers.

The closest modern equivalent seems to be AppleScript, although even on OSX it is not as ubiquitous as AREXX was on the Amiga platform. The syntax for AppleScript is particularly horrific, so most times that I have had to deal with it (e.g. to export metadata from iTunes) I've wrapped it in a python library and used python scripts to interact with the application.

There is quite a broad departure from the unix model: the transport is rich (in the AppleScript model the transport has horrible binary values hacked into it to make it look somewhat like COM). On top of that transport is an application-specific schema that encodes data and actions (from dusty memory) as nouns and verbs in a simple script. This design does avoid transporting actual code / transpilation / foreign-calling conventions.

The basic trick is to have one language boundary in the system: each process operates its own runtime for its own language. The MMI library needs to be supplied in a way that links against each process runtime - all of the language-pair boundaries have been converted to langX-MMI boundaries, and for two applications to talk to one another requires langX-MMI-langY.
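A toy version of that pattern as described here (names invented, not actual AREXX or AppleScript): the application publishes nouns (data it exposes) and verbs (actions it accepts) through a port object, and a generic host drives it through that schema alone, so the only boundary anyone crosses is langX-MMI.

```python
# A made-up MMI port: the application registers a schema of nouns and verbs;
# a generic scripting host talks only to the schema, never to the app's language.
class MmiPort:
    def __init__(self, name):
        self.name = name
        self.nouns = {}   # noun -> getter function
        self.verbs = {}   # verb -> action function

    def expose(self, noun, getter):
        self.nouns[noun] = getter

    def handle(self, verb, action):
        self.verbs[verb] = action

# An "application" wires up its port around its own data model:
playlist = ["intro.mp3", "theme.mp3"]
port = MmiPort("player")
port.expose("tracks", lambda: list(playlist))
port.handle("enqueue", lambda t: playlist.append(t))

# A generic host script drives the app through the schema alone:
port.verbs["enqueue"]("outro.mp3")
print(port.nouns["tracks"]())  # ['intro.mp3', 'theme.mp3', 'outro.mp3']
```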

So here is an alternative design option - one "alien" language as glue for MMI. Pulling the tangent back towards LtU would raise several questions that probably have interesting answers:

* What is the design-space for MMI languages, and how does it relate to protocol-design / formal verification?
* Building application schemas is tedious, how can the process be automated using program analysis?

Suddenly I don't want to write too much more... I've just remembered that there is a grant-application deadline coming up in January that I need an idea to develop for :)

tentative final comment on daemons and scripting

I should call it quits too, after this post. PL stuff related to my question isn't coming up much, though you have some neat historical references there. Thanks for writing several posts I enjoyed. Otherwise it wasn't worth my time; maybe LtU is a bad place for such topics. Now I'll respond to several things you mentioned. No need to respond; this is only to entertain folks, including yourself.

I never used Rexx or Amiga, but I looked a bit at AppleScript, which always seemed a little weird to me. A bunch of folks associated with AppleScript tried to explain a little detail, but it always sounded really ad hoc and non-uniform. Before spinning off as a separate company, when Pink was still at Apple, I said I was going to need a scripting language to do a few things when a person was not going to sit there and do the same thing over and over, like in simple rote integration acceptance tests. Several people said go talk to the AppleScript folks, so I did. I think Donn Denman was one of the guys I met. I asked, "So, what are the builtin primitives?" After a little hemming and hawing, I got the idea almost no primitives were present as universals beyond basics of control flow. Virtually everything with semantics depended on each specific app, to provide operations that do something. Basically there was no standard library, nor expectation any common patterns would usually be present, in each app.

"What do I get by basing things on AppleScript then?" I asked, because I wanted to know how this would give me a leg up. The answer sounded like, "A way of doing things, as a conceptual framework, even though you'll otherwise be starting from scratch." It was later when I asked about the mechanics in how AppleScript support is usually implemented, that it sounded painfully fragile and creaky, encouraging each app to invent personal jargon. Instead of getting easier, the going seemed to get harder the longer you added more scripting support, so the design seemed wrong to me. I steered clear because it sounded like an invitation to walk on hot coals.

I ended up using a lisp. The microkernel had a feature that allowed us to spawn a new thread in a process, from the outside, without cooperation of the target process. I was able to spawn a lisp interface driving apps from the inside. As a joke I called it the virus insertion framework, and I was warned not to say that in public. (It was the idea of alarming the public that was funny.) But that whole endeavor never had clear structure.

About the "daemon" approach, I had not intended that much automation. You would expect most tools running inside to have their own protocol, pretty much like actors taking messages (or streams) with a specific type signature. Automating protocol description would work. If all the code is run through a global analysis tool, as planned, automated reflection of all formats would make it possible to expose generic browser access at a fairly raw level. But you would want to write URL handlers that did intelligent cooking of data, if only to distill vast bodies of data into focused summaries. (I think it would be hard to think of everything you wanted to see up front. It would make sense to write new browser analysis and display modules and push them in to daemon as needed to change what you could see on demand. You could inject new code from browser, editor, or shell with the right auth for permission.)

The primary avenue of communication would be connections and messages, via some flavor of socket or pipe, or even the file system when you did things at a slow, user-paced latency (maybe as a push from an editor session). Basically different daemon subprograms would listen to ports (often non-privileged), and you'd be able to browse what was served on which port. Probably multiple peer daemon instances on one box would discover one another and merge high-level summary displays, with annotations showing which daemon did what. If you wanted a native UI, it could run in the same address space as a daemon, but another address space would be safer, and a user would never be able to perceive cross-process i/o latency. So UI would be a client of server daemons. (This is how things have been done at work for many years now for me; it's unimaginative in this context, but not obvious to everyone.)

big old fan of AREXX here

The trick was that it was understood by just about all the app developers in the Amiga ecosystem that they should have 'ports' open for AREXX. So you really did have one language to rule them all. AREXX might have sucked as a language, and I might not like to look at it today, but it did the trick.

Amiga Programming.

Yes, me too. I mostly remember the Amiga as being the last machine I wanted to write assembly language for. 'C' compilers were just too big, slow and unwieldy without a hard disk. I used a macro assembler, and wrote a solid object 3D renderer that used HAM mode to fill shapes, the blitter to draw the lines, and the copper for draw lists to control the blitter. So the CPU would set the blitter draw mode (to draw or erase) and then hand off the drawing to the copper whilst it built the next draw list. The non-standard PC hardware (different CPU, different graphics, IO, etc) marked the end of practical assembly language for me, and with the common availability of hard disks, 'C' became usable.

hardware or cross compilation the problem?

I liked 68K assembler a lot. But most of my exposure was stepping through C++ binaries in a 68K-assembler-only debugger for a few years.

Hypothetically, if you wrote in one assembler like 68K and translated to another like Intel, is this more or less susceptible to optimization than starting from a higher level language?

The following dialog means nothing whatsoever. It's a cartoon. I never write with mocking intent. (I dissemble only in response to direct insult; a polite brush-off is good form where I come from, so only look for another meaning when I am carefully polite.)

Stu: It hurts when I make a mistake despite my superpowers.
Ned: You're just a programmer.
Stu: Nooo!

Probably about the same

This is where I would make a very vague and hand-wavy argument based on knowledge that is a few years out of date. I do that. It's always a good opportunity for somebody else to jump in with something more recent.

What is the loss function between the source asm code and the chosen intermediate language? Loss in this case would not mean that part of the input program is not encoded (as the semantics would then be broken and the output of the cross compiler would break). Loss occurs when the source control flow cannot be matched concisely and more control-flow constructs in the intermediate form need to be used. In the worst case the translation to the intermediate may need to be conservative and introduce flows that do not exist in the original, and are not dynamically selected in the intermediate, just to find a matching encoding. For the data-flow side there is a similar encoding problem for the operations being performed.

Now for the hand-wavy bit: if your intermediate is a type of VDG or another kind of synthesis of control- and data- flow edges, and your set of intermediate nodes matches the core semantics that exist in all assemblies then... the loss will not matter. So I would estimate that if you choose your intermediate well then the results should be as (relatively-) good as compiling from a high-level language. I saw quite a lot of this effect in the intermediate form for the CAO language, although I don't think that it is described anywhere in the papers on it. Documentation in that project favored the "oral tradition" approach.

Makes me wonder - what happened to Transmeta in the end? I vaguely remember that they were bought out, but where did their technology end up?

intermediate languages seem kinda fun

Your analysis sounds good to me. For someone to want to do that, they'd need a payoff, providing incentive to generalize. Pursuing general intermediate language stuff might lead to old territory in things like ANDF (Architecture Neutral Distribution Format, see also wikipedia, comp.compilers and TenDRA). I remember trying to learn more about ANDF in the early 90's, after I was told it was an effort to pursue generic intermediate format. I suppose Java as IL mostly displaced every other similar effort for a while. Currently discussion of intermediate language must consider proprietary things like Microsoft's CIL.

I vaguely wish there was more abstract discussion of intermediate language, as opposed to highly concrete treatments of specific tech projects. It would just be interesting. (I tend to think of C as intermediate language, which is boring.)

The Transmeta wikipedia page says stealth mode in 1995, IPO in 2000, layoffs in 2007, bought out by Novafora in 2009. Intel notably holds a perpetual, non-exclusive license to all Transmeta patents and patent applications. Perhaps otherwise there might have been conflict in microcode generation tech in assembler decoding.

Ned: Are you designing an intermediate language?
Wil: No, thinking about delimiter alternatives for lexical clarity.
Ned: Boooring. For real fun, start an argument with Stu about macros.
Wil: Stu, are you okay? Your eyes look funny.
Stu: Please hold while I page in my hygienic macro flamewar module.
Wil: Gotta go, look at the time. See you guys.

Type systems for IL

I am interested in type systems for intermediate languages - for example, a type system that can have register types. I think of a compiler as a series of source-to-source transformations, with a single type system for all the intermediate representations.

Register types are dangerous

There is a good reason that they are generally used as hints to the compiler rather than hard constraints - the register allocation problem is significantly harder if you have vertex pre-colouring. The allocation problem on a graph without pre-coloured vertices is equivalent to graph-colouring, and most graphs that are encountered are relatively easy (they have a chordal property). If you pre-colour vertices (e.g. by marking values as register in the intermediate representation, or if your target ISA has fixed register assignments, like the multiply unit on x86) then two things happen:
* The chordal property disappears, so you can hit much harder sub-graphs while you are searching for an allocation.
* The allocation algorithm loses a degree of freedom in choosing when to split/coalesce.
The consequence of these two differences is that it is much more difficult to find a reasonably good allocation.
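To make the pre-colouring point concrete, here is a toy greedy colourer (my own construction, not taken from any real allocator) where a path graph that 2-colours fine becomes uncolourable once two vertices are pinned:

```python
# Hedged sketch: a toy greedy colourer showing how pre-coloured
# vertices can force extra colours (i.e. spills) on a graph that
# would otherwise colour with fewer.

def greedy_colour(graph, colours, precoloured=None):
    """graph: {vertex: set(neighbours)}; returns {vertex: colour} or None."""
    assignment = dict(precoloured or {})
    for v in graph:
        if v in assignment:
            continue
        used = {assignment[n] for n in graph[v] if n in assignment}
        free = [c for c in range(colours) if c not in used]
        if not free:
            return None          # would have to spill
        assignment[v] = free[0]
    return assignment

# Interference graph: a - b - c (a path; chordal, 2-colourable).
g = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}

print(greedy_colour(g, 2))                       # succeeds with 2 colours
print(greedy_colour(g, 2, {'a': 0, 'c': 1}))     # fails: b now needs a 3rd colour
```

The pinned version needs a third colour for the same graph, which is the degree of freedom the text says pre-colouring takes away.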

Using a single type system for all of the intermediate stages is enticing, but choosing the appropriate level of precision is difficult. If the type system is too imprecise then optimisation opportunities are lost within individual transformations. If the type system is too precise (overfitted) then the intermediate language becomes more complex, and that additional complexity gets dragged into every transformation that uses it.

Type-safe rewriting and constraints on types.

Register assignment is a compiler pass. The input will have no registers assigned, and the output will have registers assigned. The output will need a type system that can represent registers; the input will not use that feature. I expect both the input and the output of the stage to be well typed. I could use a different type system for the input and the output, but with many compiler stages this becomes prohibitively difficult: you lose the ability to directly compare input to output, and you cannot prove the tree-rewriting is type-safe, as if the input and output type systems are different then the rewrite is outside of any type system.

I hope to express the rewriting for register allocation in the type-system, hence if the rewrites are well typed, and the input is well typed, then the output will automatically be well typed, and does not need checking.

Although there is a single type system, it does not mean that it is valid to have register types in the input to the register allocation stage. There will be constraints on types, so I should be able to assert that the register allocator requires no register types to be used in its input, but this is only possible if 'register types' are defined in the type system so the constraint on types can be expressed.
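A minimal sketch of that idea, with invented names throughout: one type system contains both plain types and register-refined types, and the allocator's input constraint is just a predicate expressible over that single system:

```python
# Hedged sketch: a single type system that covers both sides of
# register allocation. 'Int' and 'InReg' are made-up type constructors;
# the allocator's precondition is a constraint stated within the system.

from dataclasses import dataclass

@dataclass(frozen=True)
class Int:            # an ordinary value type
    pass

@dataclass(frozen=True)
class InReg:          # the same value, refined with a register assignment
    base: object
    reg: str

def has_register_type(ty):
    return isinstance(ty, InReg)

def check_alloc_input(typed_ir):
    """The allocator's precondition: no register refinements yet."""
    return all(not has_register_type(t) for _, t in typed_ir)

ir_in  = [('x', Int()), ('y', Int())]                      # before allocation
ir_out = [('x', InReg(Int(), 'r0')), ('y', InReg(Int(), 'r1'))]  # after

assert check_alloc_input(ir_in)
assert not check_alloc_input(ir_out)
```

Because InReg lives in the same type system as Int, the "no register types in the input" rule is a statable constraint rather than something outside all type systems.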

I suspect Andrew is talking

I suspect Andrew is talking about optimization passes that may reorder blocks and thus change register assignments, ie. optimizations after register allocation. I think you can see how these would be more complicated if they had to preserve register typing properties as well.

You can simply say that all optimizations must be done prior to register allocation, but I suspect you'll lose some good optimizations with this constraint.

Actually I was not, sadly,

Actually I was not, sadly, as that is a much more interesting case than what I had in mind :)

The input will have no

The input will have no registers assigned, and the output will have registers assigned.

Then you are in a particularly nice place and I wish you best of luck in your stay :)

Quite often that constraint breaks down when it comes into contact with the development process for a compiler. Not always though, so if one is working in a scenario where it can be preserved then it does make life easier. One issue is forced pre-allocations because of the target ISA. x86 targets suffer from this, although ARM does not (IIRC). Sadly the other issue is having programmers as users. Sometimes they don't trust the allocator to find a good allocation, and they really want something that acts as a register keyword. It sounded as if you had this second issue in mind when you first mentioned register types, although I understand now that you are looking at a single atomic allocation process with clean inputs and outputs.

I hope to express the rewriting for register allocation in the type-system, hence if the rewrites are well typed, and the input is well typed

Hope is a noble trait that carries us through many adversities :) I'm curious about how many of the standard constraints in an allocator would make the translation to a type-system. Would it be a type-error if two values were allocated to the same register when their life-times overlap? How would spills and reloads be represented as parts of a value's type? Sounds like you have some interesting sub-problems.

as if the input and output type systems are different then the rewrite is outside of any type system.

As an unrelated aside: pulling this statement completely out of context for a moment - I wonder if that is actually true. What about the case where the two distinct type-systems can be composed into a larger type-system? Surely the rewrite would be well typed within the larger system.

Type System Composition

I would say then you have a single type system, and you are restricting the input to a subset of the total type system. The result would be the same as the single type system with an input constraint to the register allocation pass, except that with the single type system you could relax the constraint to allow register hints and pre-allocation to be handled in earlier passes.

Not full mobile code, just simple mobile scripts

I don't think you need full mobile code to achieve this, just mobile scripts in a simple pure functional language. For example, if you want your server to invoke endpoint A on the value "foo" and pass the result to endpoint B and so back to you, you access the script endpoint and transmit (b (a "foo")). The available functions would just be the endpoints, plus a few pure functions that don't need to be exposed as endpoints, like + and =.

This scheme does not require any sort of full interpreter of S-expressions at the server: I'm writing Lisp here because it's crisp, but JSON would also be reasonable (you probably want keyword arguments, and they are available in JSON for free), or even a small concatenative language. Plausible extensions, still pure and functional, would be if, begin/progn for executing commands, linear let (i.e. variables that can only be used once) and linear let-values if endpoints can return multiple results (Forthish languages don't need these). You want linear variables so that the results and inputs can be whole streams without having to copy them.
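A minimal sketch of such a server-side script evaluator, with made-up endpoints a and b: the script arrives as JSON-shaped nested lists, and the only callables are the endpoints plus a whitelist of pure helpers:

```python
# Hedged sketch: evaluating a tiny mobile script against a fixed table
# of server endpoints. The endpoint behaviours here are invented purely
# for illustration; a real server would dispatch to its HTTP handlers.

ENDPOINTS = {
    'a': lambda s: s + '-a',      # stand-ins for two server endpoints
    'b': lambda s: s + '-b',
    '+': lambda x, y: x + y,      # a pure helper, not a real endpoint
}

def evaluate(script):
    if not isinstance(script, list):
        return script                              # literal value
    op, *args = script
    return ENDPOINTS[op](*[evaluate(a) for a in args])

# The client ships (b (a "foo")) as JSON: ["b", ["a", "foo"]]
print(evaluate(['b', ['a', 'foo']]))               # foo-a-b
```

Note there is no general interpreter here: the only names the script can mention are the entries of the endpoint table, which is what keeps the scheme simple and safe.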

Much Weirdity

One of the weirder things I think about implementing is the ability to generate a program from halfway evaluated compiled code.

The idea is this: most evaluators abstractly can be seen as term rewrite systems on the source code. (This is mostly true for languages like Javascript, Lisp, and certainly true for Haskell.) Wouldn't it be nice if we could generate source code from the operational state of the program?

I.e. halfway through the evaluation of 'fac 6' the operational state of the program corresponds to, say, '6*5*4*3*2*1'.

Somehow it would be nice if we could halt the program there, and back translate the operational state to a program, and serialize that, for later or mobile evaluation.
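A sketch of that idea as a small-step rewriter on terms (my own toy encoding, not anyone's real evaluator): because each intermediate state is itself a term, halting after a few steps yields something you can serialise and later resume:

```python
# Hedged sketch: an evaluator as a term rewrite system on
# ('fac', n) / ('*', a, b) terms, so any halted state is a program.

def step(expr):
    """Perform rewrites on a term; ints are already values."""
    if isinstance(expr, int):
        return expr
    op, *args = expr
    if op == 'fac':
        n = args[0]
        return 1 if n == 0 else ('*', n, ('fac', n - 1))
    if op == '*':
        a, b = step(args[0]), step(args[1])
        if isinstance(a, int) and isinstance(b, int):
            return a * b
        return ('*', a, b)

state = ('fac', 6)
for _ in range(3):           # run a few steps, then halt
    state = step(state)
print(state)                 # a serialisable term like 6*5*4*(fac 3),
                             # not an opaque stack snapshot

while not isinstance(state, int):   # ...or resume it later
    state = step(state)
print(state)                 # 720
```

The halted state corresponds to the '6*5*4*...' picture above; resuming is just more rewriting, whether on the same machine or after shipping the term elsewhere.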

I doubt I'll implement it. Try/Catch blocks might be in the way at the moment.

Reverse the register mapping

If you freeze the execution to produce an instruction-pointer and a set of register values then it can be mapped back onto the SSA-form for an imperative source easily enough (assuming that you stored the register allocation at compilation time). This roughly corresponds to substituting constants for particular values on the graph and deleting all of their ancestors. Serialising that modified SSA-form back onto the original imperative source is quite easy.
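A sketch of that back-mapping with a made-up SSA encoding: freeze a register snapshot, substitute constants for the values those registers hold, and drop ancestors that are no longer reachable from the live computation:

```python
# Hedged sketch: mapping a frozen register snapshot back onto an SSA
# value graph. The graph encoding and names are invented for illustration.

# ssa: {value: (op, operands)}; allocation: {value: register}
ssa = {
    'v1': ('add', ['a', 'b']),
    'v2': ('mul', ['v1', 'c']),
    'v3': ('add', ['v2', 'd']),
}
allocation = {'v2': 'r3'}
registers = {'r3': 42}        # snapshot at the frozen instruction pointer

def substitute(ssa, allocation, registers):
    out = dict(ssa)
    for v, reg in allocation.items():
        if reg in registers:
            out[v] = ('const', [registers[reg]])   # value is now a constant
    live = set()
    def mark(v):                                   # keep only what the
        if v in live or v not in out:              # remaining computation
            return                                 # can still reach
        live.add(v)
        if out[v][0] != 'const':                   # a constant cuts off
            for o in out[v][1]:                    # its ancestors
                mark(o)
    mark('v3')                                     # v3: still being computed
    return {v: d for v, d in out.items() if v in live}

print(substitute(ssa, allocation, registers))
```

Here v2's register holds 42 at the freeze point, so v2 becomes a constant and its ancestor v1 is deleted; serialising the surviving graph back to source is then the easy part.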

I'm unfamiliar with the innards of functional compilers, but what would be the equivalent intermediate representation that you would need to project back onto?

I am doing something Utterly Simplistic

Registers? Hold your horses. I'll be happy when I've implemented a simple evaluator...

I am working towards a 0.1 release where I'll be interpreting combinator expressions instead of compiling them to assembly or bytecode. It's more important to me to get the functionality and innards of the interpreter right than to evaluate efficiently. So, you are thinking too far ahead; my goal is to have a working interpreter first and generate jitted code later.

The innards of a functional compiler based on combinator rewriting are very simple. (The catch: combinator rewriting is a nice model but slow by itself.) The aim of a simplistic functional compiler is to compile blocks of pattern-matching expressions.

So, say you have something like:

def length =
    [ nil       -> 0
    | cons x xx -> 1 + length xx ]

Then length corresponds to a combinator, which corresponds to a function that will evaluate one argument, pattern match against it, and either return a result directly or continue evaluation by pushing a number of arguments (either onto a stack, or into a collection of thunks). Typically, during the pattern match, registers are created and filled with the constituents of the data record matched against. So, in the example, registers are created for x and xx, which bind to the head and tail arguments and, if used in the result, are pushed onto a stack as arguments to other combinators.
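A sketch of what that combinator does at runtime, with the match "registers" and the pushed work made explicit (Python stands in for the interpreter; the ('cons', head, tail) / 'nil' encoding is my own):

```python
# Hedged sketch: 'length' run the way the text describes - evaluate one
# argument, match it, bind x/xx as registers, and push further work onto
# an explicit stack instead of recursing.

def length(lst):
    count, todo = 0, [lst]            # explicit work stack
    while todo:
        arg = todo.pop()              # evaluate one argument
        if arg == 'nil':              # [ nil -> 0 ]
            continue
        tag, x, xx = arg              # [ cons x xx -> 1 + length xx ]
        assert tag == 'cons'          # x, xx: registers from the match
        count += 1                    # the '1 +' part of the result
        todo.append(xx)               # push the tail as a pending argument
    return count

lst = ('cons', 'a', ('cons', 'b', ('cons', 'c', 'nil')))
print(length(lst))                    # 3
```

The stack here plays the role of the combinator machine's argument stack; swapping it for a heap of thunks gives the lazy variant mentioned above.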

Typically, the way towards an efficient functional compiler is to generate code for a (stack) machine, after which you can compile further and see whether arguments can be passed through registers instead of over the stack. But that's a five to ten man-year effort.

My aim is to compile the AST to an intermediate combinator language, and either evaluate directly or later generate jitted code for a rewrite. It is non-trivial to generate back from jitted code, but it should be trivial to compile the intermediate combinator expressions back to an AST. So, if I keep the intermediate combinator expression along with the jitted code, I can compile back.

And then the last task would be to see that a stack, or a collection of thunks (a thunk is a heapified stack frame), represents a half-way evaluated expression and can therefore be compiled back to an expression in the intermediate combinator language, and from there back to an AST.

It's doable. But I am not even halfway there.

New languages can mix with

New languages can mix with "old" languages in the same "runtime", aka:

High-performance language interoperability in multi-language runtimes
Author: Matthias Grimmer, Johannes Kepler University, Linz, Austria

Programs often consist of parts that are written in different languages because sub-problems lend themselves to being implemented in a particular language. However, multi-language programs often suffer from poor performance, complex cross-language interfaces, or insufficient flexibility.

We propose a novel approach for composing multiple language implementations in a seamless way. Foreign objects of one language can be used like regular objects in another language. Our interoperability mechanism targets language implementations that run on the same VM and have the same style of intermediate representation (IR), e.g., an abstract syntax tree (AST). For accessing foreign objects we generate foreign-language-specific IR patterns that we insert into the IR of the host application.

Thus we avoid converting or marshalling foreign objects at the language border. Our mechanism also allows the just-in-time compiler of the host VM to inline and optimize across language borders.

Right now you can mix Ruby, C, R, Python and JavaScript... plus, of course, Java.

re language interfaces

Mixed languages in one runtime is an older traditional topic, addressed by things like SOM (System Object Model) in the 90's, and more generally by each language targeting the ABI of another, or by a single language like C as a lingua franca.

A single-runtime solution applies less to the multi-process use-case noted, especially if you cannot alter a client process so its runtime does a new multi-language trick. You could build your own browser, or build your own emacs, both running new code linked within, so your custom runtime has new language shims. That's stacking the deck a bit. (If a magician can end any card trick by handing you a new deck pre-arranged to manifest the gimmick, whatever precedes that step was misdirection. Making a new binary do whatever you want is also stacking the deck.)

The use-case I have in mind is where you sit down at someone's desk, ask what they have installed, and connect to a server that sends whatever language is handled by installed apps. That's assuming a client able to run input code from outside. The permission and capability issue is resolved by a client user trusting a server involved. (In my case, the server is one started locally under a user's control as just another app, so trust is less an issue.)

I'm glad you want to point at related stuff though, where you have old source code you want to use. I was thinking of old executables you want to use without change, as opposed to their source code.