Some notes on Rust, the language.

Nobody seems to be saying much about Rust, or if they are, the LtU search can't find it. So I'm starting a Rust topic.

After some initial use of Mozilla's "Rust" language, a few comments. I'm assuming here some general familiarity with the language.

The ownership system, which is the big innovation, seems usable. The options are single ownership with compile-time checking, or multiple ownership with reference counts. The latter is available in both plain and concurrency-locked form, and you're not allowed to share unlocked data across threads. This part of the language seems to work reasonably well. Some data structures, such as trees with backlinks, do not map well to this model.
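
For readers who haven't tried Rust yet, here is a rough sketch of the three ownership modes described above, in current syntax (library names have shifted over time, so treat this as illustrative):

use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Single ownership: the vector moves from `a` to `b`; using `a`
    // afterwards is rejected at compile time.
    let a = vec![1, 2, 3];
    let b = a;
    // println!("{:?}", a);   // error: use of moved value `a`
    println!("{:?}", b);

    // Multiple ownership within one thread: plain reference counting.
    let shared = Rc::new(vec![1, 2, 3]);
    let also_shared = shared.clone();
    println!("{} {}", shared.len(), also_shared.len());

    // Multiple ownership across threads: atomic counting plus a lock.
    // An unlocked Rc can't be sent to another thread; the compiler refuses.
    let counter = Arc::new(Mutex::new(0));
    let handle = {
        let counter = counter.clone();
        thread::spawn(move || { *counter.lock().unwrap() += 1; })
    };
    handle.join().unwrap();
    println!("{}", *counter.lock().unwrap());
}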

There's a temptation to resort to the "unsafe" feature to bypass the single-ownership compile-time checking system. I've seen this in some programs ported from other languages. In particular, allocating a new object and returning a reference to it from a function is common in C++ but difficult in Rust, because the function doing the allocation doesn't know the expected lifetime of what it returns. It takes some advance planning to live within the single-ownership paradigm.
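
One way to live within those rules for the allocate-and-return pattern is to hand back an owned value (or a Box) instead of a reference, so ownership simply moves to the caller; lifetimes only show up once you return borrowed references. A minimal sketch (the Parser type is made up for illustration):

struct Parser {
    input: String,
}

// Returning an owned Box needs no lifetime annotation at all: ownership
// of the freshly allocated Parser simply moves to the caller, which then
// decides how long it lives (or wraps it in Rc/Arc for sharing).
fn make_parser(input: &str) -> Box<Parser> {
    Box::new(Parser { input: input.to_string() })
}

// Returning a *reference* is where lifetimes appear: the signature must
// say what the returned &str borrows from (here, the Parser itself).
fn first_line(parser: &Parser) -> &str {
    parser.input.lines().next().unwrap_or("")
}

fn main() {
    let parser = make_parser("hello\nworld");
    println!("{}", first_line(&parser));
}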

Rust's type system is more troublesome. It has most of the bells and whistles of C++'s, plus some new ones. It's clever, well thought out, sound, and bulky. Declarations are comparable in wordiness to C++. Rust has very powerful compile-time programming; there's a regular expression compiler that runs at compile time. I'm concerned that Rust is starting out at the cruft level it took C++ 20 years to achieve. I shudder to think of what things will be like once the Boost crowd discovers Rust.

The lack of exception handling in Rust forces program design into a form where many functions return "Result" or "Option", which are generic enumeration/variant record types. These must be instantiated with the actual return type. As a result, a rather high percentage of functions in Rust seem to involve generics. There are some rather tortured functional programming forms used to handle errors, such as ".and_then(lambda)". Doing N things in succession, each of which can generate an error, is either verbose (match statement) or obscure ("and_then()"). You get to pick. Or you can just use ".unwrap()", which extracts the value from an Ok or Some form and makes any failure fatal. It's tempting to over-use that. There's a macro called "try!(e)", which, if e evaluates to an Err value, returns from the enclosing function via a return you can't see in the source code. Such hidden returns are troubling.
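
For concreteness, here is one fallible operation written in the styles described above. The ConfigError type and the file-reading example are made up for illustration:

use std::fs;
use std::io;
use std::num::ParseIntError;

#[derive(Debug)]
enum ConfigError {
    Io(io::Error),
    Parse(ParseIntError),
}

// try! needs these conversions to wrap whatever error it hits.
impl From<io::Error> for ConfigError {
    fn from(e: io::Error) -> ConfigError { ConfigError::Io(e) }
}
impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> ConfigError { ConfigError::Parse(e) }
}

// Verbose but explicit: a match per fallible step.
fn port_verbose(path: &str) -> Result<u16, ConfigError> {
    let text = match fs::read_to_string(path) {
        Ok(t) => t,
        Err(e) => return Err(ConfigError::Io(e)),
    };
    match text.trim().parse::<u16>() {
        Ok(p) => Ok(p),
        Err(e) => Err(ConfigError::Parse(e)),
    }
}

// Compact but reads inside-out: the and_then style.
fn port_chained(path: &str) -> Result<u16, ConfigError> {
    fs::read_to_string(path)
        .map_err(ConfigError::Io)
        .and_then(|text| text.trim().parse::<u16>().map_err(ConfigError::Parse))
}

// Short, but the early return on Err is invisible at the call site;
// this is the hidden return complained about above. (Modern Rust
// spells the same thing with the `?` operator.)
fn port_try(path: &str) -> Result<u16, ConfigError> {
    let text = try!(fs::read_to_string(path));
    Ok(try!(text.trim().parse::<u16>()))
}

// unwrap(): fine in a prototype, turns any failure into a panic.
fn port_or_die(path: &str) -> u16 {
    fs::read_to_string(path).unwrap().trim().parse().unwrap()
}

fn main() {
    println!("{:?}", port_verbose("Config.toml"));
    println!("{:?}", port_chained("Config.toml"));
    println!("{:?}", port_try("Config.toml"));
    let _ = port_or_die; // referenced but not called, to avoid a panic here
}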

All lambdas are closures (this may change), and closures are not plain functions. They can only be passed to functions that accept suitable generic parameters, because the closure's lifetime has to be decided at compile time. Rust has to do a lot of things in somewhat painful ways because the underlying memory model is quite simple. This is one of those things that will confuse programmers coming from garbage-collected languages. Rust will catch their errors, and the compiler diagnostics are quite good. Rust may exceed the pain threshold of some programmers, though.
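
A small example of the closure point: a function that accepts a closure has to be generic over the closure's anonymous type via an Fn bound; a plain function-pointer parameter only works when nothing is captured. Sketch:

// Accepting a closure means being generic over its anonymous type.
fn apply_twice<F>(f: F, x: i32) -> i32
where
    F: Fn(i32) -> i32,
{
    f(f(x))
}

// A plain function pointer only fits closures that capture nothing.
fn apply_fn(f: fn(i32) -> i32, x: i32) -> i32 {
    f(f(x))
}

fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    let offset = 10;
    let add_offset = |x| x + offset; // captures `offset` from the environment

    println!("{}", apply_twice(add_offset, 1)); // 21
    println!("{}", apply_twice(add_one, 1));    // 3
    println!("{}", apply_fn(add_one, 1));       // 3
    // apply_fn(add_offset, 1) would not compile: a capturing closure
    // does not coerce to a plain `fn(i32) -> i32`.
}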

Despite the claims in the Rust pre-alpha announcement of language definition stability, the language changes enough every week or so to break existing programs. This is a classic Mozilla problem; that organization has a track record of deprecating old features in Firefox before the replacement works properly.

Despite all this, Rust is going to be a very important language, because it solves the three big problems of C/C++ that cause crashes and buffer overflows. The three big problems in C/C++ memory management are "How big is it?", "Who owns and deletes it?", and "Who locks it?". C/C++ deals with none of those problems effectively. Rust deals with all of them, without introducing garbage collection or extensive run-time processing. This is a significant advance.

I just hope the Rust crowd doesn't screw up. We need a language like this.

The usual type soundness question

Is there a formalization arguing that the Rust type system (or interesting fragments of it) is sound?

Too early

The language is still changing too rapidly to make this worthwhile.

Of course, you can argue that issues like type soundness should not be "bolted on". The Rust guys seem more concerned with pragmatism.

I argue.

I argue. Besides, having a better understanding of what things may be sound, or how to prove things sound, may actually *help* the language evolve, so "the language is changing" may be a point in favor instead of a point against. I know that formalizing things is hard and that it may fail, but I'm surprised by the absence of visibility of any attempt at it -- I'm also surprised not to see more people ask this question.

The Rust project does many things right (it's extremely tricky to coordinate a joint effort between a tight group inside a structure and a larger volunteer base, for example), but formal semantics design seems (from the outside) to be one of its few weak points.

To be fair, Niko Matsakis and Aaron Turon (and surely others that I haven't seen in action, I'm not very familiar with the Rust community) have been evolving the type system in careful ways, and I suspect they have some good internal intuition of what should be sound and what isn't.

The project seems to do the best it can *without* having a formalized (static and dynamic) semantics. There is still a lack of good examples here.

(A related "good example" is the effort of Richard Eisenberg to maintain a formal description of the static and dynamic semantics of GHC Core (Haskell). It does not contain a soudness proof, but those have been established for language fragments in various publications.)

There was a start at an

There was an attempt at formalizing a subset of Rust, but it is inactive. If a formalization occurs, it is more likely to be the result of someone from the academic community than an internal effort from the core contributors.

If formalizations came

If formalizations had to come first, it would severely constrain the design space of the language to what was conveniently formalized (even Haskell runs ahead of formalization a lot). Another problem might be that their large volunteer base doesn't include theory-oriented PL people; I think C# gets a lot of help from PL theory crowd in MSR in that regard. If you are volunteering, I'm sure they would welcome your efforts :)

Also, the typical role of formalization is postmortem: to "understand what we did" so "we can do better in the future" rather than to "inform on current design." For better or worse, of course.

Hm

If formalizations had to come first, it would severely constrain the design space of the language to what was conveniently formalized

I'm asking about the type system here, and I don't think it's absurd to restrict a type system to what you can formalize (of course you'll experiment with new features, but one could hope the gap between an experiment and its proof arguments to be on the order of months, not years or decades). Of course, to formalize a type system you also need a proper specification of the dynamic semantics. But I find that most languages whose dynamic semantics evolve in no-idea-how-to-formalize-yet territory are not statically typed in the first place.

(even Haskell runs ahead of formalization a lot)

Could you be more precise about what you are referring to here? I'm not very familiar with the internals of the GHC team (I read GHC Weekly news and research papers), but the examples of type system changes I can recall have been formalized, generally before they were considered for inclusion in the language (type-level literals by Adam Gundry, roles by Richard Eisenberg and Stephanie Weirich, and of course the various older work on GADT inference). I rather find GHC to be exemplary in maintaining a strongly-typed intermediate language with strong soundness arguments, so I'm interested in your contrary point of view.

(I initially planned to but chose not to discuss the "people" aspect here, because making assumptions about whether someone specific could or should do this or that makes me uncomfortable.)

Here is a paper circa 2000

Here is a paper circa 2000 (http://web.cecs.pdx.edu/~mpj/thih/thih.pdf), just read the first paragraph of the intro (or section 3.4 of the 2007 HOPL paper!). I would be surprised if everything was even up to date right now with respect to formalizations (ignoring simplifications, of course), but I don't know the internals of the team either. Anyway, your language doesn't exactly fall down without an up-to-date formalization.

Formalizations aren't exactly easy: a lot depends on existing formalizations that can be reused. For an entirely new language with a new type system, there isn't a Featherweight Rust yet. So why not years instead of months?

And really, there is only a handful of PL theory people who are capable of and interested in doing this kind of work. They seem to be largely separate from the PL design and/or implementer crowd as well (barring academic languages).

Haskell HOPL paper

From the HOPL paper (Section 3.4):

Subsequent papers describe a good part of Haskell, especially its type system (Faxen, 2002), but there is no one document that describes the whole thing. Why not? Certainly not because of a conscious choice by the Haskell Committee. Rather, it just never seemed to be the most urgent task. No one undertook the work, and in practice the language users and implementers seemed to manage perfectly well without it.

Indeed, in practice the static semantics of Haskell (i.e. the semantics of its type system) is where most of the complexity lies. The consequences of not having a formal static semantics is perhaps a challenge for compiler writers, and sometimes results in small differences between different compilers. But for the user, once a program type-checks, there is little concern about the static semantics, and little need to reason formally about it.

...

Nevertheless, we always found it a little hard to admit that a language as principled as Haskell aspires to be has no formal definition. But that is the fact of the matter, and it is not without its advantages. In particular, the absence of a formal language definition does allow the language to evolve more easily, because the costs of producing fully formal specifications of any proposed change are heavy, and by themselves discourage changes.

From this context, everything that Rust is doing looks perfectly fine and normal.

depressing

I wish we were evolved enough to know how to have our cake and eat it, too.

Do programmers ever care if a system they use is "formalized"?

And if not (obviously) then why do you?

Wrong question

It doesn't matter if programmers care. But as a client of one of their products, say, as a passenger on a plane, or connected to a live support machine, or trusting all my private life and riches to an encryption mechanism, I might care.

Uncomfortable

Are you really asking this question in good faith, or is this an ironic/cynical/rhetorical point you're making because you're familiar with the discussion on formalization but want to take it in other directions by questioning some specific assumption in the replies?

This sub-subthread (not the discussion with Sean, but raould's post and everything below it) constantly has me asking myself whether I'm being trolled, and whether it is worth spending time participating in a discussion that seems purposeless.

I thought the benefit of formalism in principle (and its cost, and thus the fact that a compromise exists) had been established for decades -- that we didn't have to argue for it anymore. Some LtU participants argue that the specific formalisms that have been developed could be replaced with something better going in another direction (e.g. John Shutt on type systems), and that's interesting. But it's quite different from suggesting that anyone would believe that "formalism is the worst thing that ever happened to programming", an idea I can't quite follow.

I'm a programmer, not an academic

There are relatively well designed languages and systems and poorly designed ones, and I don't think formalization had much of a role to play in making the good ones good. An intelligent designer who had experience in software engineering had that role.

The programming languages I use have mostly not been formalized. The ones that need formal descriptions need them because they've gotten too complex to understand, and frankly are possibly defective in that way. I don't want to try to figure out code written in C++ template metaprogramming, or any other hard-to-reason-about corner of the C++ language. It's complex for no good reason and the templates are unreadable for no good reason. And it's so complicated that each little clarification has only been worked out after all of the remaining compilers failed some test and had to be revised... And the reason there are so few compilers is that the language is too complicated to understand.

A straightforward language could accomplish more, as 2 days of macros in scheme or lisp beats 2 years of pounding away in C++ templates and recruiting the help of a room full of Boost library experts.

I don't want to figure out overly complex Haskell type-systems, I don't see the reason for such excessively meticulous typing when all I want is a program that does the job.

Formalisation keeps you honest

You don't formalise something when it gets too complex. You formalise something to avoid it getting too complex. Even "intelligent" designers tend to gloss over a lot of hidden complexity that only bites back later, once the system grows. Formalisation helps tremendously to avoid complexity because it forces the designer to realise, and stay honest about, all that complexity. Thus, design and formalisation work best when they go hand in hand. (Which isn't to say that formalisation can't get in the way, but there is a value to it.)

Implementation also works to

Implementation also works to keep things simple, but formalizations are quite similar to implementations in a differently constrained language.

I'm not aware of many PL designers that use formalization hand in hand with design; they do exist, especially when focus is on safety and verification.

On PL designers who use formalization hand in hand with design

True, there aren't many who dared to do this early on.

Bertrand Meyer and his language, Eiffel, come to mind (~ 1990).

(ISBN-10: 0134985109, 0132479257)

And yet he still got

And yet he still got polymorphic catcalling wrong.

We're still in the Stone Age of computer science...

... like Alan Kay says.

lower or middle

It'd be nice to think we're in the Middle Stone Age. We may be in the Lower. I don't see cave paintings yet.

How formalized and type safe are Alan Kay's systems?

From his style I suspect he'd be the first one to say that formalism is the worst thing that ever happened to programming.

He's currently trying to prove that you can write a complete computer system in what 1/1000 the usual space? 1/10000? I'll bet he does that by avoiding types and just doing the minimum it takes to get the algorithms working.

Formalization at VPRI?

VPRI publication list.

The work on KScript/KSWorld, the declarative GUI language, is embedded in Squeak, and I'm not aware of extensive formalizations of it; in the papers I've seen it is evaluated by demonstrating examples.

By contrast, the work on Babelsberg, an object-oriented constraint programming language, was formalized last year by its very designers, who report large benefits of formalization in clarifying and simplifying the language -- even though they have not published soundness proofs yet.

Developing a Formal Semantics for Babelsberg: A Step-by-Step Approach

Tim Felgentreff, Todd Millstein, Alan Borning, 2014

Babelsberg [4] is a family of object constraint languages, with current instances being Babelsberg/R (a Ruby extension), Babelsberg/JS [5] (a Javascript extension), and Babelsberg/S (a Squeak extension). The Babelsberg design has evolved alongside its implementations, driven in part by practical considerations about the expectations of programmers familiar with the underlying host language. This fact, along with the complexities of integrating objects, state, and constraints, have led to a number of the semantic choices being muddled with implementation specifics. There have also been a number of long-standing, confusing issues with respect to constraints and object identity, how to represent assignment, and the appropriate restrictions on expressions that define constraints. In an effort to understand these better and to provide a complete design that instances of Babelsberg can implement, we give here a formal operational semantics for Babelsberg.

[...]

We’ve found it helpful to approach the problem incrementally, first devising a formal semantics for a very simple constraint imperative language, and then building up to a full object constraint language.

[...]

Some significant clarifications and simplifications of Babelsberg that have come out of this work on formalizing the language are:

  • A clearer understanding of the interactions among constraints on identity, types, and values.
  • The addition of structural compatibility checks to tame the power of the solver with respect to changing object structure and type to satisfy constraints.
  • The addition of value classes. (Instances of value classes are immutable objects for which object identity is not significant.) Value classes play a key role in giving a clear specification of the requirements for expressions that define constraints (in particular that they must be side effect free), while still supporting useful programming idioms.
  • A set of restrictions on identity constraints that make it easier to reason about object identity and type. In particular, any change to the identity of the object referred to by a variable flows clearly from a single assignment statement, and is deterministic — there are never multiple correct solutions to the identity constraints. This also implies that any change to the type of a variable must similarly flow from a single assignment statement and be deterministic.

He claims to be using math

More specifically, his plan was to define reusable mathematical abstractions that would be reused so often as to reduce the amount of code duplication.

I think his point on formalisms would be that we haven't figured out the right formalism (and aren't going to do that so soon) — that's, AFAIU, Kay's argument for late binding.

Elements of Programming

All the stuff in Stepanov's Elements of Programming seems pretty well thought through and directly draws on the history of algorithms, which can be seen as a 3500+ year narrative. As such it's not that we haven't figured anything out, but that people seem to ignore what we have figured out, or do not like what has been figured out for some reason.

I think this kind of formalism is different to PL formalism. Arguments for late binding etc are not about the formal construction of the language itself (using type theory) but are about the formal construction of the math-libraries and builtins provided by the language. My comment about maths above, and the comment I am replying to appear to be about the latter, whereas the discussion so far seems to be about the former.

Rust is certainly normal,

Rust is certainly normal, but implementation work on Haskell seems formalized much more often (and in clear papers) than work on many other languages. Standard ML is more formalized than Haskell, but it seems an example of a formalization freezing a language (but we're looking forward to Successor ML).

Also, a good dynamic semantics of call-by-need still seems to be an active (and hard!) research area. At the very least, I'm guessing type soundness for Haskell can ignore the difference between call-by-value and call-by-need.

a posteriori not a priori

Formalizations come later, not sooner, and inform on design indirectly after the fact (fix bugs, inform future evolution). What I think gasche was looking for was up front formalization that informs design much sooner.

If Rust is successful, PL theorists will probably find it worthwhile to formalize parts of it (the interesting ones, at least).

Successful *type system*

I think the strong difference in our views comes from the kind of work we're most interested in. You explore innovative computational paradigms, *dynamic semantics*. Certainly in this area formalizing the statics can come later (one good example would be Oz/Mozart, which pushed the boundaries in many respects without a type system -- but it had a detailed formalization of the dynamic semantics).

But I don't think you can innovate with a new *type system* and formalize it only later. If it's not formalized, it's almost certainly unsound, and designing the language as if it was sound¹ without actually ensuring it is feels... sloppy, rather than innovative.

¹: some people purposefully define unsound type systems these days, but it's their responsibility to know precisely why and design the rest of the language accordingly.

To me this feels like saying "who cares whether my new P!=NP proof is actually correct in the details, the big ideas should be liberated first, actually checking the proof should come only later".

Funny that you mentioned

Funny that you mentioned it....

I am working on a new type system and will not formalize it up front (implementation is the only goal ATM). I totally get what you are saying, it does feel "sloppy" especially since I know nothing about its soundness. But soundness isn't the primary goal, programmer feedback and heavy inference are. And if it works, the demo will be really cool.

who cares whether my new P!=NP proof is actually correct in the details, the big ideas should be liberated first, actually checking the proof should come only later

If you do a proof, it should be correct. I don't write proofs, I don't think of types as proofs: that misses the entire point of providing the programmer with decent feedback without making them pay dearly for it.

Types are not proofs

Types are not proofs, they are theorems. Programs are proofs. For example:

f :: Int -> Int

Is the theorem that there exists some function called "f" that when passed an integer returns an integer, here is a proof by example:

f x = x + 1

Of course there are many other proofs for this particular theorem :-)

Got it. Types are supposed

Got it. Types are supposed to be parts of the proofs, or something like that :)

In the case of Rust...

...the memory model for the safe part of Rust is simple enough that I don't think there will be any hard-to-fix unsoundness in the design. It's just affine lambda calculus, plus borrowing to make living with linearity easier, plus ML-style effects, polymorphism and typeclasses. Nothing too fancy is going on here.

The interesting/hard part of Rust is the semantics of code in unsafe blocks. Basically any safe code can be called from an unsafe block, which means that they need to spec (a) the operational semantics of unsafe code, and (b) the intended semantics of Rust types in terms of the unsafe operational semantics, so that you can prove if unsafe code is okay.
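
A small example of the kind of proof obligation being described, in the spirit of the standard library's split_at_mut (this is a simplified sketch, not the real implementation): a safe function whose unsafe block is only correct because of an invariant the types themselves don't express.

use std::slice;

// A safe API whose implementation needs `unsafe`: hand out two
// non-overlapping mutable views of one slice. The borrow checker can't
// see that the halves are disjoint, so the function asserts the
// invariant (`mid <= len`) itself; the `unsafe` block is sound only
// because of that invariant. Making such obligations precise is exactly
// the specification problem described above.
fn split_halves<T>(xs: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
    let len = xs.len();
    assert!(mid <= len);
    let ptr = xs.as_mut_ptr();
    unsafe {
        (
            slice::from_raw_parts_mut(ptr, mid),
            slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4, 5];
    {
        let (left, right) = split_halves(&mut data, 2);
        left[0] = 10;
        right[0] = 30;
    }
    println!("{:?}", data); // [10, 2, 30, 4, 5]
}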

What makes this tricky is that since unsafe blocks are intended for implementing concurrent primitives, the semantics of unsafe code has to talk about weak memory models, but at a sufficient level of abstraction that both x86 and ARM are covered, and in a way compatible with what LLVM does. So basically you'll want to do Kripke logical relations over a weak memory model to give the semantics of Rust's types, to even figure out what the proof obligations of unsafe code are.

This would make a good PhD thesis!

Not in my experience

Formalizations come later, not sooner, and inform on design indirectly after the fact (fix bugs, inform future evolution).

It's been my experience that a formalization (of a kernel language) pretty much has to come first when exploring new evaluation mechanisms. Otherwise it's too easy to design evaluation mechanisms that work some of the time, but require too many ad-hoc decisions to complete the design. This is why (for example) there are so many different FRP libraries, none of which work quite right.

Wouldn't FRP be a bad

Wouldn't FRP be a bad example though, since it has been heavily formalized from the outset?

Parts of it

...parts of FRP have been formalized from the outset. But before it can be used for real, there are a lot of issues that must be fully addressed -- for example, causality, memory leaks, connections to existing GUI toolkits. Formalization has been very helpful in figuring out what techniques are and aren't necessary.

For example, most implementations of dynamic FRP have used dataflow engines to do dynamic dependency management, but I showed in my ICFP 2013 paper that this is not actually necessary, because every reactive program you actually want to write is causal, and hence only depends on old values. So you don't need a dataflow engine at all; simple memoization is enough. I couldn't have figured this out without the effort put into formalization.

So I see formalization and

So I see formalization and theory as being useful for improvement; no doubt about it. And you aren't going to have an optimal implementation/design on day one, but once it is out there and people find it useful, it will improve incrementally.

I don't think Conal's implementations of FRP ever used a dataflow engine, right?

I'm not sure

The model implementations in his papers usually haven't. However, he's also built production implementations which were integrated tightly into shipping systems --- eg, the DirectAnimation stuff in IE4(!) --- and I don't know the details of how he managed the integration of events and FRP. (I have to admit I was primarily thinking of systems like froc, FrTime and Scala.React, though, where the selling point is the ability to integrate with existing (usually callback-y) APIs.)

Anyway, I agree it's an iterative process, but honestly the interplay of formalization and implementation is deep enough that arguing about priority really seems like arguing about whether the chicken or egg comes first. I don't think formalization is purely post-hoc rationalization, since it applies powerful pressure to keep the design simple and comprehensible, but at the same time you can't really get a good design without some reasonably realistic programs to serve as a benchmark for whether the language is good enough.

In other words, Rust probably benefits an awful lot from Servo.

I thought direct animation

I thought DirectAnimation was inspired by, but not exactly, FRP. Then there is what Midori was doing before.

Formalizations usually come later due to ability and interest of those doing the work. It doesn't say much about the utility, just that we spend our time where we think the best bang for the buck is (whether that is true or not, it is subjective).

Implementations are nice since they do something. I find prototypes help me evaluate and refine ideas more effectively than working things out on paper (beyond sketching, the computer is right there dammit!), and I never got into Coq. But then there are people whose creativity works differently.

This sounds interesting, I want to learn it

I'm reading a KScript and KSWorld paper right now, but I've never read anything on FRP before, so I'm going to have to figure out exactly what they're talking about.

Since my current toy project has me digging into a simple system (Lua with wxWidgets) to try to make it more powerful (I'm writing a Lua-to-Lua translator that will add a few things I'm interested in, including full continuations), I wonder if the FRP ideas can be adapted to that library too.

I'll grant you

that if you're implementing something much less straightforward than applicative order evaluation you may need some mathematical analysis first. I still don't understand how the Curry language's "needed narrowing" works, for instance, but you want someone to prove that it works before you use it.

Hm

I personally suspect those papers were overly self-deprecating. In any case, the status of the formalization of Haskell's type system is much better than described here. I think it's wrong to say that Haskell evolves ahead of formalization "a lot". If they don't chime in to give their perspective, I'll ask one of the Haskell people when I have a chance.

From this context, everything that Rust is doing looks perfectly fine and normal.

Certainly, but I expected Rust to be over-achieving as it does in many aspects. I think Rust is a resounding success in terms of cooperation between the software industry (... Mozilla) and the free software community. It could also be a resounding success in terms of cooperation between those communities and academia (you are probably right to cite MSR as the success story here), but so far this has not fully panned out. I expect this will improve in the future, but asking about it seems the best way to know.

(They also got a fairly interesting distributed cooperative language design process, in a way I hadn't seen done before.)

Design Type System First

I currently think it's a good idea to start by designing the type system for a language, and I think in terms of type system features. Part of this is that I want to consider compilation as a sequence of type-safe AST morphisms, a typed version of a nanopass compiler. After all, the value-level language is just the lambda calculus, so everything interesting is in the type system :-) For example, if I want imperative features I need to allow monads of some kind, if I want direct access to memory I need linear types, etc.
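
A toy sketch of the "sequence of type-safe AST morphisms" idea, using nothing but plain Rust enums (the Surface/Core names are made up): each pass is a total function from one AST type to a smaller one, so the types themselves witness that the pass eliminated a construct.

// The surface AST has subtraction; the core AST does not. The type of
// `lower` is itself the guarantee that subtraction has been compiled away.
#[derive(Debug)]
enum Surface {
    Num(i64),
    Add(Box<Surface>, Box<Surface>),
    Sub(Box<Surface>, Box<Surface>),
}

#[derive(Debug)]
enum Core {
    Num(i64),
    Add(Box<Core>, Box<Core>),
    Neg(Box<Core>),
}

fn lower(e: Surface) -> Core {
    match e {
        Surface::Num(n) => Core::Num(n),
        Surface::Add(a, b) => Core::Add(Box::new(lower(*a)), Box::new(lower(*b))),
        Surface::Sub(a, b) => {
            Core::Add(Box::new(lower(*a)), Box::new(Core::Neg(Box::new(lower(*b)))))
        }
    }
}

fn main() {
    let e = Surface::Sub(Box::new(Surface::Num(7)), Box::new(Surface::Num(3)));
    println!("{:?}", lower(e)); // 7 - 3 becomes 7 + (-3)
}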

And maintaining formalizations?

How to maintain (mechanized) proofs still seems to be an embryonic research field (it seems to me Chlipala is arguably creating it).
Maintaining paper proofs seems even less understood — *if* you have a good intuition, you'll catch the cases needing redoing, but I find that a big if.

My only real experience is having maintained an Agda formalization while writing one paper. We didn't really formalize a complete paper proof, and we changed the modularization of the architecture midway (we have a formal proof with extension points). We refactored Agda code often — that's why I'll understand anybody who eschews such a task.

In fact, I believe paper proofs tend to share with Agda proofs a fundamental modularity problem: you need to inline definitions across module boundaries, so changing a definition requires changes at call sites (Conor McBride has worked on this problem). A non-leaky interface for automated tactics (à la Chlipala's crush) seems an even harder problem.

Ownership and Referencing

C++ has answers to those questions. I didn't realise there was a problem with 'sizeof' or calling 'size/length' on STL containers, and ownership and locking are nicely handled by RAII. unique_ptr and shared_ptr are for ownership. Plain pointers are now only used for a non-owning reference, like a backlink in a tree. Rust has weak pointers for this, so you should be able to encode those trees.
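
A minimal sketch of the backlink encoding in Rust, using Rc for child edges, Weak for the parent pointer (so the cycle doesn't leak), and RefCell for in-place mutation; the Node layout is just illustrative:

use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Children are owned through Rc; the parent backlink is Weak, so the
// parent<->child cycle doesn't keep nodes alive forever.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let root = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(Vec::new()),
    });
    root.children.borrow_mut().push(child.clone());

    // Following the backlink means upgrading the weak pointer, which
    // fails gracefully if the parent is already gone.
    if let Some(parent) = child.parent.borrow().upgrade() {
        println!("child {} has parent {}", child.value, parent.value);
    }
}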

In particular, allocating a

In particular, allocating a new object and returning a reference to it from a function is common in C++ but difficult in Rust, because the function doing the allocation doesn't know the expected lifetime of what it returns.

Couldn't the function just return the unique pointer, with a generic lifetime reference?

Such hidden returns are troubling.

How is that different from hidden returns coming from all the exceptions?

Is Rust for OS level programming?

I can't think of a use for such a low-level language other than OS programming, low-level library writing, and maybe game programming (just because it has to be fast).

For normal programming there's hardly anything OTHER than ancient primitive languages which can have buffer overrun problems, executing the stack problems etc.

For the most part I'm not interested in a language without a full gc. I'd rather deal with gc problems by having a good incremental, parallel gc, or even some attempt at a real-time-ish one. Or even an advanced one that can handle huge data sets by collecting long lived data in independent blocks in parallel with minor gc. Ie garbage collection is so hard that it's rarely done well, but that doesn't mean that it should be thrown out.

Most code is still in C

Most of the world's code is still in 'C', then 'C++'. Most people still care about performance, and garbage-collected languages are in the minority. You have to write the garbage collector in some language, so language development is still a target. There are many performance-critical areas, like financial programs, AI, etc. For example, when developing a computer-vision application on Android I had to use native 'C' as Java just wasn't fast enough, and it was worth it for the extra 'intelligence' I could get per frame of video. In a competitive market, having that bit of extra intelligence, responsiveness, etc. will always favour primitive languages, as you develop once, but the user experiences the application many times.

In a related point, when I suggested running Python on the ESA "OPS-Sat" satellite, I got some very strong responses back that garbage collected languages did not belong on satellites (even in the application layer) due to the unpredictability of runtime.

The problem with garbage collected languages....

Is that it's extremely hard to write garbage collectors in them.

You pretty much can't make a memory manager, of ANY kind, in a language that restricts your access to raw pointers for the sake of keeping its own garbage collection consistent. You wind up writing the grotty bits in assembly language or composing vectors of raw machine code to do memory management and then having your compiler spit them out by rote, because the semantics they represent are unrepresentable in the language the compiler is written in.

I don't like the idea of unrepresentable semantics. If there's something I know the machine can do, then a language that does not permit me to instruct the machine to do it is an incomplete language.

I don't object to garbage collection on the grounds of performance nor memory footprint. I use garbage-collected languages by preference in fact. I even *develop* a garbage-collected language! But at least to me they are now and always have been incomplete languages, and there are important things I cannot do without using a more complete but less convenient language like C.

By that definition...

...any language other than machine code would be "incomplete". Including C.

What you call "unrepresentable semantics" is simply abstraction, and as such a feature. There is no proper abstraction when you cannot restrict possible observations. A "complete" language by your definition, with no "unrepresentable semantics", is just a synonym for "leaky abstraction".

Leaky Abstractions for the win!

Exactly! And C is the absolute champion at having leaky abstractions!

I like abstractions, but I like abstractions that get the hell out of my way when I command them to.

Lisp has better abstractions. Common Lisp, in particular, has a few special constructions where you can command some of those to get out of your way if you're trying to do something lower level. But usually to get the degree of leakage you need for a memory manager you have to go to C or equivalent.

Leaky abstractions for the lose

There are at least three major reasons why you want a language that is a non-leaky abstraction: portability, security, and (high-level) optimisations. You could also add usability and modularity, in the sense of robustness, scalability, and debuggability. C is a disaster on almost all of these.

But even if you don't care, I don't understand how C can be considered the gold standard. There are many things you don't have access to in C, such as the stack, code addresses, the PC, registers, CPU flags, etc. All of these actually are fundamental problems for some use cases. For example, you cannot express tail calls, call/cc, exceptions, overflow checks, etc in C. (C-- tried to fix some of that).

At the same time, for most purposes, that's a good thing! I don't even want to start imagining the whole new classes of security problems we'd see in contemporary software with any of the above being broadly available. Pointer arithmetics is terrible enough. Let's be frank: programmers will use any dangerous feature you throw at them, justified or not.

The right tool for the job. General purpose languages should not give routine access to risky low-level details whose use is only justified in 0.01% of cases. There should be a very high barrier to entry.

Science and art

Seems to me we shouldn't try to make this an either-or between a general-purpose language that disallows things only occasionally wanted, versus a general-purpose language with leaky abstractions. You lose too much if you try to describe the criteria here with just probabilities of wanting access, and risks of access. When you want to break an abstraction, this suggests that the abstraction has the wrong shape. Finding a right shape could take decades. Both searching for the right shape, and living with the wrong one in the meantime, are artistic challenges.

maybe just need more oversight layers

C is the champion at leaky abstractions because you can see everything as an array of octets, then read and write any set of bits directly, modulo permissions. Often the bit patterns are what hardware and software use to simulate all the computation, so the validity of all computations is compromised by access.

(Sometimes this is good, when you can write a simulation that runs faster than the one you found. Practical difference between real and simulated is how fast it goes, at the cost of other side effects and risks, of course. Every abstraction leaks the nature of internal simulation when you can get the bits.)

you cannot express tail calls, call/cc, exceptions, overflow checks, etc in C.

You can express them, but C offers no support to help. I think you mean C won't understand it when you express them, because they are missing from C's abstraction, largely because the abstraction isn't very comprehensive. :-)

I sympathize with your viewpoint though. The rest of this is a story intended to terrify folks who suppose C can be very safe without a super-lint analyzer. This is a true story, from two or three months ago.

A C dev made a change adding a suffix to a population of string values (octet buffers containing null terminated C strings). It crashed in tests, inside free(), and I was asked what it meant. I explained that, so far, that has always been heap corruption in my experience. What were the changes? We went over everything. First I said, all these calls need a maximum size limit, by using strncpy() and snprintf() instead of unsafe versions. Then I said the size parameter should come from sizeof(fieldName) so the compiler knows the value. Most of the buffers were declared as fixed sized buffers in structs. In one spot, the compiler complained: it was a pointer allocated from malloc(), not a fixed sized buffer. The dev had written two octets past the original memory allocation's end. He wanted to know: is that bad?

I explained, if you were unlucky, the very next byte after the end of a block might be heap bookkeeping info, used by an allocator when a block is freed, so changing it crashes, as it did here. Any extra padding at the end is just luck or grace, and there might not be any. At this point I expected the face-palm gesture, but instead he didn't get it, and looked at me with suspicion instead. I gather it wasn't part of his mental model that everything in the runtime was just bytes laying around where you can whack them.

How?

you cannot express tail calls, call/cc, exceptions, overflow checks, etc in C.

You can express them,

Really? How?

but C offers no support to help. I think you mean C won't understand it when you express them, because they are missing from C's abstraction, largely because the abstraction isn't very comprehensive. :-)

I think I meant what I said. ;)

It's not 100% portable or

It's not 100% portable or standards-conforming, but sigfpe had an article on continuations in C, which works pretty well in practice. Exceptions via continuations follow trivially.

Tail calls can be achieved via trampolines. I don't know about overflow checks, except by declaring a new type that wraps integers and exposes only inline functions for its operations.
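
The trampoline idea itself is language-independent; here is a minimal sketch of it in Rust (the language this thread is about) rather than C, with made-up names: each step returns either a final value or a closure for the next step, and a driver loop runs them in constant stack space.

// Each step either finishes or hands back the next step as a closure;
// the driver loop runs the "tail calls" without growing the stack.
enum Step {
    Done(u64),
    More(Box<dyn FnOnce() -> Step>),
}

fn trampoline(mut step: Step) -> u64 {
    loop {
        match step {
            Step::Done(v) => return v,
            Step::More(next) => step = next(),
        }
    }
}

// A tail-recursive sum written against the trampoline: no matter how
// large `n` is, the native stack never grows.
fn sum_to(n: u64, acc: u64) -> Step {
    if n == 0 {
        Step::Done(acc)
    } else {
        Step::More(Box::new(move || sum_to(n - 1, acc + n)))
    }
}

fn main() {
    println!("{}", trampoline(sum_to(1_000_000, 0))); // 500000500000
}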

I wouldn't say C can "express" these, but you can certainly simulate them to some degree.

The one I'm impressed with

but which is only really usable for generated code, is what Chicken Scheme does. It's impressive because the generated code is so fast.

There are two papers "Cons should not cons" parts 1 and 2.

Basically it says: compile your program into continuation-passing style, but let the stack grow and use the stack as the garbage collector's nursery. That way it doesn't matter whether your compiler does tail-call elimination (which is sometimes slower anyway). Check the stack for getting near overflow, then empty it with a minor GC and a longjmp and start over.

LuaJIT always does tail calls correctly, so I'm writing an extended-Lua-to-Lua compiler that recasts some functions into continuation-passing form so that I can write non-deterministic programs.

Workarounds

Sure, but I hope we can agree that these qualify as (rather horrible) workarounds at best.

What if the transform to cps were hidden?

I think the idea of having a single system where part of the program is continuation-compatible and other parts use methods (like stack call and return) that aren't sounds surprisingly useable.

And it has the advantage that there is so much code, so many libraries, and so many languages that aren't compatible with efficient continuations.

There are limited uses for full continuations (especially if you have threads), say search, and not only does it make sense that only part of a program would need them, but it also makes sense that you'd want the extent of the continuations limited anyway.

This could be a model for a way to have other incompatible evaluations schemes in one program.

with explicit data structure modeling

I talk about continuations in C so much, some folks likely think I trapped you into asking. I'll answer several times, but you only need a first, short, abstract version. I wish Rust would do this, but they decided to bail on green thread runtimes. Most of my remarks are about control flow in C, but the "model explicitly" idea also applies to overflow.

Really? How? [express tail calls, call/cc, exceptions, overflow checks, etc in C.]

I'm used to the question being why instead of how; the motivation is scaling to very large numbers of async control flows, via fibers for lightweight process code simulating unix-style tools, without using snarls of funky callback functions. The next paragraph is the short answer to how:

You can tackle it abstractly, starting from a problem statement about how to model the desired semantics explicitly as a data structure. Then use your model instead, in terms of a new data structure written in C. It's cheating in so far as you sidestep the standard C runtime, but not cheating when you do it entirely in portable C.

You can replace the C stack with explicitly allocated activation frames in the heap, but it imposes a dependency on a green thread (fiber) runtime, so the resulting code can only be invoked by callers who use a cactus stack model. This can be done in 100% portable C, but might require replacing calls to anything that would conflict.

There's no need to use setjmp/longjmp. In fact, you can't even call the standard system versions of those any more, after changing to activation frames in heap-based cactus stacks. But they can be replaced with new analogous calls, where a jump buf represents a continuation in a cactus stack activation frame.

It actually helps that C doesn't want to let you directly interact with the C stack, because that prevents C code from having a dependency on it, so you can't break them when you do a whole program transform on code to use CPS (continuation passing style) with each call indirecting through a trampoline. You move all formal parameters and all local variables into a frame struct, with each variable declared exactly the same (modulo renaming), then rewrite variable usage to be relative to the frame pointer.

Despite being written in portable C, the CPS transformed version can unfortunately look bizarre when "calling" a nested function becomes a return to the trampoline so it can dispatch without the C stack getting any deeper, so there's nothing to unwind.

The remainder of this is one final answer, perhaps a bit long, using a visual construction based on a geometry metaphor. (My native cognitive style is oriented toward visualizing everything as spatial transformation, so this one makes sense to me, but not to other folks I told before.) First, imagine the entire C call tree as a graph, drawn on a piece of paper with main() at the top, with nested function calls going downward by convention.

Now you have a model of the C stack depicted in the XY plane. Then a CPS transform of this graph metaphorically rotates the C stack into another plane, say the XZ plane, while reserving the XY plane for the trampoline to use. A source code rewrite lifts graph nodes from the XZ plane and moves them into the XY plane managed by a green thread trampoline. We still draw the call graph in the XY plane, but now all edge transitions are done by trampoline.

If you call a C routine that has not been so rewritten, it uses the C stack in the XZ plane, so no change occurs to the call graph in the XY plane managed by trampoline. Calls on the XZ plane using the C stack are faster, so we would like to keep some of the call graph in the XZ plane, but only at the leaves. Anything that would require a non-local exit must be explicitly modeled in the XY plane. The C stack completely unwinds on each return to trampoline, so the XZ part for the C stack is always gone when transition in the XY graph occurs.

Because the XY call graph is heap-based in cactus-stack activation frames, you can jump anywhere in constant time without doing anything to the C stack, because the XZ part is empty. So you can implement anything needing such fine-grained control flow including tail calls, call/cc, exceptions, green thread spawning, and fiber context switches.

do a whole program transform

do a whole program transform on code to use CPS (continuation passing style) with each call indirecting through a trampoline. You move all formal parameters and all local variables into a frame struct, with each variable declared exactly the same (modulo renaming), then rewrite variable usage to be relative to the frame pointer.

Unless I've misunderstood something, this technique is well known and was first published by Baker in "cheney on the MTA". Your scheme only differs in that activation frames are heap allocated, and thus you don't require setjmp/longjmp to reset the stack after a minor GC.

Explaining your approach as a diff to a well known technique should save you some exposition in the future.

agree CPS is definitely old news, easy to grasp

I appreciate you supporting the idea by noting it is well known. I thought most PL folks were familiar with it, and that's why the question about how to do it surprised me. The point of my exposition was to make it obvious, using few words, so young language tinkerers can give it a go since it sounds cool. To compare it to anything else would have taken more words.

Did I suggest such ideas are novel? Visualization as planes might be odd, but has nothing to do with the scheme. I haven't read Baker's paper. (I don't care, and I think it's obvious.) I'm guessing your reaction must be due to lack of attribution, as if I care what anyone has written about a topic. I detest elitism, appeals to authority, and status-oriented motivations.

The only thing I've read on the topic is Appel's Compiling with Continuations, which I tend to note when I talk about the ideas a lot. That was pretty fun. Everything else can be figured out from first principles, after the basic idea of CPS is understood. I started in the industry at a time when it was normal for peers to work things out easily themselves. No big deal.

I'm guessing your reaction

I thought most PL folks were familiar with it, and that's why the question about how to do it surprised me.

Andreas wasn't asking how to use C as a target for a language that implements such constructs, because any Turing complete language will obviously do. He was asking how to "express" them in C, which typically means providing them for C programmers to use in a natural way.

I'm guessing your reaction must be due to lack of attribution, as if I care what anyone has written about a topic.

My reaction was that your post could have been 4 lines instead of 40 lines: take cheney on the MTA, heap-allocate all frames instead, use a trampoline for tail calls. I cited that paper to check that this is actually what you meant, but if you're not even going to peruse it, then there's really nowhere to go from there.

Attribution isn't an appeal to authority, it serves the same purpose as jargon: an efficient short-hand for technical discussions, but which actually provides the relevant background for those unfamiliar with it, who are actually interested in learning more. That's hardly elitist.

no sweat

I didn't have another language in mind targeting C, just C before and C after CPS transform, everything expressed using C. It's intended to be natural for C programmers when the source before was just C too. Source level debugging might show before and after simultaneously. As far as I'm concerned, this is "in C". Saying something cannot be expressed in C seemed too strong in this case.

You exaggerate how many lines that took. :-) If you drop comments on setjmp/longjmp and visualization, as well as the first short abstract answer, it's two short paragraphs. You're saying the other parts weren't useful, but of course I demur. We disagree strongly on what's efficient. I think making a point in a few clear paragraphs is better than citing papers, especially when I know so many quasi illiterate programmers.

For other folks, I thank you for the citation. Giving me homework is a sort of challenge in a game I don't care about. I read papers for coworkers, and to satisfy my own questions, but not to justify my own remarks that seem clear enough on their own. I have no objection if you keep doing it, but I expect to keep charging elitism in motivation, because subtext is that only folks versed in literature have good comments.

Sorry if that seemed confrontational in some way. My lack of interest in face tends to come out like that, but I don't mean anything by it.

I tried that in java once.

But I'd say I rather failed to make it natural for Java programmers :/ so I have a more or less failed project that embeds amb and Prolog in Java. I call it failed because I'd never want to use it or finish it - it works, I guess... But when I did the same in Scheme, horrible unreadable code became simple higher-order function calls, and that's not even dealing with the part that uses the continuations.

It might be easier in C than in Java, since in C you at least have macros, so you can get rid of some of the boilerplate. Such horrible boilerplate.

Or C++, where you could spend months getting lost trying to make templates do something, and then years rewriting them to work with Boost and Boost to work with them. Sorry, I can't resist the dig.

Citations

I for one find paper references useful because I can note them down if I am interested in the topic, download them, and read later. I don't want to disagree with any of your points, just wanted to give a reason why references to papers can be good. I wish all papers were free to access, but there seems to be a definite move in that direction which is good.

especially when I know so

especially when I know so many quasi illiterate programmers.

But do you know so many quasi-illiterate programmers here on LtU? Write to your audience!

but I expect to keep charging elitism in motivation, because subtext is that only folks versed in literature have good comments.

There is an opportunity cost to long comments, and people who would otherwise be interested in your topic will simply pass over yours. If you're ok with that side-effect of a more expository style, carry on!

need new "too short didn't read" acronym

Most LtU posters are quite literate. :-) Sometimes my audience is non-posting lurkers, when I wish an idea was more widely understood, say if it has untapped value.

I generally address readers with above average attention span, who might think before deciding what to do. If I'm only read by the 10% who can pay attention a moment, that's good. I often ignore things that are too short myself. I close web pages right away when they are clearly not about anything, because there's only 30 seconds of content, just enough to say "folks are talking about xyz and so am I, please share with friends".

To say anything worthwhile, some part must be slightly surprising, or else there's no information present in the Shannon sense. To even set this up requires an intro for context, then a point hard to say in ten words, because it's off the beaten path. Things really easy to say are mostly well known, with a me-too quality, so a reader need merely nod. I'd rather a slightly puzzled look than nodding. :-)

You're pretty good at packing a lot into a few words, so your posts tend to be shorter while keeping up a good level of content.

Continuation Passing C

An academic reference for those ideas would be Continuation-Passing C, by Gabriel Kerneis and Juliusz Chroboczek, which has been discussed on LtU before. Gabriel Kerneis's PhD thesis (PDF) has detailed correctness arguments. This was evaluated by writing a BitTorrent server (in the thesis), and by using it to reimplement QEMU's coroutine primitives (in a later publication).

Not "justified." The word you needed is "REQUIRED."

Things that ought not be used 99% of the time, and are not necessary 99.99% of the time, are still absolutely required to deal with that last 0.01% of the time. If you can't implement that last 0.01% within the boundaries of your abstractions, then making those abstractions leak-proof has NEGATIVE value.

And that means EVERY component of the system, including the memory manager, file system, process table, virtual memory manager, various resource managers, device drivers, I/O buffers, CPU scheduler, process scheduler, network drivers, interprocess communications buffers, pipes, sockets, kernel-mode security code, binary-mode mapped memory with write-through to file, and all the rest of those grotty underpinnings.

In the best case, your abstractions are perfectly designed. That means you can write all those things without resorting to any code whose meaning depends on things outside the language definition. That is a complete win.

In the second best case, you have some abstractions that are imperfect - ie, they prevent some such things from being implemented - but at least they leak. So you can write all those things by relying on undefined behavior or #ifdef-equivalents or otherwise escaping the control of the language. The implementations however, will be subject to "bit rot."

But in the third case -- all too common since making abstractions leak-proof is FAR easier than making them right by designing in a way that deals with all the cases -- you have imperfect abstractions AND no way to escape their limitations. In this case the language is incomplete.

Unless your abstractions are so perfectly designed that they don't prevent any necessary software from being written, then they had damned well better leak.

What many don't do, not what they can't do.

But even if you don't care, I don't understand how C can be considered the gold standard. There are many things you don't have access to in C, such as the stack, code addresses, the PC, registers, CPU flags, etc. All of these actually are fundamental problems for some use cases. For example, you cannot express tail calls, call/cc, exceptions, overflow checks, etc in C. (C-- tried to fix some of that).

None of these are things you can't do in C: It's just that in C, these are techniques rather than language-level utilities. Access to the stack, for example, is done via setjmp/longjmp. Access to the program counter is done via jump targets. Tail call optimizations are just syntax for loops and state-machine code you have to write directly. Exceptions are just syntax for remembering to check return values for validity every time. Closures are just structs that contain both a procedure pointer and the values of the arguments to the procedure. Objects are just structs that contain procedure pointers.

And so on. Everything that C "doesn't have" is just a standardized, pre-implemented way of doing something that you can also do directly. Usually these utilities are implemented with a bunch of assumptions that may not hold for a given program, extensions that the program doesn't need, and implementation choices that represent performance failures one can improve on in that program's context.
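
For concreteness, here is a minimal sketch of the "closure is just a struct" pattern. It's written in Rust rather than C to stay in this thread's language, and the names are made up for illustration; the C version has exactly the same shape, with a struct holding a function pointer and the captured values.

```rust
// A hand-rolled "closure": a struct carrying a procedure pointer plus the
// captured argument values, which is the technique described above.
struct AddClosure {
    captured: i32,                     // the captured environment
    call: fn(&AddClosure, i32) -> i32, // the "procedure pointer"
}

fn add_impl(env: &AddClosure, x: i32) -> i32 {
    env.captured + x
}

fn make_adder(n: i32) -> AddClosure {
    AddClosure { captured: n, call: add_impl }
}

fn main() {
    let add5 = make_adder(5);
    // Calling the "closure" means handing the struct to its own function pointer.
    println!("{}", (add5.call)(&add5, 3)); // prints 8
}
```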

So whenever it becomes important to address these failings, I wind up dropping to C.

Do its abstractions leak? Hell yes. Only about two-thirds of this stuff can be done without relying on undefined behavior. Reading the longjmp buffer as a pointer and dereferencing it with offsets to get at things on the stack, for example: nothing guarantees how the stack is laid out, so you have to experiment to discover how your compiler does it.

But the language would be incomplete without that leak. Maybe you can come up with a better-designed abstraction for access to variables on the stack and in the stack frames of other calls - but if you haven't, or if the abstraction you've come up with doesn't serve all cases, "fixing" that leak, while easy and tidy, leaves you with a less useful language.

Abstractions and abstraction leaks living together

The approach I'm leaning towards is that if any abstraction can leak against the recommendations of its author, then I'll see if I can design the language to account for that explicitly.

For instance, suppose the user is running the program in debug mode, and the reflection request is coming from the user at a debugger interface. If the language design doesn't describe how this kind of request works, debuggers like this will still be useful enough that users will seek and invent them, but they'll have to resort to extra-semantic workarounds.

In many cases, the system that guards abstractions is simply the language runtime. However, the responsibility can also be distributed across local runtimes on separate machines; it's not easy to reflect on one machine's state from another. Similar to physical isolation is temporal isolation: It's not possible to reflect on a pre-compilation artifact if it's already been compiled away, so some abstraction-guarding happens during compilation. So the hardware is a certain root of reflection responsibility, and whoever owns/manufactures/maintains each piece of hardware has control whether the authors like it or not.

On the other hand, due to the staggering inconvenience of vetting all installed code and crafting custom hardware or VMs, authors have a lot of control over their creations, whether the hardware owner likes it or not. This part of the reflection responsibility is naturally distributed to program authors, such that any author has expertise in their own creations but not much expertise in anyone else's. If we logically, cryptographically capture this kind of responsibility, then we can formalize the ways authors explicitly and accidentally share it. Since an author will have the ability to reflect on their own work, we can get the formalism and collaboration benefits of modularity without tying authors' hands.

I'm slowly building this in terms of code-signed modules and an explicit oracle so a program can make extra-semantic requests of its runtime host. My biggest conundrums are how to deal with some cryptographic details and what format to use for making these extra-semantic requests. :)

(Incidentally, for hardware control, I would like a language possibly leakier than C so that I can manage the behavior of caches.)

Abstractions don't pay the bills

I'm with Ray in that I consider any tool set that doesn't allow you to produce the machine code you want to be incomplete. Explicitly specifying memory layout, ownership or other such policies makes your code more fragile to change, but performance characteristics are important and it's often a worthwhile tradeoff.

Opposite

I consider any tool set that lets you produce whatever machine code you want to be terribly insecure. Don't we have enough broken code?

I would like to see more use of low-level systems languages that support high-level semantic models, such as ATS or the effort on Deca. But I'm not convinced we should be operating directly at the level of machine code abstraction.

I'd also like to see more tooling support for updating the compilers and supporting easy cross-compilation. I think the ability to compile specialized runtimes together with a cross-compiled application would allow higher level apps without nearly as many concerns about low-level representations (for FFI, system calls, hardware DMA, etc.).

Full spectrum

I consider any tool set that lets you produce whatever machine code you want to be terribly insecure.

Insecure against what? Security policies only make sense in context.

I agree we should almost never be writing machine code directly or even manipulating machine code programmatically. But the language should have a path for it. Yes, it's usually bad style. Yes, you should have an easy way to forbid it on a project. But I want a compiler suite that's open to advice from the programmer at all levels, including the lowest levels of machine code generation.

Insecure against what?

Insecure against what? Security policies only make sense in context.

David was talking about unrestricted machine code, which can violate nearly all security policies, even OS process boundaries given kernel vulnerabilities. I expect most here would agree that a language that lets you manually specify machine code would be great, as long as it's safe, in that it protects its own abstractions. For instance, such machine code can't perform arbitrary pointer arithmetic to inspect the internals of an abstract type.

Why not?

If I want to build an executable by compiling all code normally except using pointer arithmetic to inspect the internals of an abstract type, then why not?

The compiler is a tool for converting code into useful output. I doubt many here would find it controversial to have a compiler switch that disables memory safety in exchange for a 5% performance boost. The thing is, that compiler switch is part of the extended language used to interpret the program you wrote. All I want, really, is a better and more integrated compiler / build process.

Controversy is argument ad

Controversy is argument ad populum. Well, making a point of it is.

Writing insecure code isn't controversial. It's the common practice today. Like, the doctor who suggested that surgeons should wash their hands was laughed at, then years later died from infection after a surgery. Right now, it's the secure language designers who are considered crazy. And we ignore the malware and security bugs and the lack of nice features like mobile agents or client-extensible services as a matter of course.

I think compiler switches that affect program semantics are a major issue, since you can essentially be forced to use them based on libraries. For this reason, "optional GC" (an idea raised frequently) is a bad idea, at least in its normal form. Optional memory safety seems dubious to me, but I haven't given it enough thought.

Really?

You really think everyone should be forced to use garbage-collection? Seems a terrible idea to me.

Language based security is an illusion, as it can still be modified at the byte code level with the right tools. Proper security can only be done at the operating system level, and ideally by capability based systems.

Edit: Could we agree to a compromise where provably safe use of memory is exempt from garbage collection, with GC as a fallback if the type system cannot prove the use to be safe? In this way all RAII objects (where all memory associated with the object is released when the stack-based handle goes out of scope) could be exempt.
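
In Rust terms that split roughly already exists: values whose ownership the compiler can prove get plain RAII and are freed deterministically, and only values whose last release point can't be established statically need a shared-ownership fallback (reference counting today; a tracing GC under the compromise above). A minimal sketch of the division of labour, with made-up names:

```rust
use std::rc::Rc;

struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        println!("buffer of {} bytes released", self.data.len());
    }
}

fn provably_scoped() {
    // Single owner, lifetime provable at compile time: plain RAII,
    // freed deterministically when `b` goes out of scope. No GC needed.
    let b = Buffer { data: vec![0; 1024] };
    println!("using {} bytes", b.data.len());
} // <- drop runs here

fn shared_ownership() -> Rc<Buffer> {
    // Ownership is shared and the last release point isn't statically known:
    // this is the kind of value the proposed fallback (refcounting or GC)
    // would have to manage.
    let b = Rc::new(Buffer { data: vec![1; 16] });
    let alias = Rc::clone(&b);
    drop(alias);
    b
}

fn main() {
    provably_scoped();
    let shared = shared_ownership();
    println!("still alive with {} bytes", shared.data.len());
}
```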

I still would prefer to write the GC as a library, as it would seem necessary to be able to bootstrap the language whilst maintaining visibility of the GC code.

I still would prefer to

I still would prefer to write the GC as a library

I agree this would be nice, but at the moment we seem stuck with unpleasant hacks like Rust's Rc<T>, which assumes the references are acyclic (cycles simply leak) but has no way to enforce that constraint. This kind of undermines the whole 'safe systems language' story, and it is the only place where it's really no better than C/C++.
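
To make that concrete: nothing stops you from tying an `Rc` knot through interior mutability, and the cycle is simply never freed. A small sketch (entirely safe code, no `unsafe` involved):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: Option<Rc<RefCell<Node>>>,
}

impl Drop for Node {
    fn drop(&mut self) {
        println!("node dropped");
    }
}

fn main() {
    let a = Rc::new(RefCell::new(Node { next: None }));
    let b = Rc::new(RefCell::new(Node { next: Some(Rc::clone(&a)) }));
    // Close the cycle: a -> b -> a. Both strong counts are now 2.
    a.borrow_mut().next = Some(Rc::clone(&b));

    println!("a strong count = {}", Rc::strong_count(&a)); // 2
    println!("b strong count = {}", Rc::strong_count(&b)); // 2
    // When a and b go out of scope, the counts only fall to 1, so Drop
    // never runs and the nodes leak. The type system never objected;
    // breaking the cycle with Weak is purely a matter of convention.
}
```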

On a multiprocessor system

Edit: Boehm obviously doesn't rely on safe-points, but it relies on:
1) stopping all of the threads (including ones running foreign code) at the OS level and trusting that that flushes caches,
2) READING THE REGISTERS AND STACK from every stopped thread, and
3) OS-level dirty page bits.

None of those are optimal, which is why Java and .NET both use safe points, and .NET uses card-marking write barriers.

And none of those from either list are formalized into Rust, are they?

A GC requires safe-points in the generated code! Even if you're not doing a stop-and-collect collector, you still need to flush processor state at phase changes and the GC has to know it. And the only efficient way to do that involves taking some control over thread switches too, though in some operating systems it can be forced by pausing all of the threads even though you don't intend to stop them for longer than acknowledging a phase change (stopping them all is also a horrid thing to have to do, because the number of threads isn't necessarily bounded).

Also, because you need an ack from each thread, you have to specially handle calls that go outside generated code, much as green threading does. You need to put code in all of the mutexes too.

I doubt that you can transparently add safe-points to any existing language! Users would have to PUT SAFE POINTS IN.

So how do you write a gc as a library?

Not quite

Optional GC is bad, in a language, at least in its normal forms, i.e. a compiler switch or linking a conservative GC library. But that doesn't mean everyone should be forced to use GC. It just means that people who want GC should use a different language than those who don't want GC. That, or we need to find a better form to control GC than a compiler switch.

I have some ideas on the latter option. Basically, we'd need to control GC at the level of subprograms in a manner that allows subprograms with and without GC to easily be composed. Such approaches might involve DSLs, reliable compiler optimizations akin to SAFL, or potentially an application of modal logics.

If we can infer or annotate some values as having statically computable life spans, e.g. relative to a stack frame or some other condition, that might also prove useful.
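
Rust's lifetimes are one existing instance of that kind of inference: a value whose lifespan is provably bounded by a stack frame needs no collector at all, because the free is inserted at compile time. A trivial sketch:

```rust
// A value whose lifespan is statically known to end with the enclosing
// stack frame needs no collector; the compiler inserts the free.
fn sum_of_squares(n: u64) -> u64 {
    // `scratch` lives exactly as long as this frame.
    let scratch: Vec<u64> = (0..n).map(|i| i * i).collect();
    scratch.iter().sum()
} // <- scratch freed here, deterministically

fn main() {
    println!("{}", sum_of_squares(10)); // 285
}
```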

I still would prefer to write the GC as a library, as it would seem necessary to be able to bootstrap the language whilst maintaining visibility of the GC code.

It is quite possible to bootstrap GC'd languages. Smalltalk has done so since the beginning. It isn't clear to me how GC as a library would work, unless you mean it for use by a program that models the runtime or compiler.

Language based security is an illusion, as it can still be modified at the byte code level with the right tools.

If we used a secure JavaScript (e.g. Caja), we wouldn't need ad-hoc and dubious defenses like the same-origin policy. Mashups would be a lot easier and safer. There are a lot of code distribution scenarios where language based security is easily leveraged.

As far as modifying the byte code: a bytecode, if you choose to use one as the distribution medium, should also be secure. Full abstraction is valuable at least up to distribution. The bytecode should have the same security properties, one for one, as the original language that targets it.

The ability to attack at lower levels than the language (e.g. a backdoor in hardware, or the backdoor hidden in every mobile device) doesn't really diminish the utility of language security. It just suggests our lower levels would also benefit from a more secure computing model.

Bootstrapping smalltalk was a horrible horrible failure

It was done by writing a horribly naive Smalltalk-to-C compiler for a uselessly bad subset of Smalltalk that wasn't even well enough designed to allow a non-crippled version of Smalltalk to be written (don't tell me that the original Squeak VM was a good implementation). The translator was undocumented and stopped working soon enough through code rot.

All it accomplished was that some tiny, 1/10th finished version of Smalltalk could be debugged on a different, completed Smalltalk system.

What you could say about the Squeak project was that it started badly and languished with half-written, half-assed code forever.

Ok, that's too bitter.

It was OK for what looks like a kludge of a weekend or two. The result was portable because it was just trivially C in Smalltalk syntax, so some programmer who hated C could pretend that his code wasn't C... But it was all very simple, brain-dead syntax translation. The crippled Smalltalk couldn't even call real Smalltalk functions.

The rest of the bitterness is really because of the state of the actual Smalltalk libraries. Sigh.

But I tried to look at the bootstrap code 8 years ago or so. Nope, it can no longer run in Smalltalk.

Mist

It looks like there is a new project to bootstrap a Smalltalk variant, being developed in Pharo.

clueless question

I want my GC'd code to use non-GC-aware libraries. How should that work? And not be inefficient or likely to lead to sturm und drang?

Microsoft's answer was:

1) a GC that allows objects to be pinned (which is amazingly expensive for the whole system). I would have just allowed objects to be created in non-moving memory when necessary.
2) Marshaling for objects you don't want to pin - that means "copy them".
3) a language, C++/CLI, that has both regular pointers (for non-GC'd, pinned objects) and GC references. Yep, both. They use "*" to mean pointer and "^" to mean reference. That's all there is to it.

mixing GC modes

I don't know how mixing GC with non-GC 'should' work, except 'smoothly'. (:

In Haskell, we use a `ForeignPtr` type at the Haskell layer, which may be annotated with a destructor for GC purposes, and a `StablePtr` type if passing references to a Haskell object into the FFI code (which is not a common requirement).

In SpiderMonkey JavaScript, there are special calls on the C side to mark roots to prevent GC of values, and this is handled implicitly in many cases (e.g. the method call arrays). And IIRC, there are ways to wrap pointers to C objects with a JavaScript object and destructor so you can pass them into JavaScript.

In V8 JavaScript, I'm pretty sure that C++ RAII is used to manage roots automatically. I only used V8 briefly, so my impressions are rather vague.

At the language layer, if we have GC'd subprograms mixing with non-GC'd subprograms, that might happen at a much finer granularity than 'libraries'. There needs to be some way to allocate memory that is managed by the computation rather than by a separate GC process. (Though, this doesn't need to be exposed to the programmer.) Presumably, we could avoid a lot of indirection - i.e. all the wrapping/unwrapping of types - due to specialization of the runtime. But the basic ideas might be the same.
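
For comparison, the analogous pattern on the Rust side of an FFI boundary is to wrap the foreign allocation in an owning type whose destructor releases it, much like attaching a destructor to `ForeignPtr`. A hedged sketch; `std::alloc` stands in for the foreign allocator so the example actually runs, and with a real C library you'd call its alloc/free through `extern "C"` instead:

```rust
use std::alloc::{alloc, dealloc, Layout};

// Owning wrapper around a raw allocation obtained "from the other side".
struct ForeignBuf {
    ptr: *mut u8,
    layout: Layout,
}

impl ForeignBuf {
    fn new(size: usize) -> ForeignBuf {
        let layout = Layout::from_size_align(size, 1).expect("bad layout");
        let ptr = unsafe { alloc(layout) };
        assert!(!ptr.is_null(), "allocation failed");
        ForeignBuf { ptr, layout }
    }
}

impl Drop for ForeignBuf {
    // The destructor ties the foreign resource to this side's memory
    // discipline, whatever discipline produced the pointer.
    fn drop(&mut self) {
        unsafe { dealloc(self.ptr, self.layout) }
    }
}

fn main() {
    let buf = ForeignBuf::new(256);
    unsafe { *buf.ptr = 7 };
    println!("first byte = {}", unsafe { *buf.ptr });
} // <- buf dropped, "foreign" memory released
```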

V8 Dev here. In V8

V8 Dev here.

In V8 JavaScript, I'm pretty sure that C++ RAII is used to manage roots automatically. I only used V8 briefly, so my impressions are rather vague.

Yes and no. In V8 the C++ class Handle<T> allows one to create handles that are essentially an indirection to an object known to the GC. The constructor of Handle searches for the current HandleScope that is part of the thread-local store for the C++ thread. So, essentially dynamic scopes for handles. A handle is always allocated in the "current" handle scope. If no handle scope is present, you crash with an error. If a handle lives longer than its handle scope, all bets are off. Making global handles requires explicit operations. The constructor and destructor of the HandleScope class manage the stack of scopes through the TLS. You can allow at most one Handle to escape a scope into the outer scope by a CloseAndEscape(Handle) operation on the handle scope.

Since C++ templates are basically sugar for AST expansion, there is no variance, so Handle<B> is not a subtype of Handle<A>, even if B is a subtype of A. For that we have yet another implicit constructor that does a static cast. Somewhere in the constructor is a dead assignment between the two types to force the compiler to generate a type error if you try to break this. Hack.

Here C++ can really screw you (for instance, the order of evaluation is not always defined), since you do *not* want the C++ compiler to dereference a handle and cache the value just before invoking a potentially GC-causing call; the value is no longer valid after the call. We even have a special syntactic analysis to try to rule this out, but, Hack.

Thanks. I do remember

Thanks. I do remember handles and handle scopes, now that you mention them. I hadn't thought about how optimizers might mess with this.

Here are some things you should be able to do.

Memory safety *SHOULD* be optional. But the switch must not be at the whole-program level. Programs depend for their correctness on most memory having safety or GC. On the occasions when you don't want GC protection, it's because you're doing something unusual and you should be using a different primitive to allocate.

  • Allocating at an exact hardware memory location because this is a driver that mediates the kernel's access to some memory-mapped hardware.
  • Allocating a buffer that will contain or be managed by machine code that refers into it using offsets.
  • Allocating a communications buffer that will be read/written by two or more different programs.
  • Allocating a block to be managed by a non-default memory manager that has semantic or performance characteristics differing from the implementation's default.
  • Allocating the allocation table, or otherwise bootstrapping the memory management for the operating system itself.
  • Generating a routine that will be used *by* the memory manager when allocating a runtime-computed product type with fields at known offsets.
  • Mapping a buffer of bytes that has just been read from or is about to be written to a serial interface onto a value of a frozen type - "frozen" in this case meaning having an unvarying memory layout and therefore capable of having its fields at known offsets relative to that buffer's base pointer (sketched below).

These are all jobs that are different from the relatively trivial operation that GC is supposed to protect. And these are the jobs where standard GC gets in my way. When I want to turn off GC and manage something manually, it's NEVER program wide; it's only with respect to specific buffers where I'm doing something that falls outside its job.
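
For the last item in the list above (mapping a byte buffer onto a value of a frozen type), here is roughly what that looks like in Rust: a `#[repr(C)]` struct read out of raw bytes. The struct and fields are invented for illustration, and real code would also have to care about endianness and validation:

```rust
// "Frozen" layout: #[repr(C)] pins the field order and offsets,
// so the struct can be overlaid on bytes that came off the wire.
#[repr(C)]
struct PacketHeader {
    msg_type: u8,
    flags: u8,
    length: u16, // native byte order in this sketch
}

fn parse_header(buf: &[u8]) -> Option<PacketHeader> {
    if buf.len() < std::mem::size_of::<PacketHeader>() {
        return None;
    }
    // The unsafe read is exactly the kind of hole in the abstraction being
    // argued for above; it is sound only because the layout is frozen and
    // the length was checked. read_unaligned avoids alignment traps.
    let hdr = unsafe { (buf.as_ptr() as *const PacketHeader).read_unaligned() };
    Some(hdr)
}

fn main() {
    let wire = [0x01u8, 0x80, 0x10, 0x00, 0xde, 0xad];
    if let Some(h) = parse_header(&wire) {
        println!("type={} flags={:#x} length={}", h.msg_type, h.flags, h.length);
    }
}
```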

turning the app around

Today, an application typically has a very narrow type at the top level (such as `int main(int argc, char** argv)` or `public static void main(String[] args)`). Then our applications call out to highly sophisticated interfaces, e.g. in the form of FFI and system calls.

It is unconventional but not especially difficult to flip this around such that our applications have much more sophisticated types and interfaces at the top level, but do not directly utilize FFI or system calls. Instead, our applications might use algebraic or monadic effects, or an object capability model.

Doing this has a lot of advantages for composition, process control, security, and portability. It also introduces interesting possibilities where our compiler knows how to handle specific 'types' of applications for different contexts. E.g. you might use one toplevel type to compile a web app, another to compile a web server, and yet another to compile a unikernel. Creating 'type transformer' code would not be especially difficult when the application types are close enough (e.g. android app and web app).

Your observations about optional memory safety, primitive allocations, etc. seem to be founded on an assumption that we're using the conventional application model. How much of that is accidental complexity, a consequence of a badly designed, one-size-fits-all abstraction for applications?

In the alternative, any special requirements are just modeled at the application type. If an OS-layer component needs the ability to allocate for DMA hardware (for one example), we might model this as just another monadic or algebraic effect or object capability.
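
A rough sketch of what this could look like in Rust terms: the application is a function over an explicit capability it is handed, rather than a `main` that reaches out to ambient FFI and system calls. The names (`DmaAlloc`, `run_app`, `HeapHost`) are invented purely for illustration:

```rust
// The only way this "application" can get DMA-capable memory is through
// the capability object it is handed; there is no ambient allocator call.
trait DmaAlloc {
    fn alloc_dma(&mut self, bytes: usize) -> Vec<u8>; // stand-in for a real DMA buffer type
}

// The application's "top-level type": a function over its capabilities,
// rather than fn main() plus ambient FFI.
fn run_app(dma: &mut dyn DmaAlloc) -> usize {
    let buf = dma.alloc_dma(4096);
    buf.len()
}

// One possible host: hands out plain heap memory. A kernel-side host could
// hand out real DMA regions instead; run_app itself does not change.
struct HeapHost;

impl DmaAlloc for HeapHost {
    fn alloc_dma(&mut self, bytes: usize) -> Vec<u8> {
        vec![0; bytes]
    }
}

fn main() {
    let mut host = HeapHost;
    println!("app got {} bytes", run_app(&mut host));
}
```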

If I want to build an

If I want to build an executable by compiling all code normally except using pointer arithmetic to inspect the internals of an abstract type, then why not?

I don't know what the highlighted portion is supposed to mean exactly. The scenario I was talking about means module B can generate code that can inspect or modify the protected types of module A without any explicit permission grant, thus violating A's invariants.

So I can interpret that highlighted portion as, "the language protects its abstractions but allows you to choose how to compile those abstractions", which is fine, or I can interpret it as, "the language permits you to generate truly unrestricted machine code and allows you to violate any safety properties that would otherwise hold in your program", which I don't think is ok.

You can try to walk a middle path where the permission to generate unrestricted machine code is restricted by a capability, but even this seems unnecessary.

Insecure in common contexts,

Insecure in common contexts, e.g. developing reusable libraries or plugins or applications or device drivers. A hypervisor might be fine with arbitrary machine code.

I think advising a compiler is fine. That's different than injecting arbitrary interrupts and such because you can get whatever machine code you want. Advice shouldn't impact observable semantics. The nature of 'advice' is that it can be ignored, but it might make our lives easier if we follow it. I also wouldn't mind developing compiler plugins to implement the advice, though I'd like some way to formally validate the plugins.

The program is just one input to the compiler

To me, the better question with libraries or plug-ins or drivers is on the receiving end: how do I know this binary blob I just downloaded stays memory safe and plays nicely? Impose a mechanism to fix that problem (like targeting a VM/language subset that enforces those properties) and producing such an artifact becomes an external requirement. As things are, I'm only forced to live within the strictures of your language if I decide to use your language at all.

We might as well rely on my good judgment to use best practices since we have to rely on my judgment to use your language in the first place. Otherwise those strictures will be an argument against picking your language.

From your other post:

I think compiler switches that affect program semantics are a major issue, since you can essentially be forced to use them based on libraries. For this reason, "optional GC" is a bad idea.

I'm all for focusing on semantics and doing things in a clean way. I think it can be done with "optional GC", too, even preserving type safety. Maybe I should work through that example more before making the claim, though...

We can compile to different

We can compile to different targets - e.g. compile to C++, Forth, Java, JavaScript. But if some library calls out to x86 machine code or other targets, this becomes a lot more difficult. Treating a target as an 'external requirement' seems incompatible with a position that we should be able to generate whatever machine code we want.

How our target language handles issues like security, state, linking, types and so on will have a major impact on what kind of semantics we can actually have in our language, or at least what types we can compile. E.g. it's difficult to compile arbitrary functions from a language with ambient authority or FFI to a language without it, or a language with state aliasing to a language with only linear state, or a language with multi-methods and plugins to a language without global state.

To whatever extent you hope to reuse the same modules and algorithms for the different purposes, they must be compatible with the more stringent of the requirements for all the potential targets.

In many ways, designing the logical target for a language is the same as designing its semantics.

I'm only forced to live within the strictures of your language if I decide to use your language at all.

Outside of hobby projects, the choice of programming language is rarely a personal decision. It's tied up with what projects programmers are maintaining or extending, what services they're integrating, which massive or highly tuned libraries they're using, IDEs and toolsets and available compilers, etc. If you want to modify Emacs, you're going to write some e-lisp.

Whether you decide to use my language ultimately depends more on how successful my language is than any opinion you might have about its strictures. The same, of course, is true for every other language. Today, C and JavaScript are perhaps the most important languages that everyone is stuck using (with various levels of indirection).

Strictures won't make a language popular. They are, as you say, "an argument against picking a language". But strictures may enable new service opportunities that make a language successful, e.g. mobile agents and client-extensible services, improved code reuse, genetic programming.

Whatever tool chain you have

Whatever tool chain you have can be kept closed or made open to advice from the language. Code affecting lower layers of the tool chain is inherently less portable / more fragile than ordinary code, but as with all language design some approaches are much nicer than others. Opening up your tool chain doesn't preclude anything (no algorithms become inapplicable nor do any mobile agents, client-extensible services, etc.).

I don't see the increased fragility and decreased portability as reasons not to expose hooks into the compiler. Those costs should be obvious to the developer and there are similar costs associated to proving strong properties of your program. More pernicious are the pressure that this creates on the tool chain to stay the same and the costs of maintaining backward compatibility with the old tool chain APIs. This is why I think you have to view your role as not just opening up hooks into your compiler but as organizing those hooks into a well-factored low level language that won't need to change much. The competitor here is C.

Advice and Influence

Advice to compiler is fine.

Awelon language and bytecode does have such a feature, in the form of annotations. The concrete form of an annotation is `{&foo}`, where the `&` prefix indicates an annotation. As functions, annotations formally have identity semantics. But they might advise various layers (compiler, typechecker, runtime, etc.). E.g. `{&static}` might say that the argument should be computable statically, and `{&par}` might suggest its argument, a first-class function, should be computed in a separate thread when later applied. If you say `{&int16}`, a typechecker or compiler might try to squeeze a number into a sixteen-bit representation and guarantee it is an integer, and perhaps raise an error if it fails to do so. Or maybe I should use `{&type}` and separate the type descriptor, which should be statically computable, as another argument. I agree that developing a good set of annotations "that won't need to change much" is valuable, though fortunately it can be gradual.

But the nature of 'advice' is that it is discretionary. It can be ignored, or removed, and doing so should not affect a correct program's formally observable behavior. (An incorrect program might fail fast with annotations, e.g. on an assertion.)

This is different from the sort of inline assembly you might see in C/C++ code, where the assembly is a primary representation of a function's behavior. Advice offers no strong guarantees that you'll get the machine code you want, unless you also happen to control the compiler and the entire tool chain.

I don't see the increased fragility and decreased portability as reasons not to expose hooks into the compiler.

I do. I'd avoid hooks that significantly increase fragility or decrease portability. Fortunately, the discretionary nature of advice greatly mitigates any impact on these properties. You might have some fragile performance properties for code that depends on compiler-specific or target-specific advice, but at least the semantics would be robust.

Sometimes optional, sometimes not

unless you also happen to control the compiler and the entire tool chain.

If you acknowledge this exception, we're not too far apart. What I want is the ability to start a project mostly worried about abstractions and logical specification. Maybe I toss in a little advice on general memory layout early if it makes a big performance difference. Hopefully I can do this in a way that minimizes the fragility of these annotations and maintains automated type-safety guarantees. Here soft advice is probably more useful.

Then, probably late in the development cycle before shipping a product, I should be able to add more aggressive optimizations that may be somewhat fragile - significant code changes would probably break them. And maybe I've been able to prove them correct or maybe I haven't. That's up to me. This probably requires hard advice since the compiler had best do exactly what I say when it doesn't understand what's going on.

Hard advice

I don't mind hard advice in the "do this or fail fast" sense that you know common compilers acknowledge. E.g. if you note that a particular call had darn well better be tail call optimized, then it would be better if the compiler aborts with "I cannot do this" rather than silently ignoring your advice. Type annotations are another example where hard advice is useful.

But hard advice can still be removed without affecting formal behavior of a program. It is still discretionary.

You seem to be suggesting instead some ability to say "replace this function with my hand-optimized implementation, and don't look too closely at it!".

I could represent such annotations in my bytecode, e.g. provide `{&asm}` two arguments: a statically computable function and a statically computable string with assembly code, to tell a compiler that they should have identical semantics and to please use the assembly in place of the function. This might be improved by adding a proof term (which might simply be 'assume') or using a proof-carrying assembly.

I'm not convinced this is a good idea. But if applied for a specialized purpose, e.g. to support embedded or real-time code, or to get enough performance for your video game to compete, I might reluctantly accept it. I would prefer to eventually replace such cases with more generalized annotations meaning "this code should be suitable for FPGA" or "this loop should be executable in constant space" without insisting anything specifically about how to implement the code.

Idealism vs realism

I don't see the increased fragility and decreased portability as reasons not to expose hooks into the compiler. Those costs should be obvious to the developer

In an ideal world. The reality, though, is what John Carmack said in one of his talks (paraphrasing): every bad feature provided by a language will eventually end up in your code base. And potentially undermine all the care that went into the rest of it.

Idealism vs. realism

Or is the idealism believing that your compiler can do a good enough job of code generation without help from the programmer?

Anyway, this seems like something that can be solved with primitive tooling to enforce project policies (e.g. type safety circumvention requires permissions).

Programmers' hubris

Or is the idealism believing that your compiler can do a good enough job of code generation without help from the programmer?

There are quite a few things which compilers do far better than programmers, e.g. register allocation or inlining decisions, even memory management in most cases. Where programmers can actually affect these decisions, they mostly just get in the way.

For some reason, many programmers tend to underestimate complexity, and vastly overestimate their own ability to deal with it.

Anyway, this seems like something that can be solved with primitive tooling to enforce project policies

Again, in an ideal world, yes. But I rarely see this working out in practice.

Hubris would make a good PL name

There are quite a few things which compilers do far better than programmers

Sure, I agree. But:

- Not everything yet.

- If the programmer is expecting a fast implementation strategy, but is relying on the compiler to find it, he may accidentally write code that prevents the compiler from finding it, and the programmer might not understand why he's fallen off the fast path. It could be that the fast path optimization is no longer possible or the compiler has just become confused.

- There are reasons to specify parts of the program in a lower level language. Explaining how to deal with out-of-memory conditions or other low level failures requires thinking about low level representational issues. A principled approach to integrating two levels of abstraction in these situations is desirable.

Again, in an ideal world, yes. But I rarely see this working out in practice.

Well, you can't automatically identify bad code. The best you can do is try to eliminate obstacles that make bad code easier to write than good code.

DSLs for performance

There are reasons to specify parts of the program in a lower level language.

An alternative is to use DSLs or constrained subsets of a language.

This is especially useful if you want parts of the code to run on GPGPU or FPGA or similar, or wish to ensure a subprogram can be statically allocated, etc..

Why favor lower-level languages in particular? A constrained, declarative, high level language might do the job just as well or better, and will probably be more portable and maintainable.

Low level

Well, sure. By Alan Perlis' definition of "low level" you always want a high level language. I just mean a fragment of the language that allows explicit control and reasoning over issues like memory layout that are usually abstracted away.

I'm all for focusing on

I'm all for focusing on semantics and doing things in a clean way. I think it can be done with "optional GC", too, even preserving type safety.

Sure, this was one of the goals of intensional type analysis. They got simple copying collectors, but more sophisticated GCs and their nifty optimizations remain difficult.

Different approach

I glanced at that paper and I don't think it's very close to what I have in mind. They seem to be trying to assign semantics to functions in a way that makes them parametric in garbage collection strategy. As I've commented before, I don't like that approach. What I want to do is build a (largely inferred) representation of functions that operate on data in a particular layout in a particular context (including garbage collection policies).

They seem to be trying to

They seem to be trying to assign semantics to functions in a way that makes them parametric in garbage collection strategy.

It's a formal means of defining functions over type structure, so-called polytypic functions, which enables you to write your own type-preserving GC. Intensional type analysis is used in typed assembly languages for such purposes.

I'm not sure if this is specifically what you're looking for, as I don't recall a thorough discussion of your approach. If you could relate it to something that's been more thoroughly investigated, like some variant of region inference, that would be helpful.

To be honest, I don't really

To be honest, I don't really follow what they're doing. Polytypic functions translate the types, but what translates the code to use the translated types?

My approach is just exposing compilation in a nice way, through a mechanism to infer related values. It would be amenable to formal proof, with all of the complexity that would entail, but when I referred to type safety earlier, I meant that the clients of the garbage collector would enjoy type safety, not that I intended to prove that fact formally.

To be honest, I don't really

To be honest, I don't really follow what they're doing. Polytypic functions translate the types, but what translates the code to use the translated types?

Hmm, perhaps we're not using the same reference material. Stephanie Weirich has done quite a bit of work on this subject, so perhaps one of her later papers will get us on the same page. It showcases the typerec operator which performs a match on a type parameter, and provides a type-erasure semantics for compiling this down to ordinary terms that carry the relevant type information.

There is some overlap with type-indexed functions of various kinds, and most modern work on generic programming, etc. seems to focus on different approaches. Still, intensional type analysis was successfully used in the context of GC, so it seemed relevant to the topic (see [1], [2], and search for "tag-free garbage collection").

My question was about the polytypic garbage collector

Let's say I have a function foo: (Nat -> Nat) -> Nat and I want a garbage collected version of it. I could inspect the types to come up with a new type that the garbage collected function should satisfy, but how do I inspect the code of foo to translate the function itself? I believe I was reading your [2], and I couldn't figure out how that step was supposed to work.

but how do I inspect the

but how do I inspect the code of foo to translate the function itself?

Do you mean the closure, or the actual machine code of the function? The closure is represented as an existential package. But it sounds like you mean the machine code, which means you want to copy the function and relocate all of its internal symbols?

So the conversion of code

So the conversion of code into the required format happens at some meta level (by the compiler, one assumes) and is outside of the scope of the paper? That's what I was guessing. So that means that this isn't a general mechanism that one happens to be able to use to add garbage collection. The whole scheme is designed to support garbage collection.

All I want to do is expose compilation in a nice way. So you'd write a garbage collector by specifying how to transform expressions into a style that uses a garbage collector (types too).

So that means that this

So you'd write a garbage collector by specifying how to transform expressions into a style that uses a garbage collector (types too).

Having squinted at this a few times, I think you're saying that you want to be able to define (possibly global, unhygienic) AST-to-AST transformations, possibly from within the program itself? I don't think any existing language supports this level of metaprogramming, although you haven't described how you envision this working, i.e. via macros, compiler plugins like Rust's, etc.

Roughly

It's not AST transformations exactly. I have an extensible user-facing AST that shouldn't ever be inspected programmatically. Certain AST nodes (expression constructs) produce expressions, which is kind of the internal language used by the compiler. It's these expressions that are translated to related expressions during compilation. The transformations will be local, hygienic, and amenable to correctness proof in principle. So you would say "I need a representation of foo with these properties (garbage collecting, ...)" and the compiler would go find you one by matching rules against the expression of foo.

One cool thing is that this representation is itself a value that you can name and use. This allows you to do the abstraction level mixing that I mentioned in another comment. You can take memory unaware code, pick a memory aware representation for it, and then hook that up to some memory aware driver code that knows how to handle out-of-memory conditions gracefully.

wow,

Who wouldn't want that? It would be cool.

(In the meanwhile, I guess I have to look towards much more manual labor in setting up and using things like the Ravenbrook MPS stuff.)

You can't get ANY security that way.

Language-level security does not work.

Producing whatever machine code you want does not matter in a secure OS.

If any arbitrary sequence of code running in non-kernel mode exploits any security flaw, it is the operating system, and not any language system, that needs to be fixed.

One person with trained memory can walk around with the entire binary machine code for a few simple utilities in his head. In one spectacular demonstration, I once saw someone walk up to a Windows box and enter a complete 8086 assembler from memory, without using an editor, compiler, or any other software beyond the command line. Literally: he typed "echo", then wrote the bytes of an executable file directly from the command line, in raw binary, using the numeric keypad and the Alt key, then typed a '>' and a filename. And it ran.

If there exists a vector of machine code that can expose a security problem, you thank the man for pointing it out and FIX THE DAMN SECURITY PROBLEM, you don't dink around trying to cripple whatever software he used to produce the vector of machine code. 'Cause the next guy may be using nothing more complicated than 'echo.'

Language level security

Language level security won't protect you from attacks below the level of the language. But, for a lot of applications - web services, for example - that isn't the main vector of attack. Many attacks come from above the language level, leveraging language-layer vulnerabilities such as buffer overflows and undefined behavior. In other contexts, such as JavaScript web applications, there are also attacks from within the language layer, i.e. where we should have mutually distrustful code and enforceable security policies. Even when a module isn't malign, it may have vulnerabilities or bugs, so it's useful to reason about it as though it were not fully trustworthy. Language level security is very effective in any of these cases.

Language level security can work very well in a language designed for it, such as E language or F* (F-star). If language level security doesn't seem to work today, that's only because most popular languages aren't designed for it.

OS-level security is welcome, of course. But it's often less than solid. State is global and difficult to reason about. Communications between processes are difficult to track, comprehend, or debug. Isolation is nearly impossible due to covert timing channels or timing attacks. There is an endless stream of vulnerabilities in critical applications. Heartbleed, Shellshock, and Ghost are just a few that have made headlines over the last year. If there is such a thing as a secure operating system, it's about as popular today as secure programming languages.

OS-level security is less useful to programmers than language-layer security because composing multiple small OS-level processes is no longer a primary means of achieving functionality. We abandoned stdin-to-stdout as a primary composition model over thirty years ago. Plugins, libraries, and scripts are now the primary means of extension, and hence a common source of vulnerabilities and a potential vector for attack.

Re: problem with garbage collected languages

Is that it's extremely hard to write garbage collectors in them.

You pretty much can't make a memory manager, of ANY kind, in a language that restricts your access to raw pointers

I disagree with this assertion. It's easy to model memory management in a GC'd language. I frequently model temporal-frame-based memory models (for reactive systems) and less frequently region-based memory in GC'd languages.

What I get is a system where the language-layer GC will reliably collect frames or regions that the model says I'm finished with. It's a lot easier to mix and match memory models for different subprograms and transfer data between them if I have an underlying GC.

If you're speaking of implementing the language-layer GC, that's a language bootstrap issue. I grant bootstrapping a language runtime with GC is more complicated than bootstrapping a language runtime without GC. But I think the fundamental issues are the same.
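
As a sketch of what "modeling a region" means: the region is just an ordinary value in the host language, allocations are handles into it, and the whole thing is reclaimed at once by whatever the host provides (a GC there; ownership here, since the sketch is in Rust). The names are made up:

```rust
// Model a region as a plain value: allocations are indices into it, and
// the whole region is reclaimed by the host language when the model says
// we're done with it.
struct Region<T> {
    items: Vec<T>,
}

#[derive(Clone, Copy)]
struct Handle(usize);

impl<T> Region<T> {
    fn new() -> Region<T> {
        Region { items: Vec::new() }
    }
    fn alloc(&mut self, value: T) -> Handle {
        self.items.push(value);
        Handle(self.items.len() - 1)
    }
    fn get(&self, h: Handle) -> &T {
        &self.items[h.0]
    }
}

fn main() {
    let mut frame = Region::new();
    let a = frame.alloc("hello".to_string());
    let b = frame.alloc("world".to_string());
    println!("{} {}", frame.get(a), frame.get(b));
    drop(frame); // the whole "frame" is released at once
}
```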

Punch a hole

Well, you do have to punch a hole through the abstractions of the language to implement the GC. The question is how big of a hole?

The GC for Virgil is written in Virgil. For this we need these things:

1. Pointer type for raw access to memory.
2. Stack layout metadata including reference maps to find roots in the stack.
3. Heap metadata including reference maps for objects.

The surface area of the first one is pretty small. The sticking points are really 2 and 3, where the GC has to have close cooperation with the compiler/runtime system. I chose a pretty dense representation of this metadata to keep the binary space overhead small. That was quite tricky to get right but in the end keeps metadata size below 10% of binary size.

The Virgil GC does not allocate real objects on the heap, though it does use some preallocated objects. It's lower level code, so one has to be careful and use the new space as a queue like a typical Cheney collector and do pointer operations to determine what's a forwarding pointer vs. an object header, etc. But in debugging mode it's no problem at all to make nice printing and tracing code that uses strings and objects in the GC.

The nice part is that with just "Pointer" and these agreed upon data structures populated by the compiler, the GC is just Virgil source code. No C or assembly routines are needed, although the fast path of bump-pointer allocation is emitted inline by the compiler.
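
Not Virgil, but the bump-pointer fast path mentioned at the end looks roughly the same in any language with a raw pointer type. A single-threaded Rust sketch over a fixed byte arena, illustrative only (a real space would align its base address and hand the slow path to the collector):

```rust
// A minimal bump allocator: the "fast path" is a bounds check plus an add,
// which is why compilers like to emit it inline. Single-threaded, never
// frees individual objects; a collector would reset or evacuate the space.
struct BumpSpace {
    buf: Vec<u8>,
    next: usize,
}

impl BumpSpace {
    fn new(capacity: usize) -> BumpSpace {
        BumpSpace { buf: vec![0; capacity], next: 0 }
    }

    // `align` must be a nonzero power of two; alignment here is relative
    // to the buffer base, which a real space would also align.
    fn alloc(&mut self, size: usize, align: usize) -> Option<*mut u8> {
        let start = (self.next + align - 1) & !(align - 1); // round up
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None; // slow path: trigger a collection or grow the space
        }
        self.next = end;
        Some(unsafe { self.buf.as_mut_ptr().add(start) })
    }
}

fn main() {
    let mut space = BumpSpace::new(1024);
    let p = space.alloc(16, 8).expect("out of space");
    unsafe { *p = 42 };
    println!("allocated at {:?}, value {}", p, unsafe { *p });
}
```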

I like the idea

of a programming system that exposes everything - so that nothing is totally opaque.

I'm not with the people who say "don't allow programmers the ability to screw up" because in an organization where that's a problem I would never want to work anyway.

In my mind the problem with exposing low level details like the stack is that it makes it hard to analyze code and optimize it.

Maybe you have levels to the system, so in a sense the GC is at the level of the compiler and optimizer, but perhaps you find a way to make that level programmable too.

Counterexamples

T and Squeak both have garbage collectors written in the language (a non-allocating subset, obviously). In T you can have a hardware or software interrupt between any two generated machine instructions, including an out-of-memory event, without the possibility of corruption.

Jikes RVM does some cool

Jikes RVM does some cool things also in terms of Java-implemented GC.

Some previous mention

There was some previous discussion of Rust here, though it was fairly brief.

My interest has waned because of the scope deflation. Early on there was a significant emphasis on well-structured concurrency; I was excited to see how they implemented their chained failure ideas. Unfortunately, these goals have been largely abandoned (at least in the core language). Not to say they aren't producing an interesting or useful language, but it's no longer hitting use cases in my implementation domains.

If you have something that

If you have something that kind of works but is buggy, you can fix it. If you have nothing at all, then there is nothing to fix.

Once something has become a mess, it's very hard to tidy again

Once something has become chaotic, order is very hard to restore (for example, C++). Starting from a small, minimal state of order, and extending in an orderly way, is the only way in my experience to preserve order.

I find however that it often takes several attempts to understand the problem domain well enough to write a small ordered solution (or extension), and that you end up starting an ordered extension only for it to fall into chaos. Each time you start over, you get a bit further towards the goal, and with enough perseverance an ordered solution is generally possible, at least for every problem I have encountered so far.

Order is easy to restore;

Order is easy to restore; just do it again. Programming is a way of thinking; just because the artifacts live doesn't mean they are permanent. As long as you keep it honest, you can dive in deep into the unknown and still be somewhat productive. Besides, some experience has to feed into whatever theory arises.

Misleading Title

Yes, I did say that starting again and maintaining order from the start is the only way I have been able to maintain order, and you have not disagreed with that. However, the title "Order is easy to restore" is misleading, as it is not really restoring order, but starting again and consistently maintaining order.

Let me try to make myself clearer: to get an ordered system it is easier to restart from scratch than to try to refactor a chaotic system into an ordered state. With a language design, that would mean starting the language design again from scratch, so it would no longer be the same language, unless you only published the "ordered" core.

I would probably like to see things go like this, several chaotic language prototypes lead to an ordered published Language-v1.0. Then several chaotic prototype extensions lead to an ordered published Language-v2.0. If you ever publish something chaotic, and people start to use it, recovering order for that language is impossible without breaking existing applications, and annoying the user-base.