## Best successor to Scheme?

I do not mean to start a completely subjective navel-gazing thread. I mean to learn, "if I loved Scheme when I learned it (and I really did, back in college in 1989), what else could I look into in these modern times?" (apropos the other current Scheme thread.)

If you have thoughts, please mention something like: The language name, the language web site, the bullet point reasons you think it is at all related to where Scheme comes from spiritually, the bullet point reasons you think it is "better" than Scheme, the bullet point cons (if any ha ha!) in the language.

(Or tell me this is a very bad idea for a thread, and delete it...)

Wishing to simultaneously learn and yet avoid any rat-holing flame-wars. :-)

### Slim pickings

I spent a lot of time looking for a Scheme-like language that has the power of Scheme and suits me better. I didn't find one, which is why I'm on Lambda the Ultimate and working on my own language.

Our own John Shutt has an elegant language with more power than Scheme, but I have my doubts that there will be an efficient implementation of it, and I'm not a fan of typing in s-expressions either.

### Maybe I should be the one

to write an efficient Kernel compiler when I'm done with the interpreters and compilers I'm currently working on.
I almost know how to do it.

I have two insights and one open question:
1) Kernel allows you to do horribly confusing things, like changing the interpretation of a symbol in the middle of use - no one will do that.
2) Environments can be described by hash values. If you make a large enough hash, then you can leave out actually checking the environment and just trust the hash - in some astronomically unlikely case you'll crash the machine, so what?
3) The hard part is to get the mapping of variables figured out as early as possible, to simulate having a compile process; even a JIT one is earlier than is simple for Kernel.
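
Insight 2 can be sketched in Python (a toy model, not Kernel; the `Env` and `CachedLookup` names are hypothetical): each environment carries a large digest, and a "compiled" lookup trusts a matching digest instead of re-walking the chain, falling back to a full search on a miss.

```python
import hashlib

class Env:
    """Toy environment chain whose identity is summarized by a large hash."""
    def __init__(self, bindings, parent=None):
        self.bindings = dict(bindings)
        self.parent = parent
        h = hashlib.sha256()
        for name in sorted(self.bindings):
            h.update(name.encode())
        if parent is not None:
            h.update(parent.digest)
        self.digest = h.digest()  # 256 bits: collisions astronomically unlikely

    def lookup(self, name):
        env = self
        while env is not None:
            if name in env.bindings:
                return env.bindings[name]
            env = env.parent
        raise NameError(name)

class CachedLookup:
    """A 'compiled' variable reference: trust the hash, skip the search."""
    def __init__(self, name):
        self.name = name
        self.cached_digest = None
        self.cached_value = None

    def __call__(self, env):
        if env.digest == self.cached_digest:
            return self.cached_value       # fast path: no environment walk
        value = env.lookup(self.name)      # slow path: full chain search
        self.cached_digest, self.cached_value = env.digest, value
        return value
```

Here the digest only summarizes the shape of the chain; trusting a digest match instead of verifying the environment is exactly the "so what?" gamble described above.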

### no one will do that. As I

no one will do that.

As I was just remarking over on the other thread (yonder), the Kernel philosophy calls for empowering programmers by providing generality that will work when used in ways that are allowed but not anticipated.

The Kernel design is meant to allow stability of bindings to be proven in particular cases. The unanswered question being whether that facet of the design can be made to work as intended.

### No one has ever had a language where you COULD do that

because breaking that mapping between symbol and where it comes from on the fly is confusing and too inefficient.

So optimizing as if no one will ever do that will be a win.

I'm not saying leave out the super slow fallback if people do something weird. But they'll pay the price.

And the question isn't "will the language be used as intended" the question is "if it's not optimized, will the language EVER be used"

You didn't say the question is whether the feature will be USED as intended, but whether it can be optimized as intended.

### My question is

Is there a formalism you could use that will prevent people from making unstable mappings?

My own plan is to do fexprs where the mapping is immutable.

You set the mapping once, then you can't change it.

I've always disliked immutable variables, but this is immutable mappings in an environment. I've found my first case where I love immutability!

Even just being immutable is deciding as late as possible; I originally planned on making them so "you decide beforehand" in some sense. But that would break the spirit of Kernel.

REALLY late as possible would be:
1) Every time a symbol comes up it gets a mapping.
2) It is an error for the same instance of the same atom to get a different mapping next time it's evaluated, but it's legal for the same name/atom in the same s-expression to get a different mapping, as long as it's stable.
3) Question: does saving a continuation before the mappings are instantiated mean that they can be reinstantiated differently from that continuation?
4) All this is BAD in that it's confusing. A symbol should mean the same thing every time it's used in an expression or block of code. It should be stable across space as well as time.

AND if recursion is used for looping, then the mappings will never be stable in the loop.

I REALLY like the idea that you have a different formalism. Something equivalent to letting the fexpr insert a lambda that names the variables for their mapping.
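
The set-once plan above can be sketched in Python (the `WriteOnceEnv` class is hypothetical, not part of any fexpr implementation): a symbol's mapping may be set once, lazily; any later attempt to change it is an error, while shadowing in a child environment remains a new, legal mapping.

```python
class WriteOnceEnv:
    """Environment whose symbol -> value mappings freeze after first define."""
    def __init__(self, parent=None):
        self._bindings = {}
        self.parent = parent

    def define(self, name, value):
        # The mapping itself is immutable: a second define is an error.
        if name in self._bindings:
            raise RuntimeError(f"mapping for {name!r} is immutable")
        self._bindings[name] = value

    def lookup(self, name):
        env = self
        while env is not None:
            if name in env._bindings:
                return env._bindings[name]
            env = env.parent
        raise NameError(name)
```

Note that the values can still be mutable objects; only the symbol-to-value mapping is frozen, which is the distinction drawn above between immutable variables and immutable mappings.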

### How do you distinguish

How do you distinguish between bindings that are to be immutable and ones that are to be mutable?

Perhaps things would be clearer with a somewhat more concrete example. Consider the standard binding of list. That's a binding in the ground environment. It is, in fact, a stable binding, because it's in the ground environment.

When you mutate an environment (for example, via $define! or $set!, which are equi-powerful), you are altering its local bindings, with no effect on any ancestors of that environment. The ground environment is the parent of global environments; so even if you do change the binding of list in a global environment, the binding in the ground environment is untouched. And there is no way for a user program to acquire the ground environment itself as a first-class object, so there is no way for a user program to mutate the ground environment itself. Thus, no matter what you do to any global environment, the binding of list in the ground environment is stable, effectively immutable.

Now, suppose you're using a Kernel interpreter — a read-eval-print loop that evaluates expressions in a global environment — and you $define! an applicative f whose body looks up list. What binding does it access? Well, if you created f with a call to $lambda at the top level of the REPL, its static environment is the global environment of the REPL, and that global environment could always be altered later by another expression entered into the REPL; so f's statically visible bindings are not stable, even though the bindings of the static environment's parent, the ground environment, are stable. But suppose, instead, that you put the $lambda inside a call to $let-safe, like this:

### In that case

"Also, what if you have a combiner whose behavior is meant to depend on how some particular symbol is bound in the dynamic environment from which it's called?"

You mean it uses the outer environment?

That's still stable.

That's not pathological.

### e.g.

If I write

($define! f ($let-safe ()
  ($define! $lookup
    ($vau (s e) d (eval s (eval e d))))
  (wrap ($vau () e
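
The locality property described above (mutation alters only an environment's own frame, never its ancestors, and the ground environment is never handed out for mutation) can be modeled with a toy Python environment chain (hypothetical names, not Kernel's actual machinery):

```python
class Env:
    """Parent-chained environment; define mutates only the local frame."""
    def __init__(self, parent=None, **bindings):
        self.bindings = bindings
        self.parent = parent

    def define(self, name, value):     # like $define!: local frame only
        self.bindings[name] = value

    def lookup(self, name):
        env = self
        while env is not None:
            if name in env.bindings:
                return env.bindings[name]
            env = env.parent
        raise NameError(name)

# The "ground" environment is created once and no reference to it is
# handed to user code, so user code can shadow it but never mutate it.
ground = Env(list="primitive-list")
glob = Env(parent=ground)

glob.define("list", "user-list")       # shadows in the global frame
assert glob.lookup("list") == "user-list"
assert ground.lookup("list") == "primitive-list"   # ground untouched
```

A fresh global environment chained to the same ground still sees the stable primitive binding, which is what makes bindings in the ground environment provably stable.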

### Yes, I'm sure it's the same problem.

I'd bet a year's salary that we ran into the same kind of hygiene problems.

There are multiple 'sets' of operations that make a complete and general model of computation - but if you have more than one such set available, some of the features essential to one or more can turn out to have remarkably poor interaction with others. 'quote' turns out to be such a construct. It interacts very well with homoiconicity, lambda, apply, eval, and first-class procedures.

But not with fexprs, first-class environments, and lazy evaluation. But the things it interacts poorly with create a different general model of computation, rendering it unnecessary.

Very well put.

### Right..

I think evaluation semantics is a can of worms. I'm trying to avoid tying operational semantics to value semantics.

This doesn't make sense to me. You're talking about three different semantics. Operational semantics I get but what do you mean with the rest?

I personally decided I am going to drop quotation and opt for a solution with an explicit rewrite order. But I haven't yet figured out how I am going to do it.

### I was lumping in 'evaluation

I was lumping in 'evaluation semantics' with 'operational semantics'. I don't have an evaluation order defined for the semantics of my values and functions. During representation selection, a calling convention will be adopted that's compatible with the semantics of the function. For example, if the parameter to a function is a non-bottom Integer, then you can use call-by-value to pass that value.

### The meaning of an encoded

The meaning of an encoded program as a construct should be completely static (and thus, as bad as macros are, F-exprs look worse).

Wouldn't completely static meaning preclude stateful variables? The question of macros-versus-fexprs is more about hygiene, which is of concern even if all symbol bindings are immutable; and of course which of macros or fexprs is "better" on hygiene depends on various other design priorities.

### Thanks

I think I've had a misunderstanding of fexprs that you just caused me to find and clear up. Specifically, all fexprs in Kernel can in principle be evaluated at compile-time, correct? The only problem would be if separate compilation prevents you from understanding the environment in which something will be executed? I probably shouldn't have mentioned fexprs since my only knowledge of them comes by osmosis from listening to you, but hey, at least I provoked you to respond. I'll upgrade my assessment of fexprs to merely as bad as macros :).

Stateful variables can be modeled fine in my system. I'll try to post more this weekend for raould.

### Alas, not necessarily.

Alas, not necessarily. It is possible to write Kernel code such that at compile time it can't be known whether the combiner of a combination will be operative or applicative, therefore can't be known whether the operands of the combination will be evaluated. This is true even if environments are actually immutable. The simplest way of doing this is to simply specify an operator that you don't know the type of. The possible problems and possible solutions get a bit subtle on this point. Consider this definition:

($define! p ($lambda (f) (f ...big hairy operands...)))

Even if parameter value f is a combiner (rather than, say, an integer, which would cause an error), it could still be either applicative or operative. If f is operative, then ...big hairy operands... will be passed to that operative unevaluated. Which is probably not intended.

This is, of course, bad coding practice; if an argument is expected to be applicative one ought to apply it, so that passing an operative would cause an error: ($lambda (f) (apply f (list ...big hairy operands...))). One could imagine a number of approaches to minimizing this error, perhaps involving a compile-time warning (which one would then probably want to be able to spot-disable once the programmer has looked at the particular code and rendered a decision that yes, they really did intend to do this strange thing). It is perhaps worth mentioning, in this regard, that when a compound combiner is constructed, the operands to $vau are preserved as immutable copies — thus, ...big hairy operands... is immutable; so even if f does access it unevaluated, at least f can't change it.
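
The guarded-call idiom can be sketched in Python (toy `Operative`/`Applicative` classes and a `kernel_apply` function, all hypothetical names): an apply that refuses operatives turns the silent pass-operands-unevaluated behavior into a loud error.

```python
class Operative:
    """A combiner that would receive its operands unevaluated."""
    def __init__(self, fn):
        self.fn = fn

class Applicative:
    """A wrapped combiner: arguments are evaluated before the call."""
    def __init__(self, fn):
        self.fn = fn

def kernel_apply(combiner, args):
    # Like Kernel's apply: only applicatives pass, so an operative
    # handed in by mistake fails here instead of silently seeing
    # the raw operands.
    if not isinstance(combiner, Applicative):
        raise TypeError("apply expects an applicative")
    return combiner.fn(*args)
```

Wrapping every call through such a guard is the moral equivalent of writing ($lambda (f) (apply f (list ...))) instead of ($lambda (f) (f ...)).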

My concern over these hygiene considerations is generally limited to how best to minimize the likelihood of accidental bad hygiene, rather than how severely pathological bad hygiene can get. It's possible to write pathological code in any serious language, so if the worst possible is even worse with fexprs I'm not too fussed. There are situations where it's desirable to guarantee certain pathologies can't occur, and I do keep a weather eye out for possible future devices for that, such as guarded environments or first-class theorems; but in the meantime I simply accept that such guarantees are not within the design's current purview.

### Treachery and Doom

Okay, that's the understanding that I had previously. You're calling that hygiene, which ... this isn't quite the way I understand hygiene issues around macros. Can I view it as the same issue of unintended capture?

Is "container polymorphism" (ability to pass an operative and an applicative to the same kind of lambda) really useful for you? Why don't you have a separate namespace for operative variables?

### translation emancipation

I strove to clarify a bunch of these basic terms in my dissertation ("first-class value", "reify", "side-effect") but found "hygiene" especially slippery. What I came up with:

Hygiene is the systematic separation of interpretation concerns between different parts of a program, enabling the programmer (or meta-program) to make deductions about the meaning of a particular program element based on the source-code context in which it occurs.

(Fwiw, Kohlbecker et al. credit Barendregt for the term hygiene.)

The idea of a separate namespace for procedures is a key point of disagreement between Scheme and Common Lisp. Most often nowadays when someone refers to "Lisp 2" they probably mean "Lisp with a separate namespace for combiners". I oppose segregation; I don't think it really supports first-class-citizenship, and this is poetically the same principle that leads me to oppose the logical segregation of compile-time macros from the run-time system. I did introduce the $-prefix naming convention in Kernel to help reduce accidents, but stopped short of enforced constraints. Examples of applicatives that work on both applicatives and operatives arise early amongst the Kernel primitives — wrap and unwrap.

### S-T-R-E-T-C-H-I-N-G

"I did introduce the $-prefix"

I really wish you'd picked a character nearer to the home row that didn't need a shift key, like '/' or even better ';'

/lambda ;lambda lambda; or just plain lambda!

### I also used a '$' prefix, but for something else.

I used a '$' prefix for a separate class of entities called 'sigils'. Sigils are tokens that can't be bound to any value, and I use them as argument-class separators etc to eliminate a lot of non-evaluation parens from the language. Having a separate namespace but using tokens that do not syntactically differ just seemed like asking for trouble and complication.

When I did decide I needed a separate namespace (to call out dynamic as opposed to lexical scoping), I made it lexically distinct with *bracketing* *stars*, consistent with the way Scheme names its standard dynamically-scoped variables such as *current-input-port* etc.

### That's Common Lisp. Scheme

That's Common Lisp. Scheme has neither dynamic variables nor rabbit ears.

### Eh, whatever

It's within the lispy-language tradition, either way.

I was using both for a long-ish while; looks like now I'm getting them mixed up.

And I beg to differ about Scheme. The semantics of current-input-port, etc., are most definitely those of dynamic-environment variables.

### Almost

Actually current-input-port itself is an ordinary lexical variable, typically bound in the global environment. What it is bound to is a first-class object, a box with dynamic scope behavior. Check out the implementation of SRFI 39.

Unfortunately, the dynamic-bind procedure given there, which binds a list of arbitrary boxes to new values during the execution of an arbitrary thunk, isn't exposed by either SRFI 39 or R7RS-small, which provide only the macro version that specifies a body rather than a thunk. It's analogous to Common Lisp progv, but with boxes instead of variables. First-class dynamically bound boxes are a resource that we haven't yet begun to figure out how to exploit properly.
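
The idea can be sketched in Python (the `Parameter` and `dynamic_bind` names are hypothetical, modeled loosely on SRFI 39's parameter objects): each box keeps a stack of values, and the binder installs new values for the duration of a thunk, restoring them afterward.

```python
from contextlib import ExitStack, contextmanager

class Parameter:
    """A first-class box with dynamic-scope behavior, like SRFI 39."""
    def __init__(self, value):
        self._stack = [value]

    def __call__(self):
        return self._stack[-1]          # current dynamic value

    @contextmanager
    def bound(self, value):
        self._stack.append(value)
        try:
            yield
        finally:
            self._stack.pop()           # restored even on abnormal exit

def dynamic_bind(params, values, thunk):
    """Bind a list of arbitrary boxes to new values around a thunk."""
    with ExitStack() as stack:
        for p, v in zip(params, values):
            stack.enter_context(p.bound(v))
        return thunk()

current_input = Parameter("console")
assert dynamic_bind([current_input], ["file"], current_input) == "file"
assert current_input() == "console"   # rebinding was scoped to the thunk
```

The procedure form (taking a thunk) rather than the macro form (taking a body) is what makes the boxes a first-class resource: any list of boxes, assembled at run time, can be rebound at once.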

### Yes

http://lmeyerov.github.io/projects/socioplt/viz/rank.html#

### Randomly distributed

on the question "it's easy to tell at a glance what code in this language does"

Common Lisp is at the bottom of the list, but Scheme is at the top third point.

Which makes me think that very few people have used Scheme, and they're partisan.

That list (with some minor languages left out) goes
Python (good choice for ease of understanding)
C# (I would think there's some complexity there, but so popular it gets pushed up maybe)
Go (designed to be simple to read)
Java (designed to lack abstraction and be verbose)
Smalltalk (right on!)
Lua (great syntax, a few confusing constructs, but those are rarely used)
Objective C
Scala
O'caml
Ruby
Scheme
PHP
TCL
Fortran (I would have put at the top for easy to read, given what it's for)
Javascript
Delphi
C
R
...
at the bottom
C++
Common Lisp
Coq
Perl
Prolog
Forth

For "I enjoy playing with this language but would never use it for real code" Scheme is the number one answer, followed by Prolog, R and Common Lisp

### Trying to get back on track

"The language name, the language web site, the bullet point reasons you think it is at all related to where Scheme comes from spiritually, the bullet point reasons you think it is "better" than Scheme, the bullet point cons (if any ha ha!) in the language." Please look at these as examples and opine. I wish this could be a spreadsheet inlined that we could all edit. These are in no particular order, ok?

### saving on syntax

"Lua popular, has a JIT option / 'tables' are weird"

The great and horrible thing about Lua is that in order to simplify syntax they combined all data structures into one structure that leaks abstraction like mad.

So tables are lists and arrays and queues and hash tables and classes (with weird semantics so that add and compare are meaningfully extendable) and objects and proxies and to some extent environments. And among all these weird additions you lose the ability to copy a table except when you know what combination of things it really is and what the additions are.

But it sure gives you a simple syntax for building any data structure.
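
The table-as-everything conflation can be sketched in a toy Python model (the `LuaTable` class is hypothetical and greatly simplified): one structure plays map, array, and object, with a metatable's `__index` supplying class-like inheritance.

```python
class LuaTable:
    """Toy model of a Lua table: one structure as map, array, and object."""
    def __init__(self):
        self.fields = {}
        self.metatable = None

    def __setitem__(self, key, value):
        self.fields[key] = value

    def __getitem__(self, key):
        if key in self.fields:
            return self.fields[key]
        # Missing keys fall through to the metatable's __index entry,
        # which is how Lua builds classes out of plain tables.
        if self.metatable is not None and "__index" in self.metatable.fields:
            return self.metatable.fields["__index"][key]
        return None                     # Lua returns nil, not an error

# A "class" is just a table used as a method dictionary.
Point = LuaTable()
Point["describe"] = lambda self: f"({self['x']}, {self['y']})"

meta = LuaTable()
meta["__index"] = Point

p = LuaTable()
p.metatable = meta
p["x"], p["y"] = 3, 4
assert p["describe"](p) == "(3, 4)"
```

Copying such a table is exactly the problem described above: a correct copy depends on which roles the table is playing and on whatever behavior its metatable adds.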

### Depends what you want

In, say, 1990, Scheme served a variety of uses for different people. Depending on which of them you want, today there are different things to pick:

- If you want a core language to express simple computational ideas on paper in, then Scheme remains an excellent choice. For example, the Reasoned Schemer is written in Scheme, and it's a fine choice for it.

- If you want a Lisp with a sensible core design that you can use to get real work done in, there are several good choices -- I'd especially mention Racket and Clojure, depending on your needs and platforms, but Gambit, Guile, etc are also good.

- If you want a highly reflective language in which you can write useful programs, I recommend Ruby, or maybe JavaScript (but less so).

- If you want a simple PL to extend in your next academic paper, which your reviewers will have heard of, then JavaScript is your best choice.

- If you want an untyped language that everyone already teaches to undergraduates, then Python is for you.

- If you want a simple language to write an interpreter for, then Scheme continues to be a great choice.

However, if you want a language to play the role that Scheme played 20 years ago, then no such language exists, and won't exist no matter what people do -- the world has changed, rather than Scheme.

### xtlang

This relatively young language might be of interest here:

extempore and xtlang

### cool

Please expand upon that thought: the bullet point reasons you think it is at all related to where Scheme comes from spiritually, the bullet point reasons you think it is "better" than Scheme, the bullet point cons (if any ha ha!) in the language.

### axiomatic language

This relatively old (1982) language may also be of interest:

axiomatic language

spiritual similarities:
* (extremely) minimal
* s-expr syntax
* metalanguage capability

pros:
* pure specification language - one defines the external behavior of a program without giving an implementation algorithm
* completely declarative
* more general, in the sense that relations are more general than functions

con:
* implementation grand challenge - requires automatically transforming specifications to efficient algorithms

### I like that idea.

I really like that idea. But specifying *ONLY* external behavior at the top level is sort of a non-starter. Specifying the behavior at the level of routines, or even going down into the guts and specifying at the level of individual clauses, though, could be the "right thing." It would be a *lot* easier though - perhaps even feasible - to take the behavior specification and transform it into acceptance tests.

As for actually running programs, it would be the ultimate case of "The Language Is The Library." The users could make behavior specs, and the UI could tell them "oh, somebody already implemented that" and call in the appropriate library. Or not; and then when the user who specified it implements it (and the system proves it), the UI asks permission to put it into the library in case anyone else ever calls for that behavior spec (or one proven equal to it) ever again.

And the library grows (bloats!) incrementally via the magic of crowd-sourcing. And as it grows, people would gradually be able to specify behavior at a higher and higher level, until specifying only the external behavior will start to actually work in some cases.

You'd need background threads running at full speed under every UI to try to prove implementations against specs, or prove the equality of specs so that implementations of one could be used for the other. Things relevant to local projects would get first priority, of course, but the system would never let a cycle go by without working on something from somewhere. It would be a globally distributed compiler, running full speed on every user's system. Implementations could be proved by testing exhaustively in some cases, with cases distributed to thousands of users. Or if the testing is extensive but not yet exhaustive, you'd have a continually updated probability score that the implementation meets the spec, and some devs might choose to accept those library entries on the basis of six-nines probability or higher. To be replaced instantly by an upgrade if an implementation that is properly proven to comply with the spec becomes available.

It would be a worldwide, distributed, crowd-sourced compiler, running 24/7 and simultaneously on every user's code.

But 1982 wasn't ready for it, because it just plain wasn't possible then. Distributed libraries weren't possible without the internet, automated theorem proving wasn't ready for anything non-trivial, nobody had ever built a distributed application, and the compute power for proving things or attempting to test things as exhaustively as possible just plain didn't exist.

It's kind of an awesome idea, actually.

### You could take it further than that.

If you have behavior spec as a proof target, you might be able to stick some sort of automated implementation into the mix. Neural net eats the behavior spec, comes up with -- something -- and if it gets proven correct it gets added to the library (and the set of "positive" examples for the neural net's continuing self-training). Of course it would almost always be wrong. But worldwide, distributed.... it might be right enough of the time to matter.