Best successor to Scheme?

I do not mean to start a completely subjective navel-gazing thread. I mean to learn, "if I loved Scheme when I learned it (and I really did, back in college in 1989), what else could I look into in these modern times?" (apropos the other current Scheme thread.)

If you have thoughts, please mention something like: The language name, the language web site, the bullet point reasons you think it is at all related to where Scheme comes from spiritually, the bullet point reasons you think it is "better" than Scheme, the bullet point cons (if any ha ha!) in the language.

(Or tell me this is a very bad idea for a thread, and delete it...)

Wishing to simultaneously learn and yet avoid any rat-holing flame-wars. :-)


Slim pickings

I spent a lot of time looking for a Scheme-like language that has the power of Scheme and suits me better. I didn't find one, which is why I'm on Lambda the Ultimate and working on my own language.

Our own John Shutt has an elegant language with more power than Scheme, but I have my doubts that there will be an efficient implementation of it, and I'm not a fan of typing in s-expressions either.

Maybe I should be the one

to write an efficient Kernel compiler when I'm done with the interpreters and compilers I'm currently working on.
I almost know how to do it.

I have two insights and one open question:
1) Kernel allows you to do horribly confusing things, like changing the interpretation of a symbol in the middle of use - no one will do that.
2) Environments can be described by hash values. If you make a large enough hash, then you can leave out actually checking the environment and just trust the hash - in some astronomically unlikely case you'll crash the machine, so what? (A toy sketch of this appears below.)
3) The hard part is to get the mapping of variables figured out as early as possible, to simulate having a compile process; even a JIT compile decides earlier than comes naturally to Kernel.
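Here's a toy rendering of point 2 above, in plain Scheme rather than Kernel (all names invented for illustration): every environment gets a wide tag at creation, and compiled code caches resolved lookups keyed on (symbol . tag), skipping the slow walk whenever the tag matches. A counter stands in for the big hash, and the cache is only sound for bindings that stay stable, which is exactly the bet point 1 is making.

(define next-env-tag 0)

(define (make-env parent bindings)        ; bindings: alist of (symbol . value)
  (set! next-env-tag (+ next-env-tag 1))
  (list next-env-tag parent bindings))

(define (env-tag env) (car env))
(define (env-parent env) (cadr env))
(define (env-bindings env) (caddr env))

(define (slow-lookup env sym)             ; the fallback: walk the chain
  (cond ((not env) (error "unbound" sym))
        ((assq sym (env-bindings env)) => cdr)
        (else (slow-lookup (env-parent env) sym))))

(define lookup-cache '())                 ; ((symbol . tag) . value) entries

(define (fast-lookup env sym)             ; trust the tag, fall back once
  (let ((key (cons sym (env-tag env))))
    (cond ((assoc key lookup-cache) => cdr)
          (else (let ((v (slow-lookup env sym)))
                  (set! lookup-cache (cons (cons key v) lookup-cache))
                  v)))))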

no one will do that. As I

no one will do that.

As I was just remarking over on the other thread (yonder), the Kernel philosophy calls for empowering programmers by providing generality that will work when used in ways that are allowed but not anticipated.

The Kernel design is meant to allow stability of bindings to be proven in particular cases. The unanswered question is whether that facet of the design can be made to work as intended.

No one has ever had a language where you COULD do that

because breaking that mapping between symbol and where it comes from on the fly is confusing and too inefficient.

So optimizing as if no one will ever do that will be a win.

I'm not saying leave out the super slow fallback if people do something weird. But they'll pay the price.

And the question isn't "will the language be used as intended"; the question is "if it's not optimized, will the language EVER be used?"

oops misread

You didn't say the question is whether the feature will be USED as intended but whether it can be optimized as intended.

My question is

Is there a formalism you could use that will prevent people from making unstable mappings?

My own plan is to do fexprs where the mapping is immutable.

You set the mapping once, then you can't change it.

I've always disliked immutable variables, but this is immutable mappings in an environment. I've found my first case where I love immutability!
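To make that concrete, here's a tiny sketch in plain Scheme (invented names, nothing Kernel-specific) of an environment whose symbol-to-binding mapping can be set once and never changed, so every later lookup of a symbol is stable by construction:

(define (make-bind-once-env)
  (let ((bindings '()))
    (define (bind! sym val)
      (if (assq sym bindings)
          (error "mapping already fixed for" sym)
          (set! bindings (cons (cons sym val) bindings))))
    (define (lookup sym)
      (cond ((assq sym bindings) => cdr)
            (else (error "unbound" sym))))
    (lambda (msg . args)
      (case msg
        ((bind!)  (apply bind! args))
        ((lookup) (apply lookup args))))))

; (define e (make-bind-once-env))
; (e 'bind! 'x 1)
; (e 'lookup 'x)     ; => 1
; (e 'bind! 'x 2)    ; => error: the mapping is immutable once set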

Even just being immutable is still deciding as late as possible; I originally planned on making them so that "you decide beforehand" in some sense, but that would break the spirit of Kernel.

REALLY late as possible would be:
1) every time a symbol comes up it gets a mapping
2) it is an error for the same instance of the same atom to get a different mapping the next time it's evaluated, but it's legal for the same name/atom in the same s-expression to get a different mapping as long as it's stable
3) question: does saving a continuation before the mappings are instantiated mean that they can be reinstantiated differently from that continuation?
4) all this is BAD in that it's confusing. A symbol should mean the same thing every time it's used in an expression or block of code. It should be stable across space as well as time.

AND if recursion is used for looping, then the mappings will never be stable in the loop.

I REALLY like the idea that you have a different formalism. Something equivalent to letting the fexpr insert a lambda that names the variables for their mapping.

How do you distinguish

How do you distinguish between bindings that are to be immutable and ones that are to be mutable?

Perhaps things would be clearer with a somewhat more concrete example.

Consider the standard binding of list. That's a binding in the ground environment. It is, in fact, a stable binding, because it's in the ground environment. When you mutate an environment (for example, via $define! or $set!, which are equi-powerful), you are altering its local bindings, with no effect on any ancestors of that environment. The ground environment is the parent of global environments; so even if you do change the binding of list in a global environment, the binding in the ground environment is untouched. And there is no way for a user program to acquire the ground environment itself as a first-class object, so there is no way for a user program to mutate the ground environment itself. Thus, no matter what you do to any global environment, the binding of list in the ground environment is stable, effectively immutable.

Now, suppose you're using a Kernel interpreter — a read-eval-print loop that evaluates expressions in a global environment — and you $define! an applicative f whose body looks up list. What binding does it access? Well, if you created f with a call to $lambda at the top level of the REPL, its static environment is the global environment of the REPL, and that global environment could always be altered later by another expression entered into the REPL; so f's statically visible bindings are not stable, even though the bindings of the static environment's parent, the ground environment, are stable. But suppose, instead, that you put the $lambda inside a call to $let-safe, like this:

>>> ($define! f ($let-safe () ($lambda ...)))

This creates a fresh global environment (a child of the ground environment, with no local bindings), and this clean global environment becomes the static environment of f. Since this clean global environment is not accessible from anywhere else, it provably will not be mutated, and f inherits a provably stable set of ground-environment bindings from its static environment. If f provably does not alter its own local binding of list, then provably a local lookup of list will find the standard applicative list as bound in the ground environment.

Is there anything more that could be done in the language design to further encourage binding stability? It seems to me the weakest point in all of this is that if f were to call another user-defined combiner q, and q unexpectedly mutates its dynamic environment, that would destabilize the local environment of f; so either we also have to prove that q is suitably well-behaved, or we have to be careful how the call is done. For example, if the call were written as (apply q ...), it wouldn't matter whether q mutates its dynamic environment, because its dynamic environment would be a throw-away empty environment, protecting the local environment of f.

At this point I'm not quite sure what to do, because as-is the language design isn't entirely living up to my slogan "dangerous things should be difficult to do by accident" — it's too easy to make a mistake here that destabilizes f's local environment — but we really want a solution that is non-restrictive and further enhances the simplicity as well as generality of the language. I've mused on whether some sort of "guarded environments" feature, dual to the existing guarded continuations feature, might provide a way out, but it would need to be marvellously simple to use, and I'm just not seeing it yet.
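To make the q scenario concrete, here is a minimal sketch in Kernel notation of the scenario just described (not code from the report): q is a badly behaved applicative that clobbers a binding in whatever dynamic environment it is handed.

($define! q
   (wrap ($vau #ignore e ($set! e list 0))))

(q)            ;; ordinary call: e is the caller's environment, whose local
               ;; binding of list is now clobbered
(apply q ())   ;; via apply: e is a throw-away empty environment, so the
               ;; caller's environment is untouched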

I'll spend time on this later

Maybe we should have a Kernel thread.

I've installed a Kernel implementation in a Linux VM to play with...

But my thoughts were more about pathological functions like this.

Say a fexpr gets code to run that has a free variable 'a'.

Instead of adding a to an environment, it creates 5 environments, gives 'a' a different identity and value in each and then, while it's executing the code, switches between the environments at random.

Or maybe it works with other pathological functions to create differently shaped environments with different tree structures and 'a' instantiated in different parts of the tree in each environment, so that even the topology of the lookups is different. Then in some nested routine it switches between these environments at random.

By the way, the point of hashing environments is to save on creating stack topologies and code for them even in the case of pathological functions, and to make it possible to have something like stacks instead of hashes to look up variables in, despite Kernel's crazy flexibility.

I wasn't talking about whether environments mutate

I was talking about whether different environments could be applied to the same symbol. It's the mapping from "symbol" to "environment to look it up in" that I want to be immutable.

lexical scope?

I'm thinking you mean, a particular instance of a symbol always maps to the same environment, rather than, all instances of a particular symbol always map to the same environment. It seems "a particular instance of a symbol always maps to the same environment" is another way of saying "strictly lexical scope". Lexical scope is not always wanted; sometimes it's essential to a combiner's purpose that it access its dynamic environment. Offhand, a couple of classes of such combiners come to mind: modularity-engaging devices, such as $let-like operatives, $provide!, and $import!; and control structures orthogonal to modularity, such as $cond.
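A minimal example of that kind of combiner (a sketch, not from the Kernel report): an operative whose entire purpose is to consult its dynamic environment, so its result genuinely depends on the call site.

($define! $get-dynamic ($vau (sym) denv (eval sym denv)))

($define! x 1)
($get-dynamic x)                  ;; => 1
($let ((x 2)) ($get-dynamic x))   ;; => 2: the answer depends on the call site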

consistency

" all instances of a particular symbol always map to the same environment."

All instances in the same block of code should always map to the same environment.

And the way Kernel works, in a loop you keep asking the environment over and over on the same block of code, so you need consistency over time, even though the mechanism, running user code, doesn't ensure that.

All instances in the same

All instances in the same block of code should always map to the same environment.

An interesting thought. How do you define "block of code", for this purpose?

Also, what if you have a combiner whose behavior is meant to depend on how some particular symbol is bound in the dynamic environment from which it's called?

I'm in a hurry

So this is just off the cuff but obvious.

"How do you define "block of code", for this purpose?"

All passed to the same fexpr.

Less flip

It's the same block of code if it intuitively LOOKS like the same block of code to a human being.

It's the same block of code

It's the same block of code if it intuitively LOOKS like the same block of code to a human being.

In fact this seems quite reasonable, as far as it goes; the practical purpose is to not surprise a human being who looks at the code, therefore the sense of "same block of code" that actually matters for the practical purpose is the sense that bears on the expectations of that human being looking at the code. Practical, but without the sort of precision entailed by an implementation; so it does leave the question of implementation hanging.

It only takes a moment's thought to make it precise

1) if it's surrounded by let or lambda, it's the same block of code
2) if it's surrounded by a combiner that the user has been told is equivalent to a let or lambda, then it's the same block of code.

The point was that since the interpreter of Kernel runs everything through a hook to user code to map the environment, there is no guarantee that any such stability will result. That is unless I misunderstood the way Kernel works, and that's possible since I haven't used it yet.

And the big problem with unprovable stability is that you can't map to machine code that's within an order of magnitude of being efficient.

since the interpreter of

since the interpreter of Kernel runs everything through a hook to user code to map the environment, there is no guarantee that any such stability will result.

It's possible that you're understanding how it works in theory but... perhaps... underestimating its potential for stability in practice. Hm, let's see.

You write a great big Kernel expression, perhaps ($provide! ...), and tell the Kernel compiler it's a program. In theory, the behavior of Kernel, or for that matter of any other Lisp, is defined by how the expression would be evaluated at runtime by an interpreter. That is, any compilation at all must be done in such a way that the behavior will be the same as (though hopefully much faster than) what a pure interpreter would do with the expression.

In a Lisp with special forms rather than fexprs, there's a great deal a compiler can deduce about the interpretation of the expression. In Kernel, in the general case, all you can say for sure is that the interpreter would identify the expression as a combination, look up the operator $provide! in the environment, and either generate an error if the result of the lookup isn't a combiner, or do whatever the combiner says to do with the (unevaluated!) cdr of the expression.

But that's unrealistically general. For one thing, you actually know something about the initial environment in which $provide! is looked up, so the compiler can do that lookup ahead of time; and it'll never be looked up again, so that's that. And since you know what combiner the lookup is going to produce, you can use the definition of that combiner to do at least the first bit of the processing of the cdr of the expression. And so on, and so on.

There's likely to be lots of steps, small and large, in the interpretation that you can deduce at compile-time. Not, perhaps, as much as you would have been able to deduce in the special-forms-instead-of-fexprs Lisp, and the worst case will start to look very like just running an interpreter without any compilation at all, but it seems a bit of an exaggeration to say of this that Kernel would have to resort to pure interpretation for "everything". The big unknown here is, how much of an exaggeration.

In that case

"Also, what if you have a combiner whose behavior is meant to depend on how some particular symbol is bound in the dynamic environment from which it's called?"

You mean it uses the outer environment?

That's still stable.

That's not pathological.

e.g.

If I write

($define! f ($let-safe ()

   ($define! $lookup
      ($vau (s e) d
         (eval s (eval e d))))

   (wrap ($vau () e
      ($lookup quux e)))))

are all of these symbol-to-environment mappings stable?

JIT Compiler

I don't know much about Kernel, but I vaguely understand first-class environments. It sounds to me that something like this could make sense:

- What if you could not manipulate a currently active environment (so they are always immutable), but you could build a new environment inheriting from the current one and call functions with it (maybe this is how it already works)? We would define a currently active environment as anything in the environment stack. When you call a function, the environment would get pushed onto the environment stack and become immutable.

In this case, whenever a function is called we know that all function calls are stable with regard to the current environment, and we can JIT compile the function before calling it, memoising the current environment. Since an environment becomes immutable once we call a function with it, calling the same function with the same environment should always use the same function bindings. The JIT compiler would build a table of function IDs and environment IDs to determine whether functions have already been compiled (a toy sketch is below).

However, we are always free to make a copy of any environment and mutate it before calling a function with it as the environment, or as an argument. We prevent bindings changing for any function above us in the call stack, which seems a sensible property.
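A toy sketch of that table in plain Scheme (invented names): because an environment is frozen once a call is made with it, the pair (function-id . environment-id) is a sound memoisation key for compiled code.

(define compiled-table '())     ; alist: ((fn-id . env-id) . compiled-code)

(define (compile-or-reuse fn-id env-id compile-thunk)
  (let ((key (cons fn-id env-id)))
    (cond ((assoc key compiled-table) => cdr)
          (else (let ((code (compile-thunk)))
                  (set! compiled-table (cons (cons key code) compiled-table))
                  code)))))

; compile-thunk runs only the first time a given (fn-id, env-id) pair is seen;
; every later call with the same pair reuses the compiled code.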

look within

Write a GNU Emacs replacement for the modern era in Scheme, starting from an interpreter at about the complexity level of SIOD, but not necessarily the same techniques. Study SCM's implementation a little bit. Don't worry about benchmarks except for very specific things like display update. Postpone writing a generalized compiler as long as possible. Review everything you can find about Genera, Concordia, and the Symbolics documentation viewer. In the back of your mind imagine a future with some compilation, with a compositor attached as a back end so as not to rely on any existing window system. Build a Scheme "lisp machine" environment, in short, but keep it simple enough for a small team to pull it off by emphasizing simplicity rather than squeezing out every last cycle of performance.

It's not easy but we have plenty of good evidence it can be done by a small team with little more knowledge than what people already knew 3 or 4 decades ago.

Full Dynamism but explicitly enabled and limited.

I think a good design principle would be unlimited dynamism - the sort of thing that's absolutely impossible to write a compiler for in general - plus facilities for specifically enabling that dynamism if and only if you're going to use it. The enabling language becomes a warning for other programmers that such-and-such may actually happen, like redefining cons or whatever.

Likewise specific facilities for disabling dynamic facilities that are on by default. The disabling language becomes a promise to other programmers, as well as a request/command to the system to signal errors if you fail to abide by the constraints, and finally a means of giving the compiler information it needs to create efficient code. ("well, we would have produced an error before this if the value had been anything other than an integer, so integer-specific code cannot possibly cause an error here...")

It also becomes a means for controlling semantically-troublesome imports. For example saying that you're importing only variable-pure functions from such-and-such module should automagically prevent you from calling something that changes a variable scoped outside itself, and saying that you're importing only side-effect pure functions from such-and-such module will automagically prevent you from calling something that may throw an exception, open or close a file, do I/O, make non-GC memory allocations or frees, etc.

All these optional limiting/enabling declarations should document what the code without them would be doing (or not doing) anyway, not changing the meaning of anything otherwise valid. It should be possible to strip them out and get code that has the same semantics as when they are present. (define cons (foo ...)), or importing anything that does it, may throw a compile-time error if you don't have #I_AM_AN_IDIOT declared, but if redefining primitives is enabled at all, then the semantics of 'define' should be exactly the same as for other variables.

Likewise with limitations on dynamism such as type declarations. When you declare that an array will only ever be used to hold integers, you're allowing the system to produce optimized code for that array, and requesting that the truth of your declaration get proved at compile time if possible. Otherwise, you should get a warning at compile time and an error at runtime if the constraint is violated. But you should still be using the same syntax for array references, etc, and they should still have the same semantics even if, under the hood, they're mapping to versions of the routines that would only work on integers.

What the Scheme ecosystem needs

As far as I am concerned R5RS Scheme is perfectly fine. What the Scheme ecosystem really needs is some boring but better engineering. Something like:

  • a decent package system + easy access to remote packages
  • a fast and efficient compiler (JIT is fine)
  • libraries, libraries, libraries! For doing common, useful stuff like access to filesystem, networking, GUI, crypto, web server/client, common data structures and algorithms, graphics, compression, math, regexp, unicode, testing, debugging, FFI, concurrency, synchronization primitives, database, syscalls, ...
  • portability

Any successor to Scheme would have a similar need.

I enjoy writing code in Go (it feels like C with closures and concurrency) but so often I think: if only such engineering were poured into a Scheme ecosystem....

We have most of those things

We don't have a cross-implementation package system yet, though we have a plan for one. Most substantial Schemes have their own package systems.

Gambit and Chicken are decently fast compilers that produce decently fast C. Most of the slowness in compilation is due to the gcc or clang back end. AFAIK nobody has tried to make them work with a Plan-9-style compiler, but it shouldn't be that hard.

As for libraries, I'm working as hard as I can on them, but it's hard to overcome the curse of Lisp.

Not the only cause of lisp's curse

I'll grant that Lisp is easy to write software in.

But it's also, despite denials, damn hard to read other people's code in Lisp because s-expressions are a bad idea for human comprehension - they're better for machine processing but how often do you need that?

So Lisp hackers probably aren't good at working together because they can't read each other's code easily!

Thus the scripting languages are more productive.

s-expressions are a bad idea

s-expressions are a bad idea for human comprehension

Got any research to back that up, sonny-jim?

It's funny

that you called Perl "write only code" and can't see that for most people Scheme is the same.

Do you have "research" to back up what you said about Perl?

rat holing

(i do feel like my original hope as sketched in the op has sorta been lost, which is not too surprising, so i am a little wary of contributing to this thread, BUT in defense i do think syntax will always be a valid question wrt usability of programming languages.)

I for one have gone full circleish. I do not like how macros are done in non-sexpr languages. It becomes too ugly and verbose and unattractive.

Thus I would at least want sexprs and macros in that ilk to be the base, always available, and then the mexprs on top of that to be gravy for those who want to work that way. But things must round trip. This thread right here is sort of a larger version of the epic movie from the 50s called Tabs vs. Spaces when in fact all our code should be going through transformation steps so that we always see code how *we* want to see it, each and every one of us, and gordian-knot the whole argument.

I personally do want static type checking, inference, annotations, etc. along with the sexprs.

But this can become a rat hole subject for sure. I would like to suggest people rather talk about higher level things that are related like:

  • What should the macro system be, including possibly "none"?
  • How should the syntax not/be pared down, streamlined, sanitized, sanity-ized?
  • What kinds of scopes, visibilities, 'permissions', etc. should be allowed?
  • What kinds of im/mutability should be supported? (Especially vis-à-vis a type system.)

laundry

"Thus I would at least want sexprs and macros in that ilk to be the base, always available, and then the mexprs on top of that to be gravy for those who want to work that way. But things must round trip"

agreed. Round trip! I'm working on that in a language by the way, or planning on it.

Static type checking:
I'm happy to see a few languages that don't have that, since obsession over typing is everywhere. I like seeing a few untyped languages.

I want a powerful macro system, and I want the current system killed dead dead dead. I guess you have to take out the macro system before you can replace it.

Immutability:
If you don't want to change a field, then don't change it! I don't need it enforced.

What I was trying to say

What I was trying to say is that IMHO the best successor to Scheme is Scheme + better engineering! Conversely, any new language will have the same issues.

Still, in response to some of your questions:

  • Maybe immutability can be provided by a function (const value).
  • Maybe mutability can be provided by a function (copy value).
  • type annotations can be useful for optimization, assertions, FFI and IO. But no static typing!
  • Visibility: (export name ...). If records are allowed, allow the export keyword in a nested structure.
  • Permissions: I assume you mean who is allowed to access a function or object? I don't know how to do capabilities in Scheme. At any rate in today's world some sort of distributed capabilities would be desired, which will likely fall outside the scope of the language.

Now putting on an engineer's hat I'd like to see the following (not sure if these require any language level support):

  • Support for distributed computation. Maybe allow a "calling convention" annotation on packages/modules? Maybe you can shoehorn in capabilities here!
  • Support for a sandboxed computation that can be moved to another address space. Ability to nest sandboxes. A sandbox communicates with outside world only through provided ports.
  • Concurrency. I mean something like (parallel-apply closure args...). May also need synchronization primitives if two threads are allowed to share objects.

Macros are terrible

We should have a flexible encoding scheme and constructs that interact with their environment through well-defined interfaces. The meaning of an encoded program as a construct should be completely static (and thus, as bad as macros are, F-exprs look worse). Once you've gotten away from macros, there's not much reason to use S-expressions.

I almost resisted posting this, because it's high on opinion and low on evidence, but oh well.

+1

This is great, the kind of stuff I think shows LtU in the best light. Looking at the paradigms and seeing if they really work, or if there's something even better. So even though I have no idea what you mean, I for one am glad you wrote it down here. Heck, can you start a new thread with concrete examples of what you mean? If you can walk us through things in a fake evolutionary style from pre-macro languages to macro languages to all the different kinds of macros (hygienic or not, sexpr or mexpr) to generalized multistage languages and then off to what you are talking about in response, that would be very interesting to read.

He's thinking like a

Dijkstra or Knuth, where programs are totally static so that they can be analyzed. It's the view of a computer scientist, not an engineer who wants flexible tools.

Don't touch Knuth!

Knuth could do both. He could analyze software/hardware and write reports on the spacetime complexity of taking the square root of a number, as well as write a specialized macro programming language called TeX.

Now we have category theorists who can do neither, for which I blame Dijkstra.

Hm

I'm still hoping that LtU could somehow become an interesting place again. My current stance is to ask people for interesting content to post, advising that they do *not* try to read the comment sections.

I personally have nothing against circles and circles of discussions on s-exprs for readability and whether macros are a good idea.

I would be grateful if you could avoid gratuitously attacking people (here: category theorists) during the discussion, though. Being unproductive is fine, being insulting is not.

Yeah. Whatever.

We already had this discussion. Nobody was insulted and you don't get to determine other people's opinions.

Not really

I think there's value in making tools that have clean semantics. It leads to additional flexibility, not less. It also tends to align with tools that are amenable to static analysis, but I think you'd be surprised at how little I'm interested in static analysis myself.

As an example, I've heard you complain about immutability. Haskell is a pain (I doubt you'll disagree), but it has the right general idea - model mutability. Once you get the semantics right, you can express things concisely that are verbose and annoying in current languages.

Hmm

That sounds like an invitation to violate site policies with a top level post with nothing more than my opinions on macro systems. :) I'll try to respond with a better explanation this weekend.

Macros are terrible (redux)

Well, I was thinking about giving more details about a specific alternative, but I don't think I'm ready to do that. Composing language fragments is delicate. I think I'll just elaborate a little on my comment above, complaining about macros and hinting at an alternative. Hopefully this helps ...

The basic principle that macros violate is this: every construct should be assigned a single semantic value as a construct at the appropriate level of abstraction. Syntactic encoding is not the right level of abstraction, but that's the level at which macros work. A macro might instantiate a syntactic block in several contexts, resulting in several semantic interpretations of the same syntax. This is analogous to #include polymorphism, which is terrible.

This complaint applies to both hygienic and non-hygienic macros. But there are additional well known problems with whichever of those you pick. If you pick hygienic macros, you're limited in power because you can't interact with constructs that provide or consume an environment. Non-hygienic macros are just terrible, able to arbitrarily inspect encoding details of parameters.

What we want is to be able to write something like this (in pseudocode):

    syntax  while condition:Expr body:Block where
        ...
        semantics definition
        ...

When the language encounters a while statement, it notices that the context for the first parameter is an expression, and uses the following encoding to build an expression construct, and similarly the next section of encoding is built into a Block. The semantics of this while construct then interacts with its sub-construct parameters through well-defined interfaces for Expr or Block.

If this was a for-construct instead of while, the construct might explicitly accept a Symbol as its first parameter and feed that to the Block as an available binding.

In other words, we need to split macros up into providing an environment in which their child encodings can be interpreted as constructs, and then an explicit interaction between those constructs. Because the latter interaction will deal only with constructs and not their encodings, S-expressions will lose most of their advantages.
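As a rough, dynamically typed illustration of the interface idea (plain Scheme, invented representation, and deliberately not the statically typed system being proposed): an Expr is anything you can ask for a value given an environment, a Block is anything you can run in an environment, and the while construct touches its children only through those interfaces, never through their encoding.

; Expr: a procedure from environment to value.
; Block: a procedure from environment to an ignored value.
(define (make-while condition-expr body-block)
  (lambda (env)                       ; the while construct is itself a Block
    (let loop ()
      (when (condition-expr env)
        (body-block env)
        (loop)))))

; Usage with a trivial "environment": a one-slot vector holding a counter.
(define env (vector 0))
(define below-3? (lambda (e) (< (vector-ref e 0) 3)))                    ; an Expr
(define bump!    (lambda (e) (vector-set! e 0 (+ 1 (vector-ref e 0)))))  ; a Block
((make-while below-3? bump!) env)
(vector-ref env 0)    ; => 3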

As suggested by the pseudocode, all of this should be statically typed.

Once each piece of user-entered syntax belongs to a single construct, you can make a construct-aware IDE. This leads to a powerful way to extend an IDE and I think would even make a nice OS interface.

The hard part of this is designing good interfaces that can be extended in useful ways, but that seems unavoidable. General language features don't naturally compose. Picking construct protocols that are well-factored requires work.

ASGs

I want everything to start off from the perspective of the abstract semantic graph, I sometimes think. (Whither bloody Intentional Computing, anyway?)

Have you seen anything that is along the lines of what you are writing here? What is the closest, if anything? Would you think you could take existing languages and "fix" them with this, or would you want it from whole cloth?

Meta-object protocol

Starting from the ASG doesn't solve the problem if macros are allowed to inspect that ASG. The idea here is to use interfaces (more or less OO style) between a syntax constructor and its children. It's very similar in some ways to the meta-object protocol used in CLOS (I have the book, but I haven't been through all of it). That system was obviously based on a macro system, though. I'm not a Lisp expert, but I think they could have built what I'm talking about instead. My knowledge of other languages also isn't as broad as some of the others around here, so maybe someone else has seen something along these lines.

I'm planning this for my own language. You could probably do this in many languages, but I have a handful of things that I'm "fixing", so I'm starting from whole cloth.

Type traits and function traits

This sounds a lot like type traits and function traits in C++? You also solve the unevaluated argument problem with function objects, which actually allow better optimisation because C++ can inline them (as compared to function pointers).

Hmm

You mean the template idiom? You could maybe emulate what I'm talking about by building all syntax out of templates. I'm not sure. It sounds horrible.

Not templates

Function objects allow code to be passed into functions by separating creation from execution. Consider:

struct myfn {               // struct so the constructor and operator() are public
    int x, y;
    myfn(int x, int y) : x(x), y(y) {}
    int operator()(int z) const {
        return x * z + y;
    }
};

So this can be constructed on the stack and passed to other code, so it serves as "quoted" code.

template <typename F> int test(F f, int b) {
    return f(b);
}

auto a = myfn(3, 4);
auto c = test(a, 5);    // pass the function object, not the type

Of course this still has only lazy checking that the passed type 'F' is suitable. However function-traits allow checking the function is of the correct type.

#include <type_traits>

// Assumes F exposes arity, argument<N>::type, and return_type members
// (e.g. via a hand-written or library function_traits).
template<typename F,
         typename std::enable_if<
             F::arity == 1 &&
             std::is_same<typename F::template argument<0>::type, int>::value &&
             std::is_same<typename F::return_type, int>::value
         >::type* = nullptr>
int test(F f, int b) {
    return f(b);
}

The template stuff looks fairly bad, but is essentially just providing constraints on the type of function that is expected.

The interesting part is that in imperative languages there is a difference between the function (a value) and the result of the function. 'f' means something different from 'f()'.

In functional languages you can achieve the same by having a function that takes a single void argument, or that can be partially applied like this:

f x y () z = z * x + y
test g b = g () b
test (f 3 4) 5

The void argument is only really necessary if there are no further arguments to apply. However you still need a primitive that only evaluates code blocks conditionally like an 'if' statement, and it depends on the semantics of partial application in such a way that optimisation might cause problems.

In conclusion, the imperative-language distinction between a function name and executing the function seems to be the right way to do this; it plays nicely with compilers and optimisation, unlike the functional way of making a function name and its value indistinguishable unless you use special argument types (fexprs). To me it seems better to always see when a function is executed, rather than have it depend on the definition of the arguments in other functions. In both cases it is clear we want to distinguish function names (unexecuted functions) from their results. Quoted code is just an anonymous function "name".

I see

I'd rather handle inlining through representation selection, leaving the semantics with an ordinary function parameter. This is more about extensible syntax.

I think evaluation semantics is a can of worms. I'm trying to avoid tying operational semantics to value semantics.

Optional named function arguments.

Optional named function arguments seem a good way to do that, so you can have optional additional arguments identified by names. For example "if a {then b {else c}}" would allow "if (a > b) then a" etc... You just need a shorthand way to declare function objects. Rust uses ||, so you could write something like "if (a > b) then (|| print 'a') else (|| print 'b')" and it would only print the chosen branch. I would be happy with 'if' behaving like other functions.

In case it was not clear above, I prefer the imperative approach of having explicit evaluation of functions, to the fexpr approach of having referential transparency, but functions that sometimes take unevaluated arguments.
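For what it's worth, the plain Scheme rendering of the thunked 'if' above is just an ordinary procedure over closures (a sketch; my-if is an invented name):

(define (my-if test then-thunk else-thunk)
  (if test (then-thunk) (else-thunk)))

(my-if (> 3 2)
       (lambda () (display "a"))
       (lambda () (display "b")))   ; prints only "a"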

Why?


if (x > 1) then print "x > 1"

So your language will print "x > 1" when x = 0? I predict that this will be unpopular.

I'm a little lost on context. What are you saying optional function arguments are a good way to do? For functions with side-effects, I'm very explicit about when evaluation occurs (think Monads). For functions or values without side-effects, I have no concept of evaluation.

Function arguments must be expressions

Actually I think you wouldn't allow the example you gave, as functions would only accept expressions as arguments, and to pass a statement block you would have to wrap it in a lambda anyway.

lambdas and other quotes

Your example reminded me of Smalltalk:

exp ifTrue: [ print 'this is a lambda without arguments'. ]
    ifFalse: [ print 'this is another one'. ]

which is syntactic sugar (handled by the grammar, not a macro) for something Scheme-like:

(send exp ifTrue:ifFalse (lambda () (print ...)) (lambda () (print ...)))

I like that explicitness.

Fexprs need a different kind of quoting than a lambda: one that captures the structure of the code as data and captures the environment like a lambda does. There is no primitive in Lisp that does that.

I do like the explicitness, but implicit quoting as a kind of macro is also ok with me. That's yet another possibility. What if the quoting were known at compile time? What if a fexpr were a macro?

If you want a first class function you explicitly quote, if you want syntactic sugar you use a non-first class macro.

That won't kill anyone, especially if you made a first class version that needs explicit quoting available too!

Though in the language I'm designing I'm leaning toward a real fexpr if I can do it with that runtime without too much slowness - but it might not be possible.

Maybe not quote?

In trying to avoid quoting I reduced it to the following problem. Say you have some combinators:

P = 0

Q P = 1
Q x = x

Now how does Q P reduce? You don't know, right? But what if I say that Q is a macro? Then you do know, right?

So, I am thinking of going for a scheme where expressions may be reduced in separate stages: in one stage, some code is data (isn't reduced), and in the next stage it is code again (is reduced).

That can be made very efficient since that simply needs a compiler where combinators are compiled to code, which either reduce or not (i.e., the subroutine the combinator is compiled to is called, or not.)

Not sure how to make it work yet, though.

lifting combiners

John had the idea to name combiners differently than reducers but he didn't enforce it.

What if you enforced the difference syntactically but still make combiners first class?

People seem to call this sort of difference "lifting".

Say a function application is F e
but a combiner application is C:e

I like that principle; lots more things should be lifted than are.
Calling a function is different from applying a combiner, which is different from calling a function with a continuation. All should have separate syntax. They're first class but they're not interchangeable.

We can have 4 kinds of non-interchangeable calls:
call
call with continuation
call with environment and quoted arguments
call with continuation, environment and quoted arguments

Hold your horses!

I don't know (yet). John tries to fit pretty abstract ideas into a very concrete operational model. I work from the opposite direction: start off with a very abstract view and then try to design a language and operational model around that.

But I assume rewrite semantics, so I am not thinking about call stacks and continuations.

Though I might have reduced the problem to an absurdity. No idea yet.

The point

of separating out calls that can save a continuation is:
1) they can come back through the same code multiple times - they change the meaning of the outer program
2) they are a nonstandard kind of call, not using the stack so the implementation may be different

I think a notation that captures it is
f<<parameters

The double arrow << is to represent that the call can come back more than once!

Look out trying to type double left arrow, this site won't display it and will not display text after it. You have to type &lt;&lt;

I for one am fascinated

I for one am fascinated by... and wary of... this different-call-syntaxes idea. Seems like it might be a very cool way to avoid accidents; on the other hand, I'm wondering about two things that seem like potential sticking points:  (1) the potential for sliding into syntactic complexity; and  (2) how it interacts with dynamic typing.

If I'm following, you'd write "(f ...)" for an applicative call (well, okay, maybe you have something else in mind re parentheses, though I favor them myself, but that seems a somewhat separate issue); "(f : ...)" for an operative call; and "(f << ...)" for a call that can capture its continuation — I like the multiple-angles-to-indicate-multiple-values mnemonic.

  • I'd be quite worried if these notations didn't combine cleanly. How would you write an operative call that can capture its continuation? "(f :<< ...)", perhaps? Or would that not catch the eye well enough? Perhaps "(f ::<< ...)"?
  • Would it be a dynamic type-error to call an operative using the applicative syntax or vice versa?
  • Would it be a dynamic type-error to call something that captures its continuation using a non-continuation-capturing syntax? Seems like it'd have to be, since otherwise the thing you're calling would behave wrong due to the wrong call syntax, a source of accidents.
  • I take it a non-continuation-capturing call could have something buried somewhere inside what it does that captures a continuation? It's not clear to me what's to be gained then from the special notation, and it would seem highly problematic to use the special call syntax for every call that could have a buried continuation-capture (shades of the trouble with monads). I guess I'm not really sold on this special-call-syntax-for-continuation-capture notion, but I might well be missing something.

continuation call

It would be a dynamic or compile time type error to make the wrong call.

Part of the reason is as I've explained before:
1) calls that don't take a continuation can be normal stack based calls or inlined and therefore more efficient
2) this could be built by cps transform on any language with tail call elimination - only transforming the calls that can take a continuation
3) this gives you a kind of clarity for what a reifiable delimited continuation IS. If you want to call a chain that can use a continuation you make a new continuation (Continuation:new()) then start a chain on it.
4) Most importantly it solves the "this construct changes the meaning of the whole program around it" problem. If only we had a solution like that for the other such construct, threads.
- Though I'm starting to imagine a partial solution for that, based on shipping data around instead of sharing it. Create different arenas of data that are distinct, then ship the whole arena from one thread to another atomically.
Like the problem with CPS transforms and the stack, that idea came from an implementation problem: the efficiency limitations of soon-to-be-obsolete ARMv7 processors.

Combining types of calls:

There could also be a syntax for combined calls with the understanding that they would be less efficient and might have weird but hopefully well defined properties.

I'm missing sleep so I'm not even up to the task of laying out the many possibilities of how to handle an implicit continuation in a function that wasn't passed a continuation vs. one that was.

In this case, no it absolutely can't

John Shutt: I take it a non-continuation-capturing call could have something buried somewhere inside what it does that captures a continuation?

No it can't, not one that can escape it, and that's the point.

In this model you are explicitly passing continuations - this is a model that works with conditional cps transformation on a language that doesn't implement continuations. It allows BOTH kinds of calls to coexist.

And in this model, the USER has to create a continuation to pass to the first function in the chain that uses a continuation.

In this model both ends of the continuation are reified (or more with branches)! Both ends have to be active to jump into the continuation. If no one is waiting on the return value, then to jump into the continuation, you better tell it where the other end is going or it has to be you.

Abnormal transfer of control

Abnormal transfer of control — call it continuations, exceptions, whatever — is something I'm still convinced we don't know how to do right. In such situations my usual suspicion is that we're using a seriously wrong conceptual framework for the whole context within which the question occurs.

If I'm following you rightly, a continuation can only move outward through a function call if the function call syntax explicitly permits such. That raises the "trouble with monads" problem: in a "pure" functional language using monads to model side-effects, introducing a side-effect in a low-level function forces a change to the signature of all the functions that call it, all the way up the abstraction ladder. My own foray into such stuff in Kernel was guarded continuations, though what I found most interesting about guarded continuations was the deeper questions they suggested.

Well, the subexpressions are all evaluable so...

It's a matter of PL design. Lisp writes

(if (a) (b) (c))

meaning, evaluate A, then evaluate B or C depending on the outcome.

Would it be meaningful to express the same thing with named arguments like

(if test:(a) consequent:(b) alternate:(c))

Just so people who don't remember (or don't care) about argument ordering could instead write

(if alternate:(c) consequent:(b) test:(a)) ?

With something as frequently used as 'if' it's nonsense that anyone wouldn't remember the argument order, but in general? Yeah, that happens.

And would it make sense that another programmer, who completely trusts her own knowledge of the argument orders and finds the additional notation annoying, could just turn off the display of argument names in the calls and get (canonically ordered) arguments? Yeah, that could happen too if the language supports both syntaxes, depending on the IDE.

In fact the argument names might not even exist as part of the source code at all: The IDE could munge them into the display, or accept them as a written form for code it would save to file differently.

Not there yet

No, I don't see it. How are you going to implement a 'macro' system with your approach? I still think you're too concrete.

The idea to reduce it to an absurdity is to show what properties we are studying. Again.

P = 0
Q P = 1
Q x = x

Q P?

You immediately observe non-confluency (due to the overlapping rules), reflection, and different rewrite strategies.

Roughly, probably, you can implement a macro system iff you know what combinators you're going to evaluate in a preceding pass. Which simply coalesces with a nonstandard rewrite strategy. But in the combination with introspection you get non-confluency too, which is either a) bad, b) neat, c) a requirement.

I simply don't know yet. Certainly not thinking about continuations, yet.

(edit: fixed a typo.)

Addendum:

Of course, you can also study hygiene which enters after the TRS is extended with nameless abstraction.

P (\x . e) = (\x . P e)
P (f a)    = f
P v        = v

Q (\x . x) = x

P as to show that introspection is useful, Q as to show that introspection is problematic.

The way people actually use macros

confluency is never a problem.

Macros aren't a programming language; they're a way to make code a little more terse.

No Macros

I find macros operate at the wrong level of abstraction; C++ templates are almost as bad, although better than 'C' macros. I would like a language that gives the expressive power of macros through first-class functions, and does not need to provide macros. But maybe that's just me.

Fexprs and quoting.

In creating a lispy language with fexpr-like semantics, I simply did not find any very good semantics for quote.

However, in a lispy language with fexpr-like semantics, I have never found a situation that requires quoting.

I wound up sticking several subtly different versions of it in a library of procedures that are considered semantically problematic, but which I included specifically because my goal was to create a translation target for source code originating in many languages, and many languages do use problematic constructions.

So why do I have libraries devoted explicitly to providing constructs I specifically consider to be problematic? My goal with the lisp was to provide a formalization that explicitly describes the complete semantics of code regardless of programming language, at the level of explicit formalisms. Down to the level of the libraries explicitly describing exactly the mathematics corresponding to binary operations of underlying hardware when and as they affect the language semantics, as in assembly language.

My idea (still unrealized) is that when these things are made explicit in a translation target, code that depends on the semantics of different languages, different machines, or even different compilers, when FULLY described in that language, can be made fully interoperable. What I want to do is have code regardless of its origin expressible (via a simple structure-preserving automatic translation) in a single language and combined, as desired, in programs. And reasoned about, refactored, deduplicated and optimized in that language.

Separating

what do you think of my idea of syntactically offsetting fexpr calls (and ones that can capture a continuation)?

Have 4 kinds of calls (for the four combinations) instead of one.

It complicates things but it means you can't ever be surprised by different semantics. If an expression is captured, you know it from the call. If a continuation is captured and can come back again, you know it from the call.

Everything is still first class, or maybe nothing is.

Still thinking about it...

I'm still thinking about it, actually.

It violates my expectations that code following a procedure call could be executed twice, when that call isn't visually distinct from one after which the following code runs only once; so calling it out with distinct syntax would definitely help reduce confusion.

OTOH, the question doesn't come up much, because I find continuations annoying and hardly use them at all. As I said, they often violate my expectations. I consider them semantically questionable because they tend to expose too much implementation detail and often affect the semantics of code that was written without using them or thinking about the consequences of other code that uses them. And while fexprs can be used in equally confusing ways, they are relatively straightforward in application. I fear never being able to untangle the semantic problems that combining them could cause.

The most use I make of continuations is to pass some functions a choice of continuations to call - i.e., in some cases return to this point, but in other cases return to that point. That reduces them to a special case that is much easier to reason about.

Under the hood, it's the same code that specializes functions which may return different types - if returning an integer, you continue at re-entry point A which is statically compiled to deal with integers, and if returning a character, you continue at re-entry point B, which is statically compiled to deal with characters. It's exactly the same case with something that looks up an item in a dictionary: If you find the item, return it to re-entry point A, and if you don't (you are returning a value of the "error" type) you return to re-entry point B.
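A sketch of that reduced special case in plain Scheme (invented names): the "continuations" are just two ordinary procedures passed in, one per re-entry point.

(define (lookup key alist found-k missing-k)
  (let ((entry (assoc key alist)))
    (if entry
        (found-k (cdr entry))
        (missing-k key))))

(lookup 'b '((a . 1) (b . 2))
        (lambda (v) (list 'found v))        ; re-entry point A
        (lambda (k) (list 'missing k)))     ; re-entry point B
; => (found 2)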

You could have an obfuscated-Scheme contest. But if continuations and macros were both allowed - especially the new style macros that can violate hygiene - it would be like using a nuke for rabbit hunting. Throw in fexprs and/or implicit lazy evaluation, and it's more like a K/T impactor. I trust the good taste of programmers a lot - I'm providing a lot of sharp tools, and if they write pathological things, the system will behave in pathological ways. But I'm trying to make it hard to write pathological things *accidentally*, and I think continuations combined with some of the other constructs I'm working with violate that rule.

Well I've used continuations a lot

Ok, maybe to a certain extent I was playing rather than using, but I needed them to:
To implement a logic language.
To do search, satisfy constraints (see the call/cc sketch after this list).
To parse a grammar or generate from a grammar, or fill in the blanks from a grammar.
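The classic call/cc rendering of that search pattern, as a textbook-style sketch in plain Scheme (not code from this thread; note this amb evaluates its alternatives eagerly, which is fine for simple values):

(define fail
  (lambda () (error "amb: no alternatives left")))

(define (amb . alternatives)
  (call-with-current-continuation
   (lambda (return)
     (let ((saved-fail fail))
       (for-each
        (lambda (alt)
          (call-with-current-continuation
           (lambda (resume-here)
             (set! fail (lambda ()
                          (set! fail saved-fail)
                          (resume-here #f)))
             (return alt))))
        alternatives)
       (saved-fail)))))

(define (require ok?) (if ok? #t (fail)))

; Example: find x in (1 2 3 4) and y in (3 5 6) with x + y = 7.
(define (demo)
  (let* ((x (amb 1 2 3 4))
         (y (amb 3 5 6)))
    (require (= (+ x y) 7))
    (list x y)))
; (demo) => (1 6)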

It's a matter of what paradigms of programming you use.
If you're in a continuation-rich setting, then maybe you'd like continuations but want them better controlled.

Now for fexprs, I imagine them as:
1) an engine for code transformation - not trivially, but as a way of programming
2) As an engine for equation transformation. Symbolic math. Constraints.

But the point of offsetting them isn't for the sake of the programmer, it's for the sake of the compiler and speed etc.
Whether the programmer will like it, I'm not sure but I don't think it will be too harmful.

One can always supply combined forms too, with the understanding that they will be less efficient - and they're offset from the rest of the code too.

I think there's a principle "offsetting is good"

fexprs + quote = bad hygiene

In creating a lispy language with fexpr-like semantics, I simply did not find any very good semantics for quote. However, in a lispy language with fexpr-like semantics, I have never found a situation that requires quoting.

As I explored various facets of Kernel's evaluation model, I found that pretty much every time I wanted to demonstrate bad hygiene, the easiest way to do it was to use an operative ($define! $quote ($vau (x) #ignore x)). With enough repetitions of that lesson, and recognizing that fexprs naturally favor an explicit-evaluation philosophy while quote naturally favors an implicit-evaluation philosophy, I eventually concluded that mixing fexprs and quote nurtures defective hygiene.
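A minimal sketch of that failure mode (Kernel notation; make-getter and counter are invented names): once $quote lets the bare symbol escape, whoever evaluates the result captures their own binding, not the one in force where the symbol was written.

($define! $quote ($vau (x) #ignore x))
($define! get-current-environment (wrap ($vau () e e)))

($define! make-getter
   ($lambda () ($quote counter)))   ;; returns the symbol counter, not a value

($define! counter 42)
(eval (make-getter) (get-current-environment))   ;; => 42, the caller's counter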

Yes, I'm sure it's the same problem.

I'd bet a year's salary that we ran into the same kind of hygiene problems.

There are multiple 'sets' of operations that make a complete and general model of computation - but if you have more than one such set available, some of the features essential to one or more can turn out to have remarkably poor interaction with others. 'quote' turns out to be such a construct. It interacts very well with homoiconicity, lambda, apply, eval, and first-class procedures.

But not with fexprs, first-class environments, and lazy evaluation. The things it interacts poorly with, though, create a different general model of computation, rendering it unnecessary.

yes

Very well put.

Right..

I think evaluation semantics is a can of worms. I'm trying to avoid tying operational semantics to value semantics.

This doesn't make sense to me. You're talking about three different semantics. Operational semantics I get, but what do you mean by the rest?

I personally decided I am going to drop quotation and opt for a solution with an explicit rewrite order. But I didn't figure it out yet how I am going to do it.

I was lumping in 'evaluation

I was lumping in 'evaluation semantics' with 'operational semantics'. I don't have an evaluation order defined for the semantics of my values and functions. During representation selection, a calling convention will be adopted that's compatible with the semantics of the function. For example, if the parameter to a function is a non-bottom Integer, then you can use call-by-value to pass that value.

cool

Please post a new top post about it on LtU if/when appropriate and/or let me know where i can sign up for your newsletter (seriously).

The meaning of an encoded

The meaning of an encoded program as a construct should be completely static (and thus, as bad as macros are, F-exprs look worse).

Wouldn't completely static meaning preclude stateful variables? The question of macros-versus-fexprs is more about hygiene, which is of concern even if all symbol bindings are immutable; and of course which of macros or fexprs is "better" on hygiene depends on various other design priorities.

Thanks

I think I've had a misunderstanding of fexprs that you just caused me to find and clear up. Specifically, all fexprs in Kernel can in principle be evaluated at compile-time, correct? The only problem would be if separate compilation prevents you from understanding the environment in which something will be executed? I probably shouldn't have mentioned fexprs since my only knowledge of them comes by osmosis from listening to you, but hey, at least I provoked you to respond. I'll upgrade my assessment of fexprs to merely as bad as macros :).

Stateful variables can be modeled fine in my system. I'll try to post more this weekend for raould.

Alas, not necessarily.

Alas, not necessarily. It is possible to write Kernel code such that at compile time it can't be known whether the combiner of a combination will be operative or applicative, and therefore it can't be known whether the operands of the combination will be evaluated. This is true even if environments are actually immutable. The simplest way to do this is to use an operator whose type you don't know. The possible problems and possible solutions get a bit subtle on this point. Consider this definition:

($define! p ($lambda (f) (f ...big hairy operands...)))

Even if parameter value f is a combiner (rather than, say, an integer, which would cause an error), it could still be either applicative or operative. If f is operative, then ...big hairy operands... will be passed to that operative unevaluated. Which is probably not intended.

This is, of course, bad coding practice; if an argument is expected to be applicative one ought to apply it, so that passing an operative would cause an error: ($lambda (f) (apply f (list ...big hairy operands...))). One could imagine a number of approaches to minimizing this error, perhaps involving a compile-time warning (which one would then probably want to be able to spot-disable once the programmer has looked at the particular code and rendered a decision that yes, they really did intend to do this strange thing).
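
Concretely, the difference plays out like this (a quick sketch, with a toy operand standing in for ...big hairy operands..., and q as a throwaway name for the apply version):

($define! p ($lambda (f) (f (+ 1 2))))
(p ($lambda (x) x))        ; => 3, operand evaluated as intended
(p ($vau (x) #ignore x))   ; => (+ 1 2), handed over unevaluated

($define! q ($lambda (f) (apply f (list (+ 1 2)))))
(q ($lambda (x) x))        ; => 3
(q ($vau (x) #ignore x))   ; error: apply requires an applicative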

It is perhaps worth mentioning, in this regard, that when a compound combiner is constructed, the operands to $vau are preserved as immutable copies — thus, ...big hairy operands... is immutable; so even if f does access it unevaluated, at least f can't change it.

My concern over these hygiene considerations is generally limited to how best to minimize the likelihood of accidental bad hygiene, rather than how severely pathological bad hygiene can get. It's possible to write pathological code in any serious language, so if the worst possible case is even worse with fexprs, I'm not too fussed. There are situations where it's desirable to guarantee certain pathologies can't occur, and I do keep a weather eye out for possible future devices for that, such as guarded environments or first-class theorems; but in the meantime I simply accept that such guarantees are not within the design's current purview.

Treachery and Doom

Okay, that's the understanding that I had previously. You're calling that hygiene, which ... this isn't quite the way I understand hygiene issues around macros. Can I view it as the same issue of unintended capture?

Is "container polymorphism" (ability to pass an operative and an applicative to the same kind of lambda) really useful for you? Why don't you have a separate namespace for operative variables?

translation emancipation

I strove to clarify a bunch of these basic terms in my dissertation ("first-class value", "reify", "side-effect"), but found "hygiene" especially slippery. What I came up with:

Hygiene is the systematic separation of interpretation concerns between different parts of a program, enabling the programmer (or meta-program) to make deductions about the meaning of a particular program element based on the source-code context in which it occurs.

(Fwiw, Kohlbecker et al. credit Barendregt for the term hygiene.)

The idea of a separate namespace for procedures is a key point of disagreement between Scheme and Common Lisp. Most often nowadays, when someone refers to a "Lisp-2" they mean a Lisp with a separate namespace for combiners. I oppose segregation; I don't think it really supports first-class citizenship, and it is poetically the same principle that leads me to oppose the logical segregation of compile-time macros from the run-time system. I did introduce the $-prefix naming convention in Kernel to help reduce accidents, but stopped short of enforced constraints.

Examples of applicatives that work on both applicatives and operatives arise early amongst the Kernel primitives — wrap and unwrap.
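
A quick sketch of both (the names being defined here are arbitrary, and the first line leans on the usual derivation of list as a wrapped operative):

($define! $list (unwrap list))   ; the underlying operative of list returns
($list (+ 1 2) foo)              ;   its operands unevaluated, => ((+ 1 2) foo)

($define! identity (wrap ($vau (x) #ignore x)))
(identity (+ 1 2))               ; wrapping restores argument evaluation, => 3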

S-T-R-E-T-C-H-I-N-G

" I did introduce the $-prefix "
I really wish you'd picked a character nearer to the home row that didn't need a shift key, like '/' or even better ';'

/lambda ;lambda lambda; or just plain lambda!

I also used a '$' prefix, but for something else.

I used a '$' prefix for a separate class of entities called 'sigils'. Sigils are tokens that can't be bound to any value, and I use them as argument-class separators etc to eliminate a lot of non-evaluation parens from the language. Having a separate namespace but using tokens that do not syntactically differ just seemed like asking for trouble and complication.

When I did decide I needed a separate namespace (to call out dynamic as opposed to lexical scoping), I made it lexically distinct with *bracketing* *stars*, consistent with the way Scheme names its standard dynamically-scoped variables such as *current-input-port* etc.

That's Common Lisp. Scheme

That's Common Lisp. Scheme has neither dynamic variables nor rabbit ears.

Eh, whatever

It's within the lispy-language tradition, either way.

I was using both for a long-ish while; looks like now I'm getting them mixed up.

And I beg to differ about Scheme. The semantics of current-input-port, etc., are most definitely those of dynamic-environment variables.

Almost

Actually current-input-port itself is an ordinary lexical variable, typically bound in the global environment. What it is bound to is a first-class object, a box with dynamic scope behavior. Check out the implementation of SRFI 39.

Unfortunately, the dynamic-bind procedure given there, which binds a list of arbitrary boxes to new values during the execution of an arbitrary thunk, isn't exposed by either SRFI 39 or R7RS-small, which provide only the macro version that specifies a body rather than a thunk. It's analogous to Common Lisp progv, but with boxes instead of variables. First-class dynamically bound boxes are a resource that we haven't yet begun to figure out how to exploit properly.
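
For anyone who hasn't played with them, the exposed surface looks roughly like this (SRFI 39 / R7RS-small parameter objects; the name p is arbitrary):

(define p (make-parameter 10))   ; p is an ordinary lexical variable,
(p)                              ;   bound to a dynamically scoped box; => 10
(parameterize ((p 42))
  (p))                           ; => 42 inside the dynamic extent
(p)                              ; => 10 again once the extent is exited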

Yes

http://lmeyerov.github.io/projects/socioplt/viz/rank.html#

Randomly distributed

on the question "it's easy to tell at a glance what code in this language does"

Common Lisp is at the bottom of the list, but Scheme is about a third of the way from the top.

Which makes me think that very few people have used Scheme, and those who have are partisan.

That list (with some minor languages left out) goes
Python (good choice for ease of understanding)
C# (I would think there's some complexity there, but so popular it gets pushed up maybe)
Go (designed to be simple to read)
Java (designed to lack abstraction and be verbose)
Smalltalk (right on!)
Lua (great syntax, a few confusing constructs, but those are rarely used)
Objective C
Scala
OCaml
Ruby
Scheme
PHP
Tcl
Fortran (I would have put it at the top for ease of reading, given what it's for)
JavaScript
Delphi
C
R
...
at the bottom
C++
Common Lisp
Coq
Perl
Prolog
Forth

For "I enjoy playing with this language but would never use it for real code" Scheme is the number one answer, followed by Prolog, R and Common Lisp

Trying to get back on track

"The language name, the language web site, the bullet point reasons you think it is at all related to where Scheme comes from spiritually, the bullet point reasons you think it is "better" than Scheme, the bullet point cons (if any ha ha!) in the language." Please look at these as examples and opine. I wish this could be a spreadsheet inlined that we could all edit. These are in no particular order, ok?

saving on syntax

"Lua popular, has a JIT option / 'tables' are weird"

The great and horrible thing about Lua is that in order to simplify syntax they combined all data structures into one structure that leaks abstraction like mad.

So tables are lists and arrays and queues and hash tables and classes (with weird semantics so that operations like add and compare are meaningfully extensible) and objects and proxies and, to some extent, environments. And among all these weird additions you lose the ability to copy a table, except when you know what combination of things it really is and what the additions are.

But it sure gives you a simple syntax for building any data structure.

Depends what you want

In, say, 1990, Scheme served a variety of uses for different people. Depending on which of them you want, today there are different things to pick:

- If you want a core language to express simple computational ideas on paper in, then Scheme remains an excellent choice. For example, the Reasoned Schemer is written in Scheme, and it's a fine choice for it.

- If you want a Lisp with a sensible core design that you can use to get real work done in, there are several good choices -- I'd especially mention Racket and Clojure, depending on your needs and platforms, but Gambit, Guile, etc are also good.

- If you want a highly reflective language in which you can write useful programs, I recommend Ruby, or maybe JavaScript (but less so).

- If you want a simple PL to extend in your next academic paper, which your reviewers will have heard of, then JavaScript is your best choice.

- If you want an untyped language that everyone already teaches to undergraduates, then Python is for you.

- If you want a simple language to write an interpreter for, then Scheme continues to be a great choice.

However, if you want a language to play the role that Scheme played 20 years ago, then no such language exists, and won't exist no matter what people do -- the world has changed, rather than Scheme.

xtlang

This relatively young language might be of interest here:

extempore and xtlang

cool

Please expand upon that thought: the bullet point reasons you think it is at all related to where Scheme comes from spiritually, the bullet point reasons you think it is "better" than Scheme, the bullet point cons (if any ha ha!) in the language.

axiomatic language

This relatively old (1982) language may also be of interest:

axiomatic language

spiritual similarities:
* (extremely) minimal
* s-expr syntax
* metalanguage capability

pros:
* pure specification language - one defines the external behavior of a program without giving an implementation algorithm
* completely declarative
* more general, in the sense that relations are more general than functions

con:
* implementation grand challenge - requires automatically transforming specifications to efficient algorithms

I like that idea.

I really like that idea. But specifying *ONLY* external behavior at the top level is sort of a non-starter. Specifying the behavior at the level of routines, though, or even going down into the guts and specifying at the level of individual clauses, could be the "right thing." It would be a *lot* easier - perhaps even feasible - to take the behavior specification and transform it into acceptance tests.

As for actually running programs, it would be the ultimate case of "The Language Is The Library." The users could write behavior specs, and the UI could tell them "oh, somebody already implemented that" and pull in the appropriate library. Or, if not, then once the user who specified it implements it (and the system proves the implementation), the UI asks permission to put it into the library in case anyone else ever asks for that behavior spec (or one proven equal to it) again.

And the library grows (bloats!) incrementally via the magic of crowd-sourcing. And as it grows, people would gradually be able to specify behavior at a higher and higher level, until specifying only the external behavior would actually start to work in some cases.

You'd need background threads running under every UI, at full speed, trying to prove implementations against specs, or to prove the equality of specs so that implementations of one could be used for the other. Things relevant to the local projects would get first priority, of course, but never let a cycle go by without working on something from somewhere. It would be a globally distributed compiler, running full speed on every user's system. Implementations could be proved by exhaustive testing in some cases, with the cases distributed to thousands of users. Or, if the testing is extensive but not yet exhaustive, you'd have a continually updated probability score that the implementation meets the spec, and some devs might choose to accept those library entries on the basis of six-nines probability or higher. They would be replaced instantly by an upgrade if an implementation properly proven to comply with the spec became available.

It would be a worldwide, distributed, crowd-sourced compiler, running 24/7 and simultaneously on every user's code.

But 1982 wasn't ready for it, because it just plain wasn't possible then. Distributed libraries weren't possible without the internet, automated theorem proving wasn't ready for anything non-trivial, nobody had ever built a distributed application, and the compute power for proving things or attempting to test things as exhaustively as possible just plain didn't exist.

It's kind of an awesome idea, actually.

You could take it further than that.

If you have a behavior spec as a proof target, you might be able to stick some sort of automated implementation into the mix. A neural net eats the behavior spec, comes up with -- something -- and if that something gets proven correct, it gets added to the library (and to the set of "positive" examples for the neural net's continuing self-training). Of course it would almost always be wrong. But worldwide, distributed... it might be right often enough to matter.