LtU Forum

Racket is ‘Scheme all the way down’ with v8 on Chez Scheme

Chez Scheme’s nanopass compiler produces native code for x86 and x86-64, and for ARM (from the Raspberry Pi up to the Apple M1).

Racket on Chez status

Racket v8 release

High level languages with optimal code generation

One of the issues that arises when implementing programs that need high performance is that a large percentage of time is spent writing and debugging optimizations.

These optimizations range from reorganizing data and control flow to, in some cases, writing inline assembly.

This work seems to be very error prone and quite difficult, requiring a lot of specialized knowledge to achieve good performance.

A good example of the difficulties of this process can be seen in the Handmade Hero series, roughly from episode 590 to episode 618; much of the work in those episodes is devoted to optimization.

How can we make it easier for compilers to help generate faster code automatically?

I found out about superoptimization, but it seems to be very limited in capability: it is very slow, can't deal with control flow, and can't do anything about data structures.

Regarding this last point, most languages force us to choose data structures up front, so the compiler can't do much about them. A different strategy is used in Compilation of Bottom-Up Evaluation for a Pure Logic Programming Language, where a Turing-complete, Datalog-inspired language is proposed. With this approach, the data structures are selected by the compiler.
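To make the "compiler-selected data structures" idea concrete, here is a small sketch of my own (Scheme, using filter as found in Racket or SRFI-1; this is not the cited paper's language) of bottom-up evaluation of the single rule path(x, z) :- edge(x, y), path(y, z). The program is just the rule and the facts; how the fact set is represented (here, a plain list) is an implementation detail a compiler would be free to choose:

(define edges '((a b) (b c) (c d)))

;; One bottom-up step: derive (x z) from each edge (x y) and path (y z).
(define (step paths)
  (define derived
    (apply append
           (map (lambda (e)
                  (map (lambda (p) (list (car e) (cadr p)))
                       (filter (lambda (p) (equal? (cadr e) (car p)))
                               paths)))
                edges)))
  (append paths (filter (lambda (f) (not (member f paths))) derived)))

;; Iterate until no new facts appear.
(define (fixpoint paths)
  (let ((next (step paths)))
    (if (equal? next paths) paths (fixpoint next))))

(fixpoint edges) ;; => ((a b) (b c) (c d) (a c) (b d) (a d))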

What other research looks at this problem, and what other ideas do you have?

Is relational algebra plus MapReduce a good way to approach this problem?

How can we have program descriptions that compile into optimal, or close to optimal, code? Or at least, how can we decouple the optimizations from the problem description?

P.S. This is a bit of a sequel to the thread Why is there no widely accepted progress for 50 years?. The answers I got in that thread were very thought-provoking and helped clarify much of my thinking. Thank you very much to all who participated in the discussion!

Concurrent System Programming with Effect Handlers

From 2017; cf. the GitHub repo of materials for the CUFP 2017 tutorial on the topic.

Stephen Dolan, Spiros Eliopoulos, Daniel Hillerström, Anil Madhavapeddy, KC Sivaramakrishnan and Leo White

Abstract. Algebraic effects and their handlers have been steadily gaining attention as a programming language feature for composably expressing user-defined computational effects. While several prototype implementations of languages incorporating algebraic effects exist, Multicore OCaml incorporates effect handlers as the primary means of expressing concurrency in the language. In this paper, we make the observation that effect handlers can elegantly express particularly difficult programs that combine system programming and concurrency without compromising performance. Our experimental results on a highly concurrent and scalable web server demonstrate that effect handlers perform on par with highly optimised monadic concurrency libraries, while retaining the simplicity of direct-style code.
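The paper's examples are in Multicore OCaml, but the underlying pattern is easy to sketch with first-class continuations (this is only an analogy of my own in Scheme, not the paper's effect-handler code): direct-style tasks suspend themselves by capturing a continuation, and a scheduler resumes them later.

;; Direct-style cooperative tasks via call/cc. Effect handlers play an
;; analogous role in Multicore OCaml, with the handler capturing the
;; continuation at each suspension point.
(define ready '())                 ;; queue of suspended continuations

(define (spawn thunk)              ;; enqueue a new task
  (set! ready (append ready (list (lambda (v) (thunk) (dispatch))))))

(define (dispatch)                 ;; run the next ready task, if any
  (if (null? ready)
      'done
      (let ((k (car ready)))
        (set! ready (cdr ready))
        (k #f))))

(define (yield)                    ;; suspend the current task
  (call/cc (lambda (k)
             (set! ready (append ready (list k)))
             (dispatch))))

(spawn (lambda () (display "a1 ") (yield) (display "a2 ")))
(spawn (lambda () (display "b1 ") (yield) (display "b2 ")))
(dispatch)                         ;; prints: a1 b1 a2 b2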

A problem about programming with macros vs Kernel F-exprs

I love Kernel F-exprs, which look better than ordinary macros in almost every way, especially in eliminating the need for a reflective tower, or for a separation of phases, if I understand correctly.

However, I have trouble trying to implement the following using F-exprs:

(defmacro f ()
  (make-array 1)) ;; Common Lisp style macro: the expansion is a
                  ;; literal array, created once per call site
(defun g ()
  (list (f) (f)))

The two occurrences of (f) above each expand into a different array instance. Every call to (g) will then return a list containing the same two distinct array instances.

I can't think of a way to replicate this behavior using Kernel F-exprs. It seems that when an F-expr f is called, there is no way to know where in the source code it was called. If f accepted some arguments, a dirty hack would be to keep a hash table mapping f's unevaluated argument to array instances and hope the implementation doesn't hash-cons the source code; but in this case the argument is just one unique #nil, so that doesn't work.
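For concreteness, the naive F-expr translation (a sketch in Kernel-style notation; make-vector stands in for whatever array constructor the implementation provides) shows the mismatch:

($define! f
  ($vau () #ignore
    (make-vector 1))) ;; the body runs on EVERY call, so each
                      ;; evaluation of (f) allocates a fresh vector;
                      ;; there is no expansion step that could allocate
                      ;; once per call site, and f cannot tell its call
                      ;; sites apart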

Any ideas?

Loop and recursion

Generic loops are hard to reason about and understand when reading code, and are therefore prone to causing bugs. I think it would be beneficial to forbid generic loops and have only iterators.

Recursion can also be problematic, since stack usage can become unpredictable. The exception is of course tail recursion, since a tail call can reuse the current stack frame.
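For instance (a sketch in Scheme, where proper tail calls are guaranteed by the standard):

;; Tail-recursive: the call is the last thing the function does,
;; so the implementation reuses the current stack frame.
(define (sum-to n acc)
  (if (= n 0)
      acc
      (sum-to (- n 1) (+ acc n)))) ;; constant stack space

;; Not tail-recursive: the addition is still pending after the
;; recursive call, so the stack grows with n.
(define (sum-up n)
  (if (= n 0)
      0
      (+ n (sum-up (- n 1)))))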

If a language prohibited generic loops and all recursion except tail recursion, would it still be useful as a general-purpose programming language, or would those restrictions make some common designs impossible?

Object graph 'integrals'

Is there some mathematical theory that considers ordered 'integrals' over object graphs? I have selectively checked category theory books and papers, and they seem to work with graph-to-graph operations rather than with 'integral' operations over a single object graph. Papers in graph theory consider some path-'integral' operations in an ad hoc manner, as in delivery problems, but I have not found a generic treatment of graph-scoped 'integral' operations.

Such a graph 'integral' would be a higher-order function that produces a whole-graph function from a node-specific function, combining the node-specific functions with some combinator according to the links in the graph.

Such graph-'integral' operations are needed for a theory behind dependency injection, but I have not found any papers or books on this simple-looking thing, possibly because I am using the wrong keywords. For example, the creation of a graph of objects and the unwinding of a graph of objects in dependency injection are such 'integral' operations: we need to execute some object-dependent operation for each object, in an operation-specific order governed by the graph. For a graph-unwind operation, for example, we need to destroy objects that have no dependent objects first. For a configuration 'refresh', only the affected objects need to be updated.
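As a rough sketch of what such a graph 'integral' might look like (Scheme; graph-fold and the example graph are illustrative names of mine, not from any paper): apply a node-specific operation to every node of a dependency graph, dependencies first.

;; deps is an alist mapping each node to the nodes it depends on;
;; the graph is assumed acyclic.
(define (graph-fold op deps)
  (define done '())
  (define (visit node)
    (unless (member node done)
      (for-each visit (cdr (assoc node deps))) ;; dependencies first
      (set! done (cons node done))
      (op node)))
  (for-each visit (map car deps)))

(define deps '((db) (cache db) (app db cache)))
(graph-fold (lambda (n) (display n) (newline)) deps)
;; prints db, cache, app -- a 'creation' order; unwinding would fold
;; over the reversed graph, destroying app before cache and db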

Cognition and Coding

Fascinating research from MIT on human cognition and coding. The brain uses spatial-navigation areas as well as math/logic areas to understand programs. Our apparent need for at least some spatial processing when programming may lend additional support to an explicitly spatial syntax and semantics for programming languages.

The full article is published here.

Looking for a dependently typed proof system language implemented in C++

I am looking for a dependently typed research proof system language implemented in C++.

Typer: ML boosted with type theory and Scheme

Typer: ML boosted with type theory and Scheme

Abstract
We present the language Typer which is a programming language in the ML family. Its name is an homage to Scheme(r) with which it shares the design of a minimal core language combined with powerful metaprogramming facilities, pushing as much functionality as possible into libraries. Contrary to Scheme, its syntax includes traditional infix notation, and its core language is very much statically typed. More specifically the core language is a variant of the implicit calculus of constructions (ICC). We present the main elements of the language, including its Lisp-style syntactic structure, its elaboration phase which combines macro-expansion and Hindley-Milner type inference, its treatment of implicit arguments, and its novel approach to impredicativity.

Haskell-Like S-Expression-Based Language Designed for an IDE

Haskell-Like S-Expression-Based Language Designed for an IDE

(has to have my current absolute super favourite first sentence ever.)

Abstract

The state of the programmers’ toolbox is abysmal. Although substantial effort is put into the development of powerful integrated development environments (IDEs), their features often lack capabilities desired by programmers and target primarily classical object oriented languages. This report documents the results of designing a modern programming language with its IDE in mind. We introduce a new statically typed functional language with strong metaprogramming capabilities, targeting JavaScript, the most popular runtime of today; and its accompanying browser-based IDE. We demonstrate the advantages resulting from designing both the language and its IDE at the same time and evaluate the resulting environment by employing it to solve a variety of nontrivial programming tasks. Our results demonstrate that programmers can greatly benefit from the combined application of modern approaches to programming tools.
