LtU Forum

Objective-S

Morally kinda similar to ArchJava, apparently.

Objective-S is an architecture-oriented programming language based loosely on Smalltalk and Objective-C. It currently runs on macOS, iOS and Linux, the latter using GNUstep. By allowing general architectures, Objective-S is the first general purpose programming language; what we currently call general purpose languages are actually domain-specific languages for the domain of algorithms. Objective-S includes an Objective-C compatible runtime model, but with a much simpler and more consistent Smalltalk-based syntax. Unlike Smalltalk, Objective-S has syntax for defining classes, so it can be file-based and is "vi-hackable".

Paper: Tyranny of call-return

Paper: Procedure Calls Are the Assembly Language of Software Interconnection: Connectors Deserve First-Class Status

Java / CPython Language Bridge

I've implemented support for two-way compatibility between Java and Python/CPython, meaning that Java can import standard CPython modules and use code as if it were Java, and Python can import Java classes and use them as if they were Python classes.

Imports in both languages are done with standard import statements.
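
For example, the Python side might look roughly like this (a hypothetical sketch; the package path and usage are illustrative, not the bridge's documented API):

# hypothetical sketch: assumes the bridge's native import hook is installed
from java.util import HashMap  # standard import statement; the bridge
                               # intercepts it and generates a wrapper class

m = HashMap()                  # creates a real JVM object
m.put("key", "value")          # each call is translated at runtime
print(m.get("key"))            # -> "value"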

How It Works

Java code imported into Python is subjected to reflection, and Python wrapper classes are then generated using the Python API; these wrappers provide runtime translations between the two language runtimes. A native Python module provides the "magic import" functionality from Java to Python (more info).
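
The generation step can be pictured roughly as follows (a minimal sketch of the general technique, not the project's actual code; bridge_reflect and bridge_invoke are hypothetical stand-ins for the native bridge calls):

def make_wrapper(java_class_name, bridge_reflect, bridge_invoke):
    # bridge_reflect(name) -> method names discovered via reflection (hypothetical)
    # bridge_invoke(handle, method, args) -> translated call result (hypothetical)
    def make_method(method_name):
        def method(self, *args):
            # arguments and return values are translated between the
            # two runtimes by the native bridge
            return bridge_invoke(self._handle, method_name, args)
        method.__name__ = method_name
        return method

    attrs = {name: make_method(name) for name in bridge_reflect(java_class_name)}

    def __init__(self, handle):
        self._handle = handle  # opaque reference to the JVM object
    attrs["__init__"] = __init__

    # build the Python wrapper class dynamically
    return type(java_class_name.rsplit(".", 1)[-1], (object,), attrs)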

Python code imported into Java is introspected using the Python C API, and bytecode is then generated for Java wrapper classes that provide a similar runtime translation in the other direction. A custom compiler based on javax.tools is provided (QoreJavaCompiler), as well as runtime dynamic code-generation support for wrapper APIs in a custom class loader (QoreURLClassLoader) - more info.

This is all accomplished using a third language, called Qore, that provides the functionality of a "language bridge" and also manages the references to data in each separate runtime. Because Qore has a deterministic garbage collector, strong references to Python and Java data are held in Qore and released when there are no more valid references to the data in Qore; various tricks, including the use of thread-local data, are used to manage references transparently. There is also the possibility of using custom reference handlers to maintain the validity of objects across different language runtimes and garbage-collection implementations. A high-level description of the approach is here.
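
The lifetime-management idea, seen from the Python side, can be sketched like this (illustrative only; the real bookkeeping lives in the Qore runtime, and bridge_pin/bridge_release are hypothetical):

import weakref

class ForeignRef:
    # holds a foreign object alive for exactly as long as the Python
    # wrapper is reachable, then releases it deterministically
    def __init__(self, handle, bridge_pin, bridge_release):
        bridge_pin(handle)  # take a strong reference in the bridge
        self._handle = handle
        # run the release as soon as the wrapper becomes unreachable,
        # instead of waiting for a tracing-GC cycle
        self._finalizer = weakref.finalize(self, bridge_release, handle)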

Support for importing Java into Python has already been released; importing Python into Java is currently working and stable and will be released in April 2021.

The idea is to facilitate the mixed use of enterprise technologies (Java) and AI / data science technologies (Python).

All of the source code required has been released under permissive open source licenses (MIT).

I hope this is interesting and useful for someone - happy to provide more information if there's interest.

John Shutt, creator of Kernel and an LtU regular, dies at 56

Apparently he was also a wikinews contributor as they have an obituary: Wikinews mourns loss of volunteer John Shutt

On Friday, Wikinews learned Dr John Nathan Shutt, a long-time contributor to both Wikinews and Wikibooks, died on Wednesday. Editing under the name Pi zero, he was at the time the top contributor to Wikinews by edit count, and came third on English Wikibooks. Dr Shutt was 56 years old.

Edit: John's post tracking page

Racket is ‘Scheme all the way down’ with v8 on Chez Scheme

Chez Scheme’s (nanopass) compiler produces native code for x86 and x86-64, and for ARM (from the Raspberry Pi to the Apple M1).

Racket on Chez status

Racket v8 release

High level languages with optimal code generation

One of the issues that arises when implementing programs that need high performance is that a large percentage of development time is spent writing and debugging optimizations.

These range from reorganizing data and reorganizing control flow to, sometimes, going as far as using inline assembly.

This work seems to be very error-prone and quite difficult, and a lot of specialized knowledge is necessary to achieve good performance.

A good example of the difficulties of this process can be seen in the series Handmade Hero: from about episode 590 to about episode 618, much of the work done in the episodes is about optimization.

How can we make it easier for compilers to help generate faster code automatically?

I found out about superoptimization, but it seems to be very limited in capability: it is very slow, can't deal with control flow, and can't do anything about data structures.
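
To make the cost concrete, here is a toy superoptimizer of my own (an illustration, not any particular tool): it exhaustively enumerates straight-line instruction sequences until one matches the target function on test inputs. The enumeration is exponential in program length, the search space has no branches, and the data representation is fixed in advance.

from itertools import product

# a tiny straight-line "instruction set" over an accumulator and input x
OPS = {
    "inc":  lambda acc, x: acc + 1,
    "dbl":  lambda acc, x: acc * 2,
    "addx": lambda acc, x: acc + x,
}

def run(program, x):
    acc = 0
    for op in program:
        acc = OPS[op](acc, x)
    return acc

def superoptimize(target, tests, max_len=4):
    # shortest-first exhaustive search: |OPS|**length candidates per
    # length, which is why superoptimization is so slow in practice
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in tests):
                return program
    return None

print(superoptimize(lambda x: 2 * x + 1, tests=range(5)))
# finds a 3-instruction sequence such as ('inc', 'addx', 'addx')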

Regarding this last point, most languages force us to decide on data structures up front, so a compiler can't do much about them. A different strategy is used in Compilation of Bottom-Up Evaluation for a Pure Logic Programming Language, which proposes a Turing-complete, Datalog-inspired language; with this approach, the data structures are selected by the compiler.

What other research looks at this problem, and what other ideas do you have?

Is relational algebra + mapreduce a good way to approach this problem?
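
One way to read that suggestion: if a program is stated as relational operators (or, equivalently, as map/reduce over collections), the description commits to a result but not to physical data structures or evaluation order, so a compiler is free to choose hash, sort-merge, or distributed plans. A minimal sketch of the idea (illustrative only):

from collections import defaultdict
from functools import reduce

orders = [("alice", 3), ("bob", 5), ("alice", 2)]

# declarative group-by + sum: constrains only the result, not the plan
def sum_by_key(pairs):
    def step(groups, pair):
        key, value = pair
        groups[key] += value
        return groups
    return reduce(step, pairs, defaultdict(int))

print(dict(sum_by_key(orders)))  # {'alice': 5, 'bob': 5}
# a compiler could execute the same description with a hash aggregate,
# a sort-based aggregate, or a map/reduce over many machines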

How can we have program descriptions that compile into optimal or close to optimal code? Or at least how can we decouple optimizations from the problem description?

P.S. This is a bit of a sequel to Why is there no widely accepted progress for 50 years?. The answers I got in that thread were very thought-provoking and helped clarify much of my thinking. Thank you very much to all who participated in the discussion!

Concurrent System Programming with Effect Handlers

From 2017; cf. the GitHub repo of materials for the CUFP '17 tutorial about it.

Stephen Dolan, Spiros Eliopoulos, Daniel Hillerström, Anil Madhavapeddy, KC Sivaramakrishnan and Leo White

Abstract. Algebraic effects and their handlers have been steadily gaining attention as a programming language feature for composably expressing user-defined computational effects. While several prototype implementations of languages incorporating algebraic effects exist, Multicore OCaml incorporates effect handlers as the primary means of expressing concurrency in the language. In this paper, we make the observation that effect handlers can elegantly express particularly difficult programs that combine system programming and concurrency without compromising performance. Our experimental results on a highly concurrent and scalable web server demonstrate that effect handlers perform on par with highly optimised monadic concurrency libraries, while retaining the simplicity of direct-style code.

A problem about programming with macros vs Kernel F-exprs

I love Kernel F-exprs, which look better than ordinary macros in almost every way, especially in eliminating the need for a reflective tower or a separation of phases, if I understand correctly.

However, I have trouble trying to implement the following using F-exprs:

(defmacro f ()
  (make-array 1)) ;; Common Lisp style macro
(define g (lambda () (list (f) (f)))) ;; Scheme-style definition using f

The two (f) forms above each expand, at macro-expansion time, into a different array instance. Calling (g) will then return a list every time containing the same two distinct array instances.

I can't think of a way to replicate this behavior using Kernel F-exprs: it seems that when an F-expr f is called, there is no way to know where in the source code it is called. If f accepts some arguments, a dirty hack is to keep a hash table mapping f's unevaluated argument to array instances and to hope the implementation doesn't hash-cons the source code; but in this case the argument is just one unique #nil, so that doesn't work.

Any ideas?

Loop and recursion

Generic loops are hard to reason about and understand when reading code, and are therefore prone to causing bugs. I think it would be beneficial to forbid generic loops and to have only iterators.

Recursion can also be problematic, since stack usage can become unpredictable. The exception is of course tail recursion, since a tail call can reuse the current stack frame.
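
For illustration (Python notation for convenience; CPython itself does not eliminate tail calls):

# not a tail call: the multiplication happens after the recursive call
# returns, so every level needs its own stack frame
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

# tail-recursive form: the recursive call is the last action, so an
# implementation with tail-call elimination can reuse the current
# frame, making stack usage constant and predictable
def factorial_tail(n, acc=1):
    if n == 0:
        return acc
    return factorial_tail(n - 1, acc * n)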

If a language prohibited generic loops and all recursion except tail recursion, would it still be useful as a general-purpose programming language, or would those restrictions make some common designs impossible?

Object graph 'integrals'

Is there some mathematical theory that considers ordered 'integrals' over object graphs? I've selectively checked category theory books and papers, and it looks like they work with graph-to-graph operations rather than with 'integral' operations over a single object graph. Papers in graph theory consider some path-'integral' operations in an ad-hoc manner (as in the delivery problem), but I have not found a generic treatment of graph-scoped 'integral' operations.

Such a graph 'integral' would be a higher-order function that produces a graph-level function from a node-specific function, combining the node-specific functions with some combinator according to the links in the graph.

The theory behind dependency injection needs such graph-'integral' operations, but I have not found any papers or books on this simple-looking thing, possibly because I'm using the wrong keywords. For example, the creation of a graph of objects and the unwinding of a graph of objects in dependency injection are both such 'integral' operations: we need to execute some object-specific operation for each object, in an operation-specific order governed by the graph. For the unwind operation, we need to destroy the objects that have no dependent objects first; for a configuration 'refresh', we need to update only the affected objects.
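
A concrete reading of the dependency-injection examples (my own sketch, not from any paper): the 'integral' is a fold over the object graph in dependency order, with construction running forward and unwinding running in reverse.

from graphlib import TopologicalSorter

# each object maps to the objects it depends on
deps = {"app": {"db", "cache"}, "db": {"config"}, "cache": {"config"}, "config": set()}

def graph_integral(deps, node_fn, reverse=False):
    # apply a node-specific function to every node, in an order
    # governed by the links: dependencies first for construction,
    # dependents first for unwinding
    order = list(TopologicalSorter(deps).static_order())
    if reverse:
        order.reverse()
    return [node_fn(node) for node in order]

graph_integral(deps, lambda n: print("create", n))                 # config first
graph_integral(deps, lambda n: print("destroy", n), reverse=True)  # app first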

Cognition and Coding

Fascinating research from MIT on human cognition and coding: the brain uses spatial-navigation as well as math/logic areas to understand programs. Our apparent need for at least some spatial processing when programming may lend additional support for an explicitly spatial syntax and semantics for programming languages.

The full article is published here.
