LtU Forum

Object graph 'integrals'

Is there some mathematical theory that considers ordered 'integrals' over object graphs? I've selectively checked category theory books and papers, but they seem to work with graph-to-graph operations rather than with 'integral' operations over a single object graph. Papers in graph theory consider some path 'integral' operations in an ad-hoc manner, as in the delivery problem, but I have not found a generic treatment of graph-scoped 'integral' operations.

Such a graph integral would be a higher-order function that produces a whole-graph function from node-specific functions, combining them with some combinator according to the links in the graph.

A theory of dependency injection needs such graph 'integral' operations, but I have not found any papers or books on this simple-looking topic, possibly because I'm using the wrong keywords. For example, creating a graph of objects and unwinding a graph of objects in dependency injection are both such 'integral' operations: we execute some object-specific operation for each object, in an operation-specific order governed by the graph. For the graph unwind operation, we must first destroy the objects that have no dependent objects; for a configuration 'refresh', we need to update only the affected objects.
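A minimal sketch of what such an 'integral' might look like in practice, assuming the dependency graph is given as an adjacency map (all names here are illustrative, not from any real DI framework): a post-order traversal yields the creation order, and its reverse yields the unwind order.

```cpp
#include <algorithm>
#include <functional>
#include <map>
#include <set>
#include <string>
#include <vector>

// Each node maps to the nodes it depends on.
using Graph = std::map<std::string, std::vector<std::string>>;

// Post-order DFS yields a topological order: dependencies first.
// Creating objects in this order guarantees every dependency exists
// before its dependents.
std::vector<std::string> instantiationOrder(const Graph& g) {
    std::vector<std::string> order;
    std::set<std::string> visited;
    std::function<void(const std::string&)> visit = [&](const std::string& n) {
        if (!visited.insert(n).second) return;   // already handled
        auto it = g.find(n);
        if (it != g.end())
            for (const auto& dep : it->second) visit(dep);
        order.push_back(n);                      // emitted after all deps
    };
    for (const auto& [node, _] : g) visit(node);
    return order;
}

// Destroying in the reverse order guarantees no object outlives
// something that still uses it.
std::vector<std::string> destructionOrder(const Graph& g) {
    auto order = instantiationOrder(g);
    std::reverse(order.begin(), order.end());
    return order;
}
```

Here the node-specific function is object construction (or destruction), and the combinator is sequencing in (reverse) topological order over the links.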

Cognition and Coding

Fascinating research from MIT on human cognition and coding. The brain uses spatial-navigation as well as math/logic areas to understand programs. Our apparent need for at least some spatial processing when programming may lend additional support to an explicitly spatial syntax and semantics for programming languages.

The full article is published here.

looking for a dependently typed research proof system language implemented in C++

I am looking for a dependently typed research proof system language implemented in C++.

Typer: ML boosted with type theory and Scheme

Typer: ML boosted with type theory and Scheme

Abstract
We present the language Typer, a programming language in the ML family. Its
name is an homage to Scheme(r), with which it shares the design of a minimal core language
combined with powerful metaprogramming facilities, pushing as much functionality as
possible into libraries. Contrary to Scheme, its syntax includes traditional infix notation,
and its core language is very much statically typed. More specifically, the core language is a
variant of the implicit calculus of constructions (ICC). We present the main elements of the
language, including its Lisp-style syntactic structure, its elaboration phase, which combines
macro-expansion and Hindley-Milner type inference, its treatment of implicit arguments,
and its novel approach to impredicativity.
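As a small aside on one ingredient the abstract mentions, Hindley-Milner type inference, here is a minimal sketch of its unification step (this is illustrative only, not Typer's actual implementation; all names are invented):

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// A type is either a variable or a constructor applied to arguments.
struct Type;
using TypePtr = std::shared_ptr<Type>;
struct Type {
    bool isVar;
    std::string name;              // variable name or constructor name
    std::vector<TypePtr> args;     // empty for variables
};

TypePtr var(std::string n) { return std::make_shared<Type>(Type{true, std::move(n), {}}); }
TypePtr con(std::string n, std::vector<TypePtr> a = {}) {
    return std::make_shared<Type>(Type{false, std::move(n), std::move(a)});
}

using Subst = std::map<std::string, TypePtr>;

// Follow variable bindings until an unbound variable or a constructor.
TypePtr resolve(TypePtr t, const Subst& s) {
    while (t->isVar) {
        auto it = s.find(t->name);
        if (it == s.end()) break;
        t = it->second;
    }
    return t;
}

// Unify two types, extending the substitution in place.
void unify(TypePtr a, TypePtr b, Subst& s) {
    a = resolve(a, s); b = resolve(b, s);
    if (a->isVar) { s[a->name] = b; return; }   // (occurs check omitted)
    if (b->isVar) { s[b->name] = a; return; }
    if (a->name != b->name || a->args.size() != b->args.size())
        throw std::runtime_error("type mismatch: " + a->name + " vs " + b->name);
    for (size_t i = 0; i < a->args.size(); ++i) unify(a->args[i], b->args[i], s);
}
```

For example, unifying `a -> Int` with `Bool -> b` binds `a` to `Bool` and `b` to `Int`.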

Haskell-Like S-Expression-Based Language Designed for an IDE

Haskell-Like S-Expression-Based Language
Designed for an IDE

(has to have my current absolute super favourite first sentence ever.)

Abstract

The state of the programmers' toolbox is abysmal. Although substantial effort is put into the
development of powerful integrated development environments (IDEs), they often lack
capabilities desired by programmers and target primarily classical object-oriented languages.
This report documents the results of designing a modern programming language with its IDE in
mind. We introduce a new statically typed functional language with strong metaprogramming
capabilities, targeting JavaScript, the most popular runtime of today; and its accompanying
browser-based IDE. We demonstrate the advantages resulting from designing both the language
and its IDE at the same time and evaluate the resulting environment by employing it to solve a
variety of nontrivial programming tasks. Our results demonstrate that programmers can greatly
benefit from the combined application of modern approaches to programming tools.
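One practical reason an IDE-oriented design might pick S-expressions (a sketch of my own, not code from the report; all names are invented): the whole surface syntax can be read by a few dozen lines of recursive descent, so tooling always has a faithful syntax tree.

```cpp
#include <cctype>
#include <stdexcept>
#include <string>
#include <vector>

struct SExpr {
    std::string atom;              // non-empty for atoms
    std::vector<SExpr> list;       // children for lists
    bool isAtom() const { return !atom.empty(); }
};

// Recursive-descent reader over a string; `pos` advances past what is read.
SExpr readSExpr(const std::string& src, size_t& pos) {
    while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos;
    if (pos >= src.size()) throw std::runtime_error("unexpected end of input");
    if (src[pos] == '(') {
        ++pos;                     // consume '('
        SExpr node;
        while (true) {
            while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos;
            if (pos >= src.size()) throw std::runtime_error("missing ')'");
            if (src[pos] == ')') { ++pos; return node; }
            node.list.push_back(readSExpr(src, pos));
        }
    }
    SExpr atom;                    // read until whitespace or paren
    while (pos < src.size() && !std::isspace((unsigned char)src[pos])
           && src[pos] != '(' && src[pos] != ')')
        atom.atom += src[pos++];
    return atom;
}
```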

The AST Typing Problem (a bleg, also "why did attribute-grammars fail?")

TL;DR: I'm looking for open-source multipass compilers in the spirit of what "shap" wrote about, as inputs to a project to understand whether attribute grammars could help make them more tractable.

A decade ago, user "shap" posted about "The AST Typing Problem" ( http://lambda-the-ultimate.org/node/4170 ) and this generated a great discussion. In that discussion, a number of commenters also chimed-in about their own problems in writing multipass compilers. There was a post in 2019 at the OCaml Discussion Forum ( https://discuss.ocaml.org/t/an-ast-typing-problem/3677 ) that raised the same issues. The situation hasn't actually gotten better (AFAICT), and I'd like to revisit the subject.

I have a belief (which might or might not be right) that attribute grammars could be an expressive medium for producing sufficiently modular descriptions that avoid some of the problems raised in shap's post. The problem with this belief is that attribute grammars were tried decades ago and arguably failed. I'm not going to sugar-coat this: on this forum, with lots of hard-core compiler writers, nobody seriously considers them as an implementation technology. But I'm not saying they failed for *technical* reasons: that might or might not be the case, and resolving that question is part of why I'm writing this post.

I'd like to understand why, and the best way I know of doing so would be to take some of the multipass compilers that "shap" and the commenters mentioned ten years ago and translate them into the AG formalism, to see what happens: are they uglier, messier, harder to get right? (For sure, merely translating an already-written compiler isn't a full test, but it's a start.) To that end, I'm writing this post to ask for pointers to such multipass compilers, which I might use as experimental subjects.
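For readers unfamiliar with the formalism, a miniature of the attribute-grammar style under discussion might look like this (a sketch with invented names, not from any of the systems mentioned): each production declares one equation per attribute, here an inherited environment flowing down the tree and a synthesized value flowing up.

```cpp
#include <map>
#include <memory>
#include <string>

// A toy expression grammar: numbers, variables, and addition.
struct Expr {
    enum Kind { Num, Var, Add } kind;
    int num = 0;                        // for Num
    std::string name;                   // for Var
    std::shared_ptr<Expr> lhs, rhs;     // for Add
};

using Env = std::map<std::string, int>;  // the inherited attribute

// Synthesized attribute `value`: one equation per production.
int value(const Expr& e, const Env& env) {
    switch (e.kind) {
        case Expr::Num: return e.num;               // value(Num n) = n
        case Expr::Var: return env.at(e.name);      // value(Var x) = env[x]
        case Expr::Add: return value(*e.lhs, env)   // value(l + r) =
                             + value(*e.rhs, env);  //   value(l) + value(r)
    }
    return 0;  // unreachable
}

// Small helpers for building trees.
std::shared_ptr<Expr> num(int n) {
    auto e = std::make_shared<Expr>(); e->kind = Expr::Num; e->num = n; return e;
}
std::shared_ptr<Expr> var(std::string x) {
    auto e = std::make_shared<Expr>(); e->kind = Expr::Var; e->name = std::move(x); return e;
}
std::shared_ptr<Expr> add(std::shared_ptr<Expr> l, std::shared_ptr<Expr> r) {
    auto e = std::make_shared<Expr>(); e->kind = Expr::Add; e->lhs = l; e->rhs = r; return e;
}
```

A real AG system would let each pass add new attribute equations to existing productions without touching the tree definition, which is exactly the modularity claim I want to test.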

I'm in the process of doing some "software archeology", writing an AG evaluator-generator based on published writings about previous systems (so far, I've found only one that still works: Eli, from Waite at Colorado), so experimental subjects to which I could apply the system I'm developing would be very useful.

Thanks in advance

Looking for VMs or AMs for functional languages

I want to review source code for virtual or abstract machines for statically typed functional languages.

Candidates would be native implementations in C, C++, Rust, etc., implementations in assembler, or bootstrapped implementations in a higher-level source language.
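For scale, the skeleton such machines share can be sketched in a few lines (an illustrative toy, not any real VM): a stack, an instruction loop, and a dispatch switch. Real candidates, such as the SECD machine, OCaml's ZINC machine, or GHC's STG machine, layer closures, environments, and tail calls on top of this.

```cpp
#include <stdexcept>
#include <vector>

enum class Op { Push, Add, Mul, Swap };

struct Instr { Op op; long arg = 0; };

// Execute a straight-line instruction sequence and return the top of stack.
long run(const std::vector<Instr>& code) {
    std::vector<long> stack;
    auto pop = [&] {
        if (stack.empty()) throw std::runtime_error("stack underflow");
        long v = stack.back(); stack.pop_back(); return v;
    };
    for (const auto& i : code) {
        switch (i.op) {
            case Op::Push: stack.push_back(i.arg); break;
            case Op::Add:  { long b = pop(), a = pop(); stack.push_back(a + b); break; }
            case Op::Mul:  { long b = pop(), a = pop(); stack.push_back(a * b); break; }
            case Op::Swap: { long b = pop(), a = pop(); stack.push_back(b); stack.push_back(a); break; }
        }
    }
    return pop();
}
```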

Many thanks in advance.

Upward and downward polymorphism in object oriented languages.

Let's say, for the sake of argument, that I have defined a (virtual) base type 'bird' containing a pure virtual function 'sing', so every (non-virtual) derived type of bird must provide an implementation of that function.

I derive three classes: 'wren', where it results in fairly complex twittery noises; 'crow', where it results in relatively uniform 'caw' sounds; and 'lyrebird', which selects from a long list of other birdcalls, cell phone ringtones, doorbells, chainsaws, diesel engines, etc etc etc...

I then make a list of birds. I want to iterate down that list calling 'sing' for each bird. This is an example of 'downward polymorphism', because I want members of the base class to invoke the behavior of the derived class.

The same thing is true if I provide an implementation of 'sing' in the base class which plays 4 minutes and 33 seconds of silence. When I iterate down my list of birds, where there are no virtual birds, it is no more helpful: every bird is silent. This is 'upward polymorphism', where any member of a derived class behaves as a member of its parent class, ignoring implementations of functions overridden in the derived class.

I happen to think that upward polymorphism is almost useless. What's the point of having a class whose behavior varies according to subtype if you then have to keep track of what subtype some particular instance is in order to get its correct behavior?

I wind up implementing downward polymorphism myself. This means a *variable* field in the base class containing the function pointers, an implementation (for the base class) that transfers control through that pointer, and a constructor for each derived class that writes its own function pointer into that variable (via a series of nasty binary casts on copies of the 'self' and function pointers, which my compiler has to be beaten over the head to allow).
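The hand-rolled scheme described above can be sketched like this (a cast-free illustration in C++; 'bird', 'wren', and 'crow' are from the post, the code itself is mine): the base class holds a function-pointer field, the base 'sing' forwards through it, and each derived constructor installs its own implementation.

```cpp
#include <string>

struct Bird {
    // The *variable* field: points at the subtype-specific behavior.
    std::string (*singImpl)(const Bird&) = nullptr;

    // Base entry point: one indirection through the pointer.
    std::string sing() const { return singImpl ? singImpl(*this) : ""; }
};

struct Wren : Bird {
    // Captureless lambdas convert to plain function pointers, no casts needed.
    Wren() { singImpl = [](const Bird&) -> std::string { return "twitter"; }; }
};

struct Crow : Bird {
    Crow() { singImpl = [](const Bird&) -> std::string { return "caw"; }; }
};
```

The casts only become necessary once the installed function needs to reach fields that exist solely in the derived class; with the pointer typed against the base class, as here, the wiring itself stays type-safe.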

In other words, I am using OO and polymorphism in C++ in almost exactly the same nasty, unsafe way I use them in C. The only advantage, from my point of view, is the ability to provide the interface and calling conventions people expect if the code is ever to be integrated into the same program with various libraries. The disadvantage is that C++ compilers require more beating over the head, and more ignored warnings, to be tortured into allowing it.

Why isn't there a nice, safe, compiler-supported implementation of downward polymorphism? Is it really as dangerous and bizarre as the reactions of implementors and compilers seem to imply? Is the way I'm doing it by hand really better than just providing a language construct, or even a template library, that does it automatically?

Honest to god, transferring control through a pointer isn't significant overhead.

Limits of Computability

There are many models of computation, e.g. recursive functions (Gödel/Herbrand), Turing machines, and the lambda calculus.

These models can be used to explore the limits of computability, and the lambda calculus is a particularly good model for doing so.
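As a tiny illustration of the lambda calculus as a model of computation (a sketch of mine, not from the paper): Church numerals encode numbers purely as functions, and arithmetic becomes function composition. The encodings below follow the standard ones.

```cpp
#include <functional>

using F = std::function<int(int)>;
using Church = std::function<F(F)>;   // a numeral applies f to x n times

// zero f = identity
Church zero = [](F) { return [](int x) { return x; }; };

// succ n f = f after (n f)
Church succ_(Church n) {
    return [n](F f) { return [n, f](int x) { return f(n(f)(x)); }; };
}

// add m n f = m applications of f after n applications of f
Church add_(Church m, Church n) {
    return [m, n](F f) { return [m, n, f](int x) { return m(f)(n(f)(x)); }; };
}

// Recover an int by counting applications of the successor function.
int toInt(Church n) { return n([](int x) { return x + 1; })(0); }
```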

I have written a little paper demonstrating these limits. It avoids a lot of math and should be easy to read without loss of precision.

Specialized File/Disk Systems for Actor Environments

Hi Folks,

It occurs to me that every once in a while someone takes a radically different approach to file system design, to better match disk (or now SSD) I/O to the nature of the processing going on. Database file systems come to mind. Maybe the Hadoop FS?

It strikes me that large clouds of actors, as in Erlang, might be better supported/represented by something other than a standard file system. (This is in the context of starting to design a system that might run Erlang on bare iron, which leads to the question of whether there are storage optimizations to consider.)

Any thoughts, references, etc?

Thanks,

Miles Fidelman
