LtU Forum

Limits of Computability

There are many models of computation, for example recursive functions (Gödel/Herbrand), Turing machines, and the lambda calculus.

These models can be used to explore the limits of computability, and the lambda calculus is particularly well suited to that purpose.
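To give a taste of what that means in practice, here is a minimal sketch (in Standard ML, and not taken from the paper) of an evaluator for untyped lambda terms. The term omega below has no normal form, and no evaluator can decide in general which terms do:

datatype term =
    Var of string
  | Lam of string * term
  | App of term * term

(* substitution here is not capture avoiding; it is enough for the closed
   terms built below, but not a general implementation *)
fun subst (x, s) t =
  case t of
      Var y => if x = y then s else t
    | Lam (y, b) => if x = y then t else Lam (y, subst (x, s) b)
    | App (f, a) => App (subst (x, s) f, subst (x, s) a)

(* call-by-name evaluation; diverges on terms without a normal form *)
fun eval (App (f, a)) =
      (case eval f of
           Lam (x, b) => eval (subst (x, a) b)
         | f' => App (f', a))
  | eval t = t

(* omega = (fn x => x x) (fn x => x x); eval omega never terminates *)
val omega = App (Lam ("x", App (Var "x", Var "x")),
                 Lam ("x", App (Var "x", Var "x")))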

I have written a little paper demonstrating these limits. It avoids a lot of math and should be easy to read without loss of precision.

Specialized File/Disk Systems for Actor Environments

Hi Folks,

It occurs to me that every once in a while someone takes a radically different approach to file system design - to better match disk (or now SSD) i/o to the nature of the processing going on. Database File Systems come to mind. Maybe Hadoop FS?

It strikes me that large clouds of actors - as in Erlang - might better be supported/represented by something other than a standard file system. (This is in the context of starting to design a system that might run on Erlang on bare iron, which leads to the question of whether there are some storage optimizations to consider.)

Any thoughts, references, etc?

Thanks,

Miles Fidelman

What is a type?

I have a hard time grasping what a type really is. To demonstrate my problem, let's assume we do everything with constraints instead.

So, to decide whether an identifier or constant can be used in a given context, just compare the constraints. Do they allow the usage? Then fine; otherwise, error.

Does this mean that the sum of constraints is the type?

How, then, would you explain fundamental types like numbers and strings? A value needs to be one or the other, not both. A function accepting both probably has two distinct implementations, or converts one of them to the other. For example, printing might convert the number into a string. Constraints don't really handle conversions (or do they?), so having numbers and strings seems to me to be something else. Is this what is meant by a type: having a "something", which could then be converted into another "something"? And constraints should be added on top of this?
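To make the printing example concrete, here is a minimal sketch in Standard ML (the names value, Num, Str and print' are invented for illustration): a function that accepts "a number or a string" ends up with one implementation per case, converting the number to a string along the way.

datatype value = Num of int | Str of string

fun print' (Num n) = print (Int.toString n)   (* conversion: number -> string *)
  | print' (Str s) = print s                  (* already a string *)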

Or is the type really all of the above?

Tackling the Awkward Squad for Reactive Programming

https://2020.ecoop.org/details/ecoop-2020-papers/19/Tackling-the-Awkward-Squad-for-Reactive-Programming-The-Actor-Reactor-Model

Sam Van den Vonder, Thierry Renaux, Bjarno Oeyen, Joeri De Koster, Wolfgang De Meuter

Reactive programming is a programming paradigm whereby programs are internally represented by a dependency graph, which is used to automatically (re)compute parts of a program whenever its input changes. In practice reactive programming can only be used for some parts of an application: a reactive program is usually embedded in an application that is still written in ordinary imperative languages such as JavaScript or Scala. In this paper we investigate this embedding and we distill “the awkward squad for reactive programming” as 3 concerns that are essential for real-world software development, but that do not fit within reactive programming. They are related to long lasting computations, side-effects, and the coordination between imperative and reactive code. To solve these issues we design a new programming model called the Actor-Reactor Model in which programs are split up in a number of actors and reactors. Actors and reactors enforce a strict separation of imperative and reactive code, and they can be composed via a number of composition operators that make use of data streams. We demonstrate the model via our own implementation in a language called Stella.

Godel and Meta-Circularity

I had an old forum topic where I discussed a couple of ways that I thought we could solve the OS upgrade problem. I proposed a number of ways that I claimed would work around this problem, but all of my proposed solutions ended up being flawed in one way or another. I was pushing around a lump in the carpet.

The OS upgrade problem is how to build machines that allow provably correct self-upgrade. The goal is to allow the OS to install a new version of itself that could be exactly the same as the previous version or might have new additions or optimizations. My idea for how to do this has been to axiomatize the OS and the lowest-level exposed hardware abstraction, and to require the programmer to supply a proof that his new OS implementation conforms to the OS standard. The standard axioms should include all of the properties that you want every future OS to have, such as all operations terminating in an acceptable time, security requirements, etc.

The most powerful approach to this would be to show the soundness of the OS inside the language, since then you'd be able to show that an arbitrary upgrade is correct. But this would require also modeling the language implementation as part of the OS, which runs us into Godel's incompleteness theorems.

The practical solution in a logic with universes is to start with 2^64 universe levels and go on with your life. Since each universe models the one below, you'd be able to model an OS with 2^64-1 levels on the first upgrade, 2^64-2 on the second, etc. Each time you upgrade your machine, you will only be able to prove the correctness of a logic with one fewer level, but in practice you'll never run out. This is kind of inelegant, though, so I wondered if we could do better.

I think I now see a way to do it that meets the requirement I imposed above: the OS upgrade should be able to implement itself without *any* change (not even the loss of a single universe out of 2^64). The idea is this: we axiomatize that a certain machine operation returns a number that is guaranteed to be a lower bound on the number of levels. There is a software operation that the OS can use to decrease the number of levels (until it reaches zero). The current version of the OS can correctly prove that if it decreases the level count by one before passing control to the new OS, then the new OS will behave correctly (because it thinks there is one fewer level, and we can prove that correct).

The trick is to give the user a way to increase the number of levels reported by the machine by pressing a button *in hardware* on the outside of the machine somewhere. When the user wants to upgrade his OS, he can only do it if he has enough levels. Upon running out, the button resets the universe count to 2^64 or whatever.
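Here is a minimal sketch of that protocol in Standard ML. All names are hypothetical, and the "button" is modelled as an ordinary function, which is exactly what the real design forbids:

val initialLevels = 1000000   (* stands in for 2^64 in this sketch *)

(* the machine's reported lower bound on available universe levels *)
val machineLevels = ref initialLevels

(* the only software operation: the OS may decrease the count, never raise it *)
fun decrementLevel () =
  if !machineLevels > 0
  then machineLevels := !machineLevels - 1
  else raise Fail "out of levels: only the hardware button can restore them"

(* the physical button, outside software control, restores the count *)
fun pressHardwareButton () = machineLevels := initialLevels

(* an upgrade gives up one level before handing control to the new OS *)
fun upgrade (newOs : unit -> unit) = (decrementLevel (); newOs ())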

Taking the machine out of the loop on the level increase is the key to making this work. Any attempt to automate the process by teaching the OS how to press the button itself seems to require additional levels to avoid Godel.

This construction satisfies my goal of building machines that allow provably correct self-upgrade without any loss in consistency strength over time.

Use Cases for Shared-Memory Concurrency?

The async/await thread got me thinking about Coyotos, and the fact that outside of the kernel I couldn't think of any use-case where mutable shared-memory concurrency was motivated in application code. Assumptions:

  • async/await is integrated into the application-level languages,
  • non-shared-memory concurrency with messages in the style of Node worker threads remains available (shared physical heap, non-shared logical heap), and
  • the usual separate address-space mechanisms remain available.

While there are clearly algorithms that benefit from mutable shared-memory concurrency, most have equivalently efficient implementations built on messaging without use of shared memory. This becomes especially true if we add in the ability to observe when data is deep-frozen (which addresses many cases of "initialize before going parallel") to support immutable shared memory concurrency. For example, we did a high-performance ethernet driver in EROS that utterly relied on mutable concurrent shared memory for performance, but it could have been implemented with immutable shared memory if we had had the ability to transfer an observably frozen reference as part of a message. Offhand, I'm not convinced that message ports need to support mutable shared memory data transfers at all.

    By "observably deep frozen", I mean a reference into a previously mutable graph that can no longer be reached (from anywhere) by a reference that permits further mutation. One way to do this is to return a single [transitively] read-only reference to some element of the graph that we can show has not otherwise escaped from the execution context in which it had been mutable, and whose mutable execution context is known to have terminated at the point of return.

    A similar effect can be had from deep-copy, but deep-copy can have prohibitive cost for large object graphs.
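As a small illustration of the deep-copy route (a sketch only; buildBuffer and the sizes are made up), Standard ML's Array.vector publishes an immutable snapshot of a mutable array, at the cost of copying it:

(* a worker fills a mutable buffer ... *)
fun buildBuffer n =
  let
    val buf = Array.tabulate (n, fn i => i * i)
  in
    (* ... and publishes an immutable snapshot of it *)
    Array.vector buf
  end

(* receivers can only read; no mutable reference escapes, but only
   because we paid for a full copy *)
val snapshot = buildBuffer 1024
val x = Vector.sub (snapshot, 10)   (* 100 *)

The observably-frozen version of this would hand out the buffer itself, read-only, without paying for the copy.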

I'm sure there will be some obvious answers, but I'm prompted to ask: what kinds of codes or problem types exist that truly require the shared memory concurrent model (i.e. that cannot be efficiently encoded using a combination of async/await and non-mutably-shared concurrent threads with messaging)? Is there a common pattern to them? Is the pool of such codes large enough to warrant first-class design support in programming languages and OS runtimes? How many of those cases would be eliminated if we had reliably observable deep frozenness in the type system or the language runtime?

To be clear, this is mostly an idle ponder on my part. Putting the async/await discussion together with observable frozen-ness in my head got me to thinking on this, so I thought I'd put the question out there for discussion.

Regards and thanks...

Async/await vs coroutines?

I have noticed that in the last decade or so various programming languages have adopted an async/await model of concurrency (JavaScript, C#, Rust, Zig, etc.). This surprised me, as my experience with Lua had me assume that a coroutine model of concurrency was superior and would win. Obviously I have missed something. What are the advantages of async/await? More explicit? Lower level? Easier to implement? More flexible? How else do async/await and coroutines compare?

Exhaustiveness checks for algorithms that manipulate imperative data structures

Let's start with something that ought not to be controversial, although it probably is: Programming languages exist to express algorithms! Thus, programming languages ought to be judged according to the ease with which they express algorithms, especially the not quite trivial ones. Moreover, algorithms are known to be tricky beasts - you make one slight mistake and the thing does something completely different from what you want. Thus, it is reasonable to emphasize how reliably humans can write correct algorithm implementations as the main criterion for judging languages.

In this regard, one of the greatest inventions in PL history is algebraic data types together with pattern matching and automatic exhaustiveness checks. As long as your programs only manipulate purely functional data structures, it is often possible to write them in such a way that the compiler can verify the exhaustiveness of your case analyses. This is a huge win for reliability, because computers are ridiculously better than humans at performing exhaustive case analyses.

Unfortunately, this advantage of pattern matching over if/else chains completely vanishes as soon as you introduce imperative data structures. As the state of an imperative data structure evolves over time, you gain the ability to perform new operations on it, and also lose the ability to perform operations on it that were previously possible, in a way that algebraic data types are “too static” to correctly describe. For example, suppose you are writing a compiler and, for efficiency reasons, you want to perform syntax tree transformations in place whenever possible. In particular, you want to replace variable names with de Bruijn indices in place. So you define your abstract syntax like this:

datatype state = Name of string | Index of int

type var = state ref

datatype term
  = Var of var
  | Apply of term * term
  | ...


However, then it becomes impossible to express invariants such as “all names have been converted to indices”, i.e., “the abstract syntax tree has been converted to an abstract binding tree”. You might want to define a helper function like

exception UnboundVariable

fun getIndex v =
  case !v of
      Index n => n
    | Name _ => raise UnboundVariable


This is terrible, because it means that you have failed to convince the type checker that your case analysis is exhaustive, i.e., “no variable is in the Name state by the time you call getIndex”.

For a more serious example, consider the algorithms for computing the strongly connected components of a directed graph due to Tarjan and Gabow. Both algorithms maintain an auxiliary stack whose topmost few nodes belong to the strongly connected component of the node currently under consideration. When that strongly connected component has been fully explored, the stack is popped. See here and here, respectively. We know that, at this point, the stack is nonempty, so this does not need to be checked at runtime. And yet, in every memory-safe language that I know of, the check will be performed, creating a useless, unreachable control-flow path.
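To make that unreachable path concrete, here is a sketch in Standard ML (not Tarjan's or Gabow's actual code; popScc and its argument layout are invented for illustration) of popping the current component, rooted at v, off the auxiliary stack:

(* pop nodes off the auxiliary stack until we reach v, the root of the
   current strongly connected component *)
fun popScc ([], _, _) =
      raise Fail "unreachable: the stack is known to be nonempty here"
  | popScc (w :: rest, v, acc) =
      if w = v then (v :: acc, rest)        (* component complete *)
      else popScc (rest, v, w :: acc)

The first clause is exactly the useless control-flow path in question: it can never be taken, yet without it the match is not exhaustive.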

Long ago, when I was a reckless C++ programmer, and thought I could keep every detail in my head, I would have replied “well, then a memory unsafe pop it is!” Nowadays I expect better from programming languages. Are there any programming languages that do a better job of tracking what situations may actually arise when manipulating imperative data structures?

EDIT: I am aware of typestates. They could help with the first example, but I do not see how they help with the second.

Is character as a type meaningless?

Assume 1,2,3 creates a list of three integers, 1 and 2 and 3. Similarly, I could create a list of identifiers like func1, func2, func3. I could also have three characters, 'a', 'b', 'c'.

Having 'a', 'b', 'c' is really like having the string "abc", since a string is an array of characters. Anyone who has read how Unicode works might have noticed that the definition of a character is kind of tricky. Some characters are composed of several characters, so what do you mean by "character"? A Unicode code point, as Unicode would like to see it? A single UTF-8 code unit, as C and C++ typically see it? A UTF-16 code unit, as in Windows?
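As a concrete illustration, in Standard ML (where a char is a single 8-bit byte), one visible character can already span several values of the char type:

(* U+00E9 ("e" with an acute accent) encoded as the two UTF-8 bytes 0xC3 0xA9 *)
val s = "\195\169"

val n = size s                   (* 2, although a reader sees one character *)
val c = String.sub (s, 0)        (* a char, but only half of a code point *)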

All this makes me wonder: is there any reason to treat character as anything other than just a short string? It would be a sub-type of string, constrained to evaluate to a single Unicode character. Would this make sense, or are there pitfalls with this kind of thinking?

Programming in Lambda Calculus

Many texts on untyped lambda calculus are heavily loaded with math.

I have tried a different approach to look at lambda calculus from a programming point of view. The results are posted in a little paper on this website.

As a special add-on, I show how to systematically construct algebraic datatypes in lambda calculus.
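To give a flavour of that add-on, here is a small illustrative sketch written in Standard ML rather than an excerpt from the paper: a datatype such as option can be represented purely by functions, one per constructor, with pattern matching becoming function application.

(* each constructor becomes a function: a value is something that, given one
   branch per constructor, selects the appropriate branch *)
fun none ifNone ifSome = ifNone
fun some x ifNone ifSome = ifSome x

(* "pattern matching" is then just application of the branches *)
fun getOrElse opt default = opt default (fn x => x)

val a = getOrElse (some 42) 0   (* 42 *)
val b = getOrElse none 7        (* 7 *)

In the untyped lambda calculus the same construction works uniformly for any algebraic datatype; the typed rendering above is only meant to convey the shape of the encoding.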

Comments, hints etc. are welcome.
