LtU Forum

A Framework for Gradual Memory Management

I do not know how much interest this community has in the intersection of programming languages, memory management, and type systems, but those intrigued by such topics might find this paper on Gradual Memory Management worth reading.

It proposes that a language's compiler offer more sophisticated type systems that enable a program to flexibly support multiple memory-management mechanisms for improved performance, with safety guaranteed at compile time. The described type systems build on Rust's lifetime-driven owner/borrower model as well as Pony's reference capabilities (mutability/aliasing permissions). The paper also references Microsoft's experimental work on Midori.

I welcome any feedback or questions.

Programming language Theme-D

I have implemented Theme-D, a Scheme-like programming language with static typing. Some properties of Theme-D include:
- Static type system
- A simple object system
- Multi-methods dispatched at runtime (and also at compile time)
- Classes, types, and procedures parameterized by type parameters
- Signature types resembling Java interfaces but multiply dispatched
- A module system
- Two kinds of variables: constants and mutable variables

The Theme-D homepage is located at

http://www.tohoyn.fi/theme-d/index.html

Theme-D can also be found at

https://sourceforge.net/projects/theme-d/

I have also ported (a subset of) the guile-gnome GUI library to Theme-D. Its homepage is located at

http://www.tohoyn.fi/theme-d/theme-d-gnome.html

and it can also be found at

https://sourceforge.net/projects/theme-d-gnome/

- Tommi Höynälänmaa

The Platonic Solids of Software Construction and Their Realization in C

Synopsis -

Here I try to contrive 5 (it actually ends up being 6) 'Platonic Solids' of software construction, i.e., the fundamental elements of programming that all programmers in all business domains end up leveraging regardless of their general-purpose programming language.

As a practical matter, I then demonstrate how different aspects of each element are either supportable or not supportable in C. A language like C is chosen partly because, when we use it to encode these elements, its weak language semantics let us understand each element in a more isolated way, which turns out to be useful at this level of analysis.

However, I warn readers that this gist-article is more conjecture than science, an emerging idea that, if accurate in its notions, is a precursor to a rigorous investigation. That is why I offer it up for evaluation and critique here.

BCS FACS - Annual Peter Landin Semantics Seminar: Compiling Without Continuations, Prof Simon Peyton Jones, 12th Dec, 6pm, London

BCS FACS - Annual Peter Landin Semantics Seminar: Compiling Without Continuations

Date/Time: Tuesday 12 December 2017, 6.00pm - 9.00pm

Venue: BCS, 1st Floor, The Davidson Building, 5 Southampton Street, London, WC2E 7HA

Speaker: Professor Simon Peyton Jones, FRS (Microsoft Research)

Cost: Free

Booking: https://events.bcs.org/book/2701/

Synopsis:

Peter Landin (1930 - 2009) was a pioneer whose ideas underpin modern computing. In the 1950s and 1960s, Landin showed that programs could be defined in terms of mathematical functions, translated into functional expressions in the lambda calculus, and their meaning calculated with an abstract mathematical machine. Compiler writers and designers of modern-day programming languages alike owe much to Landin's pioneering work.

Each year, a leading figure in computer science will pay tribute to Landin's contribution to computing through a public seminar. This year's seminar is entitled “Compiling Without Continuations” and will be given by Professor Simon Peyton Jones, FRS (Microsoft Research).

Programme

5.15pm Coffee

6.00pm Welcome & Introduction

6.05pm Peter Landin Semantics Seminar

Compiling Without Continuations

Professor Simon Peyton Jones, FRS (Microsoft Research)

7.20pm Drinks Reception

Seminar details

GHC compiles Haskell via Core, a tiny intermediate language based closely on the lambda calculus. Almost all GHC’s optimisations happen in Core, but until recently there was an important kind of optimisation that Core really did not handle well. In this talk I’ll show you what the problem was, and how Core’s new “join points” solve it simply and beautifully, by effectively allowing Core to express control flow as well as data flow; there are strong links to so-called “continuation passing style” (CPS) here.
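
A minimal sketch (my own, not taken from the talk) of the kind of code where join points matter. The local function j is only ever tail-called, so GHC's Core can compile it as a labelled block and jump to it, rather than allocating a closure for it:

    -- Hypothetical example, not from the talk.
    f :: Either Int Int -> Int
    f x =
      let j y = y * 2 + 1          -- only tail-called: a join point candidate
      in case x of
           Left  a -> j a
           Right b -> j (b + 1)

With join points, both branches compile to jumps to the same block of code, and transformations such as case-of-case can move code around without duplicating that block.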

Understanding join points can help you as a programmer too, because you can write code confident that it will optimise well. I’ll show you a rather compelling example of this: “skip-less streams” now fuse well, for the first time, which allows us to drop the previous (ingenious but awkward) workarounds.

SK in Prolog

A thought experiment I am too lazy to do myself, so I'll ask you folks.

Define SK, define a reduction relation, ask whether two terms are equal/reduce to similar terms.

Can you do this in Prolog? (Asked because of interest in current unification-based languages like miniKanren.)
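
To pin down what is being asked, here is one possible encoding of the ingredients, sketched in Haskell rather than Prolog (the data type, the reduction strategy, and the crude step-bounded notion of "equal" are my own choices, not from the post):

    -- SK terms and one-step leftmost-outermost reduction.
    data Term = S | K | App Term Term
      deriving (Eq, Show)

    step :: Term -> Maybe Term
    step (App (App K x) _)         = Just x
    step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
    step (App f x) =
      case step f of
        Just f' -> Just (App f' x)
        Nothing -> App f <$> step x
    step _ = Nothing

    -- Reduce for at most n steps; SK reduction need not terminate.
    reduceN :: Int -> Term -> Term
    reduceN 0 t = t
    reduceN n t = maybe t (reduceN (n - 1)) (step t)

    -- One crude approximation of "the two terms reduce to similar terms".
    equalUpTo :: Int -> Term -> Term -> Bool
    equalUpTo n a b = reduceN n a == reduceN n b

A Prolog version would presumably express step/2 and reduce/2 as relations, which is where the connection to unification-based languages like miniKanren comes in.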

Advancement in TDFA and POSIX submatch extraction

This came up in an old LtU thread about regular expressions. There was an argument about whether the tagged FAs invented by Ville Laurikari can support POSIX disambiguation semantics. It turns out they can: it is possible to construct efficient Laurikari TDFA with POSIX semantics, as well as with leftmost-greedy semantics (details in this paper).

Back in 2007 Chris Kuklewicz suggested an algorithm which is implemented in his Regex-TDFA Haskell library. He also wrote an informal description of his algorithm, but never fully formalized it. Some ten years later I stumbled upon this thread when I was trying to implement fast submatch extraction in the open-source lexer generator re2c. I revised both the original algorithm by Laurikari and the modification by Kuklewicz, and found a number of improvements and bugs in Regex-TDFA.

Language features for tracing JIT?

Are there any special programming language features ("superpowers") that are made possible specifically by exploiting a tracing JIT compiler?

It seems to me like tracing JITs are often developed for existing languages (JavaScript, Python, Lua, etc.) and tend to focus on efficiently compiling the existing code bases that are written with their traditional programming idioms. I am interested in the opposite approach: what programming idioms and language extensions can help programmers make the most of their tracing JIT compiler?

I can offer one candidate. Tracing JIT compilers can choose to specialize machine code based on the exact values of parameters that are only available at runtime, and the language can facilitate "hinting" whether a certain variable should be specialized on its value (e.g. generate machine code for the specific case that i = 42) or on its type (e.g. generate machine code for the general case that i is an integer). Often a certain operation can be compiled much more efficiently when a parameter is constant, for example the size argument to a memcpy or memcmp, and so a potential "superpower" is to apply these optimizations automatically without having to statically generate code that anticipates each possible value.
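
To make the hinting idea concrete, here is a purely hypothetical sketch in Haskell: specializeValue and specializeType are invented names, defined as no-op stubs so the fragment compiles, and no existing tracing JIT for Haskell provides them.

    -- Hypothetical hints a tracing JIT could honour when recording a trace.
    specializeValue :: a -> a   -- "compile this trace for this exact value"
    specializeValue x = x

    specializeType :: a -> a    -- "compile this trace only for this value's type"
    specializeType x = x

    -- If n is specialized on its value (say n = 42), the JIT could unroll
    -- the copy completely, much like the constant-size memcpy case above.
    copyPrefix :: Int -> [a] -> [a]
    copyPrefix n xs = take (specializeValue n) (specializeType xs)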

Is that a legitimate superpower? What are some others? What tricks can tracing JIT users exploit to leave C/C++ hackers eating their dust? :-)

Something I forgot

I think it was discussed here before, but it came up in a conversation I had, and I forgot whether I ever got an answer to the following question:

Why doesn't Haskell need a value restriction à la ML to deal with unsound interactions between mutable state and polymorphism?

Did no one ever check this, or does it have to do with invariance? I forgot.
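
For reference, the classic counter-example behind ML's value restriction, transcribed here into Haskell (the transcription is mine, not from the post). In ML, without the restriction, let val r = ref [] could receive the polymorphic type 'a list ref, after which storing an int and then reading back a value at some other type would be unsound:

    import Data.IORef

    main :: IO ()
    main = do
      r <- newIORef []          -- the reference is created inside IO
      writeIORef r [1 :: Int]   -- this use forces r :: IORef [Int]
      xs <- readIORef r         -- so xs :: [Int]
      print xs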

New simple proof system

I took basic logic and made it as familiar and easy to learn as possible. Proofs look just like proofs in elementary school algebra--a series of steps where each step has a formula on the left and a justification on the right. Formulas are manipulated using rewrite rules (commutativity, etc.) rather than the typical deduction rules. This isn't any sort of limitation. It can implement both intuitionistic and classical logic using only rewrite rules (currently it uses intuitionistic).

I wrote up the result as a sort of game. The first real proof is problem 8, which is just a few steps. Some of the later problems are ferociously difficult (but possible)--no one I've shown it to has gotten close to solving them, so if anyone manages to solve problem 25, 35, 36, or 37, please let me know.

https://sftm.schlussweisen.com

It's completely free.

NOOL 2017

Co-located with SPLASH 2017, there will be another NOOL workshop - see here and here for previous editions.

Please consider submitting an abstract! It's a fun, friendly event and this year it's followed by the LIVE workshop.
