
Syntax, Semantics and all that Stuff

Modeling Languages: Syntax, Semantics and all that Stuff (or, What's the Semantics of "Semantics"?)
by David Harel and Bernhard Rumpe

Motivated by the confusion surrounding the proper definition of complex modeling languages, especially the UML, we discuss the distinction between syntax and true semantics, and the nature and purpose of each.

This paper is rather light on Greek letters (and category stuff is absent as well). Instead, it aims to introduce the notions of syntax, semantics, and the formal definition of languages in plain prose.
If you never understood why one should bother with formal definitions, consider reading this!

You might want to read a longer version (Modeling Languages: Syntax, Semantics and All That Stuff Part I: The Basic Stuff). Unfortunately, I was unable to find parts II and III (The Advanced Stuff and The Really Hard Stuff) online.

Jumbo Lambda Calculus

Two new papers by Paul Blain Levy, "Jumbo Lambda Calculus" and the extended version "Jumbo Connectives in Type Theory and Logic", are available on his web page. Part of the abstract:

We make an argument that, for any study involving computational effects such as divergence or continuations, the traditional syntax of simply typed lambda-calculus cannot be regarded as canonical, because standard arguments for canonicity rely on isomorphisms that may not exist in an effectful setting. To remedy this, we define a "jumbo lambda-calculus" that fuses the traditional connectives together into more general ones, so-called "jumbo connectives". We provide two pieces of evidence for our thesis that the jumbo formulation is advantageous.

(From the types list.)
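
To get a feel for what "fusing connectives" might mean, here is a loose Haskell illustration (the types and names are mine, not Levy's notation): a jumbo connective bundles a whole family of multi-argument clauses into a single type, where the plain simply typed lambda-calculus would encode each clause separately out of products, sums, and arrows.

```haskell
-- A loose illustration, not Levy's formal definition: one "jumbo"
-- connective offering several multi-argument clauses at once.
data JumboPi b = JumboPi
  { onNum :: Int -> Int -> b  -- clause tagged "num": two arguments
  , onStr :: String -> b      -- clause tagged "str": one argument
  }

-- Eliminating the connective: select a clause by tag and apply it.
demo :: JumboPi String
demo = JumboPi
  { onNum = \x y -> show (x + y)
  , onStr = reverse
  }

main :: IO ()
main = do
  putStrLn (onNum demo 2 3)     -- selects the "num" clause: "5"
  putStrLn (onStr demo "jumbo") -- selects the "str" clause: "obmuj"
```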

Building Interpreters by Composing Monads

Building Interpreters by Composing Monads

We exhibit a set of functions coded in Haskell that can be used as building blocks to construct a variety of interpreters for Lisp-like languages. The building blocks are joined merely through functional composition. Each building block contributes code to support a specific feature, such as numbers, continuations, function calls, or nondeterminism. The result of composing some number of building blocks is a parser, an interpreter, and a printer that support exactly the expression forms and data types needed for the combined set of features, and no more.

The data structures are organized as pseudomonads, a generalization of monads that allows composition. Functional composition of the building blocks implies type composition of the relevant pseudomonads.

So it is actually about building interpreters by composing pseudomonads.

PS: I stumbled upon this paper while trying to factor an interpreter into a set of features (and yes, I tried to package them as monads).
After a day of fighting with undecidable instances and rigid type variables I gave up and started googling - well, I was trying to reinvent the wheel.
Any comments on how pseudomonads relate to arrows (and other generalizations of monads) are appreciated.
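
For comparison, here is the conventional monad-transformer factoring the PS alludes to - a minimal sketch in ordinary Haskell (the toy Expr language and its names are mine, and this uses transformers, not the paper's pseudomonads). Each feature contributes a layer to the interpreter's monad:

```haskell
import Control.Monad.Trans.State (StateT, evalStateT, get, put)

-- Toy language: the "numbers" feature needs no effects,
-- while the "counter" feature adds a state layer.
data Expr = Lit Int | Add Expr Expr | Tick

type Interp = StateT Int IO   -- composed monad: state over IO

eval :: Expr -> Interp Int
eval (Lit n)   = pure n
eval (Add a b) = (+) <$> eval a <*> eval b
eval Tick      = do n <- get; put (n + 1); pure n  -- uses the state layer

main :: IO ()
main = do
  r <- evalStateT (eval (Add Tick (Add Tick (Lit 10)))) 0
  print r  -- 0 + 1 + 10 = 11
```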

Typed Concurrent Programming with Logic Variables

Typed Concurrent Programming with Logic Variables

We present a concurrent higher-order programming language called Plain and a concomitant static type system. Plain is based on logic variables and computes with possibly partial data structures. The data structures of Plain are procedures, cells, and records. Plain's type system features record-based subtyping, bounded existential polymorphism, and access modalities distinguishing between reading and writing.

You may want to compare this with The Oz Programming Model (OPM), which

... is a concurrent programming model subsuming higher-order functional and object-oriented programming as facets of a general model. This is particularly interesting for concurrent object-oriented programming, for which no comprehensive formal model existed until now. The model can be extended so that it can express encapsulated problem solvers generalizing the problem solving capabilities of constraint logic programming.

Another paper on OPM is The Operational Semantics of Oz.

In short, the model of Plain is based on that of Oz, the main differences being:

  1. Plain statically types programs using a type system with subtyping, while Oz is latently typed.
  2. Consequently, Plain drops support for unification in favor of a single-assignment operation (see the sketch below).
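
For those unfamiliar with logic variables, here is a minimal sketch of single-assignment dataflow variables in Haskell (the LVar type and function names are mine, purely for illustration): a variable is created unbound, may be bound exactly once, and reading it blocks until it is bound.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, readMVar, tryPutMVar)

-- A logic variable as a single-assignment cell. Reading blocks until
-- the variable is bound: the dataflow synchronization of Oz-style models.
newtype LVar a = LVar (MVar a)

newLVar :: IO (LVar a)
newLVar = LVar <$> newEmptyMVar

-- Bind exactly once; a second bind is an error rather than unification.
bind :: LVar a -> a -> IO ()
bind (LVar m) x = do
  ok <- tryPutMVar m x
  if ok then pure () else error "logic variable already bound"

-- Blocks until the variable is bound; does not consume the value.
ask :: LVar a -> IO a
ask (LVar m) = readMVar m

main :: IO ()
main = do
  v <- newLVar
  _ <- forkIO (bind v (6 * 7))  -- a producer thread binds the variable
  ask v >>= print               -- the consumer blocks until it is bound
```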

A Monadic Semantics for Core Curry

A Monadic Semantics for Core Curry

The structure of our semantics sheds some light on how the basic components of the language interact. In particular, we can see that the addition of non-determinism, logic variables and narrowing can be accomplished just by making a suitable shift in interpretation monad. We could emphasize this modularity further by presenting the relevant monads as compositions of monad transformers.

While primarily an "interpreter as semantics" paper, it also looks like a nice example of a DSL in Haskell.

As a bonus, it also discusses some features of logic programming.
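
To illustrate what "a suitable shift in interpretation monad" buys you, here is a minimal sketch (my own toy, not the paper's Curry interpreter): the evaluator is written once against an abstract monad, and instantiating it at the list monad adds non-determinism for free.

```haskell
import Control.Monad (MonadPlus, mplus)

-- Toy expression language with a non-deterministic choice operator.
data Expr = Lit Int | Add Expr Expr | Amb Expr Expr

-- Written once against any MonadPlus; the choice of monad fixes
-- the meaning of Amb.
eval :: MonadPlus m => Expr -> m Int
eval (Lit n)   = pure n
eval (Add a b) = (+) <$> eval a <*> eval b
eval (Amb a b) = eval a `mplus` eval b

main :: IO ()
main = do
  print (eval (Add (Lit 1) (Amb (Lit 10) (Lit 20))) :: [Int])     -- [11,21]
  print (eval (Add (Lit 1) (Amb (Lit 10) (Lit 20))) :: Maybe Int) -- Just 11
```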

A Tail-Recursive Machine with Stack Inspection

A Tail-Recursive Machine with Stack Inspection. John Clements and Matthias Felleisen, TOPLAS 2004.

Security folklore holds that a security mechanism based on stack inspection is incompatible with a global tail call optimization policy... In this article, we prove this widely held belief wrong. ... Our machine is surprisingly simple and suggests that tail calls are as easy to implement in a security setting as they are in a conventional one.

I don't believe we've discussed this paper before, although it was mentioned in this thread. Tail calls have been a topic of discussion here several times. [1][2][3]
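
The key trick, very roughly, is to keep security information in a per-frame "mark" rather than in the shape of the stack itself. Here is a toy model (my own simplification, not the paper's continuation-mark machine): a tail call overwrites the current mark instead of pushing a frame, so space stays bounded while inspection still sees every live grant.

```haskell
type Perm  = String
type Mark  = [Perm]   -- permissions granted at this frame
type Marks = [Mark]   -- innermost mark first

-- Ordinary call: push a new mark.
call :: Mark -> Marks -> Marks
call m ms = m : ms

-- Tail call: overwrite the current mark; the stack never grows.
tailCall :: Mark -> Marks -> Marks
tailCall m (_ : ms) = m : ms
tailCall m []       = [m]

-- Simplified stack inspection: is the permission granted by a live frame?
inspect :: Perm -> Marks -> Bool
inspect p = any (p `elem`)

main :: IO ()
main = do
  let ms = tailCall ["read"] (tailCall ["read", "write"] (call ["read"] []))
  -- (1, False): three calls, one frame; the overwritten "write" grant
  -- belonged to a frame that is no longer live.
  print (length ms, inspect "write" ms)
```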

Semantic Distance: NLP Not a Resource Sink

Following the story on Mind Mappers and other RDF comments of late, I thought this NLP slide show (PDF) deserved a story of its own. Dr. Adrian Walker offers an interesting perspective in a friendly crayon-colored format, including a critique of RDF. The source site, Internet Business Logic, has other offerings, including an online demo.

Constraint Programming

Constraint Programming

I will not quote this introduction/manifesto/historical overview, as every page of it is worth reading.

It is not only a nice introduction to a promising field, but also a demonstration of how language design issues can be (to some extent) separated from high-level fundamental intuitions.

It is also quite interesting to follow the historical lines of the paper; it reads like an epic!

Ah, and by the way, that's the same constraint programming that underlies Oz.

Mind Mappers

OS and web search vendors are merging desktop search into their offerings. Vendor solutions vaguely worry me. They seem too focused on the home PC and not on business needs, while needlessly bypassing RDF. There's also vendor lock-in, bad EULAs, privacy negligence, and lost boundaries between OS, applications, and data - proprietary black boxes tempting us into dependence.

That thinking led me to the open-source Mind Raider program. It's one of the few that make RDF useful for normal people. It compares to Chandler but focuses less on email and calendars. As far as I know, Chandler doesn't expose RDF, or even necessarily use it. However, the Mind Raider Big Picture shows similarity to Chandler's vision.

So why should this stuff matter to LtU? Well, compare formal organization between data that only computers inspect and data that people use daily. Many database systems exist to store data in the former category. Employee and customer address data serves little purpose beyond printing paychecks and shipping labels. A human will not care about the values except that they not be empty. Granted, people do use databases to track sales figures and other aggregates. Still, even those folks use data in the latter category: stray thoughts and reminders, sticky notes, social and business correlations, restaurant napkin sketches, collaborative data, recorded conversations, news clippings. A large cloud of miscellany doesn't rise to the level of application documents or the formality of enterprise systems.

Few systems exist to aggregate and organize that stuff. If your brain suffices, then good for you. The rest of us need a crutch. Some people use spreadsheets to store lists simply because there's little else available. I've used software which imitates sticky notes on screen; it leaves much to be desired. There are dozens of little programs for narrow data types - address books, internet bookmark apps, password managers, photo albums, etc. How do you tell the address book that the photo album has pictures of the guy, and that his web link lives in the bookmark manager? Right now, you don't. And programs never organize data just the way you want. Besides, exceptions to the common format always arise. So the problem is not just searching documents and email, nice as that is, but organizing human details in useful ways. Moleskine notebooks and Dictaphones have been around a long while. It's time for cool software.
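
As a hypothetical illustration of how a triple store dissolves that problem (the identifiers and predicate names below are invented for the example), here is the idea in miniature: every fact is a (subject, predicate, object) triple, so data from different "apps" can point at the same person without sharing a schema.

```haskell
-- Invented example data: three facts that would normally live in
-- three unrelated programs.
type Triple = (String, String, String)

store :: [Triple]
store =
  [ ("person:bob", "hasAddress",  "addr:12-main-st")     -- address book
  , ("photo:1234", "depicts",     "person:bob")          -- photo album
  , ("person:bob", "hasHomepage", "http://example.org")  -- bookmarks
  ]

-- One generic query works across all of them: what do we know about Bob?
about :: String -> [Triple] -> [Triple]
about s = filter (\(subj, _, obj) -> subj == s || obj == s)

main :: IO ()
main = mapM_ print (about "person:bob" store)
```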

Somehow RDF seems primed for the role, but it needs less abstract public relations. Raw RDF may not be the ideal presentation, but it still seems a likely candidate for the underlying data model. Each individual develops a personal ontology (a "working style", if you will) over the years. RDF can capture that, but it will take friendly programs like Mind Raider. What do you think?

An Operational Foundation for Delimited Continuations in the CPS Hierarchy

An Operational Foundation for Delimited Continuations in the CPS Hierarchy
We present an abstract machine and a reduction semantics for the lambda-calculus extended with control operators that give access to delimited continuations in the CPS hierarchy. The abstract machine is derived from an evaluator in continuation-passing style (CPS); the reduction semantics (i.e., a small-step operational semantics with an explicit representation of evaluation contexts) is constructed from the abstract machine; and the control operators are the shift and reset family. We also present new applications of delimited continuations in the CPS hierarchy: finding list prefixes and normalization by evaluation for a hierarchical language of units and products.
...or in other words, a view of delimited continuations from another side (as compared to "A Monadic Framework for Subcontinuations" or Oleg's posts on the Haskell list). I find it useful to learn about the same concept from different sources, and delimited continuations still promise to become an important concept.

Also, I suspect that the defunctionalized approach might be more straightforward for people coming from imperative languages.
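
For readers who want to play with the first level of the hierarchy, here is a minimal sketch using shiftT/resetT from the transformers library (the example program is mine; the paper works with the full CPS hierarchy, not just one level). reset delimits a context, and shift captures the continuation up to the nearest reset as an ordinary function k:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)

example :: IO Int
example = evalContT $ resetT $ do
  x <- shiftT $ \k -> do
    a <- lift (k 10)   -- run the delimited context "x + 1" with x = 10
    b <- lift (k 100)  -- and again with x = 100
    return (a + b)     -- combine both results
  return (x + 1)       -- the context captured by shiftT

main :: IO ()
main = example >>= print  -- (10 + 1) + (100 + 1) = 112
```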
