LtU Forum

NOOL 2017

Co-located with SPLASH 2017, there will be another NOOL workshop - see here and here for previous editions.

Please consider submitting an abstract! It's a fun, friendly event and this year it's followed by the LIVE workshop.

Type system based on epistemic modal logic?

This is something I’ve been mulling over lately, and I’m curious to hear opinions on it and whether there’s any existing research (I couldn’t find any).

Since type systems based on linear logic are receiving more attention lately in Rust and the forthcoming linear types extension for Haskell, I was looking around at different logics to see if they would make a useful basis for type system features in a programming language. I was aware of modal logics, but hadn’t really looked into them since I read From Löb’s Theorem to Spreadsheets a few years ago.

I came across epistemic modal logic, a logic for reasoning about knowledge. That struck me as potentially very useful for distributed data stores, in which different agents (servers) know different things (have different data), and also know things about the things that they and other agents know.

For instance, if a client asks a server for some data, the server may be aware that it doesn't have the data in question, by the “negative introspection axiom” ¬Kᵢφ ⇒ Kᵢ¬Kᵢφ: if an agent (i) does not know (¬Kᵢ) some fact (φ), then that agent knows (Kᵢ) that it does not know the fact (¬Kᵢφ). However, it may know that some other agent does know the fact, and can go and fetch it accordingly, or arrange for the other agent to send it directly to the client.
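
For context, negative introspection is the last of the standard axioms of the epistemic logic S5; in the same notation (this is just the textbook presentation, nothing specific to a type system):

    K: Kᵢ(φ ⇒ ψ) ⇒ (Kᵢφ ⇒ Kᵢψ)   (distribution: knowledge is closed under modus ponens)
    T: Kᵢφ ⇒ φ                   (truth: only true facts can be known)
    4: Kᵢφ ⇒ KᵢKᵢφ               (positive introspection)
    5: ¬Kᵢφ ⇒ Kᵢ¬Kᵢφ             (negative introspection, as above)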

This could incorporate other modalities, such as possibility (◊), necessity (□), “everyone knows” (E), “common knowledge” (C), and “distributed knowledge” (D). Just brainstorming, but I feel like a type system based on these ideas might be able to enforce things like:

  • You can only make a synchronous request for some data from a server if that server necessarily has the data and can therefore respond immediately. Pseudocode:

    syncRequest (fact : FactID, server : ServerID) : Fact
      requires necessarily (server knows fact)
  • After you send some data to a server, you know that it possibly knows the data and that you can ask for it asynchronously.

    send (fact : FactID, server : ServerID) : ()
      ensures possibly (server knows fact)
  • Any part of a message (metadata or data) can be lost, and the server can recover or negotiate repair to achieve eventual consistency. (I’m envisioning a sort of Erlang-style “fire and forget” message passing system, for example over UDP.)

  • Constraints on server & client knowledge to express tradeoffs between consistency and availability.
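
To make the first two bullets concrete, here is a rough sketch of how such knowledge modalities might be tracked at the type level, in Haskell with phantom type indices (all names are made up for illustration; this is not an existing library, and a real design would need a story for how 'Knows evidence is established):

    {-# LANGUAGE DataKinds, KindSignatures #-}

    -- What we can assert about a server's knowledge of a fact,
    -- tracked as a type-level tag.
    data Modality = Knows | PossiblyKnows

    -- A handle to a server, indexed by its knowledge of a value of type 'fact'.
    newtype Server (m :: Modality) fact = Server String

    -- A synchronous request is only well-typed against a server that
    -- necessarily knows the fact, so it can respond immediately.
    syncRequest :: Server 'Knows fact -> IO fact
    syncRequest (Server _addr) = undefined  -- network call elided

    -- Sending only guarantees possible knowledge (the message may be lost),
    -- so afterwards the server may only be queried asynchronously.
    send :: fact -> Server m fact -> IO (Server 'PossiblyKnows fact)
    send _fact (Server addr) = pure (Server addr)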

I’m just riffing informally here. Let me know your thoughts, or if you can point me to any resources. :)

The question of the possibility of a simple formal foundation for the natural languages.

I think it can safely be said that the natural languages can transport any formal structure; that we can communicate any mathematical structure using natural language. That's the essence of metamathematics.

But then we are led to believe that natural language has no proper formal structure; it is informal, in the sense that it is not possible to feed Don Quijote to an algorithm that will be able to pinpoint any formal inconsistency implicit in the text (without resorting to any other text or knowledge). An imaginary example of such an inconsistency would be if at one point Cervantes said that Don Quijote always likes to love Dulcinea, and at another that he does not like to love her: the kind of inconsistency that would destroy any formal structure if inadvertently injected in transit.

These two previous paragraphs seem a bit paradoxical to me, since, if there is no procedure to decide whether a natural-language text is inconsistent (or if there are only fallible heuristic procedures), metamathematics should be impossible, and mathematics could not have been born.

If we take a formal structure expressed in some mathematical formalism, and express it with natural language, metamathematically, we need to be sure that there are no inconsistencies in either case. In the former case, we can check, and there are algorithms that can check. In the latter case, we can check. Can no algorithm generally check?

I know that there is a history to this, from Llull and Leibniz, reaching a summit with Frege, to be toppled by Russell, and followed by the neopositivists etc., and later experiments like the semantic web.

So my question here is: Is there some kind of proof or argument showing that the natural languages cannot be provided with a simple mathematical foundation? Some recognizable fundamental property of the natural language that is inherently inconsistent? (Are both questions the same?)

What I mean by a "simple" mathematical foundation is that it must consist of a core formal theory (that can then be supplemented with a number of ad-hoc rules that provide for shortcuts and phrases in the natural languages). A priori, it should be totally devoid of semantics. Then we should be able to interpret extensions built on that foundation in natural language texts, taken as abstract structures of words; and then, by proxy, we might provide those extensions with the semantics of the natural language texts in which they are interpreted. (Perhaps we need some a priori semantics, for things like time [edited to add: or "I"? That would seem to lead us towards Gödel's pit, damn. However, I think that we can settle in principle for a descriptive language without person], that are deeply involved in the basic structure of the natural language?)

So, simple in the sense that Frege's proposal was simple, or that the semantic web is simple (my previous paragraph is meant to be interpreted in their intended use), but a huge neural network with crazy amounts of delicately balanced branches trained on all books ever published is not simple; notwithstanding the problem that with the neural-network solution we are dealing with an ungodly mixture of syntax and semantics.

I really don't know whether I am using the right terminology to phrase my question - or even whether there exists a terminological framework where it can be meaningful and exact. So apologies if I have taken a couple of poetic licences in trying to lay it out.

Question: do you have to climb the tower of interpreters?

You know what is interesting about meta-circular interpreters? The infinite recursion of eval-apply is always broken, in the end, by the application of some primitives. So no matter what your computational model is, you are still executing a stream of primitives in your "ground" language. And, usually, "lambda" is all you'll ever need as a primitive. But looking at so many toy interpreters, you'd see that "lambda" is not quite so "primitive" and requires a lot of machinery to work.
So instead of providing some means of abstraction directly, as a ground mechanism to build everything else around, I wanted to do a simple thought exercise and come from the other direction: if we have a fixed set of primitives, without any means of abstraction, what are the ways to compose/combine them?
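
As a tiny illustration of that bottoming-out, here is a minimal eval/apply core in Haskell (a sketch, not any particular interpreter from the literature), where the object language's lambda is implemented directly by a host-language closure:

    -- The eval/apply recursion for lambda bottoms out in a closure of
    -- the ground language: a plain host function.
    data Expr = Var String | Lam String Expr | App Expr Expr

    newtype Value = Closure (Value -> Value)

    eval :: [(String, Value)] -> Expr -> Value
    eval env (Var x)   = maybe (error ("unbound " ++ x)) id (lookup x env)
    eval env (Lam x b) = Closure (\v -> eval ((x, v) : env) b)  -- host closure
    eval env (App f a) = apply (eval env f) (eval env a)

    apply :: Value -> Value -> Value
    apply (Closure f) v = f v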

Imagine we have a really old CPU without support for a call instruction, but it has a conditional jump, ALU instructions, and loads/stores. It's a simple exercise to define how a call instruction would look (in pseudo-assembler):

define CALL(target_addr):
  load tmp, stack_pointer        // tmp := current stack pointer
  store [tmp], return_address    // push the address of the next instruction
                                 // after the call (don't care now how we get it)
  dec tmp                        // the stack grows downward
  store stack_pointer, tmp       // write the new stack pointer back
  // frame pointer manipulation skipped
  branch target_addr             // jump to the callee

So, in general, our CPU IS capable of performing the steps needed to create an abstraction mechanism, but there is no way for the CPU itself to "plug it in" unless our preprocessor rewrites all "CALL" tokens into these primitives. Not good.
The only other way I could think of is to interpret another language which has this CALL as a primitive, i.e. the CPU itself becomes such a "preprocessor". But does that mean that the only way to escape the deficiencies of your current computational model is to climb the tower of interpreters? (That is, interpret another language that has that capability. My favorite examples: a Scheme interpreter with first-class macros in Scheme by Matt Might, the Kernel interpreter in Scheme by J. Shutt, and 3-LISP by Brian Smith.)
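
For what it's worth, here is a minimal sketch of that one-level climb in Haskell: a toy VM whose instruction set does include CALL and RET as primitives, interpreted on top of a host that never needed them (the instruction set is made up for illustration):

    -- A toy instruction set in which CALL/RET are primitive.
    data Instr = Push Int | Add | Call Int | Ret | Halt

    -- Arguments: program, program counter, data stack, return stack.
    run :: [Instr] -> Int -> [Int] -> [Int] -> [Int]
    run prog pc stack rets = case prog !! pc of
      Push n -> run prog (pc + 1) (n : stack) rets
      Add    -> let (a : b : rest) = stack
                in run prog (pc + 1) (a + b : rest) rets
      Call t -> run prog t stack (pc + 1 : rets)  -- CALL is primitive here
      Ret    -> let (r : rest) = rets
                in run prog r stack rest
      Halt   -> stack

    -- e.g. run [Call 2, Halt, Push 1, Push 2, Add, Ret] 0 [] []  ==  [3]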

Are there any other ways, besides interpreting another language? Maybe there is a way to build a reflexive virtual machine that can modify and extend itself? I haven't seen one yet.

I've recently stumbled upon Ian Piumarta's "Open, extensible composition models" (VPRI Technical Report TR-2011-002), which describes a meta-circular evaluator where, instead of the usual big cond-statement in eval, you select your evaluator based on the type of the expression you are evaluating (same for applicators), and this mapping is available to the programmer. I found this a very interesting read; maybe someone could comment on this paper, as I've surely missed a lot.

I have surely asked a lot of questions in a very unstructured form, but if anyone could give some useful pointers (something to read on the topic, maybe) it would be greatly appreciated.

isomorƒ: an experimental structured editor for writing/deploying functional code

isomorƒ is attempting to bridge the divide between functional programming, serverless architecture, and cloud-based structured editing, with the grand hope of creating a simplified, beginner-friendly, but powerful IDE experience with easy microservice deployment.

The platform runs on a compact, pure, statically-typed functional AST, with all the power in the IDE, including a syntactic sugar layer, passively identified reuse ideas, ex post facto optimization, automatic versioning, and immediate cross-user sharing (all enabled by the guarantees of functional purity / referential transparency).

We currently have a prototype sandbox of the IDE available and a high-level vision at our blog.

We would love any feedback on the sandbox, the idea/implementation, academic/educational/commercial applications or anything else!

Thanks!

hobbes, Morgan Stanley OSS

Over the last few years, I have been developing hobbes -- a programming language, JIT compiler, and database system -- as part of my work for Morgan Stanley. It has become a critical piece of infrastructure in our low-latency, high-volume trading applications, and we have decided to release the source code to the public on github (currently can be built for recent Linux and macOS platforms):

github.com/Morgan-Stanley/hobbes

The database system is a lightweight (self-contained, header-only) library used by applications to efficiently log structured (binary) data over shared memory to minimize application latency and reflect application type structure as accurately as possible in log files. We use this to record a very detailed image of application state over time, which we analyze/query out of band.

The JIT compiler can be used embedded in another application (e.g. to "hot patch" an application with very efficient, precisely typed intercepts) or as a standalone interactive interpreter (e.g.: to monitor and query application log data).

The programming language is a variant of Haskell (HM type inference, algebraic data types, qualified types / type classes) with some adjustments to help reduce boilerplate and derive very efficient machine code. For example, we use "structural" record, variant, and (iso-)recursive types to represent data as it's naturally represented in applications; these can be deconstructed generically at compile time (e.g. a record can be printed if its first field can be printed and its rest can be printed, a variant can be printed if its first case can be printed and its rest cases can be printed, a recursive type can be printed if its one-step unrolling can be printed, etc.).
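
To illustrate that structural induction (in Haskell-style notation rather than hobbes's actual syntax; the names are made up for the example):

    -- A record is modelled as a head field consed onto the rest of the
    -- record; printing proceeds by induction on that structure.
    data Field a rest = Field String a rest  -- field name, value, rest of record
    data EndOfRecord  = EndOfRecord

    class Print a where
      pr :: a -> String

    -- Base case: the empty tail of a record prints as nothing.
    instance Print EndOfRecord where
      pr EndOfRecord = ""

    -- A record can be printed if its first field can be printed and
    -- the rest of the record can be printed.
    instance (Print a, Print rest) => Print (Field a rest) where
      pr (Field name v rest) = name ++ "=" ++ pr v ++ " " ++ pr rest

    -- A ground instance so the example is complete.
    instance Print Int where
      pr = show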

We are actively using this on major projects, we will continue to develop this github project as we need new features, and we are interested in engaging others outside of the firm for their thoughts, feedback, and hopefully pull requests. :)

/join the #proglangdesign channel on Freenode

The #proglangdesign channel on Freenode hosts many discussions about programming language design, compilers, interesting algorithms, and anything else useful for language implementation. The channel is pretty active; say 'hello' if you join.

Many of the members are working on compilers or interpreters, whether they are just experimenting to learn or advancing the state of the art.

Come join us!

More information can be found at our community website, which includes a list of our projects; submit a pull request to add your own.

Just entertainment: Click, click, click!

Just a silly website I thought you people might enjoy. Click, click, click!

Is Datalog negation(¬) similar to the built-in predicate (≠)?

I was reading "Principles of Database & Knowledge-Base Systems, Vol. 1" by Jeffrey D. Ullman. There is a chapter about Datalog negation, and as I read about the problems of negation I kept thinking that using the predicate ≠ would solve them.

E.g.

bachelor(X) :- male(X) & ¬married(X,Y).

would become:

bachelor(X) :- male(X) & married(Y,Z) & X ≠ Y.

but then I see the following:

p(X) :- r(X) & ¬q(X).
q(X) :- r(X) & ¬p(X).

The problem is that this has 2 minimal models (e.g., given the single fact r(a), both {r(a), p(a)} and {r(a), q(a)} are minimal), and if I'm not mistaken so does this:

p(X) :- r(X) & q(Y) & X ≠ Y.
q(X) :- r(X) & p(Y) & X ≠ Y.

Is there an equivalence between these 2 operators? If so, did I miss it or is it not mentioned that it's unsafe to use ≠ with recursion?

Please submit to LIVE! 2017 (SPLASH Vancouver)

LIVE 2017 aims to bring together people who are interested in live programming. Live programming systems abandon the traditional edit-compile-run cycle in favor of fluid user experiences that encourage powerful new ways of “thinking to code” and enable programmers to see and understand their program executions. Programming today requires much mental effort with broken, stuttering feedback loops: programmers carefully plan their abstractions, simulating program execution in their heads; the computer is merely a receptacle for the resulting code with a means of executing that code. Live programming aims to create a tighter, more fluid feedback loop between the programmer and computer, allowing the computer to augment more of the programming process by, for example, allowing programmers to progressively mine abstractions from concrete examples and providing continuous feedback about how their code will execute. Meanwhile, under the radar of the PL community at large, a nascent community has formed around the related idea of “live coding”—live audiovisual performances which use computers and algorithms as instruments and include live audiences in their programming experiences.

We encourage short research papers, position papers, web essays, tool demonstrations (as videos), and performance proposals in areas such as:

  • Recent work in REPLs, language environments, code playgrounds, and interactive notebooks.
  • Live visual programming.
  • Programming by example.
  • Programming tools for creative experiences and interactive audiovisual performances.
  • Live programming as a learning aid.
  • Fluid debugging experiences.
  • Language design in support of the above.

Submissions are due on August 1st and will go through HotCRP @ https://live17.hotcrp.com/paper/new. The workshop is open to various kinds of media: you can write a traditional short paper (PDF), a web essay with embedded videos, a narrated video, or whatever else you think can explain your work best! Content should be consumable in 30 minutes to an hour of a casual reader's time, which means around 5-10 pages for a paper, a video of 10-20 minutes (assuming the viewer would need to pause to contemplate), and an essay of around a few pages in length. Video and non-paper submissions can be listed as URLs (e.g. to a web page, file locker, or streaming site) in the submission's abstract.

For any questions or trouble with submitting, please contact mcdirmid@outlook.com. This CFP is hosted @ http://2017.splashcon.org/track/live-2017#Call-for-Papers.
