LtU Forum

Constructing Sequent Rules for Generalized Propositional Logics

A very short paper having pretty narrow applicability, but somehow it is exactly what I needed today for my programming. Go figure.

The concept of a propositional logic, PL, will be defined and a method, to be referred to as the Kleene Search Procedure, will be used to determine the validity of formulas of PL. This method utilizes a set of sequent rules which are derived in a purely mechanical fashion from the truth tables which are the intended interpretations of the connectives of PL. The results are then used to show how a formal system, GPL, which is a sequent calculus can be constructed with very simple axioms and these sequent rules to yield the valid formulas of PL.

Note that there are two "mechanical" algorithms here - one deriving sequent rules from truth tables, and another deriving either a proof or a refutation of a sequent.
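To make the semantic side concrete, here is a minimal Haskell sketch (my own illustration, not the paper's construction): a generalized propositional logic whose connectives are given by arbitrary truth tables, with validity decided by brute-force enumeration of valuations. Any sequent rules derived mechanically from the tables must agree with this check. The names (`Conn`, `Formula`, `valid`) are made up for the example.

```haskell
import Control.Monad (replicateM)
import Data.List (nub)
import qualified Data.Map as Map

-- An n-ary connective, interpreted by its truth table
-- (represented here as a function on truth values).
data Conn = Conn { connName :: String, connApply :: [Bool] -> Bool }

data Formula = Atom String | App Conn [Formula]

vars :: Formula -> [String]
vars (Atom v)   = [v]
vars (App _ fs) = nub (concatMap vars fs)

eval :: Map.Map String Bool -> Formula -> Bool
eval env (Atom v)   = env Map.! v
eval env (App c fs) = connApply c (map (eval env) fs)

-- A formula of PL is valid iff it is true under every valuation of its atoms.
valid :: Formula -> Bool
valid f = and [ eval (Map.fromList (zip vs bs)) f
              | bs <- replicateM (length vs) [False, True] ]
  where vs = vars f

-- Example: define implication by its truth table and check that p -> p is valid.
impl :: Conn
impl = Conn "->" (\[a, b] -> not a || b)

main :: IO ()
main = print (valid (App impl [Atom "p", Atom "p"]))  -- True
```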

I wonder just how deeply this is related to both Qi and the map coloring theorem proof (I haven't read any of those threads carefully yet).

Qi 6.1 released

"Qi is an award-winning Lisp-based functional programming language that offers the best of Common Lisp with the advantages of pattern matching, l calculus consistency, and optional static type checking. It uses sequent calculus notation to define types, and has the most powerful type system of any existing functional language, including ML and Haskell."
http://www.lambdassociates.org/


"Our mission is to gather some of the most talented functional programmers to build the Integrated Functional Programming Environment (IFPE). IFPE is intended to give the functional programmer complete integrated type-secure access to every feature of his computer, from Internet to email. The IFPE will be freely available under the GNU licence."


There is also a quick "15-minute" intro for ML programmers, as well as one for Lisp programmers.


I haven't used it yet, but I noticed it was only mentioned here once in a comment; and it looks like the sort of thing that drives people around here wild.

Encodings (wikipedia)

Noticing the absence of any page on encodings on Wikipedia, I gathered my memories and wrote one. Corrections/additions/criticism are welcome.

Oh, and feel free to flame me if you think that LtU is the wrong place for such an announcement.

Why Dependent Types Matter

Why Dependent Types Matter
Conor McBride, James McKinna, Thorsten Altenkirch

Abstract:
We exhibit the rationale behind the design of Epigram, a dependently typed programming language and interactive program development system using refinements of a well known program, merge sort, as a running example. We discuss the relationship to other proposals to introduce aspects of dependent types into functional programming languages and sketch some topics for further work in this area.

via The Types Forum.
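For a small taste of the style, here is a Haskell GADT encoding (my sketch, not the paper's Epigram code) of length-indexed vectors: the length of a vector lives in its type, so the compiler checks that `append` returns a vector of the combined length. The paper goes much further, refining merge sort itself; `Nat`, `Vec`, and `Add` here are illustrative names.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Type-level natural numbers, promoted by DataKinds.
data Nat = Z | S Nat

-- A vector indexed by its length.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Type-level addition of lengths.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

-- The type records that appending preserves total length;
-- a definition that dropped or duplicated elements would not typecheck.
append :: Vec m a -> Vec n a -> Vec (Add m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)
```

With full dependent types, as in Epigram, one can go on to index merge sort's output by sortedness and permutation invariants; the Haskell encoding only scratches the surface.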

How much better is the second edition of "Essentials of Programming Languages"?

I have a copy of the first edition of Essentials of Programming Languages that I have not worked though yet. I was thinking of reading it and experimenting with the code presented in it. What are the important differences between the first and second edition? Is it worth paying for the new edition?

Anyone remember Russell (the language, that is)?

Russell was a functional language developed some 25 years ago to explore the idea of data types as full first-class objects. I'm a total amateur when it comes to programming languages, but I remember thinking at the time that it was pretty cool. A cursory search of Lambda and the web didn't turn up much.

The main reference looks to be:
"Data Types are Values"
TOPLAS, vol. 7, #3 (July 1985)

What happened to Charity?

The latest news on the Charity homepage is dated "12 October 2000".

Does it mean the project is dead? If yes - why? Is there any continuation for its ideas? Any clues?

Chemistry, Graph Transformation and Programming Languages

I’d love to have a LISP equivalent of a graph rewriting system, towards making something that chemists and biologists could use to computationally “play” the universe of molecules and the reactions between them.

To solicit some advice (perhaps provoke a collaboration?) from experts in the field of programming languages, I give to you what could be considered a summary of a paper I presented at ICGT 2004. It was a great conference (if anyone who attended is reading), but it was focused more on software engineering, and I think most of the people there had more important things to do than listen to me.

The hope is to generalize the notion of synthetic organic chemistry to graph grammars, and create a stronger underlying formalism that would allow us to make useful tools and think of new problems and applications that can be solved by thinking of them in a “chemical” fashion.

What kinds of problems do people in synthetic organic chemistry think about?

Before we talk about that, let me provide a quick intro to chemistry. Chemists generally deal in molecules, which can often be sufficiently described in terms of vertex- and edge-labeled graphs, which I will call chemical graphs. Molecules undergo reactions with other molecules to make new molecules. For today’s sake, reactions can be considered a relation between n-tuples of molecules. Reactions can be partitioned into a set of classes, which we will call chemical transformations. Chemical transformations can be denoted by considering only a small “reactive” portion of the reactant and product molecules; they go by names like “Diels-Alder Cycloaddition”, “Cope Rearrangement” and “Mitsunobu Coupling”. Chemical transformations can be considered rewriting rules over chemical graphs.
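As a rough sketch of how this might be encoded (my own guess at a reasonable representation, not from the paper), here are chemical graphs and transformations in Haskell, loosely in the style of rule-based graph rewriting; all names are illustrative:

```haskell
import qualified Data.Map as Map

type AtomLabel = String                     -- element symbols: "C", "O", "N", ...
type BondOrder = Int                        -- edge labels: 1 = single, 2 = double, ...

-- A chemical graph: a vertex- and edge-labeled graph.
data ChemGraph = ChemGraph
  { atoms :: Map.Map Int AtomLabel          -- vertex id -> element
  , bonds :: Map.Map (Int, Int) BondOrder   -- undirected edge -> bond order
  } deriving (Eq, Ord, Show)

-- A chemical transformation: the "reactive" subgraph before and after,
-- plus the vertices preserved across the rewrite (its interface).
data Rule = Rule
  { lhs       :: ChemGraph   -- pattern matched in the reactants
  , rhs       :: ChemGraph   -- replacement in the products
  , interface :: [Int]       -- vertex ids common to lhs and rhs
  }
```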

Some chemists try to make weird and interesting molecules that can be found in nature, usually extracting things from weird creatures deep in the ocean or rare plants. They then try to build those molecules using only chemicals that are commercially available and the chemical transformations that they know. Sometimes they find new chemical transformations in their quest to build their new molecule. This process can be considered a type of deduction over chemical graphs, where the axioms are the available chemicals and known chemical graph transformations, and the theorems are the target chemical graphs. These deduction paths are called synthetic routes by the chemist. Sometimes these synthetic routes can be tricky or require many (>50!) steps. The problem of asking whether or not a synthetic route exists to your target is equivalent to the semi-group word problem, and thus undecidable. With just a few assumptions, we can consider the problem of determining synthetic routes equivalent to the tiling problem, which is NP-hard. There is a good story behind the synthetic route of a molecule that is an anti-cancer agent, called Taxol, originally isolated from the rare yew tree. The last 60 or so years of chemistry have given us synthetic routes to millions of molecules.
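Under that reading, a depth-bounded search for a synthetic route is easy to state; here is a hedged Haskell sketch. The hard part, `step` (subgraph matching plus rewriting, i.e. applying every known transformation at every match), is taken as a parameter rather than implemented, and the depth bound is essential given the undecidability just mentioned:

```haskell
import qualified Data.Set as Set

-- Depth-bounded breadth-first deduction: starting from the available
-- materials, repeatedly apply `step` until the target appears or the
-- bound runs out.
routeExists :: Ord g => (g -> [g]) -> [g] -> g -> Int -> Bool
routeExists step start target = go (Set.fromList start) start
  where
    go seen frontier n
      | target `elem` frontier  = True
      | n <= 0 || null frontier = False
      | otherwise               = go seen' fresh (n - 1)
      where
        -- expand the frontier, discarding graphs already seen
        fresh = filter (`Set.notMember` seen) (concatMap step frontier)
        seen' = seen `Set.union` Set.fromList fresh
```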

How many different molecules are there? The set of all molecules is infinite, and the space of all “theoretically” synthetically accessible molecules is also infinite (but a subset of the former). The space of all practically synthetically accessible molecules has been estimated to be around 10^62. There isn’t enough time to make each of them, although I think we do actually have enough atoms in the universe to do it. We are often interested in finding new molecules to do interesting things. Some chemists spend their time “fishing” in this set of synthetically accessible molecules. They do this either by thinking of a new molecule that would seem to be amenable to synthesis, or by making many molecules in parallel and using selection to find ones that have interesting properties.

Molecules do lots of really neat things, they are used as dyes, explosives, foodstuffs and medicines to name a few. Computational systems to help us navigate the infinitely large space of molecules would help us find new molecules to do things just as cool, or even cooler.

There is plenty of good software out there that could be used to do the stuff I am talking about. A good list of examples is in the paper, but they all have a very “applied” flavour; none of them are really instances of a more general framework of grammars over graph rewriting. I wonder if some interdisciplinary thinking might not only help produce the kind of software I am thinking about, but also provide generalizations of concepts that may be useful to a larger audience.

Perhaps it is worth noting that chemical analogies have crossed paths with areas of programming languages before, for instance the chemical abstract machine. I do, however, consider the analogy “weak”, in that trying to understand CHAMs hasn’t given me any good ideas about chemistry. People seem to be talking a little more about applying concurrency to the description of things like biochemical pathways, but I haven’t seen anything so compelling that it would help an investigator “in the field”. I hope it doesn’t sound like I’m suggesting computer science is guilty of not giving anything to chemistry (obviously not true). In case your feelings are hurt: you may hear about a "DNA Computer" in a science-type blog every so often; just about all of those things are absolute rubbish, so stay far away.

So if you found any of this interesting or have any ideas, please post below. I'd really love to be able to follow up on my ICGT paper (which was more a glorified tutorial/proposal) with something a little more concrete.

is EOPL available as an e-book anywhere?

I'd like to use some computer-based learning tools (one example is mind mapping) with the book, so I'd prefer to buy the e-book, but I cannot find any mention of one anywhere.

Garbage collecting computations

Imagine a process starting two computations, and as soon as one of them returns, abandoning the other.

One way to implement this is to require the client process to explicitly request cancellation of the second computation. I see this as similar to manual management of memory, with all the benefits and drawbacks.

Another way is to use something similar to a garbage collector - the client just forgets about the second computation, and it will (eventually) be cancelled. I like to think about this as a GC because (as with memory GC) there are no semantic guarantees, just a pragmatic one - the infrastructure has the option to reduce the consumption of resources, but does not guarantee it will use it (e.g., when resources are abundant - plenty of free physical memory or idle processors - it may not bother).
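For comparison, the explicit flavour is already packaged as a combinator in some languages. Haskell's async library provides `race`, which runs two computations and cancels the loser as soon as either returns - a minimal sketch of the pattern, though not the GC-like variant I am asking about, where the runtime would reclaim the abandoned computation whenever it chooses:

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (race)

-- Two computations standing in for the real work.
slow, fast :: IO String
slow = threadDelay 2000000 >> return "slow result"   -- 2 s
fast = threadDelay 100000  >> return "fast result"   -- 0.1 s

main :: IO ()
main = do
  winner <- race slow fast   -- returns as soon as either finishes;
  print winner               -- the losing thread is cancelled.
                             -- prints: Right "fast result"
```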

Similar to memory GC, there might be ways for the process to detect the infrastructure's decision: in the case of memory, by finalizer code being called; in the case of a computation, by detecting the effects of the computation.

What is not so similar is the nature of references. In the case of memory, clients refer to memory blocks. In the case of computations, the direction is reversed: the computation refers to the continuation supplied by the client. That looks like a fatal blow...

I vaguely remember reading about something like that in the context of speculative computations, but was unable to find it again. Any ideas where to look?
