Lambda the Ultimate - Logic/Declarative
http://lambda-the-ultimate.org/taxonomy/term/13/0
A unified approach to solving seven programming problems
http://lambda-the-ultimate.org/node/5470
<p >A <a href="http://dl.acm.org/citation.cfm?id=3110252&CFID=805521128&CFTOKEN=83435544&preflayout=flat">fun pearl</a> by William E. Byrd, Michael Ballantyne, Gregory Rosenblatt, and Matthew Might from ICFP: seven programming challenges solved (easily!) using a <I >relational</I> interpreter. One challenge, for example, is to find quines. Another is to find programs that produce different results with lexical vs. dynamic scope.</p>
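The quine challenge can at least be sanity-checked without a relational interpreter. Below is a minimal Python evaluator for a tiny Scheme subset (quote, list, lambda, application), just enough to verify the classic quine that such relational searches rediscover. This is an illustrative sketch of checking a quine, not the paper's miniKanren interpreter, which *searches* for such terms.

```python
# Minimal evaluator for a Scheme subset (quote, list, lambda, application),
# just enough to check the classic quine. An illustrative sketch only; the
# paper's relational interpreter runs "backwards" to find such terms.

def scheme_eval(expr, env):
    if isinstance(expr, str):                      # variable reference
        return env[expr]
    head = expr[0]
    if head == "quote":                            # (quote e) returns e unevaluated
        return expr[1]
    if head == "list":                             # (list e1 e2 ...) builds a list of values
        return [scheme_eval(e, env) for e in expr[1:]]
    if head == "lambda":                           # (lambda (x) body) closes over env
        return ("closure", expr[1][0], expr[2], env)
    fn = scheme_eval(expr[0], env)                 # application: (f arg)
    arg = scheme_eval(expr[1], env)
    _, param, body, closed = fn
    return scheme_eval(body, {**closed, param: arg})

# The classic quine: ((lambda (x) (list x (list 'quote x))) '(lambda (x) ...))
LAM = ["lambda", ["x"], ["list", "x", ["list", ["quote", "quote"], "x"]]]
QUINE = [LAM, ["quote", LAM]]

assert scheme_eval(QUINE, {}) == QUINE             # it evaluates to itself
```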
<p >The interpreter is implemented in miniKanren (of course), inside Racket (of course). </p>
Fun | Functional | Logic/Declarative | Mon, 04 Sep 2017 18:44:29 +0000
Ceptre: A Language for Modeling Generative Interactive Systems.
http://lambda-the-ultimate.org/node/5216
<p >
<a href="http://www.cs.cmu.edu/~cmartens/ceptre.pdf">Ceptre: A Language for Modeling Generative Interactive Systems.</a><br >
Chris Martens<br >
2015
</p>
<blockquote >
<p >
We present a rule specification language called Ceptre,
intended to enable rapid prototyping for experimental
game mechanics, especially in domains that depend on
procedural generation and multi-agent simulation.
</p><p >
Ceptre can be viewed as an explication of a new
methodology for understanding games based on linear
logic, a formal logic concerned with resource usage. We
present a correspondence between gameplay and proof
search in linear logic, building on prior work on generating narratives. In Ceptre, we introduce the ability to
add interactivity selectively into a generative model, enabling inspection of intermediate states for debugging
and exploration as well as a means of play.
</p><p >
We claim that this methodology can support game designers and researchers in designing, analyzing, and debugging the core systems of their work in generative,
multi-agent gameplay. To support this claim, we provide two case studies implemented in Ceptre, one from
interactive narrative and one from a strategy-like domain.
</p>
</blockquote>
<p >Some choice quotes from the article follow.</p>
<p >Simple examples of the rule language:</p>
<blockquote >
<p >The meaning of <code >A -o B</code>, to a first approximation, is
that whenever the predicates in <code >A</code> are present, they may be
replaced with <code >B</code>. One example of a rule is:</p>
<code ><pre >
do/compliment:
at C L * at C’ L * likes C C’
-o at C L * at C’ L * likes C C’ * likes C’ C.
</pre></code>
<p >[...]</p>
<p >Note that because of the replacement semantics of the
rule, we need to reiterate everything on the right-hand side
of the <code >-o</code> that we don’t want to disappear, such as the character locations and original likes fact. We use the syntactic sugar of prepending <code >$</code> to anything intended not to be
removed in order to reduce this redundancy:</p>
<code ><pre >
do/compliment: $at C L * $at C’ L * $likes C C’ -o likes C’ C.
</pre></code>
<p >A more complex rule describes a murder action, using
the <code >!</code> operator to indicate a permanent state:</p>
<code ><pre >
do/murder:
anger C C’ * anger C C’ * anger C C’ * anger C C’
* $at C L * at C’ L * $has C weapon
-o !dead C’.
</pre></code>
<p >(This rule consumes <code >C</code>’s location, maintaining a global
invariant that each character is mutually exclusively at a
location or !dead.) Here we see a departure from planning
formalisms: four instances of <code >anger C C’</code> mean something different from one.
Here we are using an emotion not
just as a precondition but as a resource, where if we have
enough of it, we can exchange it for a drastic consequence.
Whether or not we diffuse the anger, or choose to keep it by
prepending <code >$</code> to the predicates, is an authorial choice.</p>
</blockquote>
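The replacement semantics quoted above can be mimicked with multiset rewriting over a counter of facts. The following Python sketch is my own toy encoding (persistent `$` facts are simply re-produced), not Ceptre's actual engine:

```python
from collections import Counter

def apply_rule(state, consume, produce):
    """Fire a linear-logic-style rule: remove `consume` from the multiset
    `state` and add `produce`, failing if resources are missing.
    A toy model of Ceptre's replacement semantics, not its real engine."""
    need = Counter(consume)
    if any(state[f] < n for f, n in need.items()):
        return None                       # not enough resources: rule cannot fire
    out = state - need
    out.update(produce)
    return out

# do/compliment with $-style persistence: the at/likes facts are re-produced.
state = Counter({("at", "c", "l"): 1, ("at", "d", "l"): 1, ("likes", "c", "d"): 1})
state = apply_rule(
    state,
    consume=[("at", "c", "l"), ("at", "d", "l"), ("likes", "c", "d")],
    produce=[("at", "c", "l"), ("at", "d", "l"), ("likes", "c", "d"), ("likes", "d", "c")],
)
assert state[("likes", "d", "c")] == 1

# do/murder needs four anger resources at once; with only three in the
# state, the rule cannot fire, showing anger used as a quantity.
angry = Counter({("anger", "c", "d"): 3, ("at", "c", "l"): 1,
                 ("at", "d", "l"): 1, ("has", "c", "weapon"): 1})
assert apply_rule(angry, consume=[("anger", "c", "d")] * 4 + [("at", "d", "l")],
                  produce=[("dead", "d")]) is None
```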
<p >Concurrency in narration:</p>
<blockquote >
<p >
Two rule applications that consume
disjoint sets of resources from the same state can be said to
happen concurrently, or independently. On the other hand,
a rule that produces resources and another that consumes a
subset of them can be said to be in a causal, or dependent,
relationship. Less abstractly, if resources represent facts associated with particular game entities or characters, then independent rule applications represent potentially concurrent
action by multiple such entities, and causally related rule applications represent either sequential action by a single actor,
or synchronized interaction between two entities.
</p>
</blockquote>
<p >Stages, and a larger example:</p>
<blockquote >
<p >We would like for some
of these rules to run automatically without player intervention. In our next iteration of the program, we will make use
of a Ceptre feature called stages. Stages are a way of structuring a program in terms of independent components. Syntactically, a stage is a curly-brace-delimited set of rules with
an associated name. Semantically, a stage is a unit of computation that runs to quiescence, i.e. no more rules are able
to fire, at which point control may be transferred to another
stage.</p>
<p >[...]</p>
<p >Additionally, we can test the design by “scripting” certain player strategies. For instance, we
could augment the two rules in the fight stage to be deterministic, fighting when the monster can’t kill us in one turn
and fleeing otherwise:</p>
<code ><pre >
stage fight = {
do_fight:
choice * $fight_in_progress * $monster Size * $health HP * Size < HP
-o try_fight.
do_flee:
choice * fight_in_progress * $monster Size * $health HP * Size >= HP
-o flee_screen.
}
</pre></code>
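The deterministic fight stage, and the run-to-quiescence semantics of stages generally, can be paraphrased as guarded rewrites applied until none fires. A rough Python reading (the per-round damage dynamics here are mine, invented for illustration):

```python
def run_stage(state, rules):
    """Run a stage to quiescence: repeatedly fire the first applicable rule
    until none applies. A toy reading of Ceptre's stage semantics."""
    fired = True
    while fired:
        fired = False
        for rule in rules:
            new = rule(state)
            if new is not None:
                state = new
                fired = True
                break
    return state

# Deterministic fight stage: fight while the monster can't kill us in one
# turn (Size < HP), flee otherwise. Each round of fighting costs 1 HP on
# each side, a made-up dynamic just to drive the loop.
def do_fight(s):
    if s["monster"] > 0 and s["monster"] < s["hp"]:
        return {**s, "monster": s["monster"] - 1, "hp": s["hp"] - 1}
    return None

def do_flee(s):
    if s["monster"] > 0 and s["monster"] >= s["hp"]:
        return {**s, "fled": True, "monster": 0}
    return None

end = run_stage({"monster": 3, "hp": 10, "fled": False}, [do_fight, do_flee])
assert end["monster"] == 0 and not end["fled"] and end["hp"] > 0
```

With these guards the player either wins with hit points to spare or flees, never dies, which is the property the text claims for the de-interactivized stage.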
<p >If we remove interactivity from this stage, then we get
automated combat sequences that should never result in the
player’s death.</p>
Logic/Declarative | Tue, 04 Aug 2015 14:03:57 +0000
Extensible Effects -- An Alternative to Monad Transformers
http://lambda-the-ultimate.org/node/4786
<p ><a href="http://www.cs.indiana.edu/~sabry/papers/exteff.pdf">Extensible Effects -- An Alternative to Monad Transformers</a>, by Oleg Kiselyov, Amr Sabry and Cameron Swords:</p>
<blockquote ><p >We design and implement a library that solves the long-standing problem of combining effects without imposing restrictions on their interactions (such as static ordering). Effects arise from interactions between a client and an effect handler (interpreter); interactions may vary throughout the program and dynamically adapt to execution conditions. Existing code that relies on monad transformers may be used with our library with minor changes, gaining efficiency over long monad stacks. In addition, our library has greater expressiveness, allowing for practical idioms that are inefficient, cumbersome, or outright impossible with monad transformers.</p>
<p >Our alternative to a monad transformer stack is a single monad, for the coroutine-like communication of a client with its handler. Its type reflects possible requests, i.e., possible effects of a computation. To support arbitrary effects and their combinations, requests are values of an extensible union type, which allows adding and, notably, subtracting summands. Extending and, upon handling, shrinking of the union of possible requests is reflected in its type, yielding a type-and-effect system for Haskell. The library is lightweight, generalizing the extensible exception handling to other effects and accurately tracking them in types.</p></blockquote>
<p >A follow-up to <a href="http://okmij.org/ftp/Computation/monads.html#ExtensibleDS">Oleg's delimited continuation adaptation</a> of Cartwright and Felleisen's work on <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.5941">Extensible Denotational Language Specifications</a>, which is a promising alternative means of composing effects to the standard monad transformers.</p>
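The coroutine-like client/handler communication the abstract describes can be approximated with Python generators: the client yields requests, and a handler interprets each one and resumes the client with an answer. A loose analogy only (Python, not the paper's Haskell), with invented request names:

```python
# Loose Python analogy of the client/handler effect protocol: the client
# computation yields requests; a handler interprets each request and resumes
# the client with an answer. Request names are invented for illustration.

def client():
    x = yield ("ask", "depth")         # Reader-like request: fetch a value
    yield ("tell", f"depth is {x}")    # Writer-like request: log a message
    return x * 2

def run(gen, env):
    """Handle 'ask' by lookup in env and 'tell' by logging; return the
    client's final value together with the accumulated log."""
    log = []
    try:
        req = next(gen)
        while True:
            kind, payload = req
            if kind == "ask":
                req = gen.send(env[payload])
            elif kind == "tell":
                log.append(payload)
                req = gen.send(None)
    except StopIteration as stop:
        return stop.value, log

result, log = run(client(), {"depth": 21})
assert result == 42 and log == ["depth is 21"]
```

The handler decides the meaning of each request at the point it is handled, which is the dynamic-adaptation property the abstract emphasizes; monad-transformer stacks fix the interpretation order statically.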
<p >This work embeds a user-extensible effect EDSL in Haskell by encoding all effects into a single effect monad using a novel open union type and the continuation monad. The encoding is very similar to recent work on <a href="http://lambda-the-ultimate.org/node/4481">Algebraic Effects and Handlers</a>, and closely resembles a typed client-server interaction ala coroutines. This seems like a nice convergence of the topics covered in the algebraic effects thread and other recent work on effects, and it's more efficient than monad transformers to boot.</p>
Functional | Logic/Declarative | Theory | Type Theory | Mon, 29 Jul 2013 14:53:44 +0000
Visi.io
http://lambda-the-ultimate.org/node/4628
<p ><a href="http://visi.io/">Visi.io</a> comes from David Pollak and aims to revolutionize the building of tablet apps, but the main attraction now seems to be in exploring the way data flow and cloud computing can be integrated. The <a href="http://visi.io/screencast.html">screencast</a> is somewhat underwhelming but at least convinces me that there is a working prototype (I haven't looked further than the website as yet). The <a >vision</a> document has some nice ideas. Visi.io came up recently in the <a href="http://lambda-the-ultimate.org/node/4626">discussion</a> of the future of spreadsheets. </p>
Functional | Logic/Declarative | Parallel/Distributed | Sat, 27 Oct 2012 09:36:22 +0000
How to Make Ad Hoc Proof Automation Less Ad Hoc
http://lambda-the-ultimate.org/node/4551
<p ><a href="http://www.mpi-sws.org/~beta/lessadhoc/">How to Make Ad Hoc Proof Automation Less Ad Hoc</a><br >
Georges Gonthier, Beta Ziliani, Aleksandar Nanevski, and Derek Dreyer, to appear in ICFP 2011</p>
<blockquote ><p >
Most interactive theorem provers provide support for some form of user-customizable proof automation. In a number of popular systems, such as Coq and Isabelle, this automation is achieved primarily through tactics, which are programmed in a separate language from that of the prover's base logic. While tactics are clearly useful in practice, they can be difficult to maintain and compose because, unlike lemmas, their behavior cannot be specified within the expressive type system of the prover itself.</p>
<p >We propose a novel approach to proof automation in Coq that allows the user to specify the behavior of custom automated routines in terms of Coq's own type system. Our approach involves a sophisticated application of Coq's canonical structures, which generalize Haskell type classes and facilitate a flexible style of dependently-typed logic programming. Specifically, just as Haskell type classes are used to infer the canonical implementation of an overloaded term at a given type, canonical structures can be used to infer the canonical proof of an overloaded lemma for a given instantiation of its parameters. We present a series of design patterns for canonical structure programming that enable one to carefully and predictably coax Coq's type inference engine into triggering the execution of user-supplied algorithms during unification, and we illustrate these patterns through several realistic examples drawn from Hoare Type Theory. We assume no prior knowledge of Coq and describe the relevant aspects of Coq type inference from first principles.
</p></blockquote>
<p >If you've ever toyed with Coq and struggled to construct robust, comprehensible proof scripts using tactics (which manipulate the proof state, can leave you with the "ground" of the proof rather than the "figure," if you will, and are fragile in the face of change), you may wish to give this a read. It frankly never would have occurred to me to try to turn Ltac scripts into <em >lemmas</em> at all. This is <em >much</em> more appealing than most other approaches to the subject I've seen.</p>
Functional | Implementation | Logic/Declarative | Type Theory | Fri, 22 Jun 2012 15:41:16 +0000
Interactive Tutorial of the Sequent Calculus
http://lambda-the-ultimate.org/node/4529
<p ><a href="http://logitext.ezyang.scripts.mit.edu/logitext.fcgi/tutorial">Interactive Tutorial of the Sequent Calculus</a> by Edward Z. Yang.</p>
<blockquote ><p >This interactive tutorial will teach you how to use the sequent calculus, a simple set of rules you can use to show the truth of statements in first order logic. It is geared towards anyone with some background in writing software for computers, with knowledge of basic boolean logic. ...</p>
<p >Proving theorems is not for the mathematicians anymore: with theorem provers, it's now a job for the hacker. — Martin Rinard ...</p>
<p >A common complaint with formal systems like the sequent calculus is the "I clicked around and managed to prove this, but I'm not really sure what happened!" This is what Martin means by the hacker mentality: it is now possible for people to prove things, even when they don't know what they're doing. The computer will ensure that, in the end, they will have gotten it right.
</p></blockquote>
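The "click around and it works" experience rests on the fact that the propositional rules can be applied mechanically. A tiny backward proof search for the classical implication/conjunction fragment sketches this in Python; the encoding is my own toy, unrelated to Logitext's implementation (which handles full first-order logic):

```python
# Backward proof search for classical sequents gamma |- delta over atoms,
# conjunction ("and") and implication ("->"). Atoms are strings, compound
# formulas are (op, a, b) tuples. A toy encoding, not Logitext.

def provable(gamma, delta):
    for i, f in enumerate(gamma):                 # decompose on the left
        if isinstance(f, tuple):
            rest, (op, a, b) = gamma[:i] + gamma[i+1:], f
            if op == "and":                       # L-and: split the conjunction
                return provable(rest + [a, b], delta)
            if op == "->":                        # L-implies: two premises
                return provable(rest, delta + [a]) and provable(rest + [b], delta)
    for i, f in enumerate(delta):                 # decompose on the right
        if isinstance(f, tuple):
            rest, (op, a, b) = delta[:i] + delta[i+1:], f
            if op == "and":                       # R-and: two premises
                return provable(gamma, rest + [a]) and provable(gamma, rest + [b])
            if op == "->":                        # R-implies: hypothesis moves left
                return provable(gamma + [a], rest + [b])
    return bool(set(gamma) & set(delta))          # axiom: a shared atom

peirce = ("->", ("->", ("->", "p", "q"), "p"), "p")
assert provable([], [peirce])                     # Peirce's law, classically valid
assert not provable([], [("->", "p", "q")])
```

Because every rule in this fragment is invertible, blind decomposition always terminates in the right answer, which is precisely why a click-driven interface can work.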
<p >The tool behind this nice tutorial is <a href="http://logitext.ezyang.scripts.mit.edu/logitext.fcgi/main">Logitext</a>.</p>
Fun | Javascript | Logic/Declarative | Teaching & Learning | Theory | Thu, 31 May 2012 14:48:52 +0000
Milawa on Jitawa: a Verified Theorem Prover
http://lambda-the-ultimate.org/node/4464
<p ><a href="http://www.cs.utexas.edu/users/jared/milawa/Web/">Milawa</a></p>
<blockquote ><p >
Aug 2010 - May 2011. Magnus Myreen has developed a verified Lisp system, named Jitawa, which can run Milawa. Our paper about this project was accepted to ITP 2011.
</p></blockquote>
<p >This is pretty interesting: Milawa was already "self-verifying," in the sense explained on the page. More recently, it's been made to run on a verified Lisp runtime, so that means the entire stack down to the X86_64 machine code is verified. Milawa itself is "ACL2-like," so it's not as interesting logically as, say, Isabelle or Coq, but it's far from a toy. Also, the Jitawa formalization apparently took place in HOL4, so you need to trust HOL4. Since HOL4 is an "LCF-like" system, you can do that to the extent that you trust the LCF process, but it doesn't satisfy the de Bruijn criterion in the same way Milawa or Coq do. Nevertheless, this seems like an important step toward the ultimate goal of having a stack that is verified "all the way down," as it were.</p>
Functional | Lambda Calculus | Logic/Declarative | Wed, 29 Feb 2012 18:34:45 +0000
Beyond pure Prolog: Power and danger
http://lambda-the-ultimate.org/node/4434
<p >One of the sections of Oleg Kiselyov's <i >Prolog and Logic Programming</i> page, on <a href="http://okmij.org/ftp/Prolog/index.html#impure">Beyond pure Prolog: power and danger</a>, points out that (i) term introspection (in the guise of the <code >var/1</code> predicate) can be derived from three of Prolog's imperative features, two of which are quite mild-looking, and that (ii) this introspection potentially makes Prolog code hard to understand.</p>
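The danger is easy to reproduce outside Prolog: a var/1-style test makes a predicate's answer depend on *when* it is asked, not just on the final bindings, so the program loses its declarative reading. A Python caricature (the `Var` class and `is_var` are my own stand-ins, not Prolog):

```python
class Var:
    """A Prolog-style logic variable: unbound until `val` is set.
    A caricature for illustration, not a real Prolog term."""
    def __init__(self):
        self.val = None

def is_var(term):
    # A var/1 stand-in: term introspection on the current binding state.
    return isinstance(term, Var) and term.val is None

x = Var()
before = is_var(x)      # True: x is still unbound
x.val = 3               # "unification" binds x
after = is_var(x)       # False: same term, different answer

assert before and not after
```

The same term yields two different answers at two program points, so any goal guarded by such a test is sensitive to goal ordering, which is exactly the non-declarative hazard the note discusses.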
<p >Oleg posted this note in response to <a href="http://lambda-the-ultimate.org/node/112#comment-68897">my defence of cut</a>; it is short, sweet, and well-argued.</p>
Logic/Declarative | Mon, 23 Jan 2012 10:54:29 +0000
The Experimental Effectiveness of Mathematical Proof
http://lambda-the-ultimate.org/node/4392
<p ><a href="http://perso.ens-lyon.fr/alexandre.miquel/publis/effectiveness.pdf">The Experimental Effectiveness of Mathematical Proof</a></p>
<blockquote ><p >
The aim of this paper is twofold. First, it is an attempt to give an answer to the famous essay of Eugene Wigner about the unreasonable effectiveness of mathematics in the natural sciences [25]. We will argue that mathematics are not only reasonably effective, but that they are also objectively effective in a sense that can be given a precise meaning. For that—and this is the second aim of this paper—we shall reconsider some aspects of Popper's epistemology [23] in the light of recent advances of proof theory [8, 20], in order to clarify the interaction between pure mathematical reasoning (in the sense of a formal system) and the use of empirical hypotheses (in the sense of the natural sciences).</p>
<p >The technical contribution of this paper is the proof-theoretic analysis of the problem (already evoked in [23]) of the experimental modus tollens, that deals with the combination of a formal proof of the implication U ⇒ V with an experimental falsification of V to get an experimental falsification of U in the case where the formulæ U and V express empirical theories in a sense close to Popper's. We propose a practical solution to this problem based on Krivine's theory of classical realizability [20], and describe a simple procedure to extract from a formal proof of U ⇒ V (formalized in classical second-order arithmetic) and a falsifying instance of V a computer program that performs a finite sequence of tests on the empirical theory U until it finds (in finite time) a falsifying instance of U.
</p></blockquote>
<p >I thought I had already posted this, but apparently not.</p>
<p >Consider this paper the main gauntlet thrown down to those who insist that mathematical logic, the Curry-Howard Isomorphism, etc. might be fine for "algorithmic code" (as if there were any other kind) but is somehow inapplicable the moment a system interacts with the "real" or "outside" world (as if software weren't real).</p>
<p ><b >Update:</b> the author is Alexandre Miquel, and the citation is "Chapitre du livre Anachronismes logiques, à paraître dans la collection Logique, Langage, Sciences, Philosophie, aux Publications de la Sorbonne. Éd.: Myriam Quatrini et Samuel Tronçon, 2010."</p>
Functional | Lambda Calculus | Logic/Declarative | Semantics | Sun, 30 Oct 2011 16:05:45 +0000
Concurrent Pattern Calculus
http://lambda-the-ultimate.org/node/4189
<p ><a href="http://www.dsi.uniroma1.it/~gorla/papers/cpc-long.pdf">Concurrent Pattern Calculus</a> by Thomas Given-Wilson, Daniele Gorla, and Barry Jay:</p>
<blockquote ><p >Concurrent pattern calculus drives interaction between processes by comparing data structures, just as sequential pattern calculus drives computation. By generalising from pattern matching to pattern unification, interaction becomes symmetrical, with information flowing in both directions. This provides a natural language for describing any form of exchange or trade. Many popular process calculi can be encoded in concurrent pattern calculi.</p></blockquote>
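The symmetry is the key difference from ordinary pattern matching: with unification, variables may occur on both sides, so bindings flow in both directions. A small first-order unification sketch in Python (the term encoding is mine, and the occurs check is omitted for brevity):

```python
# First-order unification with variables allowed on both sides, so bindings
# flow in both directions, which is the symmetry the calculus builds on.
# Terms: variables are ("var", name), atoms are strings, compounds are tuples.

def walk(term, subst):
    """Chase a variable through the substitution to its current value."""
    while isinstance(term, tuple) and term[0] == "var" and term[1] in subst:
        term = subst[term[1]]
    return term

def unify(s, t, subst):
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, tuple) and s[0] == "var":    # bind left-side variable
        return {**subst, s[1]: t}
    if isinstance(t, tuple) and t[0] == "var":    # bind right-side variable
        return {**subst, t[1]: s}
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        for a, b in zip(s, t):                    # unify compounds pointwise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                   # clash: distinct atoms or arities

# ("pair", X, "b") against ("pair", "a", Y): X learns "a", Y learns "b".
result = unify(("pair", ("var", "x"), "b"), ("pair", "a", ("var", "y")), {})
assert result == {"x": "a", "y": "b"}
```

One-way matching could only bind variables on one side; here each process contributes information to the other, the "exchange or trade" reading in the abstract.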
<p >Barry Jay's <a href="http://lambda-the-ultimate.org/node/1678">Pattern Calculus</a> has been discussed a <a href="http://lambda-the-ultimate.org/node/3695">few times here before</a>. I've always been impressed with the pattern calculus' expressive power for computing over arbitrary structure. The pattern calculus supports new forms of polymorphism, which he termed "path polymorphism" and "pattern polymorphism", which are difficult to provide in other calculi. The closest I can think of would be a compiler-provided generalized fold over any user-defined structure.</p>
<p >This work extends the pattern calculus to the concurrent setting by adding constructs for parallel composition, name restriction and replication, and argues convincingly for its greater expressiveness as compared to other concurrent calculi. The authors address some of the obvious concerns for symmetric information flow of the unification operation.</p>
Functional | Logic/Declarative | Parallel/Distributed | Theory | Tue, 25 Jan 2011 03:19:55 +0000
Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic
http://lambda-the-ultimate.org/node/3964
<p ><a href="http://userweb.cs.utexas.edu/users/jared/milawa/Web/">Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic</a></p>
<blockquote ><p >
Milawa is a "self-verifying" theorem prover for an ACL2-like logic.</p>
<p >We begin with a simple proof checker, call it A, which is short enough to verify by the "social process" of mathematics.</p>
<p >We then develop a series of increasingly powerful proof checkers, call them B, C, D, and so on. We show that each of these is sound: they accept only the same formulas as A. We use A to verify B, and B to verify C, and so on. Then, since we trust A, and A says B is sound, we can trust B, and so on for C, D, and the rest.</p>
<p >Our final proof checker is really a theorem prover; it can carry out a goal-directed proof search using assumptions, calculation, rewrite rules, and so on. We use this theorem prover to discover the proofs of soundness for B, C, and so on, and to emit these proofs in a format that A can check. Hence, "self verifying."
</p></blockquote>
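The bootstrapping pattern itself is simple to state: trust checker A by inspection, then trust any checker whose soundness proof is accepted by an already-trusted one. A schematic Python sketch (everything here is illustrative; real Milawa proofs are of course not strings, and the `check` function stands in for running a proof checker):

```python
def extend_trust(initial, soundness_proofs, check):
    """Schematic Milawa-style bootstrapping: starting from checkers trusted
    by the 'social process,' trust checker X once an already-trusted checker
    validates X's soundness proof. Data here is illustrative only."""
    trusted = set(initial)
    progress = True
    while progress:
        progress = False
        for checker, (verifier, proof) in soundness_proofs.items():
            if checker not in trusted and verifier in trusted and check(verifier, proof):
                trusted.add(checker)
                progress = True               # keep iterating: trust may cascade
    return trusted

# A's soundness is accepted socially; B is proved sound in A's format, C in B's.
proofs = {"B": ("A", "proof-of-B"), "C": ("B", "proof-of-C"), "X": ("Z", "p")}
trusted = extend_trust({"A"}, proofs, check=lambda v, p: p.startswith("proof"))
assert trusted == {"A", "B", "C"}             # X's verifier Z was never trusted
```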
<p >This might help inform discussions of the relationship between the de Bruijn criterion (the "social process" of mathematics) and formal verification. I think it also serves as an interesting signpost on the road forward: it's one thing to say that starting with a de Bruijn core and evolving a more powerful prover is possible in principle; it's another thing for it to actually have been done. The author's thesis <a href="http://userweb.cs.utexas.edu/users/jared/milawa/Documentation/defense.pdf">defense</a> slides provide a nice, quick overview.</p>
DSL | Functional | Implementation | Lambda Calculus | Logic/Declarative | Semantics | Sat, 29 May 2010 17:49:47 +0000
A Lambda Calculus for Real Analysis
http://lambda-the-ultimate.org/node/3831
<p ><a href="http://paultaylor.eu/ASD/lamcra/">A Lambda Calculus for Real Analysis</a></p>
<blockquote ><p >
Abstract Stone Duality is a revolutionary paradigm for general topology that describes computable continuous functions directly, without using set theory, infinitary lattice theory or a prior theory of discrete computation. Every expression in the calculus denotes both a continuous function and a program, and the reasoning looks remarkably like a sanitised form of that in classical topology. This is an introduction to ASD for the general mathematician, with application to elementary real analysis.</p>
<p >This language is applied to the Intermediate Value Theorem: the solution of equations for continuous functions on the real line. As is well known from both numerical and constructive considerations, the equation cannot be solved if the function "hovers" near 0, whilst tangential solutions will never be found.</p>
<p >In ASD, both of these failures and the general method of finding solutions of the equation when they exist are explained by the new concept of overtness. The zeroes are captured, not as a set, but by higher-type modal operators. Unlike the Brouwer degree, these are defined and (Scott) continuous across singularities of a parametric equation.</p>
<p >Expressing topology in terms of continuous functions rather than sets of points leads to treatments of open and closed concepts that are very closely lattice- (or de Morgan-) dual, without the double negations that are found in intuitionistic approaches. In this, the dual of compactness is overtness. Whereas meets and joins in locale theory are asymmetrically finite and infinite, they have overt and compact indices in ASD.</p>
<p >Overtness replaces metrical properties such as total boundedness, and cardinality conditions such as having a countable dense subset. It is also related to locatedness in constructive analysis and recursive enumerability in recursion theory.
</p></blockquote>
<p >Paul Taylor is deadly serious about the intersection of logic, mathematics, and computation. I came across this after beating my head against <a href="http://www.amazon.com/Probability-Theory-Logic-Science-Vol/dp/0521592712/ref=sr_1_1?ie=UTF8&s=books&qid=1266357272&sr=8-1">Probability Theory: The Logic of Science</a> and <a href="http://axiomaticeconomics.com/">Axiomatic Theory of Economics</a> over the weekend, realizing that my math just wasn't up to the tasks, and doing a Google search for "constructive real analysis." "Real analysis" because it was obvious that that was what both of the aforementioned texts were relying on; "constructive" because I'd really like to develop proofs in Coq/extract working code from them. This paper was on the second page of results. Paul's name was familiar (and not just because I share it with him); he translated Jean-Yves Girard's regrettably out-of-print <a href="http://paultaylor.eu/stable/Proofs+Types">Proofs and Types</a> to English and maintains a very popular set of tools for typesetting <a href="http://paultaylor.eu/diagrams/">commutative diagrams</a> using LaTeX.</p>
Category Theory | Functional | Lambda Calculus | Logic/Declarative | Meta-Programming | Semantics | Type Theory | Tue, 16 Feb 2010 22:00:42 +0000
Certified Programming With Dependent Types Goes Beta
http://lambda-the-ultimate.org/node/3763
<p ><a href="http://adam.chlipala.net/cpdt/">Certified Programming With Dependent Types</a></p>
<p >From the introduction:</p>
<blockquote ><p >
We would all like to have programs check that our programs are correct. Due in no small part to some bold but unfulfilled promises in the history of computer science, today most people who write software, practitioners and academics alike, assume that the costs of formal program verification outweigh the benefits. The purpose of this book is to convince you that the technology of program verification is mature enough today that it makes sense to use it in a support role in many kinds of research projects in computer science. Beyond the convincing, I also want to provide a handbook on practical engineering of certified programs with the Coq proof assistant.
</p></blockquote>
<p >This is the best Coq tutorial that I know of, partially for being comprehensive, and partially for taking a very different tack than most with Adam's emphasis on proof automation using Coq's Ltac tactic language. It provides an invaluable education toward understanding what's going on either in <a href="http://ltamer.sourceforge.net/">LambdaTamer</a> or <a href="http://ynot.cs.harvard.edu/">Ynot</a>, both of which are important projects in their own rights.</p>
<p >Please note that Adam is explicitly requesting feedback on this work.</p>
Functional | Lambda Calculus | Logic/Declarative | Misc Books | Semantics | Teaching & Learning | Type Theory | Sat, 09 Jan 2010 16:56:49 +0000
ActorScript(TM): Industrial strength integration of local and nonlocal concurrency for Client-cloud Computing
http://lambda-the-ultimate.org/node/3717
<a href="http://arxiv.org/abs/0907.3330">ActorScript(TM): Industrial strength integration of local and nonlocal concurrency for Client-cloud Computing</a>
by Carl Hewitt, 2009.
<blockquote >
ActorScript is based on a mathematical model of computation that treats “Actors” as the universal primitives of concurrent digital computation [Hewitt, Bishop, and Steiger 1973; Hewitt 1977]. Actors have been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems.</blockquote>
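The model's core (isolated state, asynchronous sends, a behavior applied to one message at a time) can be sketched with a deterministic single-threaded mailbox loop in Python. This makes no claim about ActorScript's actual syntax or semantics; actor names and messages below are invented:

```python
from collections import deque

class Actor:
    """A toy actor: private state plus a behavior applied to one message at
    a time. A single-threaded sketch of the model's core, not ActorScript."""
    def __init__(self, behavior, **state):
        self.behavior = behavior
        self.mailbox = deque()
        self.__dict__.update(state)

    def send(self, msg):
        self.mailbox.append(msg)      # asynchronous send: just enqueue

def run(actors):
    # Deterministic round-robin delivery until every mailbox drains.
    while any(a.mailbox for a in actors):
        for a in actors:
            if a.mailbox:
                a.behavior(a, a.mailbox.popleft())

def counter_behavior(self, msg):
    if msg == "inc":
        self.count += 1               # state is touched only via messages
    else:                             # ("get", reply_to)
        msg[1].send(("count", self.count))

def sink_behavior(self, msg):
    self.received.append(msg)

counter = Actor(counter_behavior, count=0)
sink = Actor(sink_behavior, received=[])
for _ in range(3):
    counter.send("inc")
counter.send(("get", sink))
run([counter, sink])
assert sink.received == [("count", 3)]
```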
I hope I do not need to introduce Carl Hewitt or his Actor model. This paper is a modern attempt to expose that model via a practical PL.
Logic/Declarative | Object-Functional | Mon, 14 Dec 2009 13:47:46 +0000
SequenceL - declarative computation on nonscalars
http://lambda-the-ultimate.org/node/3635
<p >I recently came across the language <i ><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.90.7370">SequenceL</a></i>, which it seems <a href="http://www.nasaspaceflight.com/2005/12/cev-abort-system-gains-a-brain/">NASA is using</a> in some of its human spaceflight programs. SequenceL is described as a high-level language for declarative computation on nonscalars. One of the key features of the language appears to be the avoidance of the need to explicitly specify recursive or iterative operations. For example, given the function definition </p>
<pre >
Search(scalar Target, tuple [Subscript, Word]) =
Subscript when Word = Target
</pre><p >which applies to tuples, the programmer can apply the function directly to lists of tuples without any need to specify how that application will be performed, e.g.</p>
<pre >
search(fox,[[1,dog],[2,cat],[3,fox],[4,parrot]]) → 3
search(rabbit,[[1,dog],[2,cat],[3,fox],[4,parrot]]) → []
</pre><p >The language designers (Daniel Cooke and J. Nelson Rushton) claim that this kind of thing leads to more concise and readable code, and a more direct representation of a specification.</p>
<p >Unfortunately, the <a href="http://languages.cs.ttu.edu/sequencel/">SequenceL</a> website appears to be inaccessible at the moment. However, <a href="http://www.cs.ttu.edu/~dcooke/">Daniel Cooke's site</a> includes links to a number of papers and talks that describe SequenceL. In particular, the paper <i ><a href="http://www.cs.ttu.edu/~dcooke/sequencel11-27-2006.pdf">Normalize, Transpose, and Distribute: An Automatic Approach for Handling Nonscalars</a></i> provides a detailed description of the "Consume-Simplify-Produce/Normalize-Transpose" approach that is embodied by SequenceL. There's also an overview of SequenceL available through <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.90.7370">CiteSeer</a>.</p>
Functional | Logic/Declarative | Sun, 11 Oct 2009 21:34:30 +0000