If you hated the last paper, you'll get to hate it all over again :).
The main innovation in this work is a treatment of fields and of assignment to support them. We have found that assignment between a term's fields (of any depth) is completely parametric with respect to the assignments of that term! This seems quite intuitive, but the semantics are tricky. A soundness proof of this treatment is included in the paper, along with a feasible (but unproven) way of implementing it.
I'm new here and I'm not quite sure whether I'm in the right place. I've designed a few languages as a hobby and produced their associated compilers/interpreters, etc., but I'll admit I don't understand half of the terminology used on this site. I actually find it quite difficult even navigating the site - I assume using free forum software like phpBB was looked down upon for some reason. I guess having a simple description stating what the forum is about met with a similar fate :)
My current project is a little language to allow non-programmers to produce platform games, so it's certainly not a generic language, but hopefully it might get kids producing games and showing them to their friends. Is this the right place to discuss the language and ask any questions I might have? I don't mind a little criticism. If so, where should I post? And if not, are there any suitable forums anywhere?
The following is some example code that places some moving coloured boxes across the screen and allows the player to jump across them to the exit point:
The challenge to the generality of Logic Programming as a foundation for computation was officially thrown down in The Challenge of Open Systems [Hewitt 1985], to which [Kowalski 1988b] replied in Logic-Based Open Systems. This was followed up with Guarded Horn clause languages: are they deductive and logical? [Hewitt and Agha 1988] in the context of the Japanese Fifth Generation Project (see the section below). All of this was in opposition to Kowalski, who stated “Looking back on our early discoveries, I value most the discovery that computation could be subsumed by deduction.” [Kowalski 1988]
However (contrary to Kowalski), computation in general is not subsumed by deduction. Mathematical models of computation do not determine particular computations, for the following reason: the Actor Model makes use of arbitration to determine which message is next in the reception order of an Actor that is sent multiple messages concurrently. For example, Arbiters can be used in implementing the reception order of messages sent to an Actor, an order that is subject to indeterminacy. Since reception orders are in general indeterminate, they cannot be inferred from prior information by mathematical logic alone. Therefore mathematical logic cannot implement computation in open systems. Consequently, Logic Programming can axiomatize but not in general implement computation.
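As a minimal illustration of this kind of indeterminacy, here is a Python sketch (the mailbox, sender names, and messages are hypothetical):

    import threading, queue

    # Two senders deliver messages to one recipient's mailbox concurrently.
    # Nothing in the program text fixes which message arrives first; the order
    # is settled by run-time arbitration in the scheduler and hardware.
    mailbox = queue.Queue()

    def send(message):
        mailbox.put(message)

    t1 = threading.Thread(target=send, args=("deposit 10",))
    t2 = threading.Thread(target=send, args=("withdraw 5",))
    t1.start(); t2.start()
    t1.join(); t2.join()

    print([mailbox.get(), mailbox.get()])   # may print either order on different runs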
Shared financial accounts had been developed as a paradigmatic example of Actors [Hewitt, Bishop and Steiger 1973]. For example in ActorScript, a simple account can be implemented as follows:
CreateEuroAccount.[startingBalance:Euros] ≡
  Actor with currentBalance:=startingBalance◊
    implements Account using
      GetBalance→ currentBalance¶
      Deposit[anAmount]→
        Void afterward currentBalance:=currentBalance+anAmount¶
      Withdraw[anAmount]→
        (anAmount > currentBalance) ¿
          True ↝ Throw OverdrawnException;
          False ↝ Void afterward currentBalance:=currentBalance–anAmount ?§▮
However, the above program cannot be implemented using logical deduction.
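For readers unfamiliar with ActorScript notation, here is a rough Python sketch of the same account behavior, with a lock standing in for the one-message-at-a-time serialization of an Actor (names are illustrative):

    import threading

    class OverdrawnException(Exception):
        pass

    class EuroAccount:
        def __init__(self, starting_balance):
            self._balance = starting_balance
            self._lock = threading.Lock()   # approximates serialized message handling

        def get_balance(self):
            with self._lock:
                return self._balance

        def deposit(self, amount):
            with self._lock:
                self._balance += amount

        def withdraw(self, amount):
            with self._lock:
                if amount > self._balance:
                    raise OverdrawnException()
                self._balance -= amount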
The Procedural Embedding paradigm is strictly more general than the Logic Programming paradigm. Nevertheless, logic programming (like functional programming) can be a useful programming idiom.
For more information see Inconsistency Robustness in Logic Programming.
Beginning in the 1970s, Japan took the DRAM market (and consequently most of the integrated circuit industry) away from the previously dominant US firms. This was accomplished with the help of the Japanese VLSI project that was funded and coordinated in good part by the Japanese government's Ministry of International Trade and Industry (MITI) [Sigurdson 1986].
MITI hoped to repeat this victory by taking over the computer industry. However, Japan had come under criticism for “copying” the US. One of the MITI goals for ICOT was to show that Japan could innovate new computer technology and not just copy the Americans.
ICOT tried to go all the way with Prolog-style clause programs. Kowalski later recalled, “Having advocated LP [Logic Programming] as a unifying foundation for computing, I was delighted with the LP [Logic Programming] focus of the FGCS [Fifth Generation Computer Systems] project.” [Fuchi, Kowalski, Ueda, Kahn, Chikayama, and Tick 1993] By making Prolog-style clause programming (which was mainly being developed outside the US) the foundation, MITI hoped that the Japanese computer industry could leapfrog the US: “The [ICOT] project aimed to leapfrog over IBM, and to a new era of advanced knowledge processing applications” [Sergot 2004].
However, it proved difficult to implement clause invocation in these languages as efficiently as message-passing in object-oriented programming languages. Simula-67 originated a hierarchical class structure for objects so that message handling procedures (methods) and object instance variables could be inherited by subclasses. Ole-Johan Dahl  invented a powerful compiler technology using dispatch tables that enabled message handling procedures in subclasses of objects to be efficiently invoked.
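As a toy illustration of the dispatch-table idea, here is a hedged Python sketch; a Simula-style compiler uses per-class tables of code addresses and fixed slot offsets, so a call compiles to an indexed load plus an indirect jump rather than any search. The class names and methods below are made up:

    # Each "class" is a table mapping selectors to procedures; a subclass starts
    # from a copy of its parent's table and overrides or adds slots.
    def account_get_balance(obj):
        return obj["balance"]

    def account_deposit(obj, amount):
        obj["balance"] += amount

    Account_vtable = {"get_balance": account_get_balance,
                      "deposit": account_deposit}

    def savings_deposit(obj, amount):          # hypothetical override in a subclass
        obj["balance"] += amount * 1.01

    Savings_vtable = dict(Account_vtable, deposit=savings_deposit)

    def send(obj, selector, *args):
        return obj["vtable"][selector](obj, *args)   # one table lookup per send

    acct = {"vtable": Savings_vtable, "balance": 100}
    send(acct, "deposit", 10)
    print(send(acct, "get_balance"))   # 110.1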
Going beyond object-oriented programming to concurrent programming, ICOT encountered further difficulties, e.g., with readers-writers concurrency. Concurrency control for readers and writers of a shared resource is a classic problem. The fundamental constraint is that a writer must not operate concurrently with readers or with other writers. Below is an implementation of a reading-priority guardian that encapsulates a shared resource:
CreateReadingPriority.[theResource:ReadersWriter] ≡
  Actor with writing:=False, numberReading:(Integer thatIs ≧0):=0◊
    queues readersQ, writersQ◊
    implements ReadersWriter using
      Read[query]→
        Enqueue (writing or not IsEmpty writersQ) ¿ True ↝ readersQ; False ↝ NullQueue ?
        Prep numberReading:=numberReading+1◊
             RequirePre not writing◊
             Permit readersQ◊
          hole theResource.Read[query]
            always permit (IsEmpty writersQ) ¿
                             True ↝ readersQ;
                             False ↝ (numberReading = 1) ¿ True ↝ writersQ; False ↝ NullQueue ? ?
        also RequirePre numberReading≧1◊
             numberReading:=numberReading–1◊¶
      Write[update]→
        Enqueue (numberReading>0 or not IsEmpty readersQ or writing or not IsEmpty writersQ) ¿
                  True ↝ writersQ; False ↝ NullQueue ?
        RequirePre not writing◊
        Prep writing:=True◊
             RequirePre numberReading=0◊
          Hole theResource.Write[update]
            always permit (IsEmpty readersQ) ¿ True ↝ writersQ; False ↝ readersQ ?
        also RequirePre writing◊
             writing:=False◊§▮
Implementations of procedures of the above kind in the Prolog-style clausal languages used by ICOT were both verbose and inefficient.
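For comparison, here is a minimal Python sketch of a readers-writer guard. It captures only the basic constraint (many concurrent readers or a single writer, never both), not the exact queueing and priority policy of the ActorScript guardian above; the resource is assumed to expose read and write methods:

    import threading

    class ReadersWriterGuard:
        def __init__(self, resource):
            self._resource = resource            # assumed to expose read() and write()
            self._cond = threading.Condition()
            self._readers = 0
            self._writing = False

        def read(self, query):
            with self._cond:
                while self._writing:             # wait out any active writer
                    self._cond.wait()
                self._readers += 1
            try:
                return self._resource.read(query)   # reads may proceed concurrently
            finally:
                with self._cond:
                    self._readers -= 1
                    if self._readers == 0:
                        self._cond.notify_all()

        def write(self, update):
            with self._cond:
                while self._writing or self._readers > 0:
                    self._cond.wait()
                self._writing = True             # at most one writer at a time
            try:
                return self._resource.write(update)
            finally:
                with self._cond:
                    self._writing = False
                    self._cond.notify_all()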
The technical managers at ICOT were aware of some of the pitfalls that had tripped up previous Artificial Intelligence (AI) researchers, so they deliberately avoided calling ICOT an AI project. Instead they had the vision of an integrated hardware/software system [Uchida and Fuchi 1992]. However, the Prolog-style clause programming paradigm turned out not to be a suitable foundation because of poor modularity and lack of efficiency by comparison with direct message passing [Hewitt and Agha 1988].
Another problem was that multi-processor machines found it difficult to compete because, at the time, single processors were rapidly increasing in speed while connections between multiple processors suffered long latencies.
The MITI strategy backfired because Japanese companies refused to productize ICOT hardware.
However, the architects of ICOT did get some things right:
The architectural reliance on Prolog-style clausal programs was a principal contributing cause of ICOT's failure to achieve commercial success.
For more information see Inconsistency Robustness in Logic Programming.
I posted this article at reddit, and would like to hear constructive feedback here at LTU.
[Karmani and Agha 2011] promoted the Fog Cutter [so named by Kristen Nygaard] model in which each computational agent is defined to have a mailbox, thread, state, and program. However, the Fog Cutter is not a general model of Actors because of the following differences:
The Fog Cutter model has been extremely useful for exploring issues about Actors including the following alternatives:
In practice, the most common and effective way to explain Actors has been operationally using a suitable Actor programming language (e.g., ActorScript) that specifies how Actors can be implemented along with an English explanation of the axioms for Actors.
For more information see Actor Model of Computation.
Please put fewer things in new language designs, not more. I say this on the slim chance it takes, and simpler designs are seen. I'll try to make my point briefly; note this is also what I think a design should do: make points briefly. First I'll make a comment on Rust. Then I'll offer to argue for simplicity in a language you talk about here if you want.
My guess is Golang folks will win, because Rob Pike gets the less-is-more idea. To compete, Rust folks must go small, not big, so a language lawyer can explain in simple terms and still be correct. All I want is for things not to suck. I don't have an agenda to sell Go or Rust; I just think competition is good. So I don't want Rust folks to lose.
Rust folks have a clear disadvantage when more folks are involved in design: getting complex is often how smart folks compete, even when this loses the game for a group, as individuals win prestige for themselves in isolation. In game theory terms, folks betray long term global goals to get short term local reward. (Actually, there are several incentives to be bad; some folks like a priesthood model where complexity offers future job security in the understanding bottleneck. But this isn't as big a problem as complexity smelling good right now in arguments.)
Compare with C++: it got more complex over time, and adding new complexity is going strong. Folks who want a low-level language must abandon it, because the committee direction is toward high level. I used C++ from 1990 to 2010, but the direction of the 2011 designs showed me I need to go elsewhere when I need runtime control. Or compare with Scheme, especially the way R6 differed from R5. (I haven't looked at recent Scheme efforts.) If you want a design to be simple, it must be a goal. It won't happen accidentally when more than a few hands get involved.
How do you know when it's simple enough? There's an empirical test. Write down rules, give them to a fresh newcomer of mean ability and intelligence, and see if they can apply and defend them in various cases — especially when attacked by a bright lover of complexity. Your average practitioner should be able to say, "The simple rule here dictates solution X to your wicked problem Y." You did it right when this can occur. You did it wrong when it requires an expert to adjudicate normal discussion. It's a design smell if any part requires an expert to grasp. Another design smell: you grasped it last week, but not today.
Boil things down to basic definitions that still work when applied to complex cases. If you're so smart, you should be able to do this, right? Ask others for help when you want to know whether it's simple now, because your own perspective is suspect. Be agreeable to other folks rewriting your rules so they get easier to learn and apply. Tease folks who try to make things harder to understand because it makes them look good. You and your children are going to end up driving a car controlled by software developed by non-rocket-scientists. Ease up on them.
An important question in programming languages seems to be how to optimize the expression of common cases, or how to allow writing programs in a you-know-what-I-mean style -- some tools for that are default or implicit parameters, implicit conversions, and inheritance. They can all be explained in terms of some kind of merge operation that takes some base information and combines it with user-provided information. All these features make languages more practical and more complex.
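A small Python illustration of this "merge" view (purely illustrative; the keys, values, and class names are made up):

    # Default parameters: user-provided settings merged over base defaults.
    base_config = {"indent": 4, "color": "auto", "pager": True}     # defaults
    user_config = {"color": "never"}                                # user-provided
    effective = {**base_config, **user_config}                      # overrides win
    # effective == {"indent": 4, "color": "never", "pager": True}

    # Inheritance reads the same way: a subclass's methods are merged over the
    # base class's, which is roughly what attribute lookup along the MRO does.
    class Base:
        def greet(self): return "hello"
        def bye(self):   return "bye"

    class Derived(Base):
        def greet(self): return "hi"            # override merged over Base.greet

    print(Derived().greet(), Derived().bye())   # hi bye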
Are there any papers or projects that specifically explore defaults, implicits and inheritance as related topics?
Inconsistency robustness is “information system performance in the face of continually pervasive inconsistencies.” A fundamental principle of Inconsistency Robustness is to make contradictions explicit so that arguments for and against propositions can be formalized. This paper explores the role of Inconsistency Robustness in the history and theory of Logic Programming, which can be usefully defined as “using logic to infer computational steps.”
Inconsistency Robustness has been a continually recurring issue in Logic Programming from its beginnings, including Church's system developed in the early 1930s, where the hope was that partial functions would allow development of a general logic without the kind of paradoxes that had plagued earlier efforts by Frege, Russell, etc. Because it allowed the kind of self-referential propositions introduced by Gödel, and because it was thought that a general system must be able to computationally enumerate its theorems, Church's system was quickly shown to be inconsistent. In the face of this inconsistency, Church abandoned the quest for a general foundation and subsequently used the lambda calculus solely for the purpose of computation.
Robert Kowalski put forward a bold thesis: “Looking back on our early discoveries, I value most the discovery that computation could be subsumed by deduction.” However, mathematical logic cannot always infer computational steps because computational systems make use of arbitration for determining which message is processed next by a recipient that is sent multiple messages concurrently. Since reception orders are in general indeterminate, they cannot be inferred from prior information by mathematical logic alone. Therefore mathematical logic cannot in general implement computation. The proposed adjustment to the inconsistency is that logic programming (like functional programming) can be a useful programming idiom even though it is not universal. (It is shown in this paper how logic can be used to infer concurrent programs that cannot be implemented as Logic Programs.) Also, logic programming can provide useful principles and methods for systems which are quasi-commutative and quasi-monotonic even though the systems themselves are not strictly speaking logic programming.
In contrast with previous work, Kowalski has advocated limiting Logic Programming to backward-chaining-only inference, based on adding the negation of a goal to a database and then preferring resolution with the negation of the goal and its resolvents. However, his proposal leaves out forward chaining and other logical operations. The proposed adjustment to the inconsistency is to explore Logic Programming using inconsistency-robust Natural Deduction, as opposed to requiring the (usually undesirable) reduction to conjunctive normal form in order to use a resolution-based system such as Prolog. Because contemporary large software systems are pervasively inconsistent, it is not safe to reason about them using classical logic, e.g., using resolution theorem proving.
The above examples are illustrative of how issues of inconsistency robustness have repeatedly arisen in logic programming.
The above abstract is from the full article.
I am interested in developing an app that allows users to feed it arbitrary sample text and then write similar text using an auto-completion tool. The auto-completion tool is based on generating text from a context-free grammar and will be considerably more powerful than T9/search-bar autocomplete because it can generate entire phrases or blocks of code with adjustable components. I have a mockup of the tool here that generates JSON: http://nathanathan.com/cfg-gen/ However, my mockup doesn't offer suggestions for default values at this time.
I am struggling with the problem of transforming sample text into a grammar I can use with my auto-completion tool. I read a bit about the Sequitur algorithm, but it doesn't solve my problem because it generates deterministic grammars. So I'll refer to the algorithm I'm looking for as Nonsequitur. Nonsequitur is intended to create nondeterministic grammars that minimize the decisions the user needs to make to generate the sample text, without removing decisions that are essential complexity. For example, it should make it so the user only needs to make one decision to write an s-expression rather than two decisions to write a right and a left parenthesis.
The approach I've been considering for implementation is to treat the problem as a search problem and use an algorithm like A*. The cost function would be based on the size of the grammar as well as the size of the input sequence needed to reconstruct the source. The formula might look something like this...
grammar size × (magic number + (length of input needed to reconstruct sample text / sample text length))
This formula is intended to avoid degenerate cases such as a grammar that contains no structure (because it generates all strings) and a deterministic grammar that fully encodes the input sequence.
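A hedged Python sketch of that cost function (the magic number and the way the three quantities are measured are assumptions to be tuned):

    MAGIC = 0.5   # placeholder constant from the formula above

    def cost(grammar_size, input_len, sample_len):
        # Smaller grammars are better, but a grammar is also penalized in
        # proportion to how many decisions (input symbols) the user would
        # still need to make to reconstruct the sample text from it.
        return grammar_size * (MAGIC + input_len / sample_len)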
The search state space is the set of all grammars and input sequences that generate the sample text. I'm not sure what all the transition functions should be. I was thinking that replacing duplicate symbol pairs, as in Sequitur, would be one. There will also need to be a transition that creates ambiguous rules somehow, perhaps by merging pairs of existing rules.
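A rough Python sketch of that Sequitur-style transition, replacing a repeated adjacent symbol pair with a fresh nonterminal (the rule representation and naming scheme are assumptions, not part of Sequitur itself):

    from collections import Counter

    def replace_duplicate_pair(rules):
        # rules maps a nonterminal name to a list of symbols (its right-hand side).
        pairs = Counter()
        for body in rules.values():
            for a, b in zip(body, body[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            return rules
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            return rules                          # nothing worth factoring out
        new_sym = "N%d" % len(rules)              # fresh nonterminal name
        new_rules = {new_sym: [a, b]}
        for name, body in rules.items():
            out, i = [], 0
            while i < len(body):
                if i + 1 < len(body) and (body[i], body[i + 1]) == (a, b):
                    out.append(new_sym)           # rewrite the pair as the new symbol
                    i += 2
                else:
                    out.append(body[i])
                    i += 1
            new_rules[name] = out
        return new_rules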
I hope that by discussing the problem here I can find out if I'm reinventing a wheel, discover flaws in my approach and gather suggestions for improvements.