Hey LtU, first post here :)
The few that try to explain how prototypal anything works get it blatantly wrong, confusing lots of different terms. One even called it synonymous with "the properties pattern", which was just ridiculous. Anyway...
Most often, here's what happens with prototypal inheritance:
- Creation (constructors, factory methods, etc)
Feel free to provide more examples of how different languages use/combine these three, and whether they really cover all the cases. Thoughts on the "first-class classes" idea?
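Since the post is about people getting the mechanics wrong, it may help to pin down at least the creation and delegation parts concretely. Here is a minimal sketch in JavaScript/TypeScript terms; the names (`animalProto`, `makeAnimal`) are mine, purely illustrative:

```typescript
// Creation via a factory, delegation via the prototype chain.
// All identifiers here are invented for illustration.

const animalProto = {
  describe(this: { name: string }): string {
    return `I am ${this.name}`;
  },
};

// Creation: a factory that links a fresh object to the prototype.
function makeAnimal(name: string) {
  const a = Object.create(animalProto);
  a.name = name;
  return a;
}

const dog = makeAnimal("Rex");
// Delegation: `describe` is not an own property of `dog`; property
// lookup walks the prototype chain and finds it on `animalProto`.
console.log(dog.describe()); // "I am Rex"
console.log(Object.prototype.hasOwnProperty.call(dog, "describe")); // false
```

Note that no class or constructor function is involved: the object-to-object link established by `Object.create` is the whole inheritance mechanism.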
I've posted here in the past and just had a few questions about the basics of parsing and perhaps wanted a bit of insight on parsing theory.
I'm currently writing a parser compiler to better understand the formal side of parsing, and for it I hand-wrote the grammar description language's parser. One of the major things I've learned through this is how little I knew when I started, and how little I still know.
In the general case, when parsing the full file you begin from the start rule and go from there; that much is pretty basic. The part I get hazy on is how to handle partial parses in interactive environments, such as the user typing text into Visual Studio with your parser infrastructure integrated via the Visual Studio Extensibility. The current method I use involves re-parsing the entire file, with an asynchronous delay that restarts each time a key is pressed, so it isn't parsing everything on literally every key press. The major question I have is how to merge the parse trees and isolate the impact of a change on the concrete/abstract tree that results from the user's text.
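The "asynchronous delay restarted on each key press" scheme is a classic debounce. A sketch of it, with a stand-in `reparse` function (all names here are illustrative, not from the actual project):

```typescript
// Debounced full reparse: every keystroke resets a timer, and the parse
// only runs once input has been quiet for `delayMs` milliseconds.

function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  delayMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    // A new keystroke cancels the pending reparse and restarts the delay.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Hypothetical reparse entry point; a real one would rebuild the tree.
let parseCount = 0;
const reparse = (source: string) => {
  parseCount++;
  console.log(`reparsed ${source.length} chars`);
};

const onKeyPress = debounce(reparse, 300);

// Three rapid keystrokes produce only one reparse after the input settles.
onKeyPress("a");
onKeyPress("ab");
onKeyPress("abc");
```

This only bounds how often the full parse runs; it does nothing for the merging question, which is the genuinely hard part (incremental parsing proper).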
Is there research on interactive or incremental parsing I could look at to get an idea?
In the future I want to understand this concept well enough to automate its integration into the project I'm writing now, even if it's in the second or third iteration. I have a feeling it's complicated, perhaps more than I'm thinking now.
Insight is very welcome.
Also, a note to people posting links to published documents: I don't have subscriptions to any journals or publication websites, so such sources may be difficult for me to follow, since they usually charge a fee for access.
Drupal limitations truncated the previous ongoing discussion, so I have re-posted here. BTW, I would like to thank LtU for the previous discussion, which was very helpful in improving the current version.
Some readers might be interested in the following abstract from Mathematics self-proves its own Consistency:
Mathematics self-proves its own Consistency (contra Gödel et al.)
Carl Hewitt
http://carlhewitt.info

That mathematics is thought to be consistent justifies the use of Proof by Contradiction. In addition, Proof by Contradiction can be used to infer the consistency of mathematics by the following simple proof: The self-proof is a proof by contradiction. Suppose, to obtain a contradiction, that mathematics is inconsistent. Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ. Consequently, both Φ and ¬Φ are theorems that can be used in the proof to produce an immediate contradiction. Therefore mathematics is consistent.

The above theorem means that the assumption of consistency is deeply embedded in the structure of classical mathematics. The above proof of consistency is carried out in Direct Logic [Hewitt 2010] (a powerful inference system in which theories can reason about their own inferences). Having a powerful system like Direct Logic is important in computer science because computers must be able to carry out all inferences (including inferences about their own inference processes) without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent because of contradiction with the result of Gödel et al. that "if mathematics is consistent, then it cannot infer its own consistency." The resolution is not to allow the self-referential propositions that are used in the proof by Gödel et al. that mathematics cannot self-prove its own consistency. This restriction can be achieved by using type theory in the rules for propositions, so that self-referential propositions cannot be constructed because fixed points do not exist. Fortunately, self-referential propositions do not seem to have any important practical applications.

(There is a very weak theory called Provability Logic that has been used for self-referential propositions coded as integers, but it is not strong enough for the purposes of computer science.) It is important to note that disallowing self-referential propositions does not place restrictions on recursion in computation, e.g., the Actor Model, untyped lambda calculus, etc.

The self-proof of consistency in this paper does not intuitively increase our confidence in the consistency of mathematics. In order to have an intuitively satisfactory proof of consistency, it may be necessary to use Inconsistency Robust Logic (which rules out the use of proof by contradiction, excluded middle, etc.). Existing proofs of consistency of substantial fragments of mathematics (e.g. [Gentzen 1936], etc.) are not Inconsistency Robust.

A mathematical theory is an extension of mathematics whose proofs are computationally enumerable. For example, group theory is obtained by adding the axioms of groups to Direct Logic. If a mathematical theory T is consistent, then it is inferentially undecidable, i.e. there is a proposition Ψ such that ⊬TΨ and ⊬T¬Ψ (which is sometimes called "incompleteness"), because provability in T is computationally undecidable [Church 1936, Turing 1936].

Information Invariance is a fundamental technical goal of logic, consisting of the following:
1. Soundness of inference: information is not increased by inference
2. Completeness of inference: all information that necessarily holds can be inferred

Direct Logic aims to achieve information invariance even when information is inconsistent, using inconsistency robust inference. But that mathematics is inferentially undecidable ("incomplete") with respect to Ψ above does not mean "incompleteness" with respect to the information that can be inferred, because it is provable in mathematics that ⊬TΨ and ⊬T¬Ψ.
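For readers skimming the abstract, the self-proof it describes can be written out as a four-line derivation (my transcription, with ⊢ read as provability in the ambient system, Direct Logic in the paper):

```latex
% The quoted self-proof, step by step.
\begin{align*}
&\text{1. Assume (for contradiction): mathematics is inconsistent.}\\
&\text{2. Then for some proposition } \Phi:\quad \vdash \Phi \ \text{ and } \ \vdash \neg\Phi.\\
&\text{3. From the theorems } \Phi \text{ and } \neg\Phi \text{, derive the contradiction } \bot.\\
&\text{4. Discharge the assumption by Proof by Contradiction: mathematics is consistent.}
\end{align*}
```

The controversy is precisely step 4: it uses, inside the system, the very proof rule whose soundness the conclusion is supposed to underwrite, which is why the abstract concedes the result "does not intuitively increase our confidence."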
The full paper is published at the following location:
I've been toying with this little project during my daily commute (only today learning that it's an area that's been explored off and on since the 60s under the name "adaptive parsers" -- I'll have to update the readme):
Some things that I think are interesting about this experiment are:
* lexing/parsing are handled by the same underlying LALR(1) parser-generator algorithm
The basic idea here is that you feed this thing some basic axioms (your primitive functions, which can be used in subsequent lexer/parser reductions), then you can feed it input to parse and/or extend the syntax of what can be parsed. I have been thinking about using this to allow syntax extension in modules supported by a compiler I've been working on. A simple (working) example from the project page:
// now that we can write comments -- extend this grammar to accept arithmetic expressions
assocl t before p.
assocl e before t.
lexeme int = x:'[0-9]+' -> toInt x.
// with that out of the way, let us now do a little arithmetic
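The essential move in the example above is that the grammar is ordinary data the system can be told to extend between parses. A toy sketch of that "extend-then-parse" loop (the class and method names are mine; the real project regenerates genuine LALR(1) tables for both the lexer and the parser):

```typescript
// Sketch of an adaptive grammar: directives add rules, and the parse
// tables are lazily rebuilt before the next input is parsed.
// The table construction itself is stubbed out; a real implementation
// would recompute LALR(1) item sets and action/goto tables.

type Rule = { lhs: string; rhs: string[] };

class AdaptiveGrammar {
  private rules: Rule[] = [];
  private tablesDirty = false;

  // Corresponds to a grammar-extension directive in the input stream.
  addRule(lhs: string, rhs: string[]): void {
    this.rules.push({ lhs, rhs });
    this.tablesDirty = true; // defer table construction until needed
  }

  parse(input: string): string {
    if (this.tablesDirty) this.rebuildTables();
    // Stand-in for a real table-driven parse.
    return `parsed "${input}" with ${this.rules.length} rules`;
  }

  private rebuildTables(): void {
    // LALR(1) table generation would happen here.
    this.tablesDirty = false;
  }
}

const g = new AdaptiveGrammar();
g.addRule("e", ["e", "+", "t"]);
g.addRule("e", ["t"]);
console.log(g.parse("1+2"));
```

The interesting engineering question, which the real project has to answer and this sketch dodges, is how cheaply the tables can be rebuilt (or patched) after each extension.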
FWIW, I'm not working in an academic setting, just your average compiler project in a large non-technology company. I'm curious to know if others have thoughts/warnings about this approach, or maybe interesting recent research/discussions I should read (I did turn up this old thread from several years back).
See the daily Blondie comic strip for October 3, 2013.
Looks like I'm going to write something up, with docs and code, about a lightweight process model, coming in at least three flavors. The most verbose flavor will be a fiction format, consisting mainly of dialogs between characters discussing ideas at several levels of generality, so the finished code comes with a detailed body of analysis motivating it.
I decided each character should care about a specific app, so that's what they want to talk about technically. For example, one will be working on a distributed network codec of the sort I did in recent years, for compression via distributed caching. Each character will want to evaluate benefits with respect to their app focus.
You can help me get some docs underway by suggesting an app which benefits from concurrency support, especially if scale would surpass comfortable native thread limits, so cooperative multitasking is suitable and meaningful.
If you can think of a programming language angle for one of the apps, that would help too. At least one character should have no app in mind (making other characters roll their eyes), but want to add green threads to a programming language, then think of an app later.
Forth might be a good choice there, because it's simple, and because the stack model makes it easy for the main dev, Wil, to describe behavior of runtime elements in terms of a stack machine. It's an excuse to present a way to visualize interactions between lightweight processes (lanes) and their constituent lightweight threads (jobs).
Naming lightweight processes is hard, because they provide almost no isolation and retain very few of the qualities one associates with a native OS process, beyond an identity and a set of related (green) threads (with a minimum of one) where all execution occurs. So far lane is the best short, one-syllable word I've found that means a track in which related, partially independent things run to get a task moved where it needs to go. But if you can think of a better term, I'm all ears.
Does a programming language need intimate coordination with concurrency models? Whenever I tried to think of how Lisp syntax might change to suit what I had in mind, almost nothing ever seemed necessary beyond a way to name entities peculiar to the model. Symbols like any other seemed enough. For example, to affect your job or lane, a symbol must name them.
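To make the lane/job vocabulary concrete, here is one way the model could be sketched using generators as green threads: each `yield` is a cooperative scheduling point, a job is one generator, and a lane is a named group of jobs sharing a round-robin scheduler. All names follow the post's terminology but the design is my assumption, not Wil's:

```typescript
// Lanes and jobs: cooperative multitasking with generators.
// A job yields control voluntarily; its lane reschedules it until done.

type Job = Generator<void, void, void>;

class Lane {
  private jobs: Job[] = [];
  constructor(public readonly name: string) {}

  spawn(job: Job): void {
    this.jobs.push(job);
  }

  // Run all jobs round-robin until every one has finished.
  run(): void {
    while (this.jobs.length > 0) {
      const job = this.jobs.shift()!;
      if (!job.next().done) this.jobs.push(job); // not done: reschedule at the back
    }
  }
}

const log: string[] = [];
function* worker(id: string, steps: number): Job {
  for (let i = 0; i < steps; i++) {
    log.push(`${id}:${i}`);
    yield; // cooperative yield back to the lane's scheduler
  }
}

const lane = new Lane("codec");
lane.spawn(worker("a", 2));
lane.spawn(worker("b", 2));
lane.run();
console.log(log.join(" ")); // a:0 b:0 a:1 b:1
```

Note how little language support this needs beyond a way to suspend and name things, which is the point of the question above: the symbols `lane` and `job` are ordinary identifiers, not new syntax.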
A "List of ICFP'13 accepted papers, with links to preprint or additional information when available" is being published by user 'gasche' on GitHub: https://github.com/gasche/icfp2013-papers. Any other links, anyone?
Annual Peter Landin Semantics Seminar: Rationalism v Hardware, Prof Richard Bornat, 2 December 2013, 6pm, London
BCS FACS - Annual Peter Landin Semantics Seminar: Rationalism v Hardware
Venue: BCS, First Floor, The Davidson Building, 5 Southampton Street, London, WC2E 7HA | Maps
Cost to attend: Free of charge, but, please book your place via the BCS online booking system.
Book Online: https://events.bcs.org/book/832/
Peter Landin (1930 - 2009) was a pioneer whose ideas underpin modern computing. In the 1950s and 1960s, Landin showed that programs could be defined in terms of mathematical functions, translated into functional expressions in the lambda calculus, and their meaning calculated with an abstract mathematical machine. Compiler writers and designers of modern-day programming languages alike owe much to Landin's pioneering work.
Each year, a leading figure in computer science will pay tribute to Landin's contribution to computing through a public seminar. This year's seminar is entitled "Rationalism versus hardware" and will be delivered by Professor Richard Bornat (Middlesex University).
Welcome & Introduction
Peter Landin Semantics Seminar
Rationalism v Hardware - Professor Richard Bornat (Middlesex University)
Peter Landin's great contribution was to show that programming could be abstracted from hardware and made the subject of mathematical study. That work was fundamentally rationalist: his thesis was that unless programming fits the lambda calculus, we could never understand it. But empiricism cannot be defeated, and hardware inevitably pricked rationalist complacency with fixed size integers, array overflow, null dereference and lots, lots more.
Now the primacy of the rationalist approach has been challenged afresh with the rise of multiprocessors. In the pursuit of speed, execution is no longer sequential and memory is no longer simply shared. A multicore processor is a distributed system, and different designers make different arbitrary choices in an attempt to make their systems programmable. The risks are obvious, and the remedy isn't clear.
Rationalists are fighting back, and we already have rational models of hardware that cover the range of modern processors. But we are far from winning, and the attempt to define C and C++ as hardware-independent concurrent languages seems to be flawed.
This talk will survey the state of the battle - trying not to frighten you too much - and will include a presentation of my own attempt to produce a program logic for multiprocessor concurrency (joint work with Jade Alglave, Peter O'Hearn and Matthew Parkinson). Although Peter Landin didn't approve of program logic, I hope he would have approved of our attempts to lift programming once again above the level of what he always called `instrumentation'.
I was looking for work on GADTs in the presence of subtyping and found the 2013 paper GADTs meet subtyping by Gabriel Scherer and Didier Rémy.
The running example of the paper is (can you guess?) an expression type. One thing that I was looking for specifically that I didn't find addressed in the paper is how to interpret e.g.