Generative Grammar-based Computation
A promising approach to set comprehension and set processing is to use generative grammars. A generative grammar essentially denotes the set of 'sentences' it will generate, and a set of sentences can carry complex information about the world and one's designs for it.
With grammar-based computation, the primary operations would be composition and refinement of grammars. Working with the 'sentences' directly is marginalized, except to the extent that literals describe relatively trivial grammars. This breaks from the traditional application of grammars, which is to analyze a sentence for 'matches' and to provide rewrite rules that generate a new sentence. That traditional application mirrors common functional programming (pattern matching and production rules), and is often 'lossy' in the sense that one cannot regenerate the original sentence from the matching rules. Grammar-based computation doesn't involve 'rewriting'; there is no explicit 'generated sentence'.
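The contrast might be sketched as follows (a toy illustration, not anyone's actual system; the names `rewrite_pair_to_lit` and `union` are my own). The rewrite-style function below is lossy - distinct inputs collapse to one output - while the generative-style composition keeps both component grammars intact:

```python
# Analytic, rewrite-style use of a grammar: match a sentence, emit a new one.
# Lossy: distinct inputs collapse to the same output, so the original
# sentence cannot be regenerated from the result.
def rewrite_pair_to_lit(sentence):
    tag, a, b = sentence          # match ('pair', A, B)
    assert tag == 'pair'
    return ('lit', a + b)         # produce ('lit', A+B)

# Generative style: operate on descriptions of whole sets of sentences.
# Composition keeps both components; nothing is rewritten away.
def union(g1, g2):
    return lambda: set(g1()) | set(g2())

pairs_summing_to_5 = lambda: {('pair', a, 5 - a) for a in range(6)}
small_literals     = lambda: {('lit', n) for n in range(3)}
combined = union(pairs_summing_to_5, small_literals)
```

Here `rewrite_pair_to_lit(('pair', 2, 3))` and `rewrite_pair_to_lit(('pair', 1, 4))` are indistinguishable, whereas `combined` still denotes all nine original sentences.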
Instead, grammar-based computation is very close to logic programming. A grammar can be understood as a sort of predicate that classifies the sentences it generates. For example, a grammar named 'Sum' might generate sentences of the form (A B C) where A, B, C represent integers and A+B=C. One might refine this into a new grammar Add4 that generates sentences of the form (B C) where 4+B=C. Simple constraint programming is also feasible - e.g. all (B C) sentences where C < 100 - and then one can 'generate' solutions to the constraint problem.
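To make the Sum/Add4 example concrete, here is a minimal sketch using Python generators as stand-ins for grammars (the `refine` combinator and its signature are my invention, purely illustrative). Sum enumerates its infinite language fairly; Add4 and the C < 100 constraint are refinements of it:

```python
import itertools

def sum_grammar():
    # Generates every sentence (A, B, C) with A + B == C over the
    # non-negative integers, diagonalized so each appears at a finite index.
    for n in itertools.count():
        for a in range(n + 1):
            yield (a, n - a, n)

def refine(grammar, keep, reshape=lambda s: s):
    # Refinement as composition: restrict the generated set, optionally
    # reshaping each surviving sentence.
    return (reshape(s) for s in grammar if keep(s))

# Add4: sentences (B, C) with 4 + B == C.
add4 = refine(sum_grammar(), lambda s: s[0] == 4, lambda s: (s[1], s[2]))

# Constraint: all (B, C) sentences with C < 100 -- then 'generate' solutions.
# A naive filter over an infinite stream needs an explicit search bound.
solutions = [bc for bc in itertools.islice(add4, 200) if bc[1] < 100]
```

The awkward explicit bound in the last line is exactly the sort of thing a real grammar-based system would need to handle by reasoning about the composed grammars rather than filtering their output streams.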
At the edges of the system, one could ask for sample sentences. Within a grammar, 'queries' would need to be kept simple so as not to hinder composition of grammars. Perhaps G1\G2 is a composition that generates as G1 if G1 is non-empty, and otherwise generates as G2. (Another useful query might be whether G1 produces a unique sentence... perhaps some sort of 'at least one, at most one' approach.)
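Under the same generators-as-grammars encoding as above, both queries can be sketched operationally (the names `fallback` and `is_unique` are mine; note that emptiness of a generated language is only semi-decidable in general, so probing for a first sentence may diverge):

```python
def fallback(g1, g2):
    # G1\G2: generate as G1 when G1 is non-empty, otherwise as G2.
    # We probe G1 for a first sentence; this may not terminate if G1
    # is an empty language whose emptiness we cannot decide.
    sentinel = object()
    it = g1()
    first = next(it, sentinel)
    if first is sentinel:
        yield from g2()
    else:
        yield first
        yield from it

def is_unique(g):
    # 'At least one, at most one': does the grammar generate
    # exactly one sentence?
    sentinel = object()
    it = g()
    if next(it, sentinel) is sentinel:
        return False               # fails 'at least one'
    return next(it, sentinel) is sentinel   # checks 'at most one'
```

For example, `list(fallback(lambda: iter([]), lambda: iter(['x'])))` falls through to the second grammar, and `is_unique` rejects both the empty grammar and any grammar with two or more sentences.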
The challenge I've been facing is making this practical. By practical I mean supporting properties such as:
Some of these features feed one another. For example, if grammars denote sets of sentences, then supporting 'sets' within a sentence effectively means having 'grammars' within sentences. And supporting grammars within sentences, along with expressive patterns to adapt those grammars, might be sufficient to describe continuous fields.
Anyhow, I'm curious as to state-of-the-art and prior-art that I haven't discovered yet, and for any opinions/ideas.
A quick summary of what I have discovered: Attribute grammars and their derivatives are simply too darn complex - they attempt to combine functional composition of attributes with grammatical productions, and the result is painful to write, difficult to read, and the functions are generally irreversible (analytic rather than generative)... and they certainly weren't designed for reasoning across grammar compositions. Alessandro Warth's OMeta and Parsing Expression Grammars are designed for composition, but are also analytic grammars. The Language Machine seemed related, but is really all about stream rewriting.
I'm still reading John Shutt's '93 thesis on RAGs, which seem related to what I'm seeking, but I've yet to fully grok them; I'll admit to being put off by the emphasis on achieving "Turing-power" as though it were a virtue rather than an often unnecessary evil. I just now located an older LtU discussion referencing a paper on two-level grammars and semantics, which seems related. Matt Estes wrote Properties of Infinite Languages and Grammars, which I'll need to read.
I also picked up Grammar Systems: A Grammatical Approach to Distribution and Cooperation and Grammatical Models of Multi-Agent Systems, though I'm really hoping to find something to which I can apply a distributed security model (object capabilities, preferably), and these books describe systems that involve 'trustful' collaboration.