LtU Forum

A production rule system matching algorithm with match cost logarithmic wrt the size of the knowledge base (rules + facts)

I propose an algorithm for matching facts against rule sets (or against fact sets) with a time complexity that is logarithmic with respect to the size of the knowledge base, measured in terms of rules + facts in working memory. As far as I am aware, the current state of the art, RETE and its derivatives, is polynomial.

A description of the algorithm accompanies a reference implementation I have published in Python (sorry); the documentation can be found here:

syntreenet on GitLab
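
To make the complexity claim concrete, here is a toy Python illustration (not the syntreenet algorithm itself, just the ordered-index idea): if the stored facts are kept ordered, candidates for a match are located by bisection rather than by scanning every rule and fact.

```python
import bisect

# Toy illustration only, not the syntreenet algorithm: facts are kept
# sorted, so candidates are located in O(log n) comparisons instead of
# a scan over the whole knowledge base. (The list insert below is O(n);
# a balanced tree would make updates logarithmic as well.)
class FactIndex:
    def __init__(self):
        self._facts = []  # sorted list of fact tuples

    def add(self, fact):
        i = bisect.bisect_left(self._facts, fact)
        if i == len(self._facts) or self._facts[i] != fact:
            self._facts.insert(i, fact)

    def matches(self, prefix):
        """Yield every fact sharing a leading prefix, found by bisection."""
        i = bisect.bisect_left(self._facts, prefix)
        while i < len(self._facts) and self._facts[i][:len(prefix)] == prefix:
            yield self._facts[i]
            i += 1

kb = FactIndex()
kb.add(('parent', 'tom', 'bob'))
kb.add(('parent', 'tom', 'liz'))
print(list(kb.matches(('parent', 'tom'))))  # run located in O(log n)
```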

Thanks!

A pretty printing algorithm

A pretty-printing algorithm that works for strict languages, with an example implementation in OCaml.

For a relational language, how to treat key/values and streams as relations with operators that add or remove "columns"

I'm building a relational language in which I wish to treat everything as a relation. The main point is to provide a common core of relational operators (aka queries) that works across all of them. A "relation" here is a table with a header of fields, plus columns and rows.

I have a container that matches this directly and is internally an array:



id | name | age
---|------|----
 1 | Jhon | 18

Now, I have other containers that are key/value, like a hashmap and a BTreeMap, and single-value ones, like generators, ranges, and vectors. Only generators, being read-only and forward-only, can be said to output a fixed schema.

You can say a hashmap, for example, is a relation that looks like:



Key | Value
----|------
 1  | Jhon

And a vector is one that looks like:



it
---
 1
 2
 3

For all of them, I want to provide the same operations, like:

table ?where #age = 18
kv    ?where #key = 1
array ?where #it = 2

Now my dilemma is what to do for operations that add or remove columns, like joins and projections:

table ?select *, #age + 10 as older
kv    ?select *, #value |> String.Upper as name
array ?select *, #it * 10 as calc

What could be the best option here?

1- Say that kv/array have fixed schemas and can't grow or shrink their headers

2- Say a KV has only the KEY fixed, while the value header is flattened, so it becomes:

[key, value, name]
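
Concretely, I picture option 2 as something like this (Python-ish sketch, names invented):

```python
# Option 2 sketch: the key column stays fixed, value columns are flattened.
headers = ["key", "value", "name"]   # schema stored once, beside the container
kv = {1: ("Jhon", "JHON")}           # key -> tuple holding the non-key columns
```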

The problem is not what to OUTPUT; I could just output tuples. It is how to REPRESENT the relation if it holds more than the key and the value.

So, if I have 10 columns, what is the best option for putting them inside a KV? Consider that the relational model allows renaming columns, as well as adding and removing them.

Another problem is that KV containers like BTrees allow you to model... trees. If I instead think of a BTree as an index behind a table, the language loses the ability to "naturally" model trees. BTW, I want to have AGDTs.

P.S.: Another potential question is: what kind of container could work like a "table" but also like hashmaps/trees? How do I model a type that allows several "backends"?
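
For what it's worth, here is a rough Python sketch (all names invented, purely to picture the idea) of one relation interface sitting over several backends: KV and vector containers are flattened into named columns, and ?select can grow the schema uniformly.

```python
# Sketch only: one relation interface over several backends.
class Relation:
    """A relation is a schema (column names) plus a sequence of row tuples."""
    def __init__(self, columns, rows):
        self.columns = list(columns)
        self.rows = [tuple(r) for r in rows]

    @classmethod
    def from_table(cls, columns, rows):
        return cls(columns, rows)

    @classmethod
    def from_kv(cls, mapping):
        # A hashmap/BTreeMap becomes a two-column relation [key, value].
        return cls(["key", "value"], mapping.items())

    @classmethod
    def from_vector(cls, values):
        # A vector becomes a one-column relation [it].
        return cls(["it"], ((v,) for v in values))

    def where(self, pred):
        # pred receives a dict {column: value} for each row.
        return Relation(self.columns,
                        (r for r in self.rows
                         if pred(dict(zip(self.columns, r)))))

    def select(self, **derived):
        # Keeps the existing columns and appends derived ones, so the
        # schema can grow no matter which backend the relation came from.
        new_columns = self.columns + list(derived)
        def extend(row):
            env = dict(zip(self.columns, row))
            return row + tuple(f(env) for f in derived.values())
        return Relation(new_columns, (extend(r) for r in self.rows))

# table ?where #age = 18
table = Relation.from_table(["id", "name", "age"], [(1, "Jhon", 18)])
print(table.where(lambda r: r["age"] == 18).rows)   # [(1, 'Jhon', 18)]

# kv ?select *, #value |> String.Upper as name
kv = Relation.from_kv({1: "Jhon"})
named = kv.select(name=lambda r: r["value"].upper())
print(named.columns, named.rows)   # ['key', 'value', 'name'] [(1, 'Jhon', 'JHON')]
```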

P.S.: Originally posted here: https://www.reddit.com/r/ProgrammingLanguages/comments/awkdrb/for_a_relational_language_how_treat_kv_and/ but it got no good answers... so I am asking here with a bit more detail.

Streaming Language Rewrite Processing (SLRP)

Streaming Language Rewrite Processing (SLRP) is a low-level computing model that is essentially characterized by limited-memory processors rewriting a much larger input stream. Computation under this condition requires multiple passes, but is highly amenable to pipelining and partitioning, and thus parallelism. SLRP processors may have side channels to external devices that we drive through ‘effectful’ rewrite rules.

TLDR: SLRP is a very promising alternative to the von Neumann architecture. SLRP offers advantages for security, concurrency, distribution, and scalability. However, SLRP performs poorly for data plumbing and loops because there is no pass-by-reference within the stream. This weakness can be mitigated by effects, shunting a region of the stream to a remote device and keeping a small placeholder – a reference.

The article develops a Turing-complete foundation for an SLRP machine code based on concatenative combinatory logic, and sketches several extensions. The challenge is that the operands of a combinator may be larger than a processor's memory; e.g., for a copy operator `c[X] == [X][X]`, the value `[X]` might be larger than what our processor can remember, so a processor cannot simply copy and paste. Solving this involves divide-and-conquer tactics and a notion of open-ended blocks.
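
As a toy illustration of that challenge (in Python; this is not the paper's machine code), here is a single bounded-memory pass that applies `c[X] == [X][X]` only when the operand block fits in the processor's buffer; larger operands pass through untouched and must wait for the divide-and-conquer tactics the article develops:

```python
# Toy model of one SLRP pass: a processor with a small buffer streams
# tokens left to right and rewrites c[X] -> [X][X] when [X] fits.
def rewrite_pass(tokens, max_block=8):
    toks, out, i = list(tokens), [], 0
    while i < len(toks):
        if toks[i] == 'c' and i + 1 < len(toks) and toks[i + 1] == '[':
            # Try to buffer the operand block [X].
            depth, j, block = 0, i + 1, []
            while j < len(toks):
                block.append(toks[j])
                depth += (toks[j] == '[') - (toks[j] == ']')
                j += 1
                if depth == 0 or len(block) > max_block:
                    break
            if depth == 0 and len(block) <= max_block:
                out.extend(block + block)   # c[X] == [X][X]
                i = j
                continue
            # Block too large for our memory: pass 'c' through unchanged.
        out.append(toks[i])
        i += 1
    return out

print(rewrite_pass(list('c[ab]')))  # ['[', 'a', 'b', ']', '[', 'a', 'b', ']']
```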

Paper on ParaSail published in <Programming> journal V3.3; new release also available

We very recently published a long-ish article on the ParaSail programming language (ParaSail = Parallel Specification and Implementation Language) in the relatively new Journal on the Art, Science, and Engineering of Programming, Volume 3, Issue 3:

ParaSail: A Pointer-Free Pervasively-Parallel Language for Irregular Computations

There is also a new (version 8.0) release of the ParaSail binaries (Linux, Mac, and partial Windows) and sources at:

http://parasail-lang.org

A direct download is available at: http://bit.ly/psl8rel

Abstraction Tiers of Notations

DZone has published three of my articles.

A few key points from them:

  • Abstractions are not created equal; they can be sorted into tiers depending on the structural elements they use and the way these elements are used.
  • The tiers are closely tied to cognitive development stages as described by J. Piaget and M. L. Commons.
  • Each new tier of abstractions is built upon constructs of the previous tier.
  • Adoption of a new tier causes a new major generation of general-purpose programming languages to appear.
  • Based on the model, the next generation of general-purpose programming languages will likely be System-Oriented Programming, which will use ‘system’ as a new category of type that can be instantiated. Current dependency injection frameworks are early and incomplete attempts to implement this new abstraction.
  • Based on the model, the evolution of database languages should parallel the evolution of general-purpose programming languages, with the same abstraction tiers being adopted.

Note: I have described some elements of this model in previous discussions on LtU.

Looking for papers on covariance and contravariance

Hello,

I am looking for papers and a particular paper I used to have on covariance and contravariance.

The paper I had many years ago, probably around 2006 or 2007 when I originally started looking into type theory and did not yet realize its relevance, contained many inference rules for covariance and contravariance that used plus and minus symbols within the rules.
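
To illustrate the notation I mean: positions were marked with plus for covariant and minus for contravariant occurrences, and the rules looked roughly like the standard function-subtyping rule (my reconstruction from memory, not a quote from the paper):

```latex
% Function types: contravariant (-) in the argument, covariant (+) in the result.
\frac{S' <: S \qquad T <: T'}{S \to T \;<:\; S' \to T'}
```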

I am now researching this area and am interested in any papers on this topic.

Many thanks in advance,

Aaron Gray

Functional Design Patterns - Relating Haskell Typeclasses to Software Design Patterns

Recently, while re-reading the Haskell Typeclassopedia, I thought it would be a good exercise to map the structure of software design patterns to the concepts found in the Haskell type class library and in functional programming in general.

Searching the web, I found some blog entries studying specific patterns, but I did not come across any comprehensive study. As it seemed that nobody had done this kind of work yet, I thought it worthwhile to spend some time on it and write down my findings.

I think this kind of exposition could be helpful if you are either:

- a programmer with an OO background who wants to get a better grip on how to implement more complex designs in functional programming
- a functional programmer who wants to get a deeper intuition for type classes.

I have set this up as a GitHub project with full source code for all examples:

The Patternopedia

POLA Would Have Prevented the Event-Stream Incident

POLA Would Have Prevented the Event-Stream Incident by Kate Sills

The JavaScript world was rocked this week by news that the popular npm package event-stream included malicious code that attempted to steal the private keys of certain Bitcoin users.

Since the attack was discovered, both the JavaScript community and the cryptocurrency community have been passionately debating how to prevent such an attack. At Agoric, we think this attack was entirely preventable, and the answer is POLA, the Principle of Least Authority.

This npm/event-stream debacle is the perfect teaching moment for POLA (the Principle of Least Authority), and for the need to support least authority for JavaScript libraries. My talk Securing EcmaScript, presented to Node Security, explained many of these issues prior to this particular incident.

For LtU, my best explanation of POLA is Verify What? Navigating the Attack Surface, given at the "Formal Methods Meets JavaScript" workshop at Imperial College.
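
As a tiny sketch of what least authority looks like in practice (illustrative only, in Python rather than JavaScript, with invented names): rather than letting a dependency inherit the program's ambient authority, hand it exactly the one capability it needs.

```python
# Illustrative POLA/attenuation sketch (invented names, not from the article).
def make_append_only_log(path):
    f = open(path, 'a')
    def log(line):
        f.write(line + '\n')
        f.flush()
    return log  # the capability: append to one file, nothing more

log = make_append_only_log('/tmp/app.log')
# A third-party package initialized with `log` can write its log lines,
# but it cannot read files, open sockets, or steal wallet keys: it was
# never handed that authority.
# third_party.init(log)
```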

Video on Unison/comparison to Haskell/Monads/distributed computing

https://youtu.be/rp_Eild1aq8

Monads are passé, apparently.

Previously mentioned on LtU rather a long time ago: http://lambda-the-ultimate.org/node/5151
