A production rule system matching algorithm with match cost logarithmic with respect to the size of the knowledge base (rules + facts)

I propose an algorithm for matching facts against rule sets (or against fact sets) whose time complexity is logarithmic with respect to the size of the knowledge base, measured as the number of rules plus facts in working memory. As far as I am aware, the current state of the art, RETE and its derivatives, is polynomial.
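To make the complexity claim concrete, here is a minimal sketch of how a match query can cost O(log n) in the number of facts: keep facts in an index ordered by their textual form and locate a ground-prefix query with binary search. This is only an illustration of the general idea, not syntreenet's actual data structure or algorithm, and the names (`facts`, `match_prefix`) are hypothetical.

```python
# Illustration only: logarithmic fact lookup via a sorted index.
# Not syntreenet's algorithm; just the general idea that an ordered
# index over fact syntax makes locating matches O(log n).
import bisect

facts = sorted([
    "parent(ann, bob)",
    "parent(ann, col)",
    "parent(bob, dan)",
    "parent(col, eve)",
])

def match_prefix(prefix):
    """Return all facts starting with `prefix`.

    Locating the range takes O(log n) via binary search; only the
    facts inside the matching range are then returned.
    """
    lo = bisect.bisect_left(facts, prefix)
    hi = bisect.bisect_right(facts, prefix + "\xff")  # upper bound of the prefix range
    return facts[lo:hi]

print(match_prefix("parent(ann, "))  # ['parent(ann, bob)', 'parent(ann, col)']
```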

The algorithm is described in the documentation of a reference implementation I have published, in Python (sorry):

syntreenet on GitLab

Thanks!


An update

syntreenet now provides a scalable rules engine for any formal language described by a parsing expression grammar (PEG).
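As a rough illustration of what "described by a PEG" means, the sketch below defines a tiny grammar for facts and parses one. It uses the parsimonious PEG library as a stand-in, and the grammar itself is hypothetical; syntreenet may specify and consume grammars quite differently.

```python
# Illustration only: a user-supplied PEG defines the syntax of facts,
# and the engine then works on sentences of that language. The grammar
# here is hypothetical and not part of syntreenet.
from parsimonious.grammar import Grammar

fact_grammar = Grammar(r"""
    fact  = name "(" args ")"
    args  = name ("," ws? name)*
    name  = ~"[a-z][a-z0-9_]*"
    ws    = ~"\s*"
""")

tree = fact_grammar.parse("parent(ann,bob)")
print(tree.text)  # the whole matched fact: 'parent(ann,bob)'
```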