Artificial Intelligence

Does anyone care about learning some AI methods?

Here is an 11-page paper.


null hypothesis

A new discussion topic might get more traction if your original post has more words: at least enough to make a point of your own, or express your interest.

Maybe I can help by asking an obvious question. What about the null hypothesis? (When I skim the paper I don't see a null hypothesis considered: that maybe a language specific to a goal does not help reach the goal.) I can say this several ways, some of which might get more reaction than others. Here's one:

Why pursue a PL (programming language) specific to AI? Why won't any general purpose language work as well? A DSL (domain specific language) can express things more concisely by assuming the model of a domain involved, but doesn't really say something you cannot express in more general syntax and grammar. Presumably there's something about AI in particular that makes it amenable to treatment using a domain targeted language? What would that be?

About Synth

From an AI point of view, Synth would be direct competition to OpenCog. It would run inside the browser, so specific programmed applications could easily be presented to a wide range of users.

Although U might consider Synth an AI domain-specific language, I wouldn't call it that, because it would be a general-purpose language in which U could program regular applications such as games, databases, and other general apps. It just happens that it handles AI very well (in fact, this was one of my conditions in making a new language: to handle AI well).

So, why not JavaScript or something else for AI? It is because at the essence of Synth are mechanisms that are commonly used in AI, so U wouldn't have to program them from scratch, as U would have to do in JavaScript and similar languages. These mechanisms are expression unification and a reactive inference engine.

Let's digress a bit to see the other side of Synth, the general programming part. Synth would provide a new, exotic way of programming. Say we have a structure by which we can store data in variables. Programming in Synth would be managed only through assignments to variables upon events, and events arise when other variables change their values. So we wouldn't have a classic pointer that goes through the code line by line and executes it. We would have a state machine in which variables change their values when events arise. It is yet to be seen how this system would behave in algorithm construction, but from what I've seen so far, the technology is promising.
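The event-driven model described there can be sketched in ordinary code. Here is a minimal Python illustration of reactive variables (my own hypothetical sketch, not Synth itself): assigning to one variable fires registered reactions, which may assign other variables in turn.

```python
# Minimal sketch of reactive variables: assigning to one variable
# triggers rules that recompute dependent variables (no instruction
# pointer stepping through code, just reactions to state changes).

class ReactiveStore:
    def __init__(self):
        self.values = {}
        self.rules = []  # (watched_name, reaction) pairs

    def on_change(self, name, reaction):
        """Register a reaction to run whenever `name` is assigned."""
        self.rules.append((name, reaction))

    def assign(self, name, value):
        self.values[name] = value
        for watched, reaction in self.rules:
            if watched == name:
                reaction(self)  # reactions may assign further variables

store = ReactiveStore()
# Rule: whenever `celsius` changes, update `fahrenheit`.
store.on_change("celsius",
                lambda s: s.assign("fahrenheit",
                                   s.values["celsius"] * 9 / 5 + 32))
store.assign("celsius", 100)
print(store.values["fahrenheit"])  # 212.0
```

A real implementation would also need cycle detection, since a reaction assigning the variable it watches would loop forever.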

And if U can imagine some DSL that would ease programming a part of your application, there is a solution for that: Synth would provide a user-definable parser and interpreter for custom languages, so U can mix Synth and, e.g., Python or other code. But I think this wouldn't be used much, as my experience shows that people usually stick with one language per application. What might be used more widely is user-definable data formats for storing data, such as SVG for graphics. So you would pick a language to program in (let's say Synth) and when you get to the graphics part, you can use SVG syntax to manage shapes on screen.

Synth would attempt to combine a state machine with reactive programming. The state machine assigns values to variables, while the inference engine provides the reactive part for the very same variables. So variables can contain states, reactive values, or both.

To return to AI: most AI theories deal with stateless systems. Synth would attempt to extend itself to easily manage state machines, which would give us a more complete embrace of systems in the Universe.

Actually, I was searching for an AI method for solving different problems. I needed a data structure that could describe states in the Universe. Besides that, I needed some dynamics mechanism to describe how these data change through time. I managed to battle through, and the Synth idea was born.

I find that Synth would be an exciting, innovative platform and I hope U'll like it.

Hi. Popping in to tell you:

Hi. Popping in to tell you: you appear to be saying "AI" when you mean "logic programming". In logic-programming terms, you don't appear to be doing anything all that special, and don't appear to be citing any of the previous research literature on theorem-provers, logic programming languages, computationally representing logics, etc.

In terms of competing with OpenCog for agent-based tasks, mere logic programming with nothing else attached is going to do laughably badly. Sorry.

It's ok, though: we were all young and excited once. I'm pretty sure my first postings on LtU were of this quality.

One protip for life in computing, though: never use the term "AI" if you can even remotely help it. Just don't. The term has been overloaded with so damn many meanings that using it imposes a cognitive burden on your audience of figuring out what you actually mean, while simultaneously signalling amateurishness to most seasoned researchers (who can talk about specific fields, subfields, problems, and subproblems in their definition of the term "AI", be that machine learning, planning, decision assistance, etc.).

Induction and deduction

It would support induction (indirectly) and deduction (directly). That seems like AI to me...

Any reasonable programming

Any reasonable programming language can support induction by just writing out a statistical induction algorithm.
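For example, here is a self-contained Python toy (my own, not from the paper) that induces a general linear rule from observed data, with no special language support:

```python
# Toy statistical induction: given observations of (x, y) pairs,
# induce the general rule y = a*x + b by ordinary least squares.

def induce_linear_rule(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Observations generated by the hidden rule y = 3x + 1:
observations = [(0, 1), (1, 4), (2, 7), (3, 10)]
a, b = induce_linear_rule(observations)
print(a, b)  # 3.0 1.0
```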

Ok, so Synth would have

Ok, so Synth would have direct support just for deduction.

But the paper is about some AI methods; that is why I gave this thread that title. Aren't induction and deduction classified as AI?

induction and deduction

Aren't induction and deduction classified as AI?

About as much as hammers classify as carpentry.

Ok, I'll be more careful

Ok, I'll be more careful with using the term AI.

protip: there is no AI

One protip for life in computing, though: never use the term "AI" if you can even remotely help it.

I second this.

Though I'll add that 'AI' isn't unreasonable when you're just lumping all known methods together (e.g. AI in game development).

game dialogs could use punching up

Someone more interested in AI than me would give better feedback, but I don't want to leave you hanging after your effort to reply. I may offer only token conversation. I didn't see your name in the paper, and infer authorship by your reply. After a trivial starting comment, I'll make slightly more useful remarks.

I stumble reading each U that means you, because it interrupts flow by requiring me to reason about what it means. Normally I don't even notice common standard words, but parsing U requires as much effort by itself as the rest of the sentence. Granted, English orthography is a little weird, so I see the appeal of reform. I'd actually prefer lowercase y as a replacement. I'm not very sound-oriented, y'know, so seeking sound-based spellings to parse is taxing. Sorry if this comment duplicates any you've received before from others.

So the value added is support for expression unification and reactive inference? If those were in a library, instead of a language, users would also benefit, but in an existing PL, perhaps any one they choose, assuming a nice foreign-language interface. Something is modeled; the model must be explained to users. A language on top adds another layer of indirection; now you need to explain the runtime too, to show how it drives the model. Hmm, you might write a library in another language (JavaScript as an example makes me shudder, but my taste is off topic), and then use that to explain what the language means: this syntax implies manipulating the model like this. Then the model could be explained in terms of the library version.

I experience AI features only in console games, and the Google search bar. Both have problems. In Fallout 3, super mutants don't notice when you take out their conversation partners with Lincoln's .44 rifle. In the Google search bar, search devs don't notice when you lose your carefully chosen search terms to dumbed-down pattern matches against what average people are interested in finding.

McCusker, thank you for

McCusker, thank you for the criticism; I appreciate your effort. I like criticism regardless of whether it comes across as good or bad.

Symbolic

Apart from syntax, how does this differ from Prolog? It looks to me like a kind of event-driven Prolog?

How about performance? For some of the AI work I do, the performance of matrix operations (for neural networks), sequential tree operations (like UCT, for search solutions), and monte-carlo methods is critical.

Is this aimed at the Prolog style symbolic AI, rather than deep-learning or other statistically based AI?

The system I described looks

The system I described looks to me more like Type Theory or Constructive Logic than like Predicate Logic (which, I assume, is what Prolog exhibits).

Performance? It would run inside the browser, written in pure JavaScript, so performance isn't expected to be spectacular. What I gain is the ability to present potential applications to a wide range of users through a web page.

The first real-world application I plan to write in Synth is a web site for solving math, physics, and chemistry problems, together with a theorem prover and maybe some general question-answering mechanism. I guess one could also write an app for finding new quantum formulas from experimental data by induction.

Trinitarianism

Logic, Category Theory and Type Theory are all fundamentally the same (does that make them members of a meta-category?). Different type-systems map to different logics:

http://ncatlab.org/nlab/show/computational+trinitarianism

I don't see any problem with expressing intuitionistic (constructive) logic in Prolog: you just don't use 'not' or 'cut', and you write clauses for the not_x case separately. So the clause abc(...) is true if abc is derivable from the database, and not_abc(...) is true if not_abc is derivable. This allows both to be true at the same time.
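The same separate-predicates idea can be sketched outside Prolog. In this hypothetical Python forward-chainer (my own toy, with invented facts and rules), abc and not_abc are simply two independent derivable predicates, so a fact can be proven, refuted, both, or neither:

```python
# Sketch: 'abc' and 'not_abc' as two independent predicates.
# Nothing forces them to be complements, so four situations are
# possible: proven, refuted, both (inconsistent data), or unknown.

facts = set()
rules = [
    # (conclusion, premises): derive conclusion if all premises hold
    (("abc", "socrates"), [("human", "socrates")]),
    (("not_abc", "rock"), [("mineral", "rock")]),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for conclusion, premises in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

facts.update([("human", "socrates"), ("mineral", "rock")])
forward_chain(facts, rules)
print(("abc", "socrates") in facts)      # True
print(("not_abc", "socrates") in facts)  # False: unknown, not refuted
```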

interesting :)

interesting :)

Ascetism

Logic, Category Theory and Type Theory are all fundamentally the same

In somewhat similar fashion to the way Turing machines and lambda calculus are fundamentally the same; which is to say, for some theoretical purposes, but not for actually doing things with them in practice. Our theoretical tools aren't tuned toward practicality, giving theoreticians a practical blind spot, which is part of why I'm interested in building different kinds of tools.

Symbolic methods would be

Symbolic methods would be wired inside Synth, while statistical methods would be achievable indirectly by regular coding.

The main differnce from Type

The main difference from Type Theory would be in declaring types and structures of knowledge. What would in Type Theory be written as:

Num: MathExp
SimpleExp: MathExp
Formula: MathExp

in Synth would be written as:

MathExp {Num | SimpleExp | Formula}

in some inline BNF-like manner. I think that, besides the "event driven" architecture, this kind of knowledge declaration would be the only innovation.

This kind of structure would also support some kind of inheritance mechanism like in:

Vehicles (
    Type {
        Car |
        Ship
    },
    Use {
        Transport |
        Cargo
    }
)

"car" and "ship" form "vehicles" inherit "use". I think that this kind of inline structure would be more comfortable for programmers than plain one-plane knowledge declaration (like classes in OOP are all declared in the same level - if we leave out packages notation. Here any atom can fulfill a role of package, and I find it pretty cool). So we would get some kind of Type Theory and OOP in one language, which means that with the same language we can program and deduce.

combinatorial explosion

The usual difficulty of working with logic programming is that it is difficult to reason about or control performance. This resists developing applications at large scales.

AI, as a field, spends a lot of research effort figuring out how to avoid or minimize combinatorial expansions and searches, and generally improve performance. There are generic algorithms like A* search or weighted grammars. And there are many problem specific techniques, e.g. for computer vision or path planning or natural language processing.
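To make the generic-algorithm point concrete, here is a minimal A* search in Python (my own sketch; the grid world and Manhattan heuristic are illustrative assumptions):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A*: expand nodes in order of cost-so-far + heuristic.
    An admissible heuristic prunes much of the combinatorial space."""
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in neighbors(node):
            heapq.heappush(frontier,
                           (cost + step_cost + heuristic(nxt),
                            cost + step_cost, nxt, path + [nxt]))
    return None

# 4-connected 3x3 grid, Manhattan-distance heuristic toward (2, 2):
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx <= 2 and 0 <= y + dy <= 2]

path = a_star((0, 0), (2, 2), grid_neighbors,
              lambda p: abs(p[0] - 2) + abs(p[1] - 2))
print(len(path))  # 5 nodes: a shortest path of 4 steps
```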

Do you have any plans for Synth to address this challenging aspect of AI development? I did not see any when reading, but I wasn't very thorough.

I planned just knowledge

I planned just the knowledge format and deduction.

Other fields should be reachable through regular programming over knowledge-base particles.

I don't know; maybe something else would also be interesting to wire into the language, but right now I don't have any ideas (though I gladly accept new ones).

Logic + Control

The usual difficulty of working with logic programming is that it is difficult to reason about or control performance. This resists developing applications at large scales.

"Resists"? Given the close correspondence between relational databases and logic programming (cf Datalog), I don't think this claim holds up. Indeed, I'd say that logic programming is one of the most widely deployed paradigms at large scales! :-)

relational databases & logic programming

If you used SQL to encode application logic in the same manner as logic programming - i.e. with deep joins, transitive closures, lots of fine-grained 'views' (representing derivable facts), etc. - you would encounter similar difficulty reasoning about and controlling performance.

In practice, logic programming is not how SQL is used. And, indeed, SQL is not well optimized (either in syntax or performance) for that use case.
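To illustrate the contrast, here is what a logic-programming-style query (a transitive closure over a parent relation) looks like when pushed through SQL, sketched with Python's built-in sqlite3 and a made-up two-column schema:

```python
import sqlite3

# Transitive closure ("ancestor") over a parent relation: the kind of
# deep recursive query that is one line in Datalog but needs a
# recursive common table expression in SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parent (child TEXT, par TEXT)")
db.executemany("INSERT INTO parent VALUES (?, ?)",
               [("c", "b"), ("b", "a")])

rows = db.execute("""
    WITH RECURSIVE ancestor(child, anc) AS (
        SELECT child, par FROM parent
        UNION
        SELECT a.child, p.par
        FROM ancestor a JOIN parent p ON a.anc = p.child
    )
    SELECT child, anc FROM ancestor ORDER BY child, anc
""").fetchall()
print(rows)  # [('b', 'a'), ('c', 'a'), ('c', 'b')]
```

The Datalog equivalent would be two clauses; and reasoning about the performance of a stack of such views is exactly where the difficulty described above shows up.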

Agreed

Oh, I fully agree - hence the smiley. SQL could be written (and optimised) for that style, and Prolog could be written like SQL. But they're not.

Dyna

In terms of prior art, I recommend looking at Dyna, which demonstrates that many ML algorithms are very naturally expressed in a datalog-like language with support for incremental evaluation.

weighted solutions and composition

I find it interesting that integrating an 'optimization' task (i.e. weighted solutions) with a search (constraint solver, grammar, planner, logic query, etc.) can result in a much more incremental computation and compositional subprograms. In some ways, it seems counter-intuitive that 'find an optimal solution' scales more readily than 'find any solution', but the weights can help direct the solver, resulting in less time wasted searching less valuable (or less probable, depending on the nature of the weighting) solution spaces. It means less time enumerating all solutions, more focusing on the top K promising areas (thus ameliorating combinatorial explosions).
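A toy Python sketch of the idea (the problem and weighting are invented for illustration): a priority queue ordered by weight yields only the top-K complete solutions and leaves most of the combinatorial space untouched.

```python
import heapq

def top_k_solutions(partial, extend, weight, k):
    """Best-first enumeration: pop the most promising partial solution,
    extend it, and stop after k complete solutions, leaving the rest
    of the combinatorial space unexplored."""
    frontier = [(-weight(partial), partial)]
    results = []
    while frontier and len(results) < k:
        w, sol = heapq.heappop(frontier)
        children = extend(sol)
        if not children:             # no extensions: a complete solution
            results.append((-w, sol))
        for child in children:
            heapq.heappush(frontier, (-weight(child), child))
    return results

# Toy problem: build 3-letter strings over {a, b}; weight favours 'a'.
extend = lambda s: [s + c for c in "ab"] if len(s) < 3 else []
weight = lambda s: s.count("a")
print(top_k_solutions("", extend, weight, 2))
# [(3, 'aaa'), (2, 'aab')]
```

(For guaranteed optimal ordering the weight of a partial solution would need to bound the weight of its extensions, as in A*; this sketch only shows the pruning effect.)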

I remember someone writing an excellent blog article explaining this with details and examples, but I can't find it again. :(

I agree that Dyna is good prior art, as it leverages a weighted logic.

Do let me know if you recall

Do let me know if you recall the article. I'm very interested in this style of programming.

Can you summarise the semantics of Synth?

Hi Ivan, I've glanced at your paper and I don't understand how to begin to parse out either the syntax or semantics of the Synth language it describes.

At a syntactical level, it seems to be a hybrid of C/Java syntax - parentheses and braces representing some kind of blocks, control characters including @, | and comma, mashed up with XML-like tags. This doesn't map onto any low-level syntax structure that I recognise, so I don't know what terminates a block, what these markup characters mean, etc.

I'm also struggling to understand just what sort of semantic constructs your language deals with - are these expressions supposed to represent logical propositions? Relations? Functions? Something else?

You then go straight into a very complex example expression, with no explanation of the low-level elements or how they're composed, and start immediately talking about 'Artificial intelligence' with no context.

I have the impression that you're perhaps thinking along similar lines to what I've been exploring (ie, a Prolog like logic language based on relations rather than predicates) -- but then again, you might be talking about something entirely different.

Can you please point us to a much more basic introduction to Synth, starting from absolute first principles as if we have zero background understanding of what you're trying to achieve?

And I guess here it is:

"Project Synth"

https://docs.google.com/document/d/1MW592CjCnvpfJ-1lSwcRpDgvyv5WhBoJklbAGwpE5EY/edit?pli=1

Had to google a bit to find it; a lot of your previous posts were dead links.

Unfortunately, I'm not much the wiser for having read this document. It seems to be something midway between a type theory, an ontology and a parser?

While I like the idea of reactive programming (and I'd especially love to see reactive logic programming), I don't see how to get there from this document. And as specified, this language doesn't much feel to me like a 'universal parser' at all, since the language reserves so many symbols of its own.

There's an outline of something interesting here, but I'd like to see both more detail, and less surface syntax, if that makes sense.

If you look more carefully

Hi natecull :)

If you look more carefully inside the short "Syntax" paragraph, you will see an invitation to a parser site. For a year now I've been developing a parser which will be the main essence of Synth. On that site you can get familiar with the syntax part of Synth.

The syntax part is just a data-shape specifier, the type of a variable. It says what can be parsed inside the semantic braces. Just imagine a regular BNF parser. Then imagine that you can define the BNF grammar and the text you want to parse both in the same file. The grammar would be the "syntax" and the text to parse would fit into the "semantics". The semantics are really the values that will be held at runtime inside a variable.

Now let's expand our semantic notation to pair different parsed texts with "|=". Now we get expressions A |= B that say "when A is a content of a variable, then B is also a content of the same variable". That variable can then be instanced somewhere else (only touched on in the document; the document is not the full specification of Synth, it covers just the AI methods). When instancing a variable we have to pass an initial value to the instance. Now, if the initial value is A, that variable will automatically contain B.

To summarize: we have variables that can hold data. They have a type (syntax) and, at runtime, a value (semantics).
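Reading that description, one hypothetical Python model of the A |= B behaviour might be the following (my interpretation, not actual Synth semantics):

```python
# Sketch of a variable whose pairing rules (A |= B) are applied on
# instancing: setting the contents to A automatically adds B as well,
# closing the contents under all |= rules.

class SynthVar:
    def __init__(self, pairings):
        self.pairings = pairings        # list of (A, B) rules
        self.contents = set()

    def instance(self, initial):
        """Pass an initial value, then close the contents under |=."""
        self.contents = {initial}
        changed = True
        while changed:
            changed = False
            for a, b in self.pairings:
                if a in self.contents and b not in self.contents:
                    self.contents.add(b)
                    changed = True
        return self.contents

v = SynthVar([("rain", "wet streets"), ("wet streets", "slippery")])
print(sorted(v.instance("rain")))
# ['rain', 'slippery', 'wet streets']
```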

Maybe I should write a full Synth specification first, if you're interested in reading it, but I hoped to spare you the boring details. And it didn't make sense to write a specification before coding down the project, while the AI part, I thought, would make interesting reading for programmers.

Try to visit the parser site that you missed; it is the key to the document.

Attribute Grammars

How is this different from an attribute-grammar, apart from different syntax?

I took a superficial look at

I took a superficial look at attribute grammars. The similarity is that each node can contain a runtime value. One difference (apart from syntax) is that those values have to be abstract syntax trees of the type of the node. The other difference is the pattern matcher, which assigns extra values to a node when the node contains a value matching some pattern.

ASTs

... those values have to be abstract syntax trees ...

Playing with ASTs sounds like Lisp. Also reminiscent of M. C. Harrison's remark (in 1969) that "any programming language in which programs and data are essentially interchangeable can be regarded as an extendible language". Also reminds me of adaptive grammars (eg, RAGS (pdf)).

Hi Ivan. I'm looking at your

Hi Ivan. I'm looking at your Moony Parser docs at http://parser.moonyweb.com/ but I'm still struggling to understand how you intend to get from parsing raw text to a programming language.

Specifically, my idea of a 'programming language' just at the data level includes having at least one data containment or structuring concept (ie, arrays, lists, objects, dictionaries), and at least one and preferably several built-in raw atom types (eg integers, floats, characters, strings in ASCII/UTF-8/UTF-16/etc are usually considered fundamental; many logic languages also make distinctions between eg strings, symbols, and variables as primary type concepts).

So far I can't see how Moony Parser has 1) any built-in atom types other than 'string' or 2) any container or object type. Without that being defined, I'm not sure how you intend to model parse trees. The Lisp approach would be to use cons cells for structure, but Moony doesn't feel especially Lispy, so what's your underlying semantics for containment? Arrays? Objects? Dictionaries? Just raw strings again (like Unix)?

On top of that, a language also needs to have an underlying evaluation semantics (with the associated body of mathematical theory) describing how new statements or judgements are produced: whether function evaluation, term rewriting, or logical inference. You seem to be talking about logical inference, but you don't specify _what_ logic, what kind of inference scheme, etc, etc. Then you have to make sure all the edge cases of your syntax, data-structuring semantics, and evaluation scheme all interact cleanly.

I'm guessing your |= symbol is perhaps a reference to the 'turnstile' in constructive logic? Or is it the material implication of first-order predicate logic? Can you please specify its semantics in full? (There's, eg, a huge difference between FOPL and CL in terms of how 'not' and especially 'not not' is handled.)

You talk about 'texts' and 'variables'. What are texts? Are they strings? Are they sequences of objects? If so, what kind of sequence, and what kind of objects? Similarly, what are variables and how are they modeled in the fundamental underlying data structures of your language? Can a variable contain arbitrary text? Can text contain arbitrary variables? If I have a number, is it a text, a variable or something else? And so on.

You say "And it didn't make sense to write a specification before coding down the project", but that's just about the opposite of a statement I understand. How can you possibly 'code' something if you don't know _what_ it's supposed to be doing in the first place?

So far I can't see how Moony

So far I can't see how Moony Parser has 1) any built-in atom types other than 'string' or 2) any container or object type. Without that being defined, I'm not sure how you intend to model parse trees. The Lisp approach would be to use cons cells for structure, but Moony doesn't feel especially Lispy, so what's your underlying semantics for containment? Arrays? Objects? Dictionaries? Just raw strings again (like Unix)?

Atom types are @Number, @Variable, @RegExp, @String...

On top of that, a language also needs to have an underlying evaluation semantics (with the associated body of mathematical theory) describing how new statements or judgements are produced: whether function evaluation, term rewriting, or logical inference. You seem to be talking about logical inference, but you don't specify _what_ logic, what kind of inference scheme, etc, etc. Then you have to make sure all the edge cases of your syntax, data-structuring semantics, and evaluation scheme all interact cleanly.
I'm guessing your |= symbol is perhaps a reference to the 'turnstile' in constructive logic? Or is it the material implication of first-order predicate logic? Can you please specify its semantics in full? (There's, eg, a huge difference between FOPL and CL in terms of how 'not' and especially 'not not' is handled.)

It is a kind of constructive logic.

You talk about 'texts' and 'variables'. What are texts? Are they strings? Are they sequences of objects? If so, what kind of sequence, and what kind of objects? Similarly, what are variables and how are they modeled in the fundamental underlying data structures of your language? Can a variable contain arbitrary text? Can text contain arbitrary variables? If I have a number, is it a text, a variable or something else? And so on.

Variables are simply parse trees.

You say "And it didn't make sense to write a specification before coding down the project", but that's just about the opposite of a statement I understand. How can you possibly 'code' something if you don't know _what_ it's supposed to be doing in the first place?

It is all lined up in my head. I'll write the full specification, as you seem interested. Stay tuned; I'm back in a day or two.