Haskell for AI?

Machine learning involves a lot of math, and AI has traditionally used logic and functional languages, not imperative ones.

So Haskell should be a good fit for this stuff, no? Yet I don't see many applications using Haskell for AI. What is the standard AI language today? Still LISP?

Moo.

Is there any special reason for the historical preference of Lisp and Prolog in AI research?

I can't imagine imperative languages (80s and upwards; Fortran and Cobol cripple the mind ;) ) being a real hindrance to AI development.

Common origins

The biggest single reason is probably common origins. LISP and the AI Lab at MIT co-evolved. By the time the post-80s languages existed, the commitment to LISP and Prolog among AI people was well established, and the embedded mental base made change difficult. Today, many of the computational problems that underlie AI are commonly implemented in other languages. Most of today's leading SAT solvers are implemented in C or C++ for performance reasons, and modern provers all seem to be implemented in some variant of ML.
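For a sense of what sits at the core of those SAT solvers, here is a minimal sketch of the classic DPLL procedure in Python. This is purely illustrative: production C/C++ solvers add clause learning, watched literals, and branching heuristics that this toy omits, and the clause encoding (signed integers for literals) is just one common convention.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT search. Clauses are lists of ints:
    positive for a variable, negative for its negation."""
    if assignment is None:
        assignment = {}
    # Simplify the formula under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    # Unit propagation: a one-literal clause forces its variable.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    # Otherwise branch on the first unassigned variable.
    l = simplified[0][0]
    for value in (True, False):
        result = dpll(clauses, {**assignment, abs(l): value})
        if result is not None:
            return result
    return None
```

For example, `dpll([[1, -2], [2, 3], [-1, -3]])` finds a satisfying assignment, while `dpll([[1], [-1]])` returns `None`.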

Assuming we get it out there successfully, it will be interesting to see whether BitC invades either of those niches a bit. The language supports, but does not mandate, pure programming, which *might* make it a candidate for provers (I am skeptical because of the minimal core logic challenge, but let's see). It ought to be able to get all of the performance of C for things like SAT solvers. I don't say that it will invade either space, and I'm starting to feel like I should shut up about the BitC stuff until we get it released. So chalk this up as an "I'm curious and interested" rather than any sort of "today, we are going to take over the world" sort of thing.

Really looking forward

to BitC getting out there and taking over the world :-)

Historically more neats than

Historically more neats than scruffies got research funding. As a result the driving focus in AI research was dominated by the idea that a sufficiently powerful logic would allow us to describe an AI. For this reason functional and logical languages were a good fit; they looked similar to the type of clean mathematical approach being tried.

Then machine learning came along and showed that throwing lots of algorithmic approaches at well defined problems created progress quickly. The main requirement in ML research tends to be lots of number-crunching ability. Although Prolog is still quite common in areas of ML (such as Inductive Logic Programming) I suspect that something that allows people to try out number-crunching approaches (like Matlab) is currently the most popular.
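As a concrete illustration of the kind of number-crunching that dominates ML work, here is a line fitted by batch gradient descent, sketched in plain Python rather than Matlab. The data, learning rate, and iteration count are invented for the example.

```python
# Fit y = w*x + b by gradient descent on mean squared error.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated from y = 2x + 1, no noise

w, b = 0.0, 0.0
lr = 0.02
for _ in range(5000):
    n = len(xs)
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b
# converges toward w = 2, b = 1
```

In practice this inner loop is exactly what gets pushed into Matlab, C, or optimized linear-algebra libraries, which is why raw numeric performance matters so much to ML researchers.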

It would appear, after several decades of deadlock, that neither the neat nor the scruffy approach is enough by itself. It seems reasonable to believe that the next popular language for AI will be something that combines raw number-crunching performance with clean logical semantics. It will be interesting to see what such a language looks like; going by the past few decades of language research, the answer is not obvious.

One other nice feature of Prolog and Lisp for AI research is their reflectivity. When you are reasoning about the mapping between a problem domain and strategies for operating within that domain, not having an extra interpretative layer in the way is very productive.
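The program-as-data point can be sketched even outside Lisp: Python's `ast` module lets a program hold an expression as a data structure, inspect it, rewrite it, and then execute it. This is only a toy illustration of the idea; Lisp makes it far more direct, since source code is already the data structure.

```python
import ast

# A "strategy" held as data: an expression tree the program can examine.
tree = ast.parse("x * x + 1", mode="eval")

# Inspect it: list the operators used, a crude form of reasoning
# about the strategy itself.
ops = [type(n).__name__ for n in ast.walk(tree) if isinstance(n, ast.operator)]

# Rewrite it: replace every multiplication with an addition.
class MulToAdd(ast.NodeTransformer):
    def visit_Mult(self, node):
        return ast.Add()

new_tree = ast.fix_missing_locations(MulToAdd().visit(tree))
result = eval(compile(new_tree, "<ast>", "eval"), {"x": 3})
# x * x + 1 rewritten to x + x + 1 gives 7 when x = 3
```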

Anecdote

Sometimes the availability of funds and labor can drive the pragmatic choice of implementation language for a project at least as much as the features of that language. A close friend of mine was hired in grad school to re-write a large AI codebase from Common Lisp -> C#. The reasons were that the professor could find more students willing to work in C# than Lisp, and that Microsoft was handing out research funds for anything even remotely connected to their platforms.

Indeed. Consider the

Indeed. Consider the challenge of hiring kernel hackers in Baltimore...

Depends what you mean by AI...

These days, most AI is really ML, which means statistics and text processing. Therefore in my immediate experience, the most common languages used for AI are Perl and something else for the numbers (often C or Matlab). This is probably really depressing to the GOFAI Lispers, but then I guess they find lots of things about ML depressing...

The bottom line is that it's been a long time since the notion of "a good language for AI" meant anything different than "a good language for numerical computing." So I think we could probably do better than C, but for example, all the old philosophizing about the connections between reflection (in the PLT sense) and AI are really irrelevant here.

But maybe I only have this perspective because I work with a bunch of ML people. Maybe GOFAI is alive and well someplace?

[Caveat lector: in this thread, ML means Machine Learning.]

Oops...

I should also have mentioned SQL, probably...

You are dead on, as far as

You are dead on, as far as my experience goes. I would only add that even before machine learning I found the connection (hype?) between reflection/symbol manipulation and AI dubious, and I am sure I wasn't the only one.

"ML"

[Caveat lector: in this thread, ML means Machine Learning.]

Except in shap's above remark about theorem provers. :-}

Prolog and Peirce

One of the reasons for the rise of Prolog is that the Kowalski school of AI had an attractive story [1] to tell about what reasoning is that starts with C. S. Peirce's epistemology, and can be formalised as meta-theorems about first-order logic, which carried across to theorems about Horn clause reasoning, which in turn makes for well-motivated engineering efforts in Prolog in the school.
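The propositional core of Horn clause reasoning is simple enough to sketch in a few lines. Here is a toy forward-chaining evaluator in Python; the rules and atom names are invented for the example, and real Prolog adds terms, unification, and backward chaining on top of this fragment.

```python
# Propositional Horn clauses as (body, head) pairs: the atoms in body
# together imply head. A fact is a rule with an empty body.
rules = [
    (set(), "rain"),
    (set(), "cold"),
    ({"rain", "cold"}, "snow"),
    ({"snow"}, "slippery"),
]

def forward_chain(rules):
    """Compute the least model: fire any rule whose body already holds,
    until no new facts can be derived."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts
```

With the rules above, `forward_chain(rules)` derives all four atoms, including `"slippery"` via the chained implications.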

As we have previously discussed, in History of Logic Programming, Carl Hewitt doesn't think the Kowalski school's story works, and has been trying to push his alternative.

[1]: There are countless rehearsals of this story; the first six pages of the following give a fast-paced overview, based on Progol, an ILP tool complementing Prolog:

O. Ray. Automated Abduction in Scientific Discovery. Model-Based Reasoning in Science and Medicine, SCI 64, pp. 103-116, 2007.