I have been following LtU for some time now and finally decided to start on CTM after reading some great reviews. I am stuck on the Mozart system. I am using FC3. Was anyone able to run the examples in the book successfully in Mozart?



Mozart platforms and ports

Dear rumplestiltkin,

Mozart has been ported to lots of platforms, and to lots of versions of Linux including FC1 and FC2, but not yet (it seems) to FC3. I suggest that you ask your question on the Mozart developers list (hackers@mozart-oz.org) to find out whether somebody has already done it. If not, it should be easy to do following the instructions on the GNU/Linux download page. If you run into technical problems with the port, you can always ask for help on the hackers list. We are always grateful to volunteers who help us port Mozart to more platforms.

A couple of resources

As PvR suggests, the Mozart/Oz Mailing Lists are the best place to get help. Once you get up and running, I'd also suggest checking out the CTM Book Website and Dominic's CTM Wiki.

And if all else fails, you could always try using Alice ML for CTM (but since that's my own work in progress, I'm a tad biased). :-)

I downloaded Alice just last night

I am completely new to Alice ML (and I only recently found out that there is an ML family of languages). The impression I get is that I need to know SML first, then read about how Alice has extended SML...there doesn't seem to be an end-to-end Alice ML tutorial...is that right?

Secondly, is the language in good enough shape that I can use it for a commercial project? I may have a project involving reading streams of data, doing some computations, and displaying the results in real time...as well as allowing users to do some querying (setting filters, etc.)

Thirdly, CTM has a chapter on GUI programming in Oz...but I didn't see GUI programming emphasized in Alice's website. Did I miss something?

And finally, how different are the capabilities and syntax of OCaml vs. Alice ML? Basically I would like to start running examples from Ben Pierce's TAPL, if possible, in Alice ML (simply because I want to avoid learning a separate language just for TAPL).


Alice ML is a conservative extension to SML

If you know SML, then picking up Alice ML is fairly straightforward. The extensions add powerful capabilities but integrate fairly seamlessly into ML. Of the SML books I've read, Ullman's Elements of ML Programming is the best introduction: it's a much faster read and teaches you ML in a fairly no-nonsense fashion. Of course, you could use the CTM translations as a way to learn Alice ML, just as CTM itself is also a good book for learning Oz. I've also started on a translation of SICP to Alice, but I've only just started on chapter 2, so it may be months before much else appears.

As for commercial projects, it really depends on what you are going to use it for. The core language is fairly solid, having all the functionality of the Standard Basis Library (giving you pretty solid stream handling). The GUI uses a thin abstraction layer on top of GTK (if you already know GTK, then it's not bad, but the abstraction could be better). The database uses SQLite, but I haven't used it (it is the one thing I really need to get working for the commercial projects that I have in sight). Overall, Alice still has a bit of work to do before its libraries are as mature as those of Oz, but the quality of what's there is pretty solid.

After I finish CTM and SICP, I had planned to translate TAPL to Alice, but if you want to beat me to it, I'd be more than willing to cede. I haven't gotten far into TAPL, but from all indications it uses a rather limited subset of Caml. The translation to Alice should be, as Andreas said, almost mechanical. At any rate, it should be no trouble to use Alice for TAPL.

good work!

You are doing impressive work! I ordered the book "ML for the Working Programmer [Paperback] by Paulson, Lawrence C." I'm in no position to contribute anything yet, even basic TAPL is tough going for me. The formal definition used in that book seems extremely powerful (operational semantics?). I wish there were university courses that used TAPL style, all the Compiler or PLT classes I've thought of taking seem to avoid it. Anyway, I'll let you know how my day-job project works out if I use Alice ML.

Hadn't exactly planned it this way

But since I made a sacred vow in my New Year's resolution to take up Alice, I figured I'd create some artifacts along the journey. I've still got a long way to go, and the translation of CTM will start to get rougher. Alice doesn't have direct support for object-oriented programming, which is the next chapter I'm working on (O'Caml's support for OOP is the biggest difference from Alice). Alice also doesn't support full unification, so the ability to translate chapter 9 is definitely an open question.

Just fyi, we used TAPL at Gla

Just fyi, we used TAPL at Glasgow Uni in our final year types course. It was fairly hard at first, but the course notes helped a lot. They should still be up... http://www.dcs.gla.ac.uk/~simon

Unfortunately the servers seem to be doing a wobbly at the minute, and I couldn't get in to check the URL.

O'Caml vs. SML/Alice

Just a couple of stray thoughts:
  • If the goal is to do commercial work and go through TAPL, I'd recommend O'Caml over SML, even as extended by Alice, if for no other reason than that TAPL is already in O'Caml, and O'Caml gives you a soup-to-nuts solution: interactive, byte-compiled with a time-travel debugger, and native-compiled with a profiler.
  • If the goal is to work through CTM, I recommend Oz.
  • If the goal is to try to work through CTM but in a statically-typed setting, I'd help Chris with his awesome effort on CTM in Alice. :-)
I've been spending a lot of time in the ML world lately, and it seems to me that O'Caml is head and shoulders the most popular dialect of ML at the moment. It really is an amazing system (note that that's not to say that Alice isn't, too). Alice is still pretty new, whereas Caml/Caml Light/Caml Special Light/O'Caml have been around for about a decade now.

I guess my point is that there's no reason to think of O'Caml as "just the language you'd use for TAPL." Far from it.

ocaml abilities

I really like the concurrency, logic programming, and a few other things in the CTM book. This question might make no sense, but could they be added to OCaml? In fact, does it make sense to extend OCaml the way Alice ML extends Standard ML?

Regarding the commercial project. I have a project to do for a paycheck, and I want to learn some stuff. I figured I would combine the two. If Ocaml, Alice, Oz, etc. don't work or I can't understand them in time, I'll just fall back to java :)

Many Paths

falcon: I really like the concurrency, logic programming, and a few other things in the CTM book. This question might make no sense, but could they be added to OCaml? In fact, does it make sense to extend OCaml the way Alice ML extends Standard ML?

O'Caml supports shared-state concurrency using either its own thread system or OS threads, with everything that implies (CTM has a lot of good stuff to say about concurrency, all of which I agree with both theoretically and based on my experience).
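For concreteness, the shared-state model described here looks much the same in any language with threads and locks. A minimal sketch (in Python rather than O'Caml, purely for illustration) of several threads updating shared state under a mutex:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Each worker bumps the shared counter n times under a lock."""
    global counter
    for _ in range(n):
        with lock:  # critical section: guard the shared state
            counter += 1

workers = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter)  # 4000: the lock makes each increment atomic
```

The lock is what makes this "shared-state" concurrency in CTM's sense: without it, correctness depends on the scheduler rather than the program.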

You could presumably write an embedded logic language in O'Caml and maybe even make it syntactically nice enough with some camlp4 syntax extension, but that's probably a non-trivial project...
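The kernel of such an embedded logic language would be a unifier over first-order terms. As a rough illustration of how small that kernel is (sketched in Python for brevity, with the occurs-check omitted; the term representation is my own):

```python
# Terms: a variable is a ('var', name) pair; a compound term is
# ('app', functor, [args]). Substitutions are plain dicts.

def walk(subst, t):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(subst, t1, t2):
    """Return an extended substitution unifying t1 and t2, or None on failure."""
    t1, t2 = walk(subst, t1), walk(subst, t2)
    if t1 == t2:
        return subst
    if t1[0] == 'var':
        return {**subst, t1[1]: t2}
    if t2[0] == 'var':
        return {**subst, t2[1]: t1}
    if t1[1] == t2[1] and len(t1[2]) == len(t2[2]):  # same functor, same arity
        for a, b in zip(t1[2], t2[2]):
            subst = unify(subst, a, b)
            if subst is None:
                return None
        return subst
    return None

# unify f(X, b) with f(a, Y): binds X -> a and Y -> b
s = unify({}, ('app', 'f', [('var', 'X'), ('app', 'b', [])]),
              ('app', 'f', [('app', 'a', []), ('var', 'Y')]))
print(s['X'], s['Y'])  # ('app', 'a', []) ('app', 'b', [])
```

The non-trivial part is everything around this: backtracking search, constraint propagation, and pleasant surface syntax, which is where camlp4 would come in.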

If the folks who are writing your paycheck are letting you choose the language, and they don't care whether the language is natively compiled or not, then for Pete's sake, go ahead and use Oz. Seriously.


We are trying to build tools for the securities industry (stock trading)...I'm a junior programmer, but I have flexibility. We are not doing heavy-duty numerical analysis, so I don't need the raw speed of C. I like the fact that the static type checkers of OCaml-like languages keep stupid mistakes such as null pointer exceptions at bay (I don't want silly exceptions on traders' screens!). Oz's logic concepts (with which I am not yet familiar), relational concepts, etc. should help me build something more innovative than the ever-present stock ticker with flashing lights...that's why I figured Alice ML was a perfect combination of Oz's abilities and the type checking of ML.

First, see how far you can get with Oz

I suggest that you first try out your ideas in Oz, since all of the logic and constraint stuff is already implemented and mature (see, e.g., the MOZ 2004 proceedings to get an idea of what you can do). Then, if you get too many type errors, you can always move to Alice or some other statically typed language. But I suspect that you won't get so many type errors in Oz. What happens is that simple tests will catch the type errors, and once they are fixed you get the same stability as a statically typed language. Remember also that dynamically typed languages never have segmentation faults, just like statically typed languages (if both are implemented right). The real issue is that your language should be strongly typed, i.e., the type system is enforced. Whether it is enforced by the compiler or at run-time does not make so much difference (in my experience).

Been meaning to ask

The first time I mucked about with Oz was several years ago, soon after reading up on Prolog and Erlang. The influence of Prolog was rather obvious, with Oz seeming to be well rooted in the logic languages.

This time around, I've been reverse-engineering the Oz code from the perspective of ML, and I can't help but see many similarities in structure. I've been meaning to ask whether this similarity with ML is pure coincidence (simply my current skewed perspective?) or whether ML was also influential in the design of Oz.

FWIW, I don't see my translations as a knock on Oz. Indeed, I have gained a very healthy respect for the power and expressivity of the language. Even for those readers who stick to Oz, I think the Alice translations can help give some perspective on what's going on in Oz, simply by seeing the examples laid out in a different language where the dataflow variables have to be laid out a bit more explicitly.

Dataflow variables and other CTM stuff in ocaml

It is possible to extend OCaml with dataflow variables and other CTM stuff, and it's even easy to extend the syntax accordingly (with camlp4). I don't know, maybe someone already tried.

The problem is that the syntax and semantics of OCaml are already heavy and "loaded" with different concepts, so the extension would probably fare worse in the simplicity department. But it is definitely possible.

Dataflow is intrusive

It is possible to extend OCaml with dataflow variables and other CTM stuff, and it's even easy to extend the syntax accordingly (with camlp4).

I don't think so. Dataflow variables, or futures a la Oz and Alice, are deeply intrusive to the language semantics, and even more so to the implementation. You cannot just add them as yet another feature; you have to completely change some basics of the language.

It should be fairly simple to add something like I-structures, which can be thought of as non-transparent dataflow variables. But that is a much less powerful concept.
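To make the distinction concrete, here is a sketch of an I-structure-like single-assignment variable (in Python, purely for illustration; the names and the unguarded check-then-set are my own simplifications). Readers must call read explicitly to block, which is exactly the non-transparency mentioned above:

```python
import threading

class IVar:
    """A non-transparent dataflow variable: bound at most once;
    readers block explicitly until it is bound."""
    def __init__(self):
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        if self._bound.is_set():
            raise RuntimeError("IVar already bound")
        self._value = value
        self._bound.set()  # wake any blocked readers

    def read(self):
        self._bound.wait()  # block until bound -- the explicit, non-transparent step
        return self._value

# a producer thread binds the variable; the main thread blocks on read()
iv = IVar()
t = threading.Thread(target=lambda: iv.bind(42))
t.start()
t.join()
print(iv.read())  # 42
```

In Oz or Alice there is no read(): any use of the variable anywhere in the language blocks transparently, which is why retrofitting that behavior touches the whole implementation.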

Transparent dataflow variables

I wasn't thinking of adding transparent dataflow behavior by default, as in Oz or Alice. Even so, I thought many of the dataflow idioms could be written, if in a more convoluted way.

But of course, to change all variables to dataflow would be a complete change in semantics and in the implementation.


I didn't think about adding transparent dataflow behavior by default

Hm, I am not sure I understand what you mean by "default". If you have some form of transparent dataflow variable - however introduced - then you have a complete change of semantics because they are first-class and can appear everywhere. Otherwise you cannot have transparent variables at all, and the closest you can get are I-structures or something similar. Maybe that's what you meant?

not first-class

I meant I was not thinking about first-class dataflow variables. Sorry, my wording was muddy, and I had Oz in mind (I don't know Alice ML).

But you can add operators to deal with dataflow variables (using camlp4) and other syntactic sugar, so you could translate, for example, a function from the CTM book almost unaltered. Sure, it isn't the same as having first-class dataflow variables, but I was thinking about "translating programs from the CTM book".


OCaml with join calculus: JoCaml!
By the way, while we are on the subject, I have a few more generic questions:
1. How come formal languages such as the ones discussed on LtU are slower than languages used in the commercial world? I would think that a formal model of a high-level language could be combined with a formal model of the underlying machine, and we could optimize the heck out of the compiler! Surely more so than for languages with ad-hoc syntax/semantics. I worked a little on Genetic Programming (similar to Genetic Algorithms) a few years ago...couldn't such an automated technique do a better job than human compiler writers ever could?
eh...I had other questions but I forgot them now :)

Well, what languages/compiler

Well, what languages/compilers are you comparing? Commercial compiler vendors often put a lot of effort into optimising performance, and because the execution environment is tied to the low-level language used to program it, programmers can exploit knowledge of their execution environment to increase performance.

Of course, I'm no expert, and there might be fundamental reasons.

The Need for Speed

Once again, just a couple of thoughts, some of which are repeats, so please bear with me:

  1. The "formal" languages almost always specify some kind of ideal machine, and that ideal machine exists in the realm of mathematical logic. Back in the real world, we don't have, e.g., infinite memory, so we either accept that we need to manage memory manually, which can be done perfectly in theory, or we accept garbage collection, which is much safer but, even at the current state of the art, imposes some overhead. Remember, this is just one example, and not even a very good one: better ones would have to get into the costs of powerful constructs like continuations, etc., which have much more to do with the language's semantics, but which I lack the time and space to tackle here.
  2. Some languages impose a sharp representational cost on their users. I'm thinking here of, e.g. Common Lisp or Scheme, in which, barring explicit declarations to the contrary if your implementation supports them, arithmetic can go from machine-word-sized to costly-non-machine-supported-representation just by what would otherwise be an overflow. As I've commented before, O'Caml is a little different: out of the box you get limited-precision numerics, and if you want arbitrary precision, you'll need to "open Num" and use the /-suffixed arithmetic operators (+/, -/, */, //...)
  3. It's not clear that we've learned everything there is to learn about optimizing languages like Haskell or O'Caml or Oz or Clean or... as we've also discussed before, functional representations have nice properties and are even employed as Intermediate Representations in traditional compiler settings, e.g. the Tree-SSA system of GCC 4.x. But while some classes of optimizations become easier to implement with such a representation, other classes become harder. This is a very old problem. It's so old that even though some wizards in the 1950s and 1960s correctly envisioned that a "sufficiently smart compiler" would solve many or most of the performance problems of the day, progress on that front was sufficiently slow that "sufficiently smart compiler" or "SSC" is nowadays only used sarcastically, despite existence proofs like CMU CL/SBCL, Stalin, MLton, O'Caml... It's also only fair to observe that whatever progress has been made in compiler technology has been utterly dwarfed by Moore's Law (cf. also Proebsting's Law).
  4. You have to decide when to stop optimizing. When I worked at ICOM Simulations on the TMON debugger for the Macintosh, TMON's principal developer, Waldemar Horwat, not at all jokingly said that he should write a plug-in replacement for the MPW Pascal and C compilers that would just hammer away at the problem using the best algorithms he could think of. The problem was that the best algorithms he could think of were NP-complete, so we'd have to bond ourselves and offer this not as a product but as a service: other developers could send us their source code (hence the need for bonding) and we'd let our compiler grovel over it for a few weeks, then send them the resulting binary. At some point, compile times matter.
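Point 2 above can be made concrete. Python happens to sit on the Lisp side of that divide (integers silently promote past the machine word), so a short sketch can show both behaviors by emulating a 64-bit register with a mask (illustrative only):

```python
# Python ints, like Lisp bignums, silently grow past the machine word;
# fixed-width arithmetic (emulated here with a mask) wraps instead.
WORD = 64
MASK = (1 << WORD) - 1

def wrap_add(a, b):
    """Two's-complement 64-bit addition, as a machine register would do it."""
    r = (a + b) & MASK
    return r - (1 << WORD) if r >= (1 << (WORD - 1)) else r

max_int64 = (1 << 63) - 1
print(max_int64 + 1)           # 9223372036854775808 -- promoted to a bignum
print(wrap_add(max_int64, 1))  # -9223372036854775808 -- wraps to min_int64
```

The transparent promotion is exactly the "sharp representational cost": the program keeps working, but an innocent-looking addition can silently switch to a much more expensive representation.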

The Need for Abstraction

I'd call region allocation 'state of the art' memory management, and it's faster than manual memory management.

There's an easy standard for when to stop optimizing: when it's perfect. There was a single-step optimal code generator mentioned a few weeks back; I think that's the way of the future.

Separation from the machine also means that I don't have to think about my program behaving differently on machines with different word sizes, as it can in OCaml.

At the end of the day, programs are "formal" abstractions that model real world problems. In my experience I get the best program optimizations by improving my problem abstraction. The less I need to focus on machine details, the faster I can change my abstractions.

Anyway, whatever the difference between "formal" Haskell and "informal" OCaml, it's not enough to affect our chances of a date on Friday night. (Unless we obsessively post to LtU, I guess.)

--Shae Erisson - ScannedInAvian.com

region allocation!

Region allocation was one of the main things I was thinking when I asked the original question. This is also what I was thinking when I asked the question about typed file/operating systems.

Is there a fundamental reason why languages with a precise mathematical model and well defined machine architectures (explicit set of registers/memory/disk, explicit set of machine instructions with determenistic results, etc.) can't be linked together? I understand that due to undecidability, etc. there may not be a 'perfect' solution, but if the gap between what hackers can code up and what automated algorithms can produce is so wide...then doesn't that mean that either the language models need tuning or the hardware needs to change? By the way, Pierce's ATTAPL has a chapter on "Effect Types and Region-Based Memory Management."