LtU Forum

A question for the theory guys

What is the relation between programming languages, model theory, and proof theory? I heard proof theory is more concerned with "syntax" whereas model theory is more concerned with "semantics". Would it be true to say that operational semantics corresponds to proof theory and denotational semantics to model theory? If yes, how exactly do they relate to each other?

By the way, a small related question: what exactly is a "theory" when someone says he developed (say) a "Theory of Objects", as Abadi/Cardelli did?

GCC Wiki

GCC wiki

Why do I feel like a kid in a candy store?

Unimperative Programming Language - Teaser

The following is from my latest project which is a foray into functional programming. Any comments or suggestions?

The Unimperative Programming Language

by Christopher Diggins

Introduction

Unimperative is a simple programming language that supports a mix of procedural and functional programming. In Unimperative the primary data type is a function. Evaluation order matters: function parameters are evaluated left to right.

Unimperative has a surprise twist, which I will save for the end.

Comparing to Scheme

Unimperative is quite similar to Scheme. In Scheme we can write a factorial function as:

  (define factorial
    (lambda (n)
      (if (zero? n)
          1
          (* n (factorial (- n 1))))))

In Unimperative we can write the equivalent function as:

  Function factorial =
    (IfZero, _1,
      1,
      (Mult, _1, (Self, (Dec, _1))));

Basic Usage

In many languages we call functions by writing:

  MyFxn(x, y)

In the Unimperative programming language we instead write:

  (MyFxn, x, y)

The first element following an open parenthesis, and followed by a comma, is always treated as a function which will be evaluated. The elements which follow are passed as arguments to the function.

Other functions after the first one in a list are not evaluated:

  (MyFxn, OtherFxn)

This evaluates the function MyFxn and passes the OtherFxn as a parameter without evaluating it.

A function must be followed by at least one argument in order to be evaluated. The evaluation operator is the combination of parenthesis and comma:

  (xxx,

The following code does not evaluate MyFxn:

  (MyFxn)

If we wanted to evaluate MyFxn we would have to write:

  (MyFxn, nil)

Alternatively we could also write:

  (Eval, MyFxn)

Defining Functions

To define a function in Unimperative we do so as follows:

  Function MyFxn = (Add, _1, _2);

The _1 and _2 are placeholders which refer to the first and second arguments respectively.

An Interesting Twist

The Unimperative programming language has a surprise: it is completely legal C++, and doesn't use any macros! An Unimperative library will be available soon as part of the next OOTL (Object Oriented Template Library) release from http://www.ootl.org.

Neologism

There ought to be a word for the sinking realization that, semantically, the language you've been designing and implementing for months is only a small extension over the one you're implementing it in.

(I posted about this language once before. Writing an interpreter for it in Scheme was one thing; but, once I started to compile it down to Scheme, and saw how easy it was--the generated Scheme code was a pretty trivial transform from the ASTs--I realized that meant my semantics offered very little over Scheme's. I'll probably stick with it as an exercise, but that's about all the value it offers at this point.)

Avoiding worst case GC with large amounts of data?

Garbage collection has improved greatly over the years, but there are worst cases which are still difficult to avoid without writing your own custom memory management system. For example, let's say you have 128MB of 3D geometry data loaded in Haskell/OCaml/ML-NJ/Erlang/Lisp/YourFavoriteLanguage. This data is stored in the native format of your language, using lists and tuples and so on. At some point, the GC is going to go through that 128MB of data. Generational collection delays this, but at some point it will still happen. Or do any systems handle this situation gracefully?

The best I've seen in this regard is Erlang, because each process has its own heap, and those heaps are collected individually. But if you put 128MB of data in one heap, there will still be a significant pause. Perhaps Erlang's GC is still better than most in this regard, as single assignment creates a unidirectional heap, which has some nice properties.

And then of course there are the languages with reference counting implementations, which perform better than languages with true GC in this regard.

Thoughts?

C++ O/R mapping - cross platform and db

Any good suggestions for something like that? I.e., Hibernate, but for C++.

I am looking for a product/solution that can be used in production work, so it must be stable, supported etc.

Feedback Sought on Software System Design and Implementation Course

I was wondering if I could get some feedback on a course I taught in 2004 on software system design and implementation:

Lecture links can be found in the forum description for each week. Lab and assignment links can be found under "News".

In the last offering, I gave prizes to encourage discussion. In the next offering, I am thinking of giving prizes for the top players of the Speculative Search Game, where the topics must be relevant to the course. (See: http://www.cse.unsw.edu.au/~amichail/spec.)

I plan to make the course open again, meaning that anyone can participate in discussions, games, etc.

Any feedback would be appreciated.

BitC, a new OS implementation language

BitC language specification, version 0.4. Out of the new Coyotos project, successor of EROS and KeyKOS, comes this new language:

BitC is conceptually derived in various measure from Standard ML, Scheme, and C. Like Standard ML [CITE], BitC has a formal semantics, static typing, a type inference mechanism, and type variables. Like Scheme [CITE], BitC uses a surface syntax that is readily represented as BitC data. Like C [CITE], BitC provides full control over data structure representation, which is necessary for high-performance systems programming. The BitC language is a direct expression of the typed lambda calculus with side effects, extended to be able to reflect the semantics of explicit representation.

(via Slashdot)

Languages and Hardware...

I guess this may be more of a compiler issue than a language design issue, but with the recent excitement over Cell processors maybe people will have some insights.

Basically, I'm interested in processor/hardware support for higher level language concepts. In particular I've been looking for what exactly made Lisp Machines so much better for Lisp than normal hardware (so far I've not really found much), but it's not just Lisp machines I'm interested in. I've been hearing about type-safe assembly languages, and I wonder what it would take to have hardware support to help with things like multithreading (besides the obvious "just use multiple processors" :) ) and garbage collection. I guess Transmeta's processors had some interesting possibilities too, but I've not heard of them being exploited.

I know that some VMs (like Parrot and .NET) help out with these things, and I've heard that the JVM has been implemented in hardware, but has anyone had much experience with hardware support like this? Are these VMs really appropriate for hardware support, or is there yet another way which would be better? Partially I just don't know what keywords to look for.

"Popular vs. Good" in Programming Languages

Here is part of a discussion from the "Language Smiths" discussion group that I thought might be of interest to some here:


From: http://groups.yahoo.com/group/langsmiths/message/2281

--- In langsmiths@yahoogroups.com, Daniel Ehrenberg wrote:

> What are your goals with this language? Are you trying
> to be popular, or are you trying to make a good
> language?

My goals are probably the same as just about every other language
designer's: to create a better language for me to create the types of
programs that I typically need to create. What is perhaps slightly
different for me is the types of programs that I typically need to
create. I write large enterprise-class applications for financial
institutions, the telecom industry, huge inventory systems, CRM and
e-commerce. This is probably the sort of thing that about 90% of the
world's programmers work on, I'm guessing. (In short: the types of
programs that nobody would ever write for fun.) The types of
programs that I write are far too large for me to write on my own, so I
need to make my language acceptable to the masses if I want to be able
to use it. As a result, I want to make a language which is usable by
90% of the world's programmers (and some percentage of the world's
non-programmers). Language experts design languages for their own
needs, and because they're always writing languages, we end up with
many languages which are excellent for language experts to write
languages in. Languages for their own needs and for their own kind.
Unfortunately, the offerings for non-language-designers to create
non-languages in are drastically under-served.

Computer languages are actually for people, not computers. The
computer is happy with machine code. If you make something which is
for people but then most of the people don't like it, then you've
failed. Could you imagine a chef saying "do you want me to make good
food, or food that people like?" It sounds like a ridiculous thing
to say in the context of food (or clothing, or anything else
supposedly designed for people) and yet it seems to be the common
belief in the design of programming languages.

I wouldn't say that being "good" and being "popular" are mutually
exclusive. A popular language is more likely to have better compilers
and tools, be available for any particular platform, and more likely
to have some desired functionality already implemented by someone
else. Pragmatically, a language which lets me *not* write something
because it has already been written by someone else, is better than a
language which lets me write it any number of times faster. I'm more
likely to be able to buy a book about a popular language, or to write
a book about a popular language and get paid for it. I'm more likely
to be able to find a job or conversely to hire developers for a
popular language. Popularity is a feature, and in many cases, the
most important one.
