LtU Forum

Dynamic Programming Languages + VLIW/EPIC

Today I was thinking about the architectural differences between conventional out-of-order systems and the VLIW/EPIC designs that Intel has come up with.
If you're not familiar with these systems (Itanium, etc.), here are some good links:
IA-64 @ Wikipedia
IA-64 Preview @ ArsTechnica

It occurred to me that I had no idea how dynamic and interpreted languages like Python, Lisp, and others would perform on such systems.
Does assigning the bulk of the optimization work to the compiler help or hurt the performance of applications written in these languages?

I found an interesting thread on this topic here:
lists.tuned.org archives
...but it's six years old. Surely there's more information now that these systems are widely available.

Is this still largely an unknown? I'm curious.

Introduction to computability logic

Introduction to computability logic (pre-print)
... The intuitive notion of (interactive) computational problems is formalized as a certain new, procedural-rule-free sort of games (called static games) between the machine and the environment, and computability is understood as existence of an interactive Turing machine that wins the game against any possible environment.
To all the lovers of games (and Turing machines :)

To claim relevance to PLT: computability logic can be seen as an alternative to linear logic (both being resource-aware). Also, interactive programming can be seen as a game between a programmer and the PL environment...

Actually, I enjoyed the first part of the paper more (before it gets to Turing machines).

The Origins of the Turing Thesis Myth

The Origins of the Turing Thesis Myth
In this paper, we identify and analyze the historical reasons for the widespread acceptance of what we call the Turing Thesis myth, which extends the Turing Thesis to imply that Turing Machines model all computers.
The paper discusses how
Turing Thesis: Whenever there is an effective method (algorithm) for obtaining the values of a mathematical function, the function can be computed by a TM.
became
Strong Turing Thesis: A TM can do (compute) anything that a computer can do.
While certainly nothing new for LtU regulars, the paper still has some educational value.

[on edit: Warning, some of the statements in the paper may aggravate Haskell programmers]

Grad School advice

I've been lurking on these forums for a couple of years now, and you guys have really pushed my interest in programming languages. I recently graduated from college with a bachelor's in computer science and a minor in math. Now I'm taking a year off before I go to grad school, hopefully to get my Ph.D. I would like to pursue areas of overlap between abstract math and computer science in general, and programming language theory in particular. For example, I would like to study category theory as it applies to functional languages, and how to formalize methods of abstraction. I would also like to study type systems, and perhaps their connections to formal logic systems and theorem proving.

This being a fairly specific area of study, I've had trouble doing research on grad schools. I know a lot of you are in academia, and I would really like to get some advice on what schools have good programs and good faculty. Thanks in advance for any help you can give.

Patrick Schultz

Notes from a Concurrency-Oriented Junkie

Joe Armstrong is at it again in this interesting Erlang-General list discussion, providing a witty yet mind-expanding approach to Erlang program design in Erlang's unique Concurrency Oriented (CO) paradigm:

I always tell people this:

  1. Identify the concurrency in your problem. Give the processes names.
  2. Identify the message channels; give these names.
  3. Write down all the messages that are seen on a channel; give these names.

In doing this you have to choose a "granularity of concurrency" that is appropriate for your problem.

If you are modelling crowds then one process per person would be appropriate; you would not model a person as 10^26 molecules with one molecule per process.

And also

In Concurrency Oriented (CO) programming you concentrate on the concurrency and the messages between the processes. There is no sharing of data.

An Erlang program should be thought of as thousands of little black boxes all doing things in parallel - these black boxes can send and receive messages. Black boxes can detect errors in other black boxes - that's all.

Ripped from the Erlang List
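
To make the recipe concrete, here is a minimal sketch of steps 1-3 for the crowd example. This is my own illustration, not code from the thread; the module, function, and message names are invented. Each person is a process, the person's mailbox is the channel, and the messages on that channel are given names like {hello, From} and stop.

-module(crowd).
-export([start_person/1, person/1, greet/2]).

%% Step 1: one process per person.
start_person(Name) ->
    spawn(fun() -> person(Name) end).

%% Step 2: the person's mailbox is its message channel.
%% Step 3: the named messages seen on that channel are {hello, From} and stop.
person(Name) ->
    receive
        {hello, From} ->
            From ! {hi_back, Name},    %% reply with another named, asynchronous message
            person(Name);
        stop ->
            ok
    end.

%% A neighbouring process greets a person and waits for the named reply.
greet(Person, MyName) ->
    Person ! {hello, self()},
    receive
        {hi_back, TheirName} -> {MyName, greeted, TheirName}
    end.

No data is shared: everything a person knows travels in these named messages, which is exactly the "black boxes sending messages" picture Armstrong describes.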

Compaq WebL

I noticed that WebL was mentioned in the thread on NQL on the old Lambda (http://lambda-the-ultimate.org/classic/message3567.html#3592) as not being maintained any more. It seems that it is in fact maintained at the following site: http://www.research.compaq.com/SRC/WebL/

There are a number of old projects around the same site: http://www.research.compaq.com/SRC/software/

Some of these, WebL among them, are also on the currently maintained http://www.research.compaq.com/downloads.html (and some of them look quite tasty and useful).

A language that is not among the new downloads is Obliq: http://www.research.compaq.com/SRC/personal/luca/Obliq/Obliq.html

It strikes me as being relevant to the whole argument about objects and messages.

hierarchy destruction in Ada

Ada 95 provides a mechanism, tagged types and type'Class, to allow dynamic dispatch. So if one has:

procedure something (O : basetype'class) is
begin
   enact (O);
end something;

then, as we all know, with the appropriate redefinitions of enact() for the subtypes of basetype, we have dynamic dispatch.

But what about deallocation? I want to declare:

type handle is access all basetype;

procedure dispose (ptr : in out handle);

and can't see how to make this deallocate the correct object. Obviously, one can dynamically dispatch on ptr.all, but it seems impossible to solve the complete problem without an unchecked conversion. Anyone?

LLVM 1.3 is released

This is possibly interesting for language implementors out there: LLVM 1.3 is now officially released.

-Chris

Cω vs. Erlang?

I know there was a Cω topic before, but the discussion really had more to do with Xen (XML and DB integration) than with the Polyphonic C# concurrency part. I'm an Erlang newbie myself, but I can already see that asynchronous communication is the common thread in both languages, and this approach is going to be big: it can be used for everything from embedded systems to business process orchestration.

Looking at the Santa Claus sample, the Cω code seems quite compact, and the syntax extension to C# is minimal. However, process/thread creation is implicit, which seems a little more difficult to reason about. I would imagine a solution in Erlang would be clearer, though perhaps longer.

Any thoughts/pointers on the pros and cons of different concurrency approaches? We really should have a "great concurrency shootout".
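
To make the contrast concrete, here is the kind of explicit process creation and asynchronous messaging I have in mind on the Erlang side. It is only a toy one-place buffer (roughly what a Polyphonic C# chord like "string Get() & async Put(string s)" expresses implicitly), not a Santa Claus solution, and the module and message names are my own:

-module(buffer).
-export([start/0]).

%% Process creation is explicit: the buffer is its own process.
start() ->
    spawn(fun() -> empty() end).

%% An empty buffer waits for a put; the sender never blocks.
empty() ->
    receive
        {put, X} -> full(X)
    end.

%% A full buffer waits for a get and replies with another asynchronous message.
full(X) ->
    receive
        {get, From} ->
            From ! {value, X},
            empty()
    end.

The joining of put and get that a chord declares in one line is spelled out here as two receive states, which is more verbose but, at least to me, easier to reason about.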

Type Theoretical Foundations for Data Structures, Classes, and Objects

Via MetaPRL

Type Theoretical Foundations for Data Structures, Classes, and Objects

We will show that records and objects can be defined in a powerful enough type theory. We will also show how to use these type constructors to define abstract data structures.
and
Here we extend the constructive type theory with a simpler type constructor, dependent intersection, i.e., the intersection of two types, where the second type may depend on elements of the first one (not to be confused with the intersection of a family of types).

Dependent intersection looks worthy of attention for many reasons, e.g. as a simpler alternative to very dependent functions. I especially liked how it simplifies the treatment of coincident labels (multiple inheritance, anyone? :).
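
For readers who have not met the construct, here is a rough sketch, paraphrasing the abstract; the notation is mine and the paper's precise typing rules may differ. The dependent intersection contains those elements of the first type that also inhabit the second type instantiated at themselves,

\[ a \in (x{:}A \cap B[x]) \iff a \in A \;\wedge\; a \in B[a], \]

and a two-field dependent record can then be built out of two single-field record types,

\[ \{\, l_1 : A_1;\ l_2 : A_2[l_1] \,\} \;=\; r{:}\{ l_1 : A_1 \} \cap \{ l_2 : A_2[r.l_1] \}. \]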
