LtU Forum

Type and Effects systems and Lucassen's Thesis

I'm chasing references for type and effect systems, from the first papers by Gifford and Lucassen. One of the references is Lucassen's PhD thesis, "Types and Effects towards the Integration of Functional and Imperative Programming", Technical Report MIT-LCS-TR-408; it has a page at MIT here. I was hoping to find this thesis in electronic format, but my searches revealed nothing. I searched for the author's homepage too, with no better results.

Does anyone know if this TR is online?

Lazy linear algebra

Lazy Linear Algebra

I wonder if any of the LtU readers would be able to advise me about
lazy languages/libraries for linear algebra (vector-matrix numerics).
It strikes me that this is a problem domain in which the lazy approach
might be able to offer significant advantages. For example, it is
often the case that one wants to obtain certain elements of a matrix
resulting from some expression (e.g. one or more rows or columns, or
some elements from some row(s) or column(s)). Linear algebra can be
computationally expensive, especially if you don't want all the
values that would be computed by a naive implementation of some
expression. The lazy approach might be particularly useful if you
don't know a priori which elements you are going to need (e.g.
if the answer you seek depends upon the properties of the matrices in
question, which you may not know until you start in on the
calculations). It is certainly true that one could hand-code this
kind of laziness, but that way lies bugs and hard work on the part of
the programmer. Further, certain matrix operations yield matrices
with properties that could be lazily exploited (e.g. symmetric
matrices may be specified by only the upper or lower triangle),
yielding potential time and space savings. I realise that libraries
such as LAPACK contain
functions that exploit the properties of the matrices, but I
understand that it is the programmer who decides which functions
should be called rather than the machine. Since I have no formal
PL education, can anyone tell me what research, if any, has been done
on lazy linear algebra, and how I might start playing around in such
an area?
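
To make the idea concrete, here is a rough C++ sketch of the kind of
thing I have in mind (the names are hypothetical and this is not an
existing library): a matrix "sum" whose elements are computed only at
the moment they are requested, so asking for one element of a large
expression does only that element's work.

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical sketch: a dense matrix plus a lazy element-wise sum.
struct Matrix {
    std::size_t rows, cols;
    std::vector<double> data;                  // row-major storage
    double operator()(std::size_t i, std::size_t j) const {
        return data[i * cols + j];
    }
};

struct LazySum {
    const Matrix& lhs;
    const Matrix& rhs;
    double operator()(std::size_t i, std::size_t j) const {
        return lhs(i, j) + rhs(i, j);          // per-element work, on demand
    }
};

LazySum operator+(const Matrix& a, const Matrix& b) { return {a, b}; }

int main() {
    Matrix a{2, 2, {1, 2, 3, 4}};
    Matrix b{2, 2, {10, 20, 30, 40}};
    LazySum s = a + b;                // nothing has been computed yet
    std::cout << s(1, 0) << "\n";     // only element (1,0) is computed: 33
}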

(posted for Chris Rose)

Kay no longer at HP

Just so you know.

Concerning introspection and compilation.

My knowledge in the area of languages is limited, which is why I am always visiting this site. Some of the papers and terminology are very helpful and give me many interesting 'over-my-head' reads, so to the editors and commenters: you have my thanks for the education.

I don't have a detailed grasp of introspection, but I feel I have enough knowledge to appreciate its practicality. My main question arises from a separation I see, in dealing with introspection, between so-called compiled languages like C and C++, and dynamic languages like Python and Ruby. Does a language make sacrifices to gain the ability to introspect? And if so, what kind of sacrifices does it have to make? Can languages like C or C++ gain introspection?
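
To make my question a bit more concrete: the only built-in run-time introspection I am aware of in C++ is RTTI via typeid, which gives little more than a type's (implementation-defined) name; anything richer, such as enumerating a type's members, seems to require hand-written registration by the programmer.

#include <iostream>
#include <typeinfo>

// Minimal illustration: RTTI reports a type name at run time, but the
// standard offers no way to enumerate the members of Point.
struct Point { int x; int y; };

int main()
{
    Point p{1, 2};
    std::cout << typeid(p).name() << '\n';   // e.g. "5Point" with GCC
    return 0;
}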

Though I guess the ultimate question is: would languages like C and C++, and in turn their programmers, gain anything substantial from introspection?

I am sorry if my question is not completely clear; please prod me with questions if it needs to be refined for better answers.

Regards,

MJ Stahl

The Limits of the Semantic Extensibility of Computer Programs

I'm curious if there have been studies on the limits of the semantic extensibility of computer programs. I'm not entirely sure how to phrase exactly what I mean. It's kind of like "what are the limits to how much 'environmental extensibility' a computer program can have?" I imagine the answer is related to type theory, somewhere along the lines of extensibility being limited by predefined type information, but I'm not quite sure where to start looking.

This type of information would be useful when building models and meta-models, to know when "architectural astronauting" (to use a Spolsky term) isn't going to get you any more real benefits, anyway.

Sorry for the confused question; the concepts are a bit confused in my mind, which is why I brought it here. I imagine some of you know whether anyone has done work in this area.

The Complexity Zoo

The Complexity Zoo

Please do not feed oracles to the complexity classes! These classes require a specially balanced diet to ensure proper relativization.

Way too often PL designers refer to the Turing machine or some other formalism as a benchmark of expressivity. For practical needs, though, the algorithmic complexity of programs is quite important. E.g., if a program takes O(n) steps on a Turing machine, but O(2^n) on my shiny new rewriting engine, then it's time to doubt the usefulness of my engine in practice.
What are the well-known (classical) results relating complexity classes of programs expressed on different media (calculi/abstract machines/etc.)?
Or am I totally confused and there could not be such a thing?

Sleep, scripting language for Java apps, released

Hello All, I just wanted to introduce a new scripting language called Sleep. Sleep is an attempt to bring some of my favorite things about Perl to the world of Java. Today I released Sleep 2.0 which brought many new features including binary data support, closures, and multidimensional data structures to the language. Following this milestone, Sleep is now a product that I am truly proud of. If Java scripting languages are your thing, you may enjoy it too.

Sleep 2.0 can be found at: http://sleep.hick.org/

An introduction to Sleep: http://today.java.net/pub/a/today/2005/07/14/sleep.html

On the Revival of Dynamic Languages

On the Revival of Dynamic Languages pdf

The programming languages of today are stuck in a deep rut that has developed over the past 50 years... We argue the need for a new class of dynamic languages... features such as dynamic first-class namespaces, explicit meta-models, optional, pluggable type systems, and incremental compilation of running software systems.

Conversion of 'functional' to 'imperative' algorithms: is it possible?

Does anybody know any work done on conversion of 'functional' to 'imperative' algorithms?

I am asking this because, while doing some mental exercises with algorithms to help me understand functional programming better, I realized that any functional algorithm could be converted to an imperative one, in order to save resources and increase performance.

The way I work goes like this: I write down a functional algorithm, then try to optimize it by converting it to an imperative one using assignments. I try to reuse memory space so that the algorithm becomes faster, without losing the meaning (and maybe the correctness) of the functional algorithm.

Let me give you an example. Let's just say we have the following simple function:

let f(x) = x + 1

which is used here:

let g(y) 
with
    let a = f(y)
=
    a + 2    

We can see that the above is equivalent to the following (assignment is the operator ':='; the operator ',' is used for sequencing operations):

let g(y) = 
    y := y + 1,    (* reuse variable 'y' *)
    y + 2

In the above example, the variable 'y' is re-used (I am talking about variables here, not value bindings, because I am talking about implementation): instead of using an intermediate variable 'a', the compiler reuses the variable 'y', increases it by 1, then adds '2' to it and returns the result.
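
In C++ terms (a hand-written illustration of the reuse, not actual compiler output), this amounts to:

// The parameter 'y' plays the role of the intermediate 'a',
// so no extra storage is introduced.
int g(int y)
{
    y = y + 1;       // reuse 'y' instead of introducing 'a'
    return y + 2;
}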

Of course the above example is trivial, and the relevant optimization may already take place in modern FP compilers. But what about more complex examples? For example, I was thinking about the 'quicksort' algorithm. The functional algorithm goes like this (in an imaginative mixture of functional and imperative syntax, using 'h:t' for head-tail list representation):

(* input is a list with first element as 'head' and the rest of elements as 'tail' *)
let qsort(head:tail)
with
    (* less and greater than nil is nil *)
    let less(e, nil) = nil
    let greater(e, nil) = nil
    
    (* calculate elements of input list 'h:t' less than 'e' *)
    let less(e, h:t) = 
        (if h < e then h else nil) + 
        less(e, t)
        
    (* calculate elements of input list 'h:t' greater than 'e' *)
    let greater(e, h:t) = 
        (if h > e then h else nil) + 
        greater(e, t)
=
    (* the result is: less elements + head + greater elements *)
    qsort(less(head, tail)) + 
    head + 
    qsort(greater(head, tail))

The actual implementation of the algorithm is quite slow, since it copies data around into a new list for each 'invocation' (I am not talking about 'reduction' here, because I mean actual code running on a computer). The ideal would be if the above functional algorithm could be converted automatically to an imperative one that does in-place sorting.

If we examine the above quicksort algorithm closely, we will see that it does the following things:

  1. get all the elements of the list less than its 'first element' into a 'list A'.
  2. get all the elements of the list greater than its 'first element' into a 'list B'.
  3. sort 'list A'.
  4. sort 'list B'.
  5. the end result is 'list A' + 'first element' + 'list B'.
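
A direct C++ transcription of this copying version (my own sketch, added just for comparison with the imperative versions below) could look like the following; note that fresh vectors are allocated at every call:

#include <cstddef>
#include <iostream>
#include <vector>

// Copying quicksort: every call builds fresh 'lt' and 'gt' lists (the
// 'less' and 'greater' parts), so nothing is sorted in place. Equal
// elements travel with 'gt'; the strict </> split of the pseudocode
// above would silently drop duplicates.
std::vector<int> qsort_copy(const std::vector<int>& in)
{
    if (in.empty()) return {};
    int head = in[0];
    std::vector<int> lt, gt;                        // new lists on every call
    for (std::size_t i = 1; i < in.size(); ++i) {
        if (in[i] < head) lt.push_back(in[i]);
        else              gt.push_back(in[i]);
    }
    std::vector<int> result = qsort_copy(lt);       // sorted 'less' part
    result.push_back(head);                         // + head
    std::vector<int> rest = qsort_copy(gt);         // + sorted 'greater' part
    result.insert(result.end(), rest.begin(), rest.end());
    return result;
}

int main()
{
    std::vector<int> v = {5, 6, 7, 3, 8};
    for (int x : qsort_copy(v)) std::cout << x << '\n';
    return 0;
}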

Let's see if we can speed up the above algorithm (let's name it quicksort A) by using two lists: a list A which is the input list and a list B which is the output list. Instead of creating new lists, we are going to use list B as an intermediate list for sorting elements, without changing the basic logic behind the sorting algorithm.

The 'less' function can be written like this:

let less(e, ha:ta, out hb:tb) =     
    if ha < e then        
        hb := ha,            (* copy value of element 'ha' to 'hb' *)                        
        less(e, ta, tb)      (* process rest of elements of A into tail of B *)
    else     
        less(e, ta, hb:tb)   (* process rest of elements of A into B *)

The above algorithm uses another list B to accumulate the 'less than first element' data of a list. The 'greater' algorithm can be written in the same way.

Now the qsort algorithm becomes:

(* input is list 'ha:ta', output is list B, which must have the same length as 'ha:ta' *)
let qsort(ha:ta, out B)
with
    let temp = List(sizeof(ha:ta))     (* temp is a list with same size as list 'ha:ta' *)
=
    less(ha, ta, temp),                (* accumulate elements of 'ta' less than 'ha' into 'temp' *)     
    qsort(temp, B),                    (* sort temp list into B *)
    B[sizeof(temp)] := ha,             (* copy 'ha' right after 'temp' sorted into B *)
    greater(ha, ta, temp),             (* accumulate elements of 'ta' greater than 'ha' into 'temp' *) 
    qsort(temp, B + sizeof(temp) + 1)  (* sort temp list into B right after the less part and the 'ha' element *) 

The above algorithm (let's name it quicksort B) may be preferable to the pure one, because the intermediate variable 'temp' can live on the stack, thus increasing the performance of the program and reducing garbage (or being used for compile-time garbage collection). Here is a working C++ implementation:

#include <iostream>
#include <cstdio>   // for getchar
using namespace std;

#define ARRAY_SIZE      5

int less(int e, const int in[], int in_size, int out[])
{
    int c = 0;
    for(int i = 0; i < in_size; ++i) {
        if (in[i] < e) {
            out[c] = in[i];
            ++c;
        }
    }
    return c;
}

int greater(int e, const int in[], int in_size, int out[])
{
    int c = 0;
    for(int i = 0; i < in_size; ++i) {
        if (in[i] > e) {
            out[c] = in[i];
            ++c;
        }
    }
    return c;
}

void sort(const int in[], int in_size, int out[])
{
    if (in_size == 0) return;
    int temp[ARRAY_SIZE];
    int c = less(in[0], in + 1, in_size - 1, temp);
    sort(temp, c, out);
    out[c] = in[0];
    int c1 = greater(in[0], in + 1, in_size - 1, temp);
    sort(temp, c1, out + c + 1);
}

void sort(const int in[ARRAY_SIZE], int out[ARRAY_SIZE])
{
    sort(in, ARRAY_SIZE, out);
}

int main(int argc, char* argv[])
{
    int in[ARRAY_SIZE] = {5, 6, 7, 3, 8};
    int out[ARRAY_SIZE];

    sort(in, out);

    for(int i = 0; i < ARRAY_SIZE; i++) {
        cout << out[i] << endl;
    }

    getchar();
    return 0;
}

Is it possible to optimize the algorithm further? Could the sorting be done without using a second list? The answer is yes. Let's examine the 'logic' behind the sorting again: we traverse a list twice, once to accumulate the elements 'less' than an element and once to accumulate the elements 'greater' than that element. In order to do the 'less' algorithm in place, we have to 'push' elements less than an element towards the front of the list, while the greater elements end up at the back of the list. The algorithm could look like this:

(* pushes elements of 'h:t' less than 'e' at beginning of list; 
   function greater is not needed, because all the elements after the 'less' part
   automatically belong to 'greater' part
 *)
let less(e, out h:t, out f, out l) =
    if h < e then            (* less element is found *)
        l := f,              (* save value of 'f' into position 'l' *)
        f := h,              (* place less element 'h' at beginning of list *)
        f = next(f),         (* set 'f' to next position in list *)
        l = h,               (* note down the 'free' position *)
        less(e, t, f, l)     (* process the rest of elements of 't' *)
    else
        less(e, t, f, l)     (* less element is not found *)
        
(* quick sort *)        
let qsort(h:t)
with
    let f = h                (* 'f' is the first free entry in the list *)
    let l = h                (* 'l' is the last freed entry in the list *)
= 
    less(h, t, f, l),        (* push less than 'h' elements to beginning of 't' *)
    l := f,                  (* before setting middle element, save the value at middle element position *)
    f := h,                  (* set middle element *)
    qsort(t[begin..f]),      (* sort less part *)
    qsort(t[f+1..end])       (* sort greater part *)
        

Here is the working C++ implementation (let's call this version 'quicksort C'):

#include <iostream>
#include <cstdio>   // for getchar
using namespace std;

#define ARRAY_SIZE      5

int less(int e, int a[], int begin, int end, int &f)
{
    int l = f;
    for(int i = begin; i < end; ++i) {
        if (a[i] < e) {
            a[l] = a[f];
            a[f] = a[i];
            l = i;
            ++f;
        }
    }
    return l;
}

void sort(int a[], int begin, int end)
{
    if (end <= begin) return;
    int e = a[begin];
    int c = begin;
    int l = less(e, a, begin + 1, end, c);
    a[l] = a[c];
    a[c] = e;
    sort(a, begin, c);
    sort(a, c + 1, end);
}

void sort(int a[ARRAY_SIZE])
{
    sort(a, 0, ARRAY_SIZE);
}

int main(int argc, char* argv[])
{
    int a[ARRAY_SIZE] = {5, 6, 7, 3, 8};
    
    sort(a);
    
    for(int i = 0; i < ARRAY_SIZE; i++) {
        cout << a[i] << endl;
    }
    
    getchar();
    return 0;
}

After doing all the above, I realized that I had converted the purely functional quicksort A to the imperative algorithm quicksort B, and then converted that to quicksort C by doing some replacements and adjusting the algorithm to overwrite data. The quicksort C algorithm is quite similar to the well-known partitioning algorithm (if not the same?). I have come to suspect that:

  • such a conversion from 'purely functional' to 'imperative' algorithms could be formalized for any algorithm; then the compiler could handle assignments by automatically recognizing what can be reused and what can't
  • all assignments in an imperative algorithm are optimizations of the relevant 'pure' algorithm(s)

Since there is no formal way of converting a functional algorithm to an imperative one, programming languages put the burden of assignment on the programmer, thus allowing assignment-related bugs to creep into an application; the real problem with assignments is that a programmer can easily forget when an assignment affects another part of the program. If a compiler could convert a functional algorithm to an imperative one so that the least possible amount of resources is spent (least copying / least garbage), then functional programming languages could easily reach the performance of imperative languages while keeping their 'pure' properties.

I searched the web but did not find a relevant paper. Does anybody know anything about this? I apologise if this is something stupid, if I have made some silly mistake, or if this has already been proven infeasible, but I am very interested in this subject.

Economics of Programming Languages

I like programming languages a lot. I've used a number of them professionally, and have even written one myself - Hecl - although it borrows most of its ideas, if not source code, from Tcl. And, of course, I've taken part in my share of debates and discussions on "which language is best," a topic which of course doesn't have one clear answer but is often the source of heated arguments.

I recently read an interesting book, Information Rules: A Strategic Guide to the Network Economy by Carl Shapiro and Hal R. Varian (Harvard Business School Press, 1998; ISBN: 087584863X), which talks about the economics of the world of high technology. While reading it and thinking about programming languages, a number of things clicked. They aren't earth-shattering conclusions. On the contrary, a lot of them are more or less common sense, but it's nice to read that there are some methodically studied theories behind some of the intuitions, hunches and observations I've made over the years.

In this article, I attempt to list what I believe to be the most salient points of the economics of programming languages, and describe their effects on existing languages, as well as on those who desire to write and introduce new languages.

Economics of Programming Languages
