Type and Effect Systems and Lucassen's Thesis

I'm chasing references for type and effect systems, starting from the first papers by Gifford and Lucassen. One of the references is Lucassen's PhD thesis, "Types and Effects: Towards the Integration of Functional and Imperative Programming", Technical Report MIT-LCS-TR-408; it has a page at MIT. I was hoping to find the thesis in electronic format, but my searches didn't turn up anything. I searched for the author's homepage too, with no better results. Does anyone know if this TR is online?

Lazy Linear Algebra

I wonder if any of the LtU readers would be able to advise me about... (posted for Chris Rose)

Kay No Longer at HP

Just so you know.

By Ehud Lamm at 2005-07-24 05:59 | LtU Forum
Concerning Introspection and Compilation

My knowledge in the area of languages is limited, which is why I am always visiting this site. Some of the papers and terminology are very helpful and give me many 'over-my-head' interesting reads, so to the editors and commenters: you have my thanks for the education. I don't have a detailed grasp of introspection, but I feel I know enough to appreciate its practicality. My main question arises from a separation I see, in dealing with introspection, between so-called compiled languages like C and C++ and dynamic languages like Python and Ruby. Does a language make sacrifices to gain the ability to introspect? If so, what kind of sacrifices does it have to make? Can languages like C or C++ gain introspection? The ultimate question, I guess, is: would languages like C and C++, and in turn their programmers, gain anything substantial from introspection? I am sorry if my question is not completely clear, so please prod me with questions if they need to be refined for better answers. Regards, MJ Stahl

The Limits of the Semantic Extensibility of Computer Programs

I'm curious whether there have been studies on the limits of the semantic extensibility of computer programs. I'm not entirely sure how to phrase exactly what I mean; it's something like "what are the limits to how much 'environmental extensibility' a computer program can have?" I imagine the answer is related to type theory, somewhere along the lines of it being limited by predefined type information, but I'm not quite sure where to start looking. This kind of information would be useful when building models and meta-models, to know when "architectural astronauting" (to use a Spolsky term) isn't going to get you any more real benefits. Sorry for the confused question; the concepts are a bit confused in my mind, which is why I brought the question here, because I imagine some of you know whether anyone has done work in this area.

The Complexity Zoo
Way too often, PL designers refer to the Turing machine or some other formalism as a benchmark of expressivity. For practical needs, though, the algorithmic complexity of programs is quite important. E.g., if a program takes O(n) steps on a Turing machine but O(2^n) on my shiny new rewriting engine, then it's time to doubt the usefulness of my engine in practice.

Sleep, Scripting Language for Java Apps, Released

Hello all, I just wanted to introduce a new scripting language called Sleep. Sleep is an attempt to bring some of my favorite things about Perl to the world of Java. Today I released Sleep 2.0, which brings many new features, including binary data support, closures, and multidimensional data structures. Following this milestone, Sleep is now a product that I am truly proud of. If Java scripting languages are your thing, you may enjoy it too. Sleep 2.0 can be found at: http://sleep.hick.org/ An introduction to Sleep: http://today.java.net/pub/a/today/2005/07/14/sleep.html

By rsmudge at 2005-07-20 14:32 | LtU Forum
On the Revival of Dynamic Languages

On the Revival of Dynamic Languages (pdf)

The programming languages of today are stuck in a deep rut that has developed over the past 50 years... We argue the need for a new class of dynamic languages... features such as dynamic first-class namespaces, explicit meta-models, optional, pluggable type systems, and incremental compilation of running software systems.

Conversion of 'Functional' to 'Imperative' Algorithms: Is It Possible?

Does anybody know of any work done on conversion of 'functional' to 'imperative' algorithms? I am asking because, while doing some mental exercises with algorithms to help me understand functional programming better, I realized that any functional algorithm could be converted to an imperative one in order to save resources and increase performance. The way I work goes like this: I write down a functional algorithm, then try to optimize it by converting it to an imperative one using assignments. I try to reuse memory space so that the algorithm becomes faster, without losing the meaning (and maybe the correctness) of the functional algorithm. Let me give you an example. Say we have the following simple function:

    let f(x) = x + 1

which is used here:

    let g(y) with
        let a = f(y)
    = a + 2

We can see that the above is equal to (assignment is the operator ':='; the operator ',' is used for a sequence of operations):

    let g(y) =
        y := y + 1,  (* reuse variable 'y' *)
        y + 2

In the above example, the variable 'y' is reused (I am talking about variables here, not value bindings, because I am talking about implementation): instead of using an intermediate variable 'a', the compiler reuses the variable 'y', increases it by 1, then adds 2 to it and returns the result. Of course the above example is trivial, and the relevant optimization may already take place in modern FP compilers. But what about more complex examples? For example, I was thinking about the 'quicksort' algorithm.
The functional algorithm goes like this (in an imaginary mixture of functional and imperative syntax, using 'h:t' for head-tail list representation):

    (* input is a list with the first element as 'head' and the rest of the elements as 'tail' *)
    let qsort(head:tail) with
        (* less and greater of nil is nil *)
        let less(e, nil) = nil
        let greater(e, nil) = nil
        (* calculate elements of input list 'h:t' less than 'e' *)
        let less(e, h:t) = (if h < e then h else nil) + less(e, t)
        (* calculate elements of input list 'h:t' greater than 'e' *)
        let greater(e, h:t) = (if h > e then h else nil) + greater(e, t)
    =
        (* the result is: less elements + head + greater elements *)
        qsort(less(head, tail)) + head + qsort(greater(head, tail))

The actual implementation of this algorithm is quite slow, since it copies data around into a new list for each 'invocation' (I am not talking about 'reduction' here, because I mean actual code running on a computer). The ideal would be if the above functional algorithm could be converted automatically to an imperative one that sorts in place. If we examine the above quicksort algorithm closely, we will see that for each call it builds fresh 'less' and 'greater' lists, sorts them recursively, and concatenates the results around the head element.
Let's see if we can speed up the above algorithm (let's name it quicksort A) by using two lists: a list A, which is the input list, and a list B, which is the output list. Instead of creating new lists, we are going to use list B as an intermediate list for sorting elements, without changing the basic logic of the sorting algorithm. The 'less' function can be written like this:

    let less(e, ha:ta, out hb:tb) =
        if ha < e then
            hb := ha,           (* copy value of element 'ha' to 'hb' *)
            less(e, ta, tb)     (* process rest of elements of A into tail of B *)
        else
            less(e, ta, hb:tb)  (* process rest of elements of A into B *)

The above algorithm uses another list B to accumulate the 'less than first element' data of a list. The 'greater' function can be written in the same way. Now the qsort algorithm becomes:

    (* input is list 'ha:ta'; output is list B, which must have the same length as 'ha:ta' *)
    let qsort(ha:ta, out B) with
        let temp = List(sizeof(ha:ta))     (* temp is a list with the same size as 'ha:ta' *)
    =
        less(ha, ta, temp),                (* accumulate elements of 'ta' less than 'ha' into 'temp' *)
        qsort(temp, B),                    (* sort temp list into B *)
        B[sizeof(temp)] := ha,             (* copy 'ha' right after the sorted 'temp' in B *)
        greater(ha, ta, temp),             (* accumulate elements of 'ta' greater than 'ha' into 'temp' *)
        qsort(temp, B + sizeof(temp) + 1)  (* sort temp into B right after the less part and the 'ha' element *)

The above algorithm (let's name it quicksort B) can be preferable to the pure one, because the intermediate variable 'temp' can live on the stack, increasing the performance of the program and reducing garbage (or enabling compile-time garbage collection). Here is a working C++ implementation: #include ... Is it possible to optimize the algorithm further? Could the sorting be done without using a second list? The answer is yes.
Let's examine the 'logic' behind the sorting again: we traverse a list twice, once accumulating the elements 'less' than an element and once accumulating the elements 'greater' than that element. To do the 'less' algorithm in place, we have to 'push' elements less than the pivot to the front of the list; the greater elements then end up at the end of the list. The algorithm could look like this:

    (* pushes elements of 'h:t' less than 'e' to the beginning of the list;
       a 'greater' function is not needed, because all the elements after the
       'less' part automatically belong to the 'greater' part *)
    let less(e, out h:t, out f, out l) =
        if h < e then           (* less element is found *)
            l := f,             (* save value of 'f' into position 'l' *)
            f := h,             (* place less element 'h' at the beginning of the list *)
            f := next(f),       (* set 'f' to the next position in the list *)
            l := h,             (* note down the 'free' position *)
            less(e, t, f, l)    (* process the rest of the elements of 't' *)
        else
            less(e, t, f, l)    (* less element is not found *)

    (* quicksort *)
    let qsort(h:t) with
        let f = h  (* 'f' is the first free entry in the list *)
        let l = h  (* 'l' is the last freed entry in the list *)
    =
        less(h, t, f, l),     (* push elements less than 'h' to the beginning of 't' *)
        l := f,               (* before setting the middle element, save the value at the middle position *)
        f := h,               (* set the middle element *)
        qsort(t[begin..f]),   (* sort the less part *)
        qsort(t[f+1..end])    (* sort the greater part *)

Here is the working C++ implementation (let's call this version quicksort C): #include ... After doing all of the above, I realized that I had converted the purely functional quicksort A into the imperative algorithm quicksort B, and then taken that algorithm and converted it into quicksort C by doing some replacements and adjusting the algorithm to overwrite data. The quicksort C algorithm is quite similar to the well-known partitioning algorithm (if not the same), which I suspect is no coincidence.
Since there is no formal way of converting a functional algorithm to an imperative one, programming languages put the burden of assignment on the programmer, thus allowing assignment-related bugs to creep into an application; the actual problem with assignment is that a programmer can easily forget that an assignment affects another part of a program. If a compiler could convert a functional algorithm to an imperative one so that the least possible amount of resources is spent (least copying / least garbage), then functional programming languages could easily reach the performance of imperative languages while keeping their 'pure' properties. I searched the web but did not find a relevant paper. Does anybody know anything about this? I apologise if this is something stupid, if I made some stupid mistake, or if it has already been proven infeasible, but my interest in this subject is great.

Economics of Programming Languages

I like programming languages a lot. I've used a number of them professionally, and have even written one myself, Hecl, although it borrows most of its ideas, if not source code, from Tcl. And, of course, I've taken part in my share of debates and discussions on "which language is best," a topic which of course doesn't have one clear answer but is often the source of heated arguments. I recently read an interesting book, Information Rules: A Strategic Guide to the Network Economy by Carl Shapiro and Hal R. Varian (Harvard Business School Press, 1998; ISBN: 087584863X), which talks about the economics of the world of high technology. While reading it and thinking about programming languages, a number of things clicked. They aren't earth-shattering conclusions. On the contrary, a lot of them are more or less common sense, but it's nice to learn that there are methodically studied theories behind some of the intuitions, hunches, and observations I've made over the years.
In this article, I attempt to list what I believe to be the most salient points of the economics of programming languages, and describe their effects on existing languages, as well as on those who desire to write and introduce new languages.