LtU Forum

Why compiler optimizations are interesting

    Compiler optimizations are interesting because they automatically improve the efficiency of programs. Hand-optimization by a programmer is a time-consuming and notoriously error-prone activity, so it is of the utmost importance for the compiler to make correct optimizations automatically wherever possible. Compiler optimization also serves as an important link between theory and practice: many optimizations are made by proving certain properties of programs -- for instance, that an array index cannot be out of bounds or that a data structure is uniquely referenced -- and the relevant proof techniques are a valuable area of research in themselves.

    Ensuring that whole programs can be effectively optimized does impose certain design constraints on the compiler and on the programming language. The benefits of automatic optimization do, however, far outweigh these relatively minor restrictions.
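
A minimal sketch of one such automatic rewrite (a toy example of mine, not from the post): constant folding over a tiny expression AST in Haskell. The compiler proves a subexpression is a compile-time constant and replaces it.

    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    -- Rewrite subtrees that are provably constant at compile time.
    fold :: Expr -> Expr
    fold (Add a b) = case (fold a, fold b) of
      (Lit x, Lit y) -> Lit (x + y)   -- both operands proved constant
      (a', b')       -> Add a' b'
    fold (Mul a b) = case (fold a, fold b) of
      (Lit x, Lit y) -> Lit (x * y)
      (a', b')       -> Mul a' b'
    fold e = e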

To me that is a reasonable perspective and I think it's widely held. My own real perspective is quite different, but it's nothing novel and can probably be summarised using catch-phrases:

    Adequate performance really comes from algorithms, data structures, profilers, experience; the compiler itself is the program that most needs to be simple and fast; whole-program analysis is the enemy of interactive programming; the means seem to justify the ends for many optimizations presented at conferences; I must trust my own experience more than microbenchmarks and Socratic discourses on modern computer architecture.
Of course this has absolutely nothing to do with type systems.

New Paul Graham thing...

The Python Paradox


I mostly agree with the thrust of this piece, I think, but here's the most interesting bit (to me, at least):

Both languages [Python and Perl] are of course moving targets. But they share, along with Ruby (and Icon, and Joy, and J, and Lisp, and Smalltalk) the fact that they're created by, and used by, people who really care about programming. And those tend to be the ones who do it well.
It's interesting that all of those languages are dynamic and thus favored by Paul Graham. Does he really think that, e.g., Haskell and OCaml are being created by people who don't "really care about programming"? Or is this just a cheap shot? Or are those languages really just completely off his radar?

LPFML, an XML language for linear programming

Sometimes you just stumble across a project that you had in the back of your mind for a while: an XML standard for representing linear programming problem and solution instances.
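
For concreteness, here is a toy Haskell model of the information such problem and solution instances carry (my own sketch with illustrative field names, not the LPFML schema):

    -- Minimize c.x subject to A.x <= b, x >= 0.
    data LPProblem = LPProblem
      { objective :: [Double]    -- coefficient vector c
      , matrixA   :: [[Double]]  -- constraint matrix A, one row per constraint
      , rhs       :: [Double]    -- right-hand-side vector b
      }

    data LPSolution = LPSolution
      { optimum  :: [Double]  -- optimal x
      , objValue :: Double    -- value of c.x at the optimum
      }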

Now I just have to find the time to write a solver.

Call-by-what?

I know that this was discussed on LtU already, but as the "Why Types" discussion demonstrated, people can always come up with new ideas on old topics.

From a theoretical point of view, call-by-value and call-by-name are dual.

Does this mean that, in practice, PLs should support just one of them and get "two for the price of one"?

Or does usability demand supporting them both?

Is this decision similar to supporting just And-Not (i.e., Nand) as opposed to the full set of (redundant) logical operations?

Does the decision depend on the type system, or is it fully orthogonal to it?
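
One half of the "two for the price of one" trade is easy to demonstrate (a standard observation, nothing specific to the duality result): Haskell is call-by-need by default, and call-by-value can be recovered locally with a strictness annotation.

    {-# LANGUAGE BangPatterns #-}

    -- Call-by-name/need: an unused argument is never evaluated.
    lazyIgnore :: Int -> Int
    lazyIgnore _ = 42

    -- Simulated call-by-value: the bang forces the argument first.
    strictIgnore :: Int -> Int
    strictIgnore !_ = 42

    -- lazyIgnore   (error "boom")  evaluates to 42
    -- strictIgnore (error "boom")  raises the error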

Constraint-Based Type Inference for Guarded Algebraic Data Types


Quite a mouthful, but look:

Guarded algebraic data types, which subsume the concepts known in the literature as indexed types, guarded recursive datatype constructors, and phantom types, and are closely related to inductive types, have the distinguishing feature that, when typechecking a function defined by cases, every branch must be checked under different typing assumptions. This mechanism allows exploiting the presence of dynamic tests in the code to produce extra static type information.

I still have to grok this idea of "dynamic tests producing extra static type information" in its entirety, though.
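
The standard small example (written here in GHC-style GADT syntax, not the paper's notation) makes the idea concrete: matching on a constructor is a dynamic test that refines the static type in that branch, so every branch is checked under different typing assumptions.

    {-# LANGUAGE GADTs #-}

    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    eval :: Expr a -> a
    eval (IntLit n)  = n   -- in this branch the checker learns a ~ Int
    eval (BoolLit b) = b   -- here a ~ Bool
    eval (If c t e)  = if eval c then eval t else eval e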

PLs and SE

it's never a language designer's job to define what's a sound software engineering practice. (on the Why Are Type Systems Interesting thread)

Setting aside the discussion of static vs. dynamic type checking, I must say that I find this statement rather puzzling. Aren't PL designers supposed to make SE easier? Isn't the best way to do this to evaluate and consider SE practices? Isn't that what happens in practice (e.g., mining patterns for language features, library building, etc.)?

When I created the software engineering department on LtU I wanted it to focus on language features and language design that are directly concerned with software engineering issues. But my view is perhaps even stronger than that: I think most PL issues are ultimately about SE.

Naturally, one approach is for the language to support whatever SE practices each and every programmer or team chooses. But isn't that simply an example of one attitude towards the best way to handle the complexity of SE?

I find it incredible that some would argue that PL designers should be agnostics when it comes to SE.

Why type systems are interesting - part II

The type systems thread is getting ridiculously long. Please continue the discussion here...

Slate 0.3 released

Slate 0.3 has been released. Slate is a language similar to Smalltalk, with a prototype-based object system and multi-method dispatch.
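
Multi-method dispatch means the method is chosen by the types of all arguments, not just a distinguished receiver. A rough static analogue in Haskell (my own sketch; Slate itself dispatches on runtime types):

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

    data Asteroid = Asteroid
    data Ship     = Ship

    -- The instance is selected by both argument types together.
    class Collide a b where
      collide :: a -> b -> String

    instance Collide Asteroid Ship where collide _ _ = "ship destroyed"
    instance Collide Ship Asteroid where collide _ _ = "ship destroyed"
    instance Collide Ship Ship     where collide _ _ = "ships bounce"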

First-class labels for extensible rows (draft)

Daan Leijen. Submitted to the 32nd ACM Symposium on Principles of Programming Languages (POPL'05), Long Beach, California, January 12, 2005. (PDF, BibTeX)
This paper describes a type system for extensible records and variants with first-class labels; labels are polymorphic and can be passed as arguments. This increases the expressiveness of conventional record calculi significantly, and we show how we can encode intersection types, closed-world overloading, type case, label selective calculi, and first-class messages. We formally motivate the need for row equality predicates to express type constraints in the presence of polymorphic labels. This naturally leads to an orthogonal treatment of unrestricted row polymorphism that can be used to express first-class patterns. Based on the theory of qualified types, we present an effective type inference algorithm and efficient compilation method. The type inference algorithm, including the discussed extensions, is fully implemented in the experimental Morrow compiler.

Always trust Daan to come up with something both elegant and practical...! However, the examples involving bottom (undefined) labels left me skeptical.
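
To convey the flavor of first-class labels, here is a toy, closed-world Haskell approximation (my own sketch; the paper's rows are open and label-polymorphic, which this encoding cannot express): labels become ordinary values that a generic selector dispatches on.

    {-# LANGUAGE GADTs #-}

    -- Labels reified as values; 'get' takes a label as an argument.
    data Label a where
      Name :: Label String
      Age  :: Label Int

    data Person = Person { personName :: String, personAge :: Int }

    get :: Label a -> Person -> a
    get Name = personName
    get Age  = personAge

    -- Labels can now be stored, passed around, and dispatched on.
    greeting :: Person -> String
    greeting p = "Hello, " ++ get Name p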

breve: a 3D simulation environment

breve is a free software package which makes it easy to build 3D simulations of decentralized systems and artificial life. Users define the behaviors of agents in a 3D world and observe how they interact. breve includes physical simulation and collision detection so you can simulate realistic creatures, and an OpenGL display engine so you can visualize your simulated worlds.

I guess the on-topic bit is this:

breve simulations are written in an easy to use language called steve. The language is object-oriented and borrows many features from languages such as C, Perl and Objective C, but even users without previous programming experience will find it easy to jump in.

but frankly steve is probably not so interesting in itself; rather breve is just a lot of fun to play with.

Apparently, several AI/robotics groups use breve for their research, but it's very easy to get started and comes with a simple IDE and several demos, of which the most interesting is undoubtedly the Walker, a genetic algorithm which evolves four-limbed walking "plates".
