What do you mean by studying "programming languages"?

Reading some of the recent threads, it occurred to me that there's room for a new thread in the spirit of the Getting Started thread and threads like Explaining monads and Why type systems are interesting.

The issue I want to raise is what the focus of the study of programming languages is. By this I mean: what is the thing (or things) programming language researchers study? At the moment I do not want to discuss what should be done, but rather to orient outsiders about the actual practice in the field.

Specifically, my aim is to explain which aspects of languages are studied, to let people know what these research areas are called, and perhaps where to find more information about new results. I am not thinking about specific lines of research, but rather about the kinds of things researchers study.

My intention will hopefully become clearer after you read my answer to the question in the title of this post:

Most of the time, programming language theory (PLT) is concerned with the semantics (i.e., meaning) of specific programming constructs. Much less prominent research is done on other aspects, such as syntax, implementation and runtime issues, and languages as wholes (compared to the study of individual constructs).

This is, of course, a very incomplete answer to the question. Let the discussion begin...

Equivalences

An important aspect of research in programming language theory is the notion of equivalence between specifications of a language. Among other uses, such equivalences may permit developing distinct yet fully compatible interpreters, compilers, schedulers, garbage collectors, etc.

Researchers are also interested in equivalences between programs, and in the soundness and/or completeness of a program with respect to a specification. Among other uses, such equivalences permit the development of provably safe optimizations, as in the sketch below.
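
To make the flavor of this concrete, here is a minimal sketch in Scheme (the names twice-mapped and once-mapped are invented for exposition). Assuming f and g are pure, the two procedures below denote the same function, so a compiler that can prove the equivalence may rewrite one into the other -- the classic "map fusion" optimization:

    ;; Walks the list twice:
    (define (twice-mapped f g xs)
      (map f (map g xs)))

    ;; Walks the list once:
    (define (once-mapped f g xs)
      (map (lambda (x) (f (g x))) xs))

    (twice-mapped (lambda (x) (* x x)) (lambda (x) (+ x 1)) '(1 2 3))
    ;; => (4 9 16), the same as (once-mapped ...)

No program context can tell the two apart (for pure f and g), which is exactly the notion of observational equivalence such proofs rely on.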

===

Is that along the lines of what you want?

Aspects of Languages

I was thinking more along the lines of which aspects of languages are part of the science of PLT. Programmers know their languages of choice well, and it's worth pointing out that not all aspects of languages are studied by what is commonly called PLT.

However, the kinds of things you mention are also worth discussing, of course.

And one more thing: let's try to give concrete examples ("We study binding. Here is a let expression in Scheme, which binds x..." etc.).
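
To start the ball rolling with the example I just suggested, here is the binding case spelled out (a minimal sketch):

    ;; A let expression in Scheme, which binds x:
    (let ((x 2))
      (* x x))                   ; => 4; x is visible only in the body

    ;; Semantically, let is just sugar for applying a lambda:
    ((lambda (x) (* x x)) 2)     ; => 4

Researchers studying binding ask questions like: what exactly is the scope of x, and how do we give a precise account of constructs like let (here, by desugaring to lambda)?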

Aspects of Language Work

Right, so while my verbal diarrhea is running away with me: it seems to me that work on programming languages in general breaks into a handful of categories. Not all are necessarily theory, but I'll lay them out here.
  • Raising the level of abstraction available to the programmer. This path starts in... what, Babbage's engine? But we can perhaps jump to assembly, and then gradually add compiled languages, functional abstractions, heaps, garbage collection, objects, matrices and matrix operators, iterators, collections, ... (in varying orders depending on which branch of the programming-language family tree you traverse, of course.)
  • Managing state, and/or trying to get along while pretending that there is no state (only Zuul.)
  • Managing concurrency and parallelism, particularly when you're allowing as to how there might, actually, be state. This covers a range from database-style transactions (Atomicity, etc.) to multithreaded programming (under various memory consistency models, even.)
  • Providing fault-tolerance. (Not sure how much this is language vs. algorithms, but what the hell.)
  • Statically proving things about how a program will behave when it runs.
  • Verifying that a program adheres to certain conventions, be they classical data typing, information flow control, etc. Typically, the theorists are interested in what they can prove statically, yes?
  • Proving things about irritating logic puzzles accidentally created by programming language designers, e.g. showing that C++ template expansion at compile-time is actually Turing-complete.
Hm. Some of my biases may be showing. Anyhow, I don't know that I've given examples as detailed as you'd like for every category, but perhaps people who know more about each individual category will have some ideas. And perhaps we can talk about whether things like studying different models of shared-memory consistency, say, fall under the PLT rubric, or somewhere else...

Added a few minutes later, with a Duh! and a smack to my own forehead:

  • Efficient implementation!
  • Proving that some construct is fundamental, in that it can be used to build a number of other popular constructs: e.g., the original lambda-the-ultimate papers; continuations and continuation-passing style (a small Scheme sketch follows); and single-instruction Turing-complete architectures.
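
For the continuations item, a minimal sketch of what "continuation-passing style" means (fact-cps is an invented name): every procedure takes an extra argument k, the continuation, which receives the result instead of it being returned.

    ;; Direct style:
    (define (fact n)
      (if (zero? n) 1 (* n (fact (- n 1)))))

    ;; Continuation-passing style:
    (define (fact-cps n k)
      (if (zero? n)
          (k 1)
          (fact-cps (- n 1) (lambda (r) (k (* n r))))))

    (fact 5)                        ; => 120
    (fact-cps 5 (lambda (r) r))     ; => 120

Part of why the construct is "fundamental": once every control transfer is an explicit continuation, features like exceptions, backtracking, and coroutines can be built on top of it.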

Is there a unifying theme?

There are lots of interesting languages out there with lots of PLT influence, but the four that are most interesting to me are Haskell, ML, Scheme, and Oz. Each of these languages (or language families) is quite heavily influenced by PLT, but each has a very different emphasis in the parts it explores.

Haskell seems to be the state of the art in lazy functional programming; Scheme is at the forefront of any new way of thinking about programming paradigms; ML is quite conservative and emphasizes mathematical guarantees. Oz comes from the logic-language community, with a heavy dose of constraint programming.

Now, unless you're an Oleg who can master both Scheme and Haskell, or a Gert Smolka who can be as comfortable with ML as with Oz, the rest of us have difficulty being proficient in more than one of these major PLT-based languages. So it can be hard to see whether there is a common body of PLT that rises above these quite distinct languages.

My uneducated guess would be that the major thrust of PLT is concerned with types (whether manifest or latent). But that's probably just because I've spent more time with ML lately. And I'd venture a guess that PLT really goes beyond any particular language implementation, being a general theory of programming languages. For those of us in the consumer class of PLs, it can be difficult to discern what all these languages have in common.

Simplistic answer:

They break new ground?

Just like natural languages and mathematical languages, programming languages shape thought. One could say that designing new programming languages and studying their properties is all about removing thought barriers.

For instance, without automatic garbage-collection, programs must be in great part designed and structured around memory management. Since garbage-collection appeared, we know that there are other, often better, ways we are allowed to think about a program.

Similarly, modules and abstract data types, as seen in ML or (slightly differently) in numerous object-oriented languages, are in great part about modularisation of code. They give us reasoning/design tools for abstracting away parts of the program, hence hopefully making us more efficient at working in groups.

At least, I guess that's one way of looking at this.

Motivation

Thanks, Ehud. Great topic!

I am new to PLT, so I have a question for those who post responses here. In my area of expertise (computer architecture), research has to be strongly motivated. For example: "using this branch predictor we get a speedup of 20% on some benchmarks," or "using this technique we reduce simulation time from 1 week to 1 day." The point is that every conference and journal paper devotes anywhere from several paragraphs to a whole section to why the research described is important. This is often crucial to getting it published.

What is the motivation for studying a particular aspect of PLT? Is motivation important? From the few papers I've read, it's often unclear why one would want to do what is described. I'm sure some things are simply understood by the community and so nobody repeats them. Please repeat them here though. :)

asking why

Hi Sean. I was struck by your question "why," since I've been thinking about that a lot lately. At work over the last year or so (at two different employers) I've noticed that "why?" is often asked in a way loaded with aggression, because the asker usually means "why the hell did you do that??" For some reason, it's rarely a request for information; it's a challenge that one has been wasting time on unnecessary issues.

You can see why I relate to the context you cite, where justification is necessary for research. When I'm asked "why?" at work, the other person nearly always implies that one of two equivalent approaches might be wrong, even when -- as always -- they are equivalent. But "wrong" merely means not to their taste. Often it's asked to question a choice of syntax, or even a choice of language.

(At Akamai, the chief architect often asked "why did you write that in C++ instead of C?" when the same thing could always be done in either language, of course. My answer was usually, "Because I was told we wanted more C++." And he'd say it wasn't "necessary" and order conversion to C. When comparing two approaches in two different Turing complete languages, it's odd when a person implies one language should only be used when necessary, as if only one could yield some effect.)

When someone asks "why?", they nearly always imply a shared set of standards for weighing values, so that X is better than Y for some choice of X and Y. But that's seldom the case between two coding practitioners. Folks typically differ (dramatically) on desired levels of simplicity, abstraction, indirection, syntax verbosity, and a lot of other personal cognitive preferences. All of these might affect maintainability, code longevity, and stability; but judgment on this is highly subjective.

In the hardware field, cost/benefit analysis might be easier to perform. "More expensive" is easier to measure if it is manufacturing cost. And "faster" is easier to measure if a benchmark will capture it. But in software, cost and benefit are both hard to measure, and subjective opinion differs enormously, often depending on basically religious principles.

In software, cost is often a function of what happens in a developer's mind, either when first creating software, or when updating it later, or when sharing it with other developers, with attendant effects in their own minds. Cost is whatever makes software slower to write, buggier to run, or harder to move forward through evolutionary changes. Measuring this is nasty, and the comparisons that would drive an answer to "why?" can seldom be justified between two developers.

So when one developer asks another "why?", often they mean, "Why didn't you write this exactly the way I would have written it?" The answer is typically, "Because I'm not you." So the asking person is playing a dominance game, basically, but disguised as rational discourse.

p.s. (Folks might remember me from my treedragon site, when I still went by David McCusker instead of Rys McCusker. I think this is the first time I posted here.)

I think this is a quite good

I think this is a quite good example of how humanistic and non-technical PLT really is.

Asking !why is important, too

Thank you, Rys, for this discourse on the human side of computer programming from an obviously experienced point of view.

So when one developer asks another "why?", often they mean, "Why didn't you write this exactly the way I would have written it?" The answer is typically, "Because I'm not you." So the asking person is playing a dominance game, basically, but disguised as rational discourse.

This is unfortunate, but I can see how it is true. Personally, in my graduate work, I have come to see the importance of understanding why something is done. To me, understanding the "why" tells me how something is useful, where it fits in the jigsaw puzzle, and what I could do with it. Perhaps it comes from reading too many papers in my field, but I need to be motivated before I can come to acknowledge any proposed idea. Too many things haven't worked (even though they were published) and it takes some experience to know if others will succeed or fail. Computer architects tend to take on a very skeptical (sometimes even negativist) attitude due to this fact.

What I would like to know is, do these dominance games or religious battles occur even in PLT research? Yes, we see it all the time in popular literature, but I haven't noticed it too much in the papers posted on LtU. Well, except for the static type-checking vs. dynamic type-checking battle perhaps... ;)

The Y of Why

Great question. It's amazing how many years you can devote to something without stopping to ask why...

It is easier to answer the why question about specific research projects. For example, concerning a recent paper I mentioned on the home page, you might say that "aliasing causes bugs, and linear types might provide part of the solution."

Answering the why question regarding the entire enterprise of PLT (or any other field) is much harder. My personal answer would be that (a) the subject is intrinsically interesting and worth studying (the same way fundamental science is), and the study is intellectually rewarding; and (b) there are many reasons to think that sound PLT can improve both the experience of programming and the quality of software.

Sapir-Whorf

While I personally am strongly in the camp that opposes the Sapir-Whorfian hypothesis, I believe that like most strongly-held beliefs, there is an aspect of truth to it that ought not to be overlooked. Surely it is possible for humans to "think" visually, musically, spatially, etc. But what verbal thinking allows is something special. The essence of programming is names and values. So what's in a name? A *lot*. A name, or a word, is a social contract regarding concepts. A name has meaning when we agree that it refers to particular mental states that we have. Whether those mental states correspond to visual stimuli, aural stimuli, somatic stimuli, or imagined stimuli doesn't matter, which is exactly what makes words powerful. What the word does is give us a tag for a potentially complex state so that we don't have to describe that state in terms of its components. In a word, a word is an abstraction.

Jargon exists because it is useful for practitioners in a field to name commonly recurring constructs with a simple tag. Imagine if we had to spell out what "recursion" means every time we referred to the concept. The very tedium of doing so would slow down any effort to carry on a discourse about the subject. When we coin new words, we are creating new building blocks of thought, in the sense that we are creating new social contracts regarding concepts that we believe are somehow fundamental due to the regularity of their appearance. Being able to say that a problem is NP-complete is much more convenient than having to say that it is isomorphic to TSP, a very hard problem that (as far as we know) cannot be solved in time polynomial in the problem size, although its solutions can at least be checked within that time frame. It makes talking about computational problems much easier, and papers can be tens of pages instead of hundreds.

When it comes to PLs, the "words" of the PL are not the identifiers, but the symbolic constructs that actually do the heavy lifting. A dot (period) in C doesn't do nearly as much as a dot in Haskell, all contexts considered. When we "expand our vocabulary" by adding new intrinsic operations (words), we are recognizing patterns of regularity that should be encoded into the language itself, rather than as patterns expressed by the language (words, rather than paragraphs). In some sense, our ability to think is expanded, because we can now address the relevant concept by a word rather than a sentence or paragraph. The language's ability to express solutions is correspondingly expanded.
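
As a concrete (if hypothetical) illustration: in Scheme, coining a new "word" is something the programmer can do directly with the macro system. Suppose we keep writing (if (not test) ...) everywhere; we can name that pattern once and for all (unless* is an invented name, chosen to avoid clashing with any built-in unless):

    (define-syntax unless*
      (syntax-rules ()
        ((unless* test body ...)
         (if (not test) (begin body ...) #f))))

    (unless* (= 1 2)
      (display "one is not two"))

After the definition, unless* is a single chunk of vocabulary rather than a pattern we must re-derive at every use site.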

So I would argue that at the highest level, the purpose of PLT is to add new "words" to the metaphorical problem-solving dictionary, because it enhances the ways we can talk about things, and that allows us to express more powerful solutions in a space that our puny brains can deal with. It is well-known that our brains seem to be able to deal with "chunks" of information in short-term memory on the order of 6-8 chunks at a time. What is surprising is that this limit appears to be independent of the nature of the chunks themselves. So one can view abstraction as the brain's "hack" for getting around this physiological limitation. By making each chunk more semantically loaded, we are able to climb the ladder of sophistication with a finite working memory. Perhaps also this is why we have a sense of "mathematical" beauty that recognizes economy and elegance.

Indeed, the words we speak in today's generation of languages are very powerful compared to the words of languages just 50-60 years prior. And if software is to progress as a science, we need to continue identifying new patterns and naming them in powerful ways, just as physicists and chemists identify patterns in space and matter, and name them. The act of naming is powerful. It is a way for the brain to "capture" a wild beast in the space of the mind. That beast may be a shape-shifter, shimmering in different colors, and lead us to give it several names at first. But over time, we rein it in, see it for what it is, and give it an even more powerful name, to increase our dominion over it.

Different names capture different patterns. Not all patterns are universal. Not all patterns are relevant to all practitioners. That's why we have different PLs. But the justification of PLT as a whole is that there are patterns out there that are general enough to be captured and tamed. And when we bring new words into the fold, our speech becomes more powerful, and we can say much more with less.

Very nicely put

I especially liked the last sentence. ;)

I think this is very true. It's what has interested me in Haskell (which then brought me to LtU). To be able to write a program in clear, concise vocabulary; to force myself to express ideas in different ways; to think outside the box....

Thanks again, by the way, for that response on abstract algebra in a previous thread. It helped me find some direction.

Wh?, Wh*, or Wh_

It's amazing how many years you can devote to something without stopping to ask why...

Always happy to challenge the status quo!

My personal answer would be that (a) the subject is intrinsically interesting and worth studying (the same way fundamental science is), and the study is intellectually rewarding [...]

I've found that the unfortunate thing with computer architecture is that, though there are a great many intrinsically interesting ideas that could be studied, the selection is limited to those useful to the industry. The industry has, in effect, determined that some things just will not survive (or even get published), whether due to market forces or simple disinterest.

PLT does not appear to be so tightly bound. Problems (such as aliasing or encapsulating I/O) may be derived from industry experience, and solutions may be proposed by research; however, (as Rys so clearly wrote) there may be multiple solutions for multiple practitioners. The industry may or may not pick up on any given solution, but that doesn't necessarily correlate with it being interesting or useful. I like that PLT has this flexibility.

and (b) there are many reasons to think that sound PLT can improve both the experience of programming and the quality of software.

And this attracts me in that it incorporates logical reasoning and mathematics into a practical environment (for me, the engineer at heart).

why

The why of PL research...

Why? No silver bullet, that's why.

If one agrees with "No Silver Bullet" (and I personally do), then one must anticipate that advances in the ways we make software will come only by a series of hard-won, incremental improvements. The programming languages we use shape the way we think about problems. Presumably, a better programming language can help us think about problems in a better way, which leads to shorter development times, or more secure software, or some other desirable outcome. I see PL research as doing a kind of semi-directed search through the PL design space to see which techniques a) are good frameworks for reasoning about problems and b) lend themselves to providing desired outcomes.

Sapir-Whorf hypothesis and Industry

It's nice to learn a new term. Having worked for years in description logics, I guess I'd also question the hypothesis, but I'd be curious to hear David's reason for opposing it. We may reason visually, musically, and so forth, but words are what seem to run through the brain when dealing with others and communicating ideas.

The use of jargon and new words for building more powerful abstractions is important in programming. In my mind, languages like Haskell are focused on getting programs correct by using ideas from universal algebra, type theory, and category theory. Although there's been great work in Haskell in terms of building up DSLs and so forth, languages like Scheme, with its macro system, seem more powerful in this regard. We want to write programs that are correct, but we also want to build large systems. I think programming is more or less mathematics -- or, more accurately, mathematical foundations -- and it's worth studying by itself.

Industry, finance capital, and markets seem to test what's useful, whatever that test and "useful" mean. The economics of hiring programmers of different skill sets and building products for use by end users seems to drive what the market deems useful in terms of programming languages. Programmers who build tools and frameworks for other programmers also play a role. There was a recent thread over on LessCode around these ideas.

I'm reminded of the old saying that "You can't do computer science without a soldering iron". Would this topic even exist without the computer having come along? Would foundations have remained a sort of dead end full of gnarly problems we'd prefer to ignore?

My original question

Was about the stuff PLT professors really do...

Anyone feel like contributing an answer to the question I posted? ;-)

Audience

Perhaps there aren't enough professors here to give a useful answer? :(

actual research patterns

Sorry, I think I contributed to the off-topic drift instead of sticking to what professors actually study.

Ehud Lamm: Much less prominent research is done on other aspects, such as syntax, implementation and runtime issues, and languages as wholes (compared to the study of individual constructs).

But I wish these things were actually studied more. :-) I was very excited by the Smalltalk blue book on the VM in the early '90s, as well as other books on the mechanics of just getting the details to bootstrap and go. Why isn't this sort of thing a study priority?

When I have noticed PL research, it has often struck me as rather more math-bound than I'm comfortable with. (Part of the reason for this is that I'm a spatial geometer type math guy as opposed to a verbal algebraist type of math guy; professionals seem to be more of the verbal/algebraic sort.)

Type theory almost always seems that way to me. I get lost reading ML source when dispatch types must be inferred from patterns. I guess I don't like deduction much in programming languages. I like literal languages favoring do-exactly-what-I-say. I like to see what patterns you can get to emerge from that starting point.

Is there any tendency to favor research topics that are harder for laymen to understand, because it increases the factor of eminence due to exclusion? (This reminds me of research into writing styles showing that folks are perversely likely to grade writing as better if it is obtuse rather than clear.)

Programming languages will shape the future of humanity

It's a bold statement, but I find it true: good programming languages will allow for better programs, which in turn will allow for better research, a better economy, better weapons -- and thus they will shape the future. That's why studying programming languages is important. Let's not forget the rate of progress in the 20th century, thanks to electronics.

The question of semantics of

The question of the semantics of programming languages, as practiced in research, seems to be a matter of looking for the "mathematical" or "formal" explanation behind some programmatic expression. Programs are often seen as carrying out, or simulating, such explanations. Having such a "formal" or "mathematical" explanation naturally means that one is in a position to ask questions and to derive properties based on the theory one is using.

There seem to be several possible theories in practice. Category theory and its use in the Haskell language has evolved into one main thread. The other main thread seems to be logic, in all its many forms. On a certain level it is not surprising that Prolog and Lisp have so much in common except appearance. As another example, it should not be too surprising that the Oz language, derived from logic programming, looks so much like other programming languages. It is really a matter of how the logic is expressed (i.e., whether we use rules, functions, or procedures). Category theory and logic seem quite different, but I suspect that they can be related given enough skill in the matter. I am not a professor, but this would be my take on what is going on.

Category theory and Logic

They actually are quite closely related, and come together in fields like categorical logic, topos theory, and the theory of fibrations. A key idea is that objects in a category are collections of equivalent proofs, and morphisms are functions between the proofs. Relevant authors you can google are Lambek and Scott, Johnstone, Bell...

What is the study of programming languages?

My own formulation of an answer can be expressed as three questions that, taken together, drive my interest in programming language theory:

1) What are the fundamental elements of computation?
2) What are the fundamental elements of programming?
3) What kinds of languages can be constructed to economically, clearly, and completely express the elements uncovered in 1) and 2)?

Studying programming languages

Programming languages are APL, SNOBOL, FORTH, Prolog, Haskell, AWK, shell, Ada, Occam, etc. You study a programming language by writing programs with it and developing an appreciation for its character. An expert in programming languages is someone who can write fluent, accent-free programs in a lot of different languages.

I have no idea what the rest of you are talking about. :-)

Infinity and beyond!

Although I believe there is much to be learned by mastering specific programming languages, I don't think this is where all the value is. Programming language theory should be about going beyond specific instances.

Think of an algebraist who knows how to operate in various algebraic systems, but has never noticed that there is an underlying theory that concerns all structures that are groups, or rings, or whatever. He would be missing a lot of what lies deep in the world of algebraic structures.

Going beyond specific systems and noticing their underlying theoretical principles allows us to go deeper and look farther. It's another instance of not losing the forest for the trees.

But...

...where did the programming languages come from? Why are they the way they are? How can we describe them? How can we define them? How can we make them better? How can we compare them? Now that's what I'm talkin' about. ;)

Amen!

From the Foreword of "Essentials of Programming Languages," 2nd ed.:

"This book brings you face-to-face with the most fundamental idea in computer programming:

The interpreter for a computer programming language is just another program.
It sounds obvious, doesn't it? But the implications are profound. If you are a computational theorist, the interpreter idea recalls Gödel's discovery of the limitations of formal logic systems, Turing's concept of a universal computer, and von Neumann's basic notion of the stored-program machine. If you are a programmer, mastering the idea of an interpreter is a source of great power. It provokes a real shift in mindset, a basic change in the way you think about programming."
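
To make the quote concrete, here is a minimal sketch (my own, not EOPL's code) of an interpreter for a tiny lambda-calculus-with-addition language, written in Scheme. It really is just another program:

    ;; Expressions: numbers, symbols, (lambda (x) body), (+ e1 e2),
    ;; and applications (f arg).  Environments are association lists.
    (define (eval-expr exp env)
      (cond ((number? exp) exp)
            ((symbol? exp) (cdr (assq exp env)))   ; unbound => error
            ((eq? (car exp) 'lambda)
             (list 'closure exp env))
            ((eq? (car exp) '+)
             (+ (eval-expr (cadr exp) env)
                (eval-expr (caddr exp) env)))
            (else                                   ; application
             (apply-closure (eval-expr (car exp) env)
                            (eval-expr (cadr exp) env)))))

    (define (apply-closure clo arg)
      (let ((lam (cadr clo)) (env (caddr clo)))
        (eval-expr (caddr lam)                      ; the body
                   (cons (cons (car (cadr lam)) arg) env))))

    (eval-expr '((lambda (x) (+ x 1)) 41) '())      ; => 42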

Great quote

I totally forgot about it. I should add it to the quotes page...

You are de man

Hear, hear.

In addition, the reason you study them is because they are cool and they make you (personally) more productive.

But this aspect of the question wasn't the intention of this thread.

The reason I posted this question was that it seemed to me that some people don't realize that the personal reasons, and the kinds of tricks you and I love about programming, are not the same thing as programming language theory.

People not following the academic papers don't always realize that academic research is rarely about questions such as "is multiple assignment easier in Java or in Python?"

I am not dismissing practical questions. I just wanted to make the distinction clearer. There's a real gap, and conversation would probably be easier if this gap were known and accepted.

Then maybe the headline should...

Then maybe the headline should be changed from "The Programming Languages Weblog" to "The Programming Language Theory Weblog". To a newcomer, the current headline is a source of confusion.

This isn't a meta discussion

This isn't a meta discussion about what LtU is. It is a specific discussion of one very important subject we cover, namely PLT.

P.S

Being a contributing editor, Olivier, you can help set the agenda! Simply post something to your liking... ;-)

Headline

LtU is not supposed to be just about theory. However, given that there is a large, active and successful body of PL theory, which has had a great influence on programming languages and the practice of programming over the years, discussion related to programming languages which takes place without being at all grounded in theory is somewhat suspect, to say the least.

So perhaps the headline should be something like "The Programming Languages Weblog, with Associated Discussion Informed by Theory". Doesn't exactly roll off the tongue...

Heart of PLT? or PLT "Areas"? or both?

If the question is "what is at the heart of PLT?", then maybe Ehud's suggestion about a cross between "Explaining Monads" and "Getting Started" could begin life as something called, let's say, "What Programming Really Is", "Why Lambda Calculus Matters", "Programming with Lambda Calculus", or "How the Computer Knows What to Do with Your Program". It could consist of (1) a simplified intro to material covered in, e.g., Paul Hudak's review of FP, and (2) examples implemented in actual PLs (Scheme and Haskell come to mind, but I'm sure you could show examples in Python or other languages with lightweight, clean syntax), in the spirit of the "principles to practice" approach offered by SICP or the early Lambda papers by Steele and Sussman.

On the flipside of the question, "What do you mean by studying PLs?" there is the question, "What do people who study PLs study?"

One way to answer that question is to try to classify Programming Language Research Areas (both those areas active now and those that got PLs where they are today). I don't know a lot about PLT (I'm not a researcher, just a "fan") but I've read enough papers to see patterns in where the papers are published and what organizations the researchers build their own communities around.

For example, I could list some official-sounding titles I've come across in my reading, and then maybe researchers in the various fields can extend the list (here are just some ideas that could maybe generate further brainstorming):

  • ACM SIGPLAN (the ACM Special Interest Group on Programming Languages) and ACM Transactions on Programming Languages and Systems (TOPLAS) --- what groups, areas, fields, and publications do these encompass? e.g., the ACM Symposium on Principles of Programming Languages (POPL)
  • JFP (Journal of Functional Programming)
  • ICFP (International Conference on Functional Programming) --- what workshops are put on by ICFP for example...?
  • FPCA (Functional Programming Languages and Computer Architecture)
  • ICDCS (International Conference on Distributed Computing Systems, IEEE)
  • . . .

(By the way, if this approach is not a useful way to answer the question: the posts above have definitely helped explain the inner motivations and mathematical connections behind why we are all so interested in PLs, but few have said what cool new things PL researchers are doing (besides the general "semantics of programs" or "type theory") --- and there are many different things PLT researchers do besides just programming! What are these things and why are they important?)

I was thinking about the second...

I was thinking about the second formulation when I posted the thread, but both are OK, I guess...

Who's doing what

In the spirit of naming some specific people and their areas of study, I'll offer a few names with which I happen to be familiar to a greater or lesser degree. Then everyone can argue about whether they're practitioners of PLT, or too engineering-focused to qualify for the theory label.

  • Prof. Barbara Liskov at MIT (http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/l/Liskov:Barbara.html) has published on computer language issues since the 1970s. A quick glance through her bibliography shows work on everything from object models to fault tolerance in distributed systems to typing systems. (If I recall correctly, she invented or co-invented the object abstraction as we know it today --- I think it was initially called an abstract data type.)
  • Prof. Andrew C. Myers at Cornell came out of Prof. Liskov's group at MIT. The work of his that I've read has to do with controlling information flow through programs (again, typing-related work) although a quick glance shows interests in additional areas.
  • Prof. Guy Blelloch at CMU has worked on algorithms and programming languages, particularly aimed at parallel computers. I'm personally a fan of the NESL language.
  • Prof. David Gifford at MIT seems to be working on computational biology these days, but in the past did a lot of work on programming languages.
  • Butler Lampson, an adjunct professor at MIT and a Distinguished Engineer at Microsoft, has worked on all kinds of stuff ranging from information flow control (A Note on the Confinement Problem is a classic!) to transaction processing to, well, read the bio.
  • Prof. Arvind at MIT did a ton of work on dataflow languages and systems.
  • Prof. Eliot Moss at UMass Amherst has done a pile of work on programming languages as well, including some very interesting stuff on transactions and concurrency.
As an MIT escapee, I could go on and on with the MIT-related people --- Prof. Charles Leiserson's group (his homepage seems to be inaccessible today) produced Cilk, which is a big favorite of mine for practical usability and clean demonstration of the utility of randomized work stealing. And so on.

Is this vaguely what you were looking for when you started the thread?

Jeremy

PS Right, first-time post, so my background: I escaped MIT with my PhD in '02, worked at a robotics company for three years, recently left the company, and am now figuring out what the next thing will be. People here might be amused or appalled by the Advanced Scheme tutorials I ran during MIT's January session a couple years back.

ADT != OO

There was a nice discussion about this in http://lambda-the-ultimate.org/comment/reply/1067/11335. In summary: in OO, data and operations live together with mutable state; in an ADT, data and operations are separate, and the data is stateless.

ADTs can have state

ADTs can have state. I would say that objects are an extension of the ADT idea, in which:

1) types (interfaces) can have multiple coexisting implementations (classes)
2) types can extend (subtype) other types
3) type implementations can be constructed by inheriting from and overriding other implementations
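
A minimal closure-based sketch in Scheme of these two points (make-counter and make-counter-object are invented names). First a stateful ADT: a hidden representation plus one operation. Then the object flavor, which adds message dispatch, so several "methods" sit behind one interface and alternative implementations can coexist:

    ;; A stateful ADT: the representation n is hidden in a closure.
    (define (make-counter)
      (let ((n 0))
        (lambda () (set! n (+ n 1)) n)))

    (define c (make-counter))
    (c)          ; => 1
    (c)          ; => 2

    ;; Object style: dispatch on a message symbol.
    (define (make-counter-object)
      (let ((n 0))
        (lambda (msg)
          (cond ((eq? msg 'inc) (set! n (+ n 1)) n)
                ((eq? msg 'get) n)
                (else (error "unknown message" msg))))))

    (define obj (make-counter-object))
    (obj 'inc)   ; => 1
    (obj 'get)   ; => 1

A second, differently represented counter could answer the same messages, which is point 1) above.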

Who's Who

Actually, a page with a Who's Who of PLT, listing the big names in each major area of study might be useful to people looking for more information on topics. It might also serve to give a high-level perspective of PLT by showing which areas get the most attention (though that might be more accurately determined by looking at the number of papers published).

Web Pages for Programming Language Research

Expressiveness

Hmm, I don't have a very good answer to Ehud's question, although given his original statement I would be compelled to just give an overview of basic CS theory of computer languages. I had another half-baked thought that popped into my mind, though.

Somehow, the study of programming languages seems to relate to the study of the expressiveness of computer languages, or of the expression of ideas in them. What can and can't we express? How can we express it? Why do we express something in a certain way? How do we put constraints on what we express? ...

It seems to stand to reason, doesn't it? What else is a language for than to express ideas? Or am I spacing out?

EDIT: I just read some other posts; I must have picked up a meme on this blog, since there are actually a rather large number of posts that mention expressivity explicitly.

2nd derivative of expressiveness

Everything I've seen, from Dijkstra to Wadler, seems to me to be fundamentally concerned with increasing the increases in expressivity.

Being a run-of-the-mill corporate programmer, I've always felt more like I was fighting the programming languages I've used rather than working with them; hence my lurking here. I'm no researcher in PLT, but a novice trying to wade through things that are too complex for what passes as my initial education. The thing that's driven me through this frustrating madness the whole time is expressivity. We can make machines, say a two-bit adder, that do something. Then we can layer on abstractions until we get to the "zipWith" function (a small sketch follows at the end of this post). That's a big win. Thus, I think that PLT really derives from two questions:

1. How do/can we gain expressivity over what already exists (the 2-bit adder)?

2. How do we make the new expressivity actually practical to use, given what we currently have to work with?

(Then, perhaps, there's the meta-question of "what is expressivity and how do we adjudge something to be an increase in it?" Sort of an "I know it when I see it" thing, I suppose.)

Type systems, for example, enable us to express facts or constraints about what it is we manipulate with our programs. Similarly with Pi-calculus. All these things, at least to me, fall into this one kernel.

Otherwise, we wouldn't have programming languages; we'd have clever arrangements of switches.
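
For what it's worth, here is the zipWith-style win spelled out as a minimal Scheme sketch (zip-with is my own name for it): built from nothing but lambda, cons, car/cdr, and recursion, yet it packages a whole family of loops into one reusable word.

    (define (zip-with f xs ys)
      (if (or (null? xs) (null? ys))
          '()
          (cons (f (car xs) (car ys))
                (zip-with f (cdr xs) (cdr ys)))))

    (zip-with + '(1 2 3) '(10 20 30))   ; => (11 22 33)
    (zip-with * '(1 2 3) '(10 20 30))   ; => (10 40 90)

Every layer from the two-bit adder up to this is the same move: capture a recurring pattern, give it a name, and never write it out by hand again.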

ACM classification scheme

The ACM Computing Classification System might be a useful starting point.

PLT research is usually found in the following categories: D.1, D.3, F.3, F.4 (esp. F.4.1, F.4.2), and H.2.3.

I mention this just as useful reference. These are subjects that are of interest, whether the papers are actually so classified or not.

If I missed an important category let me know...

A taxonomy of programming environments and languages

You just reminded me to look up ACM Computing Surveys and, lo and behold, look what I found!

Lowering the barriers to programming: A taxonomy of programming environments and languages for novice programmers (14MB PDF).
Caitlin Kelleher, Randy Pausch. ACM Computing Surveys. Vol. 37. No. 2. Jun 2005.

Since the early 1960's, researchers have built a number of programming languages and environments with the intention of making programming accessible to a larger number of people. This article presents a taxonomy of languages and environments designed to make programming more accessible to novice programmers of all ages. The systems are organized by their primary goal, either to teach programming or to use programming to empower their users, and then, by each system's authors' approach, to making learning to program easier for novice programmers. The article explains all categories in the taxonomy, provides a brief description of the systems in each category, and suggests some avenues for future work in novice programming environments and languages.

Front page material?

Lowering Barriers vs Extending Reach

Thanks for the link. It looks very interesting and I vote for it being promoted to the front-page.

I suspect that most of the readers of this site aren’t the types who initially had difficulty learning to program, but they should still be interested in lowering barriers.

Lowering barriers means taking someone who couldn't program at all and letting them start programming; but it could also mean taking someone who could already program (maybe even very well) and enabling them to solve problems they previously couldn't. I think that some of the techniques used to lower barriers for new programmers could also be used to extend the reach of more experienced ones; and in the end, I think that's what this site is all about.