Amusing question

Take a look at this crooked timber thread. How well does this phenomenon apply to our field? Why did you decide PLT is cool? Was it because of the writing from a specific school of thought (let's include papers as well as books)? Did they give you the right impression of the field?

SICP and GEB

I found two dusty, hidden treasures in the stacks of my undergraduate library: Structure and Interpretation of Computer Programs and Goedel, Escher, Bach: an Eternal Golden Braid. After that I was hooked.

I wouldn't say that they necessarily give the right impression of the field; they are both too optimistic. But I've continued to follow closely in the IU tradition since then.

I'd Barely Say...

...that GEB was about the field of computer programming at all! Well, OK, extremely indirectly, maybe.

I was going to say that I

I was going to say that I loved GEB when I first read it as a boy, but it never occurred to me to think it had something to do with my fascination with programming languages...

(By the way, when I wrote my original question I was thinking about things like EOPL, papers on the "algebra of programming", "The Design and Evolution of C++" and so on)

Who said anything about computer programming?

GEB was a fairly direct examination of languages of all kinds, which included formal systems in general, and mathematical logic and programming languages in particular. It examined how languages express meaning, getting into specifics such as recursion, and how one formal system can be embedded in another. It also included a look at the concept of languages that can express their own meaning. I'd say GEB and PLT are quite closely related. It could be considered as a preparation for a study of formal semantics. GEB is not about computer programming in almost the same way that LtU is not about computer programming.
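One of the GEB themes mentioned here, a language that can express its own meaning, has a direct programming-language counterpart in the quine: a program whose output is its own source. A minimal sketch in Python, using one of several standard formulations, offered purely as an illustration:

```python
# A quine: running this program prints its own source code exactly.
# The %r conversion inserts the string's own repr, and %% escapes the
# literal %, closing the self-reference loop.
s = 's = %r\nprint(s %% s)'
print(s % s)
```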

Tongue Mostly in Cheek

First, if I haven't made it clear before, GEB was an enormous influence on me as well, and as the luck of history would have it, I went to IU and have very fond memories of hanging out at Lindley Hall and bugging Daniel Friedman and Douglas Hofstadter as much as I could get away with. :-) But when I first read GEB in high school, I couldn't at all see the parallels to what I was doing with Z-80 assembly language and Tiny Pascal on my Model I TRS-80. Picking up "The Little LISPer," SRA edition, clarified things a bit, and once I got to IU, things got clarified radically.

But if you look at the categorization of GEB, it's labeled as "Philosophy." I remember this striking me as strange as a teen-ager, although it was helpful in providing the impetus to overcome a dysfunctional distrust of the field of Philosophy. I didn't think of logic as a branch of Philosophy, but that doesn't keep Raymond Smullyan from being in IU's Philosophy department. So I think my somewhat tongue-in-cheek remark about GEB and programming languages is fair, in some sense about which a consensus could be reached.

But the observation that GEB is not about computer programming in almost the same way that LtU is not about computer programming strikes me as extremely apt. It seems to me that we are engaged in Philosophy here. We hope that our musings have practical value, and I think a great many of us insist—I certainly do—that the design and implementation of programming languages needs a philosophical basis in order to be successful in modern terms. I add the qualifier because there was a time when you could construct a language without any particular philosophical basis and still obviously fill a void, and therefore be successful. But I think an arbitrary new language now that isn't very carefully designed to express certain relatively concrete principles is likely to degenerate into a mere bricolage of features taken from other languages. As much as I like Ruby, in my opinion it falls into this trap (and in fairness, Matz never claimed to be doing anything other than developing a language that gave him joy, and has explicitly said that being a bad Smalltalk and Lisp knockoff isn't such a bad thing, with which I wholeheartedly agree). On the other hand, giving yourself joy can be a philosophy, too, cf. hedonism.

So while GEB didn't inspire me to learn about programming languages in the concrete way that SICP or EOPL or LiSP or TAPL or CTM have, it certainly has been inspirational!

Bite your tongue!

It seems to me that we are engaged in Philosophy here. We hope that our musings have practical value, and I think a great many of us insist—I certainly do—that the design and implementation of programming languages needs a philosophical basis in order to be successful in modern terms.

Not that I have anything against philosophy, but: man, I really hope not!

I want a language based on scientific results, not arguments from the humanities.

Certainly PLT has a relationship with philosophy, but I hope only as much as mathematics and science in general do. If we are engaged in philosophy here, then we are doing the wrong thing and ought to be dirtying our elbows more.

Cogito ergo sum programmator

If we are engaged in philosophy here, then we are doing the wrong thing and ought to be dirtying our elbows more.

As a counterpoint to this, I have often told my colleagues that one of the things I love about programming is that it is "philosophy made manifest"; the quality of a program (or programming language) can be judged by the elegance with which it organizes its ideas, the faithfulness of its theory about its domain, and at the end of the day its utility and elegance are tested in the most cruelly practical way possible: it must run and it must do what you actually wanted it to do.

Science not philosophy

That sounds more like the scientific process to me than philosophy, and I think what you describe could apply in the abstract to any scientific theory or engineering project. Science was once "natural philosophy", but now among the things that distinguish the two is that philosophy is not testable. Again, not a defect; they address different questions. But if you are not exploiting the formal or practical aspects in doing PLT, then you are indeed doing philosophy (or social studies) and not science. And that is a defect. (It also characterizes about 90% of what you read about programming in blogs and trade articles.)

And now I will stop philosophizing...

Heidegger, Heidegger was a boozy beggar

but now among the things that distinguish the two is that philosophy is not testable.

Your definition of philosophy seems to be "useless musing and pontificating." ;-)

My own definition of philosophy is more along the lines of a "useful system of thought about something", and a good philosophy in my definition IS testable, even if only subjectively.

To give a more on-topic example, take the principle enunciated by Peter Van Roy (e.g. here) that statelessness should be the default in concurrency/PLs/programming and state should only be added minimally as is necessary to elegantly/possibly solve your problem.

This to my mind is a philosophical principle, but it is easily testable: you simply try it out on some projects and see what kind of results you get. If it gets you the results you want, it is a good principle; if it doesn't, you may want to throw it out.

(It bears out well in my experience.)
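The principle is also easy to make concrete in code. A small hedged sketch in Python (the names `running_mean` and `Accumulator` are invented for illustration): the default formulation is a pure function, and a single encapsulated cell of state is added only for the case where inputs arrive incrementally.

```python
# Stateless by default: the running mean is a pure function of its inputs.
def running_mean(xs):
    return [sum(xs[:i + 1]) / (i + 1) for i in range(len(xs))]

# State added minimally, only where the problem demands it (streaming input):
# one encapsulated cell, with the same observable behaviour.
class Accumulator:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add(self, x):
        self.total += x
        self.count += 1
        return self.total / self.count
```

Both give the same answers; the stateless version is trivially testable, and the stateful one earns its keep only when inputs arrive over time.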

Or is this now science, because it is a useful and practical idea? ;-)

Science versus philosophy

Your definition of philosophy seems to be "useless musing and pontificating." ;-)

Not at all, and I repeatedly said otherwise. I don't think answering questions of morality, values and the human world is useless. I think it's much more important than computer science, in fact.

However, I do think that treating PLT purely philosophically is like boxing with one arm tied behind your back. Moreover, it is much easier to make a woolly philosophical argument, especially in the mainstream where people are not held very accountable, than it is to make a bad technical argument, where your argument's inadequacies are staring you straight in the face.

Bad philosophy is easier than bad science. But good science is easier than good philosophy, because the subject is narrower and (hence) we have better tools for it. Thus good philosophy is more valuable, because it is more scarce.

People seem to apply a tremendous double standard when evaluating philosophy and science. They will swallow any old bit of philosophical hokum, but science is deeply mistrusted. It's not that science is held to a higher standard (it's not, except by practitioners) or that people should not be more skeptical of it (they should — that's the whole point of it). It's that it is much easier to support an agenda with philosophy than it is to do so with science, because the line between philosophy and ideology can be very thin.

My own definition of philosophy is more a long the lines of a "useful system of thought about something", and a good philosophy in my definition IS testable, even if only subjectively.

Then perhaps to you science is just a good philosophy, and then what we have to argue about is what distinguishes the scientific and mathematical methods from "a useful system of thought". There is a very clear difference. (And, Marc, I know you know it.)

To give a more on-topic example, take the principle enunciated by Peter Van Roy (e.g. here) that statelessness should be the default in concurrency/PLs/programming and state should only be added minimally as is necessary to elegantly/possibly solve your problem.
This to my mind is a philosophical principle, but it is easily testable: you simply try it out on some projects and see what kind of results you get. If it gets you the results you want, it is a good principle; if it doesn't, you may want to throw it out.

It is not testable at all, because there is no objective way to characterize the results or how they relate back to the hypothesis. You may convince yourself of such a conclusion, and perhaps in the end that's all that matters to you ("you" in the general sense). But to convince others without objective evidence you would need to resort to rhetoric (and there is enough of that in programming circles, I think).

Is it philosophy? I guess so. Statements of the form "X should be Y" are always value judgements, and nearly always situated (a fact not often enough acknowledged by utterers). Scientific statements are of the form "X is (not) Y". I am more interested in, and have more confidence in, the latter, partly because the latter can be decided once and for all, then serve as a stepping stone to future results, whereas the former tend to engender arguments which only go around in circles and thus never lead to anything. People who get things done tread the circle, but eventually they leave on a tangent and don't look back till they meet an obstacle. The rest get caught in the vortex.

Value judgements are important because they give you a direction to pursue scientifically. But merely staring in the direction of Mecca will not get you there, and neither will debating with your fellow pilgrims where it is. Walk a mile, though, and you may find a signpost or a landmark or another traveller who knows the way.

Apologies for waxing rhapsodic.

(And, of course, none of this is a criticism of Peter, who has taken many more steps than most of us, including me. I am more interested in his opinions for that very reason, and it should be no surprise that I concur with him on the matter of statefulness. Nevertheless, I am even more interested in his scientific remarks.)

A PLT Tapestry

Then perhaps to you science is just a good philosophy, and then what we have to argue about is what distinguishes the scientific and mathematical methods from "a useful system of thought".

I think what distinguishes science and math as particular philosophies is that they try to focus in a very disciplined way on only those things that can be verified objectively, within particular strictly-defined notions of what "objective" means.

One of the great powers of formal, axiomatic systems is that when you have correctly derived a result with one, it is solidly objective in the precise sense that it cannot be refuted by another person who can genuinely claim to understand the system. (That person could question the utility or applicability of the system to some other matter, but not the factualness of the result within the system.)

Where philosophy comes back in is when we try to apply formal systems to practical matters, such as programming and PLs. Programming is a human activity with human ends; PLs serve as human tools with human ends. There is always a non-trivial subjective component in human ends, so there is always a subjective "philosophical" component to how formal results can and should be mapped onto practical applications.

To weave the nominal topic of this thread back into this, I think this is where the kinds of broad introductory books of the kind we are discussing (e.g. GEB, CTM, SICP, etc, which have significant philosophical content alongside their formal content) play an important role: they make plain that a mastery of programming languages (and by extension, programming) requires BOTH a mastery of formal systems AND a mastery of the philosophy of their practical application to the messy, non-axiomatic conditions of real problems.

none of this is a criticism of Peter, who has taken many more steps than most of us, including me. I am more interested in his opinions for that very reason

I think this also weaves in some ideas from the LtU Policies thread: the best discussion about PLs occurs when people who have mastered the technical and formal aspects of the field (or parts thereof) judiciously add their subjective but reasoned philosophical judgements to the collegial discussion of certain ideas or documents.

I agree with you that without the formal foundations philosophy is bound to be mere gum-flappery, but I would add that without the broader notion of philosophy, the formal systems can't be put to effective, practical use in the way that you want. We need both elements, gently woven together.

I want a language based on

I want a language based on scientific results, not arguments from the humanities.
Now that's an interesting, albeit probably overly simple, philosophy... :-)

Look at the philosophy department (especially the sections of logic and mathematics), and then at the science department - where would you rather see PLT? The science department is busy describing the 'world out there' in terms of formalisms and models defined by the philosophy department; do you really think PLT is about describing something 'out there' rather than about building formalisms?

I get the impression you are restricting 'philosophy' here to ethics, aesthetics, maybe metaphysics and ontology, but not much more than that. Of course you are free to do that, but then you shouldn't assume others are using the term in your redefined way.

Or maybe I simply completely misunderstood you..

HDA3

Reading John Baez and James Dolan's Higher-dimensional algebra III: n-Categories and the algebra of opetopes was a big turning point for me. To me it had obvious applications in PLT, but apparently everyone else is going in a different direction. Which is fine with me.

Dear sir

You owe me a new pair of eyes.

Maybe not so off direction

Reading John Baez and James Dolan ... was a big turning point for me.

In a rather strange coincidence, in a very recent post over at Ars Mathematica discussing the merits of category theory, Baez makes a brief appearance, and another poster JacquesC makes the comment that he is applying Baez's work to PLT.

I thought this might be of interest, though I haven't had a chance to read any of Jacques' work, so I can't yet add any value to this reference. ;-)

Maybe you are not as odd as you think, Frank! ;-)

It was a very gradual

It was a very gradual process for me.

But I have to second the mention of Goedel, Escher, Bach: an Eternal Golden Braid; that certainly helped. My first year in college, I tried to write a web browser, and really botched the parser, and I got sucked into trying to understand parsing (whoever said "Syntax is the Vietnam of Programming Languages" was right...). Some other influential works include CTM and Let's Build a Compiler (I'm so ashamed to admit that one...).

Now that I think about it, trying to understand "design" was probably very big. I majored in Electrical Engineering for my BS, and so many of my classes were focused on analysis (here's a circuit, do some nodal analysis, etc.). I thought Engineering was about designing things, and that I would be taught how to design things. I guess PLT doesn't teach you how to design things, but it's the closest thing I've found (in any discipline) that seems to ever address design (e.g. HtDP, CTM).

The final straw was the discussions on the Lightweight Languages mailing list (which led here eventually), and LtU really played a part too. I don't know that I can say I've ever seen/heard discussions of the nature and quality I see here in any other field.

You're not the only one to

You're not the only one to come partly via Let's Build a Compiler. If you're stuck in C-land, the approach is lightweight enough to leave you thinking you can still do it. Thankfully my first intro to Haskell came comparatively soon afterwards!

I guess in terms of theory it was TaPL and the related papers I was reading that pretty much started to fix it for me - I already had what could be called a pragmatic interest.

What really launched me into

What really launched me into PL theory was probably TAPL.
Prior to that, I had read a lot of the various scheme papers, SICP, etc.
TAPL, well, the ideas just sort of clicked for me, and from there I just became addicted to reading all the conference papers I could (and I still do).

The part that makes it amusing...

Is the "Did they give you the right impression of the field?" bit. Any reflections on this?

Through a glass darkly

Did they give you the right impression of the field?

As I've mentioned before, one of my first PLT reads was Programming Linguistics by Gelernter and Jagannathan.

It gave me the right impression in some respects (a good overview of PL history, the importance of certain ideas and languages in advancing that history, the importance of certain semantic notions in PLs) and a completely idiosyncratic impression in others (an unusual execution model, the notion of "hero languages").

Its most important contribution for me was that it brought Scheme to my attention (one of its "hero languages"), which caused me to pick up EOPL and SICP at a booksale.

(Incidentally, I read GEB earlier than any of these, and really didn't connect it to PLT/Computer Science. The connection only dawned on me much later.)

I think one of the weaknesses of any wide-ranging overview book is that it paints a picture of heroic revolutions and sweeping ideas, when the reality of any mature intellectual field is that the day-to-day workings are much more involved with quite focused and fine distinctions, inching forward rather than striding boldly.

The revolutions and genius ideas often don't get recognized until after the fact.

Writings...Whoo! That's funny... Schools of Thought... Whee!

Stop! Stop! "Right Impression of the Field" Stop! Stop! You're cracking me up! Hoo! What a laugh!

Hokay. Enough hysterical laughter...A bit of background then.

I'm in industry. I shepherd megalines of bog-standard, run-of-the-mill C/C++ embedded code.

Now programming has always been a depressing occupation. You are always spending your time looking at the worst code your predecessor ever wrote. If it was good code, it would just work and you wouldn't be looking at it. Even for enhancements, it would be "Open for Extension, Closed for Modification".

So if you're in industry, and you are looking at code, you are looking at the worst code.

Now I write scripts to shepherd symbols, find bugs, build code, find races, do stuff.

So my job is very very depressing. I scan megalines of code and find the worst of the worst and look at that.

I look at industrial issue lists where the "issues" are numbered in the thousands, and there are "change control boards" that sort these issues into fix now, fix later, fix maybe, fix never. And products are shipped and bought and used where the (known) outstanding issues number in the hundreds. And this is in companies with (industry-wide, relatively speaking) excellent quality.

These are the writings which drive me to Programming Language Theory. Megalines of Bad, Buggy code that is never fast enough, small enough or flexible enough.

It's the Schools of the Worst Thoughts teams of Programmers have ever Thunk that drive me into Programming Language Theory.

Let's face it: we out in industry are in deep, deep doo-doo, and increasing the complexity of languages and compilers à la C++ has merely given us enough rope to hang ourselves with.

What we really need is PLT support for...
* Program checking.
* Refactoring.
* Program Visualization.
* Architecture design, refactoring and enforcement.
* Dataflow visualization.
* Causality containment.
* Program Simplification.

A man after my own heart...

except that you forgot to say that the current state of the art costs a fortune, and is slowly but surely starting to kill people and warp companies and governments. It's the most pressing social problem I can possibly do anything about, so that's why I'm here.

To your most excellent list, I would add

* Requirements traceability
* Root-cause failure analysis
* Program evolution
* Architecture inference
* Resource utilization analysis and enforcement

Most day-to-day software engineering is reverse engineering, and it's damn well time our languages and tools reflected that fact.

Your Correction accepted and endorsed.

Message ends.

For the sake of discussion

For the sake of discussion and at the risk of going off topic;

Reading between the lines, it sounds to me like you are talking about "ground floor" systems theory. Reverse engineering and architectural inference amount to trying to determine the contents of a black box based on input/output behavior. Visualization follows from both of the above. But is there any organized way to go about it? A given behavior rendered in software can take many forms. In this sense the proliferation of languages seems more like a problem than a help. Also, distribution of functions over networks and components is counterproductive. (i.e. my conclusion)

Causality containment and root cause failure analysis amounts to defining every significant operation as a success or failure and not proceeding with failures. This amounts to goal orientation.

Overall visualization to me seems to imply target domain semantics.

Also I want to say that I can't really relate to what you guys are going through because I am retired. These are only observations from a distance. Good luck!

Dispatches from the bug-fighting trenches

But is there any organized way to go about it?

1) Look for bugs: anything from typos to deadlocks to large-scale architectural bottlenecks.
2) Discern patterns to the bugs. Patterns should have extremely low false-positive rates, or they are useless. Per information theory, this is easier the more "verbose" or "redundant" the language/code-base is.
3) Create an automated detector for each bug pattern (this is actually the easy bit).
4) Integrate your detector into a high-usability automated bug-pattern detection tool. If possible, include a single-keystroke fix for the bug pattern.
5) If your end result isn't easier to use than just slapping code together, you've wasted your time. Toss it out and try again. Developers won't use it anyway.

Wash, rinse, repeat. For a moderate-complexity language with a large standard library, diminishing returns for this process set in at about 500-1000 bug patterns.
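To make step 3 concrete, here is a toy sketch in Python of what one such detector might look like. The two patterns are invented examples (real tools work on ASTs or control-flow graphs rather than regexes, and carry hundreds of rules), but the shape of the thing, a pattern paired with a diagnosis and a scanner that reports line numbers, is the essence of it:

```python
import re

# Each entry pairs a regex with a human-readable diagnosis.
# These two C-flavoured patterns are illustrative only.
BUG_PATTERNS = [
    (re.compile(r'if\s*\(\s*(\w+)\s*=\s*[^=]'),
     "assignment in condition (did you mean ==?)"),
    (re.compile(r'strcpy\s*\('),
     "unbounded strcpy (possible buffer overflow)"),
]

def scan(source):
    """Return (line_number, diagnosis) pairs for every pattern hit."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, diagnosis in BUG_PATTERNS:
            if pattern.search(line):
                hits.append((lineno, diagnosis))
    return hits
```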

Architectural inference is more difficult than mere bug-hunting, but the techniques are similar. Attaining the necessary level of usability is more difficult. Developer assistance is usually necessary (e.g. annotations), and global search and complexity explosion often mean that interactive speeds are difficult to achieve. The important part is looking for the thousand little patterns, rather than going top-down and attempting to build a model in one swell foop.

In this sense the proliferation of languages seems more like a problem than a help. Also, distribution of functions over networks and components is counterproductive. (i.e. my conclusion)

These are constraints, but not always insurmountable ones. That's why they pay us the big bucks.

Overall visualization to me seems to imply target domain semantics.

Yeah, but that means making computers that can think, rather than just calculate. Fortunately, you can do a lot with approximations and heuristics. Experienced programmers usually get it right, and their architectural patterns and bugs tend to fit into a finite (but large) number of detectable buckets. These facts can be used to make a reasonable guess at intended semantics, without any actual understanding of the underlying domain.

Not really.

We have the white box. We have the source code in front of us. But it is an incredibly large, complex, fine grained, fractal and highly coupled white box. Black box approaches are only useful in a very small (but certainly non-zero) set of cases.

Each level of Programming Language Design has gone from the bottom up. Let's take what we are doing now, and add a language feature or paradigm that will make programming easier / simpler / more expressive / safer.

All Good Stuff.

Now we have literally several man decades of code in C/C++ in front of us.

And no one, not a single person really knows what the hell is really going on.

It's too big. Way too big to fit in a single mind.

To find a bug, a single person may spend days tracing the threads of causality from one end to another. If he is Good, he will reject and ignore (most of) the irrelevant ones.

The Executable UML crowd have as an (intermediate) Holy Grail, Round Trip Engineering. Take a monster program in C/C++, feed it to The Thing, and it produces a set of UML diagrams. Fix things up in UML, put The Thing in reverse, reproduce the monster C/C++ program, compile and run.

The gotcha with this whole vision is that the preprocessor, the syntax, and the semantics of C/C++ are so incredibly Gnarly that...
* The Thing is going to be a monster of a monster program.
* The mapping to (or from) UML is either going to be Fuzzy and inexact, or
* The UML is going to be as gnarly as, or perhaps even uglier than, the C++.

So I have a different Vision from the UML crowd. Instead of layering new features on an existing language:

Create languages that are designed to be machine analyzable.

For example, Scheme syntax is very, very suitable, but its semantics can get quite hairy.

I want a language so that when we have megalines of this new language we can...
* Simplify the code, highlight obstacles to further simplification.
* Show dependency networks and aid in decoupling and improving cohesion.
* Analyse the dataflows.
* Have static debuggers where we can say things like, "Ah, we must have got to this line of code; select all lines of code that could have caused that. Overlay with all lines of code changed in the last revision."
* Do round-trip engineering with simple tools, producing simple diagrams, not Gnarly monsters producing fuzzy Horror diagrams.
* Enforce Architectural design decisions.
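To illustrate how cheap such analyses become when a language is designed (or at least equipped) for machine analysis, here is a sketch using Python's own `ast` module to extract a crude module-level call graph, the raw material for the dependency networks mentioned above. The function name `call_graph` is mine, purely illustrative; real C/C++ tools must fight the preprocessor before they can do even this much:

```python
import ast
from collections import defaultdict

def call_graph(source):
    """Map each top-level function name to the set of names it calls."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            # Walk the function body looking for simple-name calls like g(x).
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    graph[node.name].add(inner.func.id)
    return dict(graph)
```

From such a graph one can already start to spot coupling, dead code, and layering violations mechanically.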

Ehud asked for readings that exemplify this school of thought. Well, rather than initiating my interest, these readings were profoundly interesting to me because they showed startling promise of resolving these issues...

A conditional term rewriting system for Joy based on the two constructors concatenation and quotation. by Manfred von Thun
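To give a flavour of what von Thun's formulation makes possible: because a Joy program is just a sequence (concatenation) of atoms and quotations (nested lists), rewrite rules reduce to plain list surgery. The tiny fragment below is my own invented sketch in Python, not von Thun's actual rule system:

```python
# A Joy-like program is a list of atoms and quotations (nested lists).
# Rewriting: find the first redex, replace it, repeat to a normal form.

def rewrite_once(prog):
    for i, tok in enumerate(prog):
        # [P] i  =>  P        (unquote: splice the quotation in place)
        if tok == 'i' and i > 0 and isinstance(prog[i - 1], list):
            return prog[:i - 1] + prog[i - 1] + prog[i + 1:], True
        # X dup  =>  X X      (duplicate the preceding term)
        if tok == 'dup' and i > 0:
            return prog[:i] + [prog[i - 1]] + prog[i + 1:], True
        # X Y swap  =>  Y X   (exchange the two preceding terms)
        if tok == 'swap' and i > 1:
            return prog[:i - 2] + [prog[i - 1], prog[i - 2]] + prog[i + 1:], True
    return prog, False

def normalize(prog):
    changed = True
    while changed:
        prog, changed = rewrite_once(prog)
    return prog
```

Because the whole program is the term being rewritten, "analysing the code" and "running the code" use the same machinery, which is exactly the property that makes this style attractive for tooling.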

Coalgebra is something

Coalgebra is something interesting in this context. I am about halfway through the class notes of this.

Book on Coalgebra

Bart Jacobs has a textbook in preparation. From his papers page:

B. Jacobs, Introduction to Coalgebra. Towards Mathematics of States and Observations. Two-thirds of a book in preparation; comments are welcome.

Yes and No. Mostly No.

Coalgebra may inform your choices, but wouldn't be something to copy.

I'm talking manipulating millions of lines of code. Imagine trying to work with multi millions of lines of that notation!

Rather I would look at what makes for a difficult coalgebra, and restrict or remove it.

Obvious examples...
* The goto got a bad name for very similar reasons.
* 'C' pointers are "goto"s in address space.
* Anything with non-local effects, eg. Global variables.

A less obvious example is von Thun banishing variable names from Joy to make rewriting easier. (I'm still debating internally whether he is right, or merely reflecting that humans are bad at bookkeeping.)

The question this raises for

The question this raises for me is: how does one write good code with state without using Haskell? Is Haskell really the only way? Programming is all about generating behavior, and this means state. A correct, sound approach to programming with state should solve all our programming problems. Is improper use of gotos, pointers and globals the whole story? I don't hear any talk about how to use state, only "don't do this or that". Good programming seems to be all about correctly using state, and to me this implies systems theory and, nowadays, coalgebra. It would be nice to hear some discussion along these lines.

This is getting WAY OT

Hey guys,

This discussion about... state? or whatever has gotten way off topic, and unfortunately has put the kibosh on this thread about inspiring books and papers.

If my little digression about the philosophy of PLT arising from the content of particular books inspired this going off the rails, then my apologies; I'll try to suck less in the future. ;-)

However, you may want to at least start a new thread with a more related topic heading if you want to continue this discussion, as a courtesy to the readers of LtU who prefer a more topic-focused style.

Thanks!

well

"Why did you decide PLT is cool?"

i was a videogame nut at the time and became interested in knowing the inner workings of the thing. :)

very mundane and low-level, yet it was what got me into it. It's amusing that i ended up enjoying much higher-level abstractions and languages and yet still enjoy to-the-metal C a lot. go figure... :)

The answer to the above OP

The answer to the OP's question above ("is it what you expected?") would seem to be a resounding no. Life is full of surprises. In my case I never made a conscious decision to get into programming. It was always something there that had to be done. My original training in "computers" would probably be described as "analog/hybrid". No coding required. Later on, engineers began to use Fortran and Basic as part of their work. In my case programming took up more and more time, and I gradually got interested in programming languages as a way to do applications programming. This seems to be where I am at today: looking for a good applications programming language.

Actually...

It was this website.

Me too.

Every single PLT paper or book I've read I found out about through LtU. And I don't know if LtU gives the right impression of the field, because it is the only impression of PLT that I experience.