The Four Questions

Page 4 of the lecture notes from Mitch Wand's first Principles of Programming Languages lecture:

When looking at a language, we will always ask four questions. As we proceed through the course, we will ask these questions in more and more sophisticated ways; I’ll show some of these subquestions now, even though we haven’t yet covered enough to understand what they mean:

  1. What are the values in the language?
    • What are the values manipulated by the language, and what operations on those values are represented in the language?
    • What are the expressed and denoted values in the language?
    • What are the types in the language?
  2. What are the scoping rules of the language?
    • How are variables bound? How are they used?
    • What variables are in scope where?
  3. What are the effects in the language?
    • Are there side-effects in the language?
    • Can execution of programs have effects in the world?
    • Can execution of programs have effects on other programs?
    • Can execution of a program fail to terminate?
    • Are there non-local control effects in the language?
  4. What are the static properties of the language?
    • What can we predict about the behavior of a program without knowing the run-time values?
    • How can we analyze a program to predict this behavior?

What do you consider the fundamental properties of a programming language?


Fifth Question: What are the costs?

Being an economist by degree, I always look at the opportunity costs of a language.

  • How hard is it to learn (and can I find others for support)?
  • Under what licensing terms (and what operating systems)?
  • How good are the libraries (or am I going to have to produce the functionality)?
  • How fast is the produced code (and am I going to play games to optimize the speed)?
  • How well does it play with my legacy applications (or are we talking a complete rewrite)?
  • etc....

    Sixth Question: What's the community?

    Or you could look at social aspects:

    • What's the history behind the language? Why was it invented?
    • What other languages have tight bonds to its original adoption group?
    • What kind of problems do people who use this language typically solve?
    • What core values are reflected in the language's design philosophy?
    • Who controls which direction the language evolves in? How big an effect does the community have on the design?
    • What mechanisms (a la CPAN, PEAR) does the language have for code-sharing and collaboration?
    • What's the development process for typical practitioners using the language?

    To begin with

    What's the big idea and where are the cool examples?

    Eager or lazy?

    This is somewhat tied up with side effects. I don't know of an eager language that doesn't have side effects, or of a lazy language that does.

    Laziness keeps you honest

    As on slide 24 of the HaskellRetrospective:

    Every call-by-value language has given in to the siren call of side effects.
    But in Haskell
        (print "yes") + (print "no")
    just does not make sense. Even worse is
        [print "yes", print "no"]
    So effects (I/O, references, exceptions) are just not an option.
    Result: prolonged embarrassment. Stream-based I/O,
    continuation I/O...
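    To make the slide's point concrete: in Haskell, `print "yes"` is itself a value of type `IO ()`, so a list of such actions is just data; nothing runs until it is explicitly sequenced. A minimal sketch (the names here are my own, not from the slides):

```haskell
-- A list of IO actions is ordinary data; building it performs no effects.
actions :: [IO ()]
actions = [putStrLn "yes", putStrLn "no"]

main :: IO ()
main = do
  putStrLn "nothing has printed yet"
  sequence_ actions  -- effects happen only when the list is explicitly run
```

    Inspecting the list (say, taking its length) still runs nothing; only `sequence_` performs the prints, in order.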


    Is the fit between me and this language good enough that I want to spend time and energy helping to answer Jonathan's and Chris's questions with something positive?

    How about

    What problems is the language especially suited for, and do I already have a language that is especially good for solving those problems?

    What are the various possibilities for integrating that language with other languages I know?

    Both are summations of parts of Chris's and Jonathan's questions.

    More importantly...

    How much fun is it?


    you sure have a sense of humor, Anton... :)

    That's why...

    ...we all love BRAINFUCK! (And LISP, *grin*)

    he said

    fun, not funny. [in relation to BRAINFUCK]

    FixNum Question

    As a language designer in search of unique and effective syntactic features, I always look at what a language has to offer to future languages, and how the nature of the language influences development paradigms.

    Most important question (for me)

    What facilities does the language give a developer or team of developers to organise medium- or large-sized programs?

    • What facilities does the language have for value abstraction and hiding?
    • How easy is it to embed a domain-specific language or library?
    • How much "scaffolding" is required before work at the domain level can commence?

    You get the idea.
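    As a small illustration of the DSL bullet (a Haskell sketch with invented names, not tied to any particular library): an embedded DSL can be just a data type plus an interpreter, so almost no scaffolding is needed before domain-level work starts.

```haskell
-- A toy arithmetic DSL embedded as an ordinary data type.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr

-- The "interpreter" is a plain function; no parser or external tooling.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- Domain-level work can begin immediately:
sample :: Expr
sample = Mul (Add (Lit 2) (Lit 3)) (Lit 4)  -- (2 + 3) * 4
```

    Here `eval sample` evaluates to 20, and extending the domain is just adding a constructor and a clause.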

    X. What are the programming concepts in the language?

    And how are those concepts exposed to the programmer?

    Languages can be ordered according to what programming concepts they contain. The set of concepts determines the reasoning techniques, the programming techniques, and the problems the language is suited for. For example, key questions are: does the language have explicit state or concurrency? If it has state, is that state visible as passive or active objects? If it has concurrency, is it visible through monitors or transactions? And so forth.
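    For instance (a sketch of my own, assuming GHC's standard Data.IORef, not an example from the comment above): a language can make explicit state visible in the types, so a reader sees exactly where mutation is possible.

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- The IORef type marks where mutable state lives, and the IO in the
-- signature makes the effect visible to every caller.
counter :: IO Int
counter = do
  ref <- newIORef (0 :: Int)
  modifyIORef' ref (+ 1)
  modifyIORef' ref (+ 1)
  readIORef ref
```

    A language without this concept would need different reasoning techniques: there would be no type-level marker separating stateful code from pure code.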

    How does it scale to >1M LOC systems?

    #include the 'costs' and 'community' issues above, 'cause if those aren't answered the language simply isn't practical for "programming in the enormous".

    -At how many different granularities can logical structures be specified in the language? Do available structures support encapsulation? Explicit typing? Abstraction (particularly abstraction of interface from implementations)? Explicit dependency management? Explicit contracts? Extension/interception? Versioning?

    -How richly and cleanly can design intent be expressed as program structure, without resorting to ad hoc idioms? The more clearly design intent is inferable from program structure, the easier it is to communicate via code, and the more powerful static and dynamic code analysis tools can be.

    -What support exists for identifying, managing and recovering from exceptional conditions?

    -Do comprehensive standards exist supporting static and dynamic introspection of program structure? Standards mean cheap and powerful tools, if the community exists.

    How does it let me avoid 1M LOC systems?

    Granted, there are some projects which are always going to be big, complex undertakings. However, there seems to be a danger in seeing writing large amounts of code as an end in itself. So, considerations for programming-in-the-large should be matched by powerful composable abstractions that allow writing succinct solutions.

    Can't be done, sadly

    However, there seems to be a danger in seeing writing large amounts of code as an end in itself

    1M LOC systems usually have price tags in the $50M range and up, and require teams of at least 100 people to specify, build, and support. No one is building things like this for fun (although some of us tasked to build these cathedrals may certainly be having fun mastering the challenges involved).

    So, considerations for programming-in-the-large should be matched by powerful composable abstractions that allow writing succinct solutions.

    This works at smaller granularities (method and class, in OO terms), but doesn't scale so well for the larger structures that dominate the architectures of >1M LOC systems (class cluster, module, application, system). The functionalities involved are simply too large, too heterogeneous, and stovepiped into too many different people's brains to allow much in the way of abstraction. As such, you'll at best get constant factor size improvements by using such techniques. Nice, and often pretty, but it's not enough to get you down into "succinct", I'm afraid.


    I think (given that this is LtU) that Neil is referring to the type of metaprogramming abilities that exist in Lisp or Ruby. Lisp applications seem to bottom out at about 400k LOC, despite some pretty impressive functionality. The entire source code for the Symbolics Lisp Machine was about that size, including OS, networking, windowing system, IDE, and compilers for a half dozen or so languages. CMUCL is about 250k LOC. I heard Mirai was about 300k.

    Techniques like this often can get well more than constant-size improvements. The tighter you refactor your abstractions, the better foundation you have to build even more abstractions. Imagine if instead of writing whole class clusters, modules, and applications, you could write a macro that expands into a whole bunch of defclass forms.

    And that was supposed to be i

    And that was supposed to be in response to Dave's comment above. Oops.


    I don't believe the 400kLOC number for Genera. It is more like around 1MLOC. That's the number that people (like Howard Shrobe in his ILU 2003 talk) usually mention. This includes stuff like the editor, mailer, and similar tools. With some other added stuff (like image processing, database, ...) it is easily 2.5MLOC, btw. That's about what I have in my LMFS.


    Sounds similar to GNU Emacs.

    Yes - Metaprogramming

    I agree with Jonathan, and just a little further "out of the box" thinking no longer restricts you to Lisp, Ruby and the like as the "implementation language".

    I use Lisp to create C# programs, with gains such as:
    • Base code: 13k LOC
    • Each app has approx. a 50x multiplier (e.g. 400 lines -> 26k for one app)

    "There are patterns everywhere"


    How does the language or its attendant libraries handle memory, threads, files, sockets, etc.?