Programming by poking: why MIT stopped teaching SICP

A blog post; excerpt:

In this talk at the NYC Lisp meetup, Gerry Sussman was asked why MIT stopped teaching the legendary 6.001 course, which was based on Sussman and Abelson’s classic text The Structure and Interpretation of Computer Programs (SICP). Sussman’s answer was twofold: (1) he and Hal Abelson had grown tired of teaching it (having done so since the 1980s), so in 1997 they walked into the department head’s office and said: “We quit. Figure out what to do.” And (2), more importantly, they felt that the SICP curriculum no longer prepared engineers for what engineering is like today. Sussman said that in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems.

Today, this is no longer the case. Sussman pointed out that engineers now routinely write code for complicated hardware that they don’t fully understand (and often can’t understand, because of trade secrecy). The same is true at the software level, since programming environments consist of gigantic libraries with enormous functionality. According to Sussman, his students spend most of their time reading manuals for these libraries to figure out how to stitch them together to get a job done. He said that programming today is “more like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?’” The “analysis-by-synthesis” view of SICP — where you build a larger system out of smaller, simple parts — became irrelevant. Nowadays, we do programming by poking.

Also, see the Hacker news thread. I thought this comment was useful:

What should we consider fundamental?

A fair question, and a full answer would be too long for a comment (though it would fit in a blog post, which I'll go ahead and write now since this seems to be an issue). But I'll take a whack at the TL;DR version here.

AI, ML, NLP, and web design are application areas, not fundamentals. (You didn't list computer graphics, computer vision, robotics, embedded systems -- all applications, not fundamentals.) You can cover all the set theory and graph theory you need in a day. Most people get this in high school. The stuff you need is just not that complicated. You can safely skip category theory.

What you do need is some amount of time spent on the idea that computer programs are mathematical objects which can be reasoned about mathematically. This is the part that the vast majority of people are missing nowadays, and it can be a little tricky to wrap your brain around at first. You need to understand what a fixed point is and why it matters.

You need automata theory, but again, the basics are really not that complicated. You need to know about Turing-completeness, and that in addition to Turing machines there are PDAs and FSAs. You need to know that TMs can do things that PDAs can't (like parse context-sensitive grammars), that PDAs can do things that FSAs can't (like parse context-free grammars), and that FSAs can parse regular expressions, and that's all they can do.
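The automata hierarchy is easy to make concrete. Here is a minimal finite-state machine sketched in OCaml (the language of the code elsewhere in the thread); the names `step` and `accepts` are just illustrative. It recognizes the regular language "strings over {a,b} with an even number of a's" -- the kind of thing an FSA can do, whereas matching nested brackets already needs a PDA's stack.

```ocaml
(* A deterministic finite automaton with two states:
   0 = even number of 'a's seen so far, 1 = odd. *)

let step state c =
  match state, c with
  | 0, 'a' -> 1          (* even -> odd *)
  | 1, 'a' -> 0          (* odd -> even *)
  | s, _   -> s          (* 'b' (or anything else) changes nothing *)

let accepts (s : string) : bool =
  let n = String.length s in
  let rec go state i =
    if i = n then state = 0     (* state 0 is the only accepting state *)
    else go (step state s.[i]) (i + 1)
  in
  go 0 0
```

No stack, no tape: the machine's entire memory is one of finitely many states, which is exactly why an FSA cannot count unbounded nesting.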

You need some programming language theory. You need to know what a binding is, and that there are two types of bindings that matter: lexical and dynamic. You need to know what an environment is (a mapping between names and bindings) and how environments are chained. You need to know how evaluation and compilation are related, and the role that environments play in both processes. You need to know the difference between static and dynamic typing. You need to know how to compile a high-level language down to an RTL.

For operating systems, you need to know what a process is, what a thread is, some of the ways in which parallel processes lead to problems, and some of the mechanisms for dealing with those problems, including the fact that some of those mechanisms require hardware support (e.g. atomic test-and-set instructions).

You need a few basic data structures. Mainly what you need is to understand that what data structures are really all about is making associative maps that are more efficient for certain operations under certain circumstances.
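The bindings-and-environments point fits in a few lines of OCaml (a hypothetical toy, not from the comment): an environment is just a list of name-to-value bindings, and lexical scope means a closure captures the environment of its definition site rather than its call site.

```ocaml
(* A toy lambda-calculus evaluator. Environments are assoc lists;
   closures pair code with the environment where it was DEFINED. *)

type expr =
  | Var of string
  | Int of int
  | Lam of string * expr          (* fun x -> body *)
  | App of expr * expr

type value =
  | VInt of int
  | VClos of string * expr * (string * value) list

let rec eval (env : (string * value) list) (e : expr) : value =
  match e with
  | Int n -> VInt n
  | Var x -> List.assoc x env                  (* look up the binding *)
  | Lam (x, body) -> VClos (x, body, env)      (* capture defining env *)
  | App (f, a) ->
      (match eval env f with
       | VClos (x, body, cenv) ->
           (* extend the CAPTURED environment: that is lexical scope *)
           eval ((x, eval env a) :: cenv) body
       | VInt _ -> failwith "not a function")
```

Swapping `cenv` for `env` in the `App` case turns this into a dynamically scoped evaluator, which makes the lexical/dynamic distinction tangible in one line.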

You need a little bit of database knowledge. Again, what you really need to know is that what databases are really all about is dealing with the architectural differences between RAM and non-volatile storage, and that a lot of these are going away now that these architectural differences are starting to disappear. That's really about it.


"That's really about it."

"That's really about it."

That's an excellent summary. Building all of that knowledge from zero (with all its implicit context) is certainly a multi-year project.

Iirc, Shriram Krishnamurthi

Iirc, Shriram Krishnamurthi some years ago, in talking up the intro Scheme class they'd put together at Rice — formed as an alternative to the MIT Scheme intro — said they had other departments wanting to send their students over for the Scheme intro, to learn how to think. Which had me, just now, trying to remember where that quote came from about 'computer science isn't a science and its significance has little to do with computers'. Of course, that quote is from the preface to the first edition of the Wizard Book.

But but but

Programming by synthesis of well-understood parts is graceful and beautiful and fun. Programming by trying to get never-well-documented libraries to barely do something is f***ing torture.

Weeding out freshmen is a

Weeding out freshmen is a proud tradition...


I fail to see the connection. I'm talking about the state of programming, not about school.

Torture leads to weeding-out

He's making a joke, that inflicting horrible pain on students is a good way of getting them to drop the subject.


Languages provide abstractions (does anyone still recall the data abstraction/procedural abstraction/control abstraction typology?). Consider how people studying SICP knew about lazy data structures (streams) when this was something mainstream programmers weren't thinking about. And how about Luke's reminders about Thinking Machines LISP and so forth.


Hmm .. when learning to drive a car one notices the engine is over-revving and the car isn't going fast enough. We have to change gears. So ease off the accelerator, depress the clutch pedal, shift the gear stick (that way, stupid!), release the clutch slowly, and depress the accelerator -- at which point you crash into a telegraph pole because you were so focused on gear shifting.

A few cars (and telegraph poles) later, you're watching for other traffic, line marks on the road, turn signs, and .. hey don't forget pedestrians and cyclists .. not to mention police.

Later .. you just drive. What is the easiest way to the school to pick up the kids at this time? Don't forget the police speed radar sometimes hangs out on that hill!

When I'm programming I do not do much reasoning. It's automatic. It's become intuition. This algorithm .. just doesn't feel right. Oh, that is beautiful because .. er .. (thinking now .. ) the boundary conditions are handled without special cases.

Here's how I program: as a case study, adding row polymorphism to the Felix compiler (which is written in OCaml). First, I look at some video lectures and read a couple of papers. I kind of get the idea. I think about (reason!) how to add it to my system. Then, I throw in a new term to the (encoding of the Felix) type system and hit the compile button. The compile fails. I fix it. Compile again. Repeat. I let the COMPILER do the reasoning! Stuff thinking, I'm lazy.

Oh dear. This error here .. I can't fix it. There's not enough information. The unification engine I have can't handle constraints, and the term I used doesn't carry them anyhow. Back to drawing board. THINKING. No, stuff that I'm lazy! Let someone else do the thinking.

Hmm .. didn't Daan Leijen write a paper on a way to do it without constraints? Yep. Reading. Thinking. OK, let's try that. Revert failed extensions to type constructors, try these ones. Error. Fix. Error. Fix. Dang, I need to have expressions as well as types. Add stuff. Repeat. Add grammar. Now we need some operators. Do some. As above.

Finally, test. Woops. What went wrong? Debug statements. Binary chop. Oh, that looks wrong. Thinking. Fix. Repeat.

Finally it works. Who did the reasoning? I did plenty, but not in the programming part: in the design part. Who else? OCaml did most of the reasoning. Thank you, type system. Theoreticians did a lot of reasoning too, I guess; thanks especially to Leijen.

Programming is not about reasoning. It's about navigating a complex picture. The pictures are holistic; you can't reason about them. You have to FEEL.

AFTER it works .. THEN you reason about why, and seek the increase in confidence that testing cannot provide. But you're not programming then; you're trying to reason about the code you wrote, in little pieces, with a process in which you use every reasoning tool available *except* your brain.

The only thing I really had to think about was the encoding of the new term. Everything else followed from that. It's marginally unusual: Felix doesn't have row variables; it uses a record variable instead.

Here is the new type term:

  | BTYP_polyrecord of (string * t) list * t

and new expression term:

  | BEXPR_polyrecord of (string * t) list * t

and that is the total result of any "reasoning" (and it wasn't the first choice; I messed up twice before getting it right).
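For readers who have not met row polymorphism: OCaml itself (the implementation language above) exposes the idea through object types, where the `..` in an inferred type is the row variable. A toy illustration, unrelated to the Felix internals:

```ocaml
(* greet is inferred as  < name : string; .. > -> string.
   The ".." is a row variable: "any object with at least a name
   method, whatever its other fields turn out to be". *)

let greet o = "hello, " ^ o#name

(* Two records with different extra fields; greet accepts both. *)
let alice = object method name = "alice" method age = 41 end
let bob   = object method name = "bob" end

let () = print_endline (greet alice)
let () = print_endline (greet bob)
```

The row variable is what lets one function work over all records that merely contain the fields it mentions, which is the feature being added to Felix above.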

hear hear

I think I have much the same experience.

But it also makes me sad. Mainly because when I see other people writing crappy code (that I have to live with, or might wish to use), I have to figure out how to get them to see on their own when things are crappy. So formalisms and reasoning and commentary and all that need to come together to improve things across the board.

The Jokes Write Themselves

"You can safely skip category theory. What you do need is some amount of time spent on the idea that computer programs are mathematical objects which can be reasoned about mathematically."

Yeah, you don't have to have category theory to accomplish that. You'll just replace it with something more verbose, with a lot of implicit superfluous structure (set theory), and with a logic (classical FOL or HOL) that kinda-sorta maps to algorithms and types, but then you'll have to backpedal and induce a lot of pain and anguish when you try to remove the stuff, like the Law of Excluded Middle, that doesn't actually work (until/unless you want to type Peirce's Law via call/cc, pace Tim Griffin). Whereas you go from category theory to topos theory to intuitionistic logic, and there you are at Curry-Howard with zero additional cruft you then have to disavow/remove/explain away.

I dunno about anyone else, but I'm done with programming pop culture, and even academic computer science that just takes the inappropriate classical mathematical/logical foundations as givens. We know what the better frameworks are, and refusing to use them as foundations does a disservice to students and practitioners alike.

Sic et non

If you start too concrete, abstracting later is apt to be a pain. If you start too abstract, concretizing later is apt to be a pain. There are also conceptual problems at both ends of the spectrum: too concrete and you get lost in details, too abstract and you're in a rudderless sea of "generalized abstract nonsense".

(On a side note, I'm amazed google gives me nothing domain-specific for "how many category theorists does it take to screw in a lightbulb?"; surely category theorists, at least, should be creative enough for that.)

Personally I enjoy the

Personally I enjoy the process of abstracting, but the process of concretising is much more of a pain. Possibly because many of the possible instantiations of an abstraction aren't things you'd /want/ to do, while a single concrete thing can embody many different abstractions simultaneously and is also something you may want to use later. I assume that the best place to start is the level that seems most real to the learner, and then expand both up and down to the more distant things.

I'll Bite

Whereas you go from category theory to topos theory to intuitionistic logic, and there you are at Curry-Howard with zero additional cruft you then have to disavow/remove/explain away.

This is simply not true, or true only at the level of "a compiler is an arrow." I am not interested in people who can explain to me that compilers are arrows; I am interested in people who can explain how to implement compilers. Neither is there anything in category theory which helps you study set theory or different logics particularly well.

I dunno about anyone else, but I'm done with programming pop culture, and even academic computer science that just takes the inappropriate classical mathematical/logical foundations as givens. We know what the better frameworks are, and refusing to use them as foundations does a disservice to students and practitioners alike.

This is a straw man. Except for practitioners of category theory, there is no one who really believes that it can be a practical foundation for math. It's simply too sloppy: no grammar, no logic; nobody really knows what it does. Abstract nonsense.

People can do category theory, I don't care, but I don't think it adds a lot to most students' skill sets (time they could have spent better learning other topics), and I don't think it contributes a lot to CS in general.

not useful to me

My impression is that category theory is useful for having shorter proofs, in what would otherwise be set theory, on topics like CS.

If I ever have to go to the foundation of set theory and write a bunch of proofs in order to write a computer program, you will be the first to know.

So yes, it's not a practical tool to a programmer any more than ZFC is useful in real life.

Not programmer, CS

Better differential equations to implement a control system? Better Bayesian theory/fuzzy logic/neural networks? Better scheduling solutions for real-time systems? Better analysis of the performance of algorithms? Better guarantees over the properties of protocols? Better user interface theory? (I could go on for a while) ...

You talk about programmers, I talk about CS, and even within the foundations of CS it is a niche topic at best only suited for studying some algebraic properties or functional compositions.

It's not very useful to anyone.

It is probably useful to

It is probably useful to someone (as you state).

As an aside, is there really such a thing as "generically useful CS foundations" anymore? It seems to me that programming and even CS has already divided into many different topics, each with their own foundational core that has very little overlap beyond engineering/science/math basics.


Algorithms and their complexity, or feasibility, seems to remain central to almost all CS does. It's hard to come up with examples where complexity doesn't rear its ugly head.

Apart from that, I think most CS students are better off with general knowledge of applied math than with foundational issues. Dijkstra once pushed an agenda that programmers should be able to prove everything; hence CS adopted the foundational research from the mathematicians.

But history seems to have taken another course than what Dijkstra envisioned. Math is still important (a computer is an expensive calculator), but I am not sure CS students are better off being able to implement fluid-dynamics simulators than anything else. It seems more relevant than being able to reproduce the Hilbert program, though.

If you don't know

> Neither is there anything in category theory which helps you study set theory or different logics particularly well.

If you don't know why this is spectacularly wrong, there's literally nothing I can do to help you.

> It's simply too sloppy, no grammar, no logic, nobody really knows what it does.

That's hysterical. Literally.

It is spectacularly right

Prove me wrong and we can talk. Am I supposed to take your word for it? Where are the proof checkers based on category theory?

Addendum: To make my position clear again: category theory is highly informal, so sloppy that I have sometimes described it here as an informalism. Math about math. I still haven't seen anything in category theory which has prompted me to change my mind.

> Prove me wrong and we can

> Prove me wrong and we can talk.

I don't have to prove anything to you. You're impervious to being informed.

> Am I supposed to take your word for it?

No; you're supposed to read nearly anything Bart Jacobs has written, and Categories for Types, and Practical Foundations of Mathematics, and the HoTT book, and... there's an extensive literature on this for you to avail yourself of. But I'm well aware that you won't, because it would subvert your extremely willful ignorance.

The problem with this attitude is

that you have made a grand statement with no backing. It is irrelevant whether you think marco is "impervious to being informed." What is relevant is that you have made the grand statement in a relatively public place and have been asked to prove your position. Others in this forum might well like you to demonstrate the appropriateness of your position. Otherwise, you just come off as simply arrogant and unhelpful.

So why should I spend my valuable time reading anything from Bart Jacobs? What influence in practical terms has he had on anything? Since I have never come across him in my extensive reading - who is he and what does he bring to the table that would further my interest in trying to understand what he is presenting? What practical applications would his work provide me in doing anything for customers or clients?

If your responses are meaningful then I have an additional area in which I can expand my knowledge base.

Do you think category theory is symbolic logic?

Simple question: Tell me what you think a type describes in category theory. You don't know, right? No idea whether it's something syntactical or denotes some proper construct, no idea what that construct really would be in what semantic domain.

And I have about two dozen more questions I can ask you about the proper underpinning of category theory. Substitution principles? Assumed notion of equality? Do we have a category of categories, and what rules that out (syntactically)? What logic is used to describe it? Does an application denote a rewrite? ..

Okay. No idea what a type denotes in category theory. That's fine, right? We have no idea what a type describes in ordinary math anyway; something set-theoretic. In a similar fashion you can still use category theory to study superficial properties of algebras, and come up with nice manners of describing abstract categories, sometimes in a point-free style; I'll grant you that.

But the only reason I think category theory has anything to do with foundations of mathematics is that it is the only setting in which algebras are studied in a particular manner, and that's about it.

So, you are a far cry from founding math on the basis of category theory. Stated differently: I believe they are trying exactly that these days, and they'll have a large number of problems to solve.

Maybe that'll pan out. I don't have high hopes for it. And once you start defining what, for instance, types are, you have no idea whether all results still hold. It simply works that way.

People can do category

People can do category theory, I don't care, but I don't think it adds a lot to most students' skill sets (time they could have spent better learning other topics), and I don't think it contributes a lot to CS in general.

Probably the most discussed pure abstraction of the past 30 years was developed using category theory. Haskell definitely wouldn't be where it is without it.

Haskell definitely wouldn't

Haskell definitely wouldn't be where it is without it.

Haskell: "I resemble that remark."


Haskell definitely wouldn't be where it is now without monads. It would be in another place. As the first major language to embrace purity at all costs, its designers would have (sooner, not later) come up with a nice alternative way of looking at things. I don't take it as a given that this other place would have been worse.

I think claiming we'd be in

I think claiming we'd be in a better place would be a tough sell. Monads were already 12 years old by that point, type and effect systems were brand new and crude, and uniqueness types as found in Clean just didn't fit with Haskell's goals.

I'm not sure what else could have slipped into that place. Perhaps this? Oleg demonstrated a few years later that it has a nice translation in terms of delimited continuations, which would later become algebraic effects and handlers. That was a long road, though. Would not having monads have accelerated that journey much?

Easy to argue, hard to prove

I'm just observing that necessity is the mother of invention and we might be better off now if we hadn't had monads back then. Even if something like monads were discovered, we might have at least avoided monad transformers... :)

That's not true either

If you do a lot of functional programming a monad really is a naturally occurring construct. I already knew it before people popularized the categorical name. (In the sense that overloading a constant and function application to calculate within a certain frame was not a new idea.)

The route probably was that a popular FP construct was given a categorical underpinning. It usually goes that way. And I always found the categorical underpinning of monads nonsensical. But I guess that's known on this site, and I forgot half the reasons why.

This is definitely wrong

The route probably was that a popular FP construct was given a categorical underpinning. It usually goes that way. And I always found the categorical underpinning of monads nonsensical. But I guess that's known on this site, and I forgot half the reasons why.

This is not right. The history goes roughly as follows:

1. Godement invents monads in 1958, under the name "standard construction".

2. In 1963, Bill Lawvere applies them to model constructions in formal algebra -- basically, he works out how to use monads to model substitution in theories of languages without binders, like the formal language of monoids, groups, and rings.

3. In the late 1960s (roughly 1968-1969), Dana Scott and Christopher Strachey invent denotational semantics. Initially, they work in very concrete models of languages (Scott's D-infinity model of the lambda calculus was initially defined in terms of complete lattices and continuous functions, and then weakened to CPOs).

4. Throughout the 1970s, denotational models of more and more effectful languages -- state, control, concurrency, and IO -- are invented.

5. As the complexity of the languages under study goes up, people begin using a variety of semantic structures -- not just lattices and domains, but also metric spaces and so on. To organize the structure common to all of these approaches, people begin using categorical methods. Michael Smyth and Gordon Plotkin's 1982 paper The Category-Theoretic Solution of Recursive Domain Equations is a good example that remains relevant to this day. (I learned how to solve domain equations from it, in fact.)

6. In 1989, Eugenio Moggi recognized that there was a common structure to how effects are modelled in semantics -- namely, monads -- and showed how this worked for a very wide variety of effects.

7. Shortly after publication of this work, Phil Wadler recognized that Moggi's semantics could be used to replace Haskell's IO system (which had a very clumsy CPS-based API), with a much simpler monadic API. See his 1992 paper The essence of functional programming. This led to the adoption of monadic IO in Haskell 1.3, in 1996.

8. In his PhD thesis, Andrzej Filinski proved a representation theorem for definable monadic effects -- he proved that delimited continuations (or alternately, full continuations plus state) could be used to implement any definable monadic effect.

9. Shortly after the turn of the millennium (2001-2003), Gordon Plotkin and John Power proposed using Lawvere's idea of modelling substitution on trees with monads to turn algebraic theories of effects directly into a categorical semantics of effects.

10. A few years later, Matija Pretnar (first with Gordon Plotkin and then with Andrej Bauer) made use of Filinski's representation theorem for effects to invent "effect handlers". The idea was that you could separate the definition of an effect from its use by client code, and dynamically control how effects get implemented. See Sam Lindley, Ohad Kammar and Nicolas Oury's 2013 paper Handlers in Action.

So far, the theory has preceded the application, not just once, but *twice*. Furthermore, I'm pretty sure this will happen at least once more -- in his work on call-by-push-value, Paul Blain Levy has shown how the decomposition of a monad into an adjunction naturally splits semantics across a pair of categories, one for values and another of algebras for effects, which is sure to shape the design of the next generation of functional languages.

I'd thought the point intended

I'd thought the point intended was that monads as a practical technique are separate from the mathematical structure and can and do occur to programmers without being provided to them. The history of the mathematical ideas is the most documentable part of the history, but not necessarily the whole story. I don't mean to take sides on the point; just hoping to facilitate clarity. Of possible relevance: You Could Have Invented Monads! (And Maybe You Already Have.)

Yah, it is a naturally occurring construct

Well. It's somewhat of a naturally occurring construct, like fold. I.e., if you start doing FP naively then you naturally start off with lists, and at some point you realize: if I have a list, then it's natural to fold it with an operator. Take some list [2, 7, 3]; then a lot of computations will have the form (2 + (7 + 3)) for some operator +.

And then you start thinking about what you'd do with an empty list, and how you'd fold a list with an operator which associates differently, etc. And how you'd fold trees. The idea of folding a structure naturally occurs and naturally generalizes in a lot of different directions. (And the dual also naturally occurs: if you can fold, you can also unfold; that's a really trivial realization.)
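The "(2 + (7 + 3))" shape and its generalization to trees can be written down directly (a sketch in OCaml; `fold_tree` and the `sum_*` names are just illustrative):

```ocaml
(* Folding a list: List.fold_right (+) [2; 7; 3] 0 computes
   2 + (7 + (3 + 0)) -- the shape above, with 0 as the answer
   for the empty list. *)
let sum_list xs = List.fold_right (+) xs 0

(* The same idea generalizes along a tree's algebraic constructors:
   one fold argument per constructor. *)
type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

let rec fold_tree leaf node t =
  match t with
  | Leaf -> leaf
  | Node (l, x, r) -> node (fold_tree leaf node l) x (fold_tree leaf node r)

let sum_tree t = fold_tree 0 (fun l x r -> l + x + r) t
```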

And the same is true for the monad, at least that definition of monad such as it was popularized by Wadler, and a number of other people.

I.e., I would call naturally occurring the central realization that values and functions are fundamental to functional programming. So if you overload that, the construction of a constant and function application, you can generate arbitrary calculations within any domain.

So, IF you define the construction of a value within a domain with some abstract injection operator (unit, of type 'a -> M a), and overload function application with some abstract function (bind, of type ('a -> M b) -> M a -> M b), THEN you're set to do any computation within an arbitrary domain M. (If you erase M in the types, you see you have simply overloaded the notions of constant construction and function application.)

With a bit of twiddling with types the monad popularized in Haskell is simply a naturally occurring construct; a lot like fold, map, filter are.
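Instantiating the comment's M at OCaml's option type makes the claim concrete (a sketch; `safe_div` is an invented example). `unit` injects a value into the domain, and `bind` is the overloaded function application; nothing categorical is needed to write or use them:

```ocaml
(* M = option: computations that may fail.
   unit : 'a -> 'a option                        (the comment's 'a -> M a)
   bind : ('a -> 'b option) -> 'a option -> 'b option *)

let unit x = Some x

let bind f m =
  match m with
  | None -> None        (* failure short-circuits the rest *)
  | Some x -> f x

(* Division that fails instead of raising. *)
let safe_div x y = if y = 0 then None else unit (x / y)

(* Chain two divisions and add the results; if any step fails,
   the whole computation is None, with no special-case code. *)
let example =
  bind (fun a -> bind (fun b -> unit (a + b)) (safe_div 10 2)) (safe_div 12 3)
```

Here `example` is `Some 9`; replace either divisor with 0 and the whole chain collapses to `None`.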

I do agree category theory takes that notion further, but at the expense that you need to understand that (steep) framework, and that framework seems to imply you need to abide by the algebraic laws it requires. Whereas I would say one has simply overloaded constant creation and function application for some specific domain M, and you're free to abuse that idea further in any form you see fit as a programmer.

(The latter is what I don't like about category theory. It seems to shoehorn practitioners into a certain manner of doing things. If I look at folding a tree I am going to fold left-to-right and right-to-left as if it was a list (every container is a list), but also along its algebraic constructors, and also depth-first, and breadth-first. Similarly, for monads, it often makes a lot of sense to approach some calculations algebraically and define multiple binds for a specific domain, for instance.)

jeeze, can't we all get along?

Shouldn't we let all sides inform each other in a melange that gives us a richer result than any single approach would have? The number of non-formalized things in the world that people do and then end up with broken horrible shyte is beyond measure, and I for one love it when somebody smart takes the time to formalize some things so we can know wtf we are actually doing and what the ramifications are. On the other hand, of course I don't want ivory tower constructs to be obtuse and basically non-performant. Blend the strengths. I would never throw anything out. I might not have time to learn it, but I am always extremely chuffed when somebody else brings whatever they can bring!

I mean, that's one of the main reasons LtU is so precious to me: the depth and breadth and variety of insights that can come together and interplay here!

Party Line

Hey, you are the audience and you're supposed to applaud the party line! Otherwise, you might disappear overnight and the next day people will start quietly inquiring where you went.

Quietly inquiring? Who

Quietly inquiring? Who would dare? If someone is disappeared for bucking the party line, cautious citizens will play along, surely?

It does seem we may be at risk of overdramatizing, doesn't it.


Just wait and see what will happen once I am in power.

If you start too concrete,

> If you start too concrete, abstracting later is apt to be a pain. If you start too abstract, concretizing later is apt to be a pain. There are also conceptual problems at both ends of the spectrum: too concrete and you get lost in details, too abstract and you're in a rudderless sea of "generalized abstract nonsense".

I certainly agree with this. The good news today (as opposed to when I studied computer science) is that we have quite good software tools for explicating the entire spectrum. For example, you can install your favorite Scheme system today, install miniKanren, and start playing with the typing relation—inferring and checking—bidirectionally, and have a very tangible, hands-on system for exploring the intuitionistic logic represented, via the Curry-Howard Isomorphism, by the type system in question, while also nicely explicating that the type system is not some magical thing that's intrinsic to computation, thanks to the good ol' Scheme system underneath. You can then connect this to the literature, e.g. Categories for Types, Topoi: The Categorial Analysis of Logic, Kleene and Rosser 1935, etc.
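The same hands-on exploration works without miniKanren: here is a tiny bidirectional checker for a simply-typed fragment, sketched in OCaml rather than Scheme (all names are illustrative). `infer` synthesizes a type where it can, `check` pushes an expected type inward, and an annotation is the switch between the two modes:

```ocaml
(* Bidirectional typing for a toy lambda calculus:
   "infer" synthesizes, "check" verifies against an expected type. *)

type ty = TInt | TArrow of ty * ty

type expr =
  | Int of int
  | Var of string
  | Lam of string * expr            (* unannotated: check-only *)
  | App of expr * expr
  | Ann of expr * ty                (* annotation switches check -> infer *)

let rec infer env = function
  | Int _ -> Some TInt
  | Var x -> List.assoc_opt x env
  | Ann (e, t) -> if check env e t then Some t else None
  | App (f, a) ->
      (match infer env f with
       | Some (TArrow (t1, t2)) when check env a t1 -> Some t2
       | _ -> None)
  | Lam _ -> None   (* cannot synthesize a type for a bare lambda *)

and check env e t =
  match e, t with
  | Lam (x, body), TArrow (t1, t2) -> check ((x, t1) :: env) body t2
  | _ -> infer env e = Some t       (* fall back to inference mode *)
```

A bare `Lam` only checks, never infers, which makes the inferring/checking split of the comment above directly observable.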

One of the principal reasons I'm a Constructivist (apart from the general desire not to talk logical nonsense) is precisely that it provides the clearest bijection between theory and practice. So Curry-Howard isn't just a nice theoretical principle that helps me reason about my code (although it is that, and I take professional advantage of it literally every day), it's also the lens through which the abstract becomes concrete, and vice-versa. I don't trust theories I can't run; I don't trust stuff I can run (once, in some context) that doesn't yield a general theory.

Anyone else see a familiar issue?

I'm old enough to remember the pre-6.001 intro to computer science at MIT, so I may be biased, but it was a truly brilliant innovation, especially once they got rid of the vestigial bit of Algol programming mandated by the department. It definitely taught you a new way of looking at complex problems and behaviors, and provided you with a toolkit for figuring out how to solve those problems and emulate those behaviors. Deceptively simple at times, like George Pólya's classic, thin book How to Solve It.

I'm a bit worried about suggestions that we need to simplify or temporarily replace essential parts with substitutes, and then teach those essential parts later.

All I can think here is that I took elementary mechanics twice, once without calculus and once with, and it was so much simpler once you had the right math to describe what was going on with the physics. And conversely, it was easier to understand the calculus when you had some physical intuition to base that understanding on. The same is true in crystallography which is a bitch without linear algebra but almost frighteningly simple with it.

Turns out that it's often a big win if you can teach two subjects together, and that if you can't, you end up teaching much of the material twice.

Any possibility that we can combine more of the underlying math bits with the "computer science" bits at the same time? Here's where I need to put on my "Just another dumb engineer" hat, and will leave the best sequencing and combination of these subjects to the experts.

VPRI did a project along those lines a while ago

I went through 6.001 near the beginning, when it was photocopied Lecture Notes.

Alan Kay founded a group in the late 2000s to work on the "right math" for radically simplifying software, based on choosing appropriate mathematical formalisms, among other things.

They implemented a networking stack in something like 2K of RAM.

I haven't seen the PI's Final Report - it may still be "in preparation", but the NSF funded Alan Kay's "Viewpoints" project for five years.

You can see artifacts at, including papers written as recently as 2017.

I'd love to find a funding source to get Alan Kay, Gregor Kiczales, Alan Borning, and some other folks together to collaborate on a "Unified Programming Environment" just to see how far you can take the VPRI's ideas from research to implementation.

Syntax is fleeting, but Meta-Object Protocols seem useful in this effort. I haven't read about them recently enough to know if they are a complete solution.