A StackExchange Site for Programming Language Theory

I recently created a proposal for a StackExchange site for Programming Language Theory. It is currently in the Definition stage and requires a plethora of good-quality questions - questions which you would expect to see on the actual site once it is created. There are already a few example questions. However, most of the questions are from users who seem to be enthusiasts only. We need more followers who are experts in PLT to give the site a definite shape.

Update: I (Ehud) am promoting this thread to the home page. It seems that the proposal has a good chance, if enough people commit to participate (see the discussion thread). I presume LtU readers would want to know about this process, and make up their own minds about whether they want to join or not.


Programming Language Technology

Doesn't PLT mean programming language technology, not just theory? I would stay away from something with the title of "programming language theory" since it probably is catering to the POPL crowd.

Programming Language Design?

Would "Design" work as a compromise between "Theory" and "Technology"? I tend to consider "doing theory" and "design" as two different kind of activies having different result (design often being the part about finding sweet spot that it is so hard to get good reviews about), but I also understand "Design" as a unifying term for "what people that make languages do" that obviously relies both on theory and technology.

I'm personally quite happy with pl.programming-languages on cstheory.stackexchange as a place to get my research-level musings on lambda-calculus, categories and all things POPL. I'm not sure the (very small but with an *excellent* signal-to-noise ratio) community there would move to another place. Maybe it's actually better to separate Sean McDirmid from the category theorists, for both to be more at ease, but I'm not convinced.

I was referring to the PLT

I was referring to the PLT (Racket) group's adoption of the term.

I was once thrown into a room full of category theorists; we even took baths [1] together. It wasn't that bad.

[1] http://www.cs.ox.ac.uk/ralf.hinze/WG2.8/27/minutes.html

But I'm not sure many people are interested in talking about language design.

Interest

> But I'm not sure many people are interested in talking about language design.

On the contrary, I think many people are, to the point that it may be exhausting. I feel like there are more toy languages around than ever, but most are itch-scratching projects by people who did not do the homework of actually studying the related work. Discussing design in this context is a noble task, but also very taxing.

Yep: high quality

Yep: high-quality discussions of language design are hard to come by. Design topics (in any field) appear incredibly approachable, but design requires study and practice just as other disciplines do.

PLT

I've written at length on the problems with PLT as a term for our discipline previously on LtU.

I also think that this StackExchange would be a bad idea.

Also**2, PLT in the sense of Matthias Felleisen's research group doesn't stand (in the acronym sense) for anything and never stood for PL Theory or PL Technology.

Nice to know, I always just

Nice to know, I always just assumed PLT stood for Programming Language Technology. Anyways, it is their brand and I wouldn't try to hijack it for something else.

Wasn't there one?

I remember there being a beta stackexchange site for this some... Maybe 5-6 years ago now? It never took off. There's just not enough people both interested and qualified to reach critical mass for that sort of site. That and there are very few questions that can be answered concisely enough for that format.

(Though I for one would certainly welcome a more discussion oriented and informal internet alternative).

advocacy

I'm interested in this one getting pushed through, but it needs more questions, more followers, and more votes. Please consider contributing some.

I don't believe the Computer Science stack exchange is suitable for questions relating to the intersection of PL with HCI or UX or various social aspects (design of package management, market integration, security), nor with design in general. A Programming Language Theory stack exchange seems to have a valid niche.

CS might not be a science

I am starting to believe that CS is a misnomer in the sense that it isn't a science at all but an engineering discipline. (Whoever thought of EE instead of ES?)

Then again, who cares about a word?

Platonic engineering

It's an engineering discipline in which the things being engineered are themselves Platonic mathematical entities. Whether that's engineering or mathematics is a question for... philosophers. It's probably not science; the rule of thumb is, if the name of the subject has the word "science" in it, it's not a science. Biology and physics are safe; philosophy is grandfathered; civil engineering is honest about what it is; but social science, political science, and computer science doth protest too much.

The true meaning of the color pink

Good observation that anything with the word science in its name isn't one. I agree it's philosophy. The problem with the 'science' connotation is that people might start believing it, or maybe they already do.

I once inadvertently walked into a technical university. To my dismay it turned out they were doing something like fundamental type theory (too). No problems with type theory but if I find it in a technical university someone has his, or her, priorities very wrong.

If it is true engineering, then maybe most universities should buy data-centres and hobby algorithms and tools together for that instead of, well, the other stuff.

(Not to mention that I am waiting for people to start saying that a compiler is a functor from input to output. "See, all questions and problems solved." It turned out I was right, since I read some publication where the author was 'reducing' semantics to philosophy too. I need to figure out whether it makes any sense the way they did it, though.)

Engineering requires theoretical underpinnings

I don't know about you but my undergraduate electrical engineering required the theoretical underpinnings. From pure mathematics courses to quantum mechanics (the basics), from numerical mathematics to electromagnetic field theory. The basis was that if you understood the theory as well as the practical, you would make a better engineer.

None of it has been wasted.

Pushing a Hilbert Program vs Skillset

Yeah. But as an engineer you learn the math you need to have to hobby systems together.

CS has had no, or negligible, impact on industry (almost all advances are due to EE, applied math, or physics) and I sometimes wonder why that is. And personally, I think they hired too many mathematicians pushing a new Hilbert program through the field. Nothing against math, but CS students, who usually become engineers, might be learning too much of the wrong math.

I might have this view since I am interested in fundamentals too so I probably have a skewed vision. I probably read too much fundamental stuff.

(Why is CS pushing a Hilbert program when even the mathematicians don't do that? Let them do it.)

(Maybe the simple answer is that logicians, or category theorists, who couldn't find a job in the math department ended up in the CS department. Sometimes life works like that.)

Nowadays I s'pose there are

Nowadays I s'pose there are schools with CS departments created ab initio, but it used to be, CS departments were of two flavors — those split from math departments, and those split from EE departments. Software and hardware. Figures there'd be a massive mathematical influence there. Software being a sort of massive mathematical structure.

As for a Hilbert's program, there seems to be a deep affinity between Hilbert and programmers. As Quine noted, the three early-twentieth-century schools of thought on foundations of mathematics are just the three Scholastic positions on Universals: Platonism, Conceptualism, and Nominalism. That associates Logicism with Platonism, thus Russell and Whitehead with Plato; Intuitionism with Conceptualism, Brouwer with Locke; and Formalism with Nominalism — Hilbert with William of Ockham. CS people are, it seems to me, deeply appreciative of the attitude behind Occam's Razor, so it figures their attitude would resemble Hilbert's.

Software being a sort of massive mathematical structure

An electronic device is a mathematical structure too. I don't see EE pushing a Hilbert program.

An electronic device isn't a

An electronic device isn't a purely mathematical structure.

Of course it is

Philosophy. Either it is, or neither is a program. It is your attitude that is marginalizing CS. I find it offensive.

some disconnect?

<blinks> I admit, with no ill will, I honestly don't know where you're coming from with that remark. I don't think the difference between a physical construct and a mental construct belittles or marginalizes either; but I'm not even sure whether that's related to what you're saying.

Too much fundamentals marginalizes CS

Your remark was that a program is a pure mathematical construct. If you want to know why CS, or PLT, has had no impact on industry, then it is exactly because of that belief.

A program consumes energy and has a running cost. Therefore, we are stuck with C. I observe that on the computer I am working on: a program is a concrete physical entity, has a cost, thus 97% of what runs is C.

CS burned through 10,000 man years of type theory, PLT, and related academic endeavours to end up with Haskell. One of the absolutely worst engineered languages in the world, without any good software engineering in the large constructs, with a slow operational model, and with a flawed (mathematical) approach to software engineering through type classes.

And that is all due to the assumption that "programming is math" and "everything should be correct". No industrial relevance; if you want that, design a better C. You can do philosophy, I don't mind. You can hire mathematicians and logicians, but they'll immediately push a will-never-pay-off academic agenda, since a mathematician fundamentally doesn't care about industrial relevance but will immediately experiment with applying category theory to cryptography. Teaching a pig to fly with applied philosophy.

I know the industrial value of a type theorist PhD: He gets hired as a systems administrator. True story. "Nice kid. Smart in the wrong manner. No skill set." Want to compare that to a PhD in solid state physics?

I don't care people do fundamentals. But if you want to marginalize this largely irrelevant field further, then your attitude to CS will do fine.

CS burned through 10,000 man

> CS burned through 10,000 man years of type theory, PLT, and related academic endeavours to end up with Haskell. One of the absolutely worst engineered languages in the world, without any good software engineering in the large constructs, with a slow operational model, and with a flawed approach to software engineering through type classes.

Do you think the "languages of the future" are going to borrow more from Haskell or more from C?

I already stated that

Haskell is an okay-ish incubator and a fine programming language for small mathematical prototypes. I already stated that elsewhere. In that respect it didn't change any qualities with respect to what they had in the '80s.

But pushing it as a general-purpose language, where there is none, with disregard for how to engineer a language. Yeah. They started to believe their own horsedung.

Thanks for explaining where

Thanks for explaining where you're coming from. Makes things much clearer.

You're mistaken about what my attitude toward CS is, though. I'm just observing that software itself is mind-stuff. Running it has consequences in the physical world, as you point out. What you're calling Computer Science seems to be what I would call theoretical computer science — type theory and whatnot; and I've maintained for years that theoretical CS is one of the purer branches of mathematics. (Possibly you're restricting "CS" even more; I suspect a bit of cherry-picking in your assessment that CS has had no effect on industry, but it's just a suspicion.)

It seems to me there is nothing else quite like CS (rather justifying the existence of CS as a subject of study): it's at the intersection of mind-stuff with the physical world, and you lose overall perspective on it if you try to ignore either half of that intersection. I think theoretical computer science is still kind of pathetic because it doesn't have any handle at all on most of the really important primitive concepts it ought to be studying. (I've been struggling with the concept of abstractive power, but that's just one instance, not representative of the range of concepts needed.)

My own intuition says although typing can seem harmless enough in really simple cases, sophisticated type systems disastrously obscure things and, more broadly, not only is type theory a bad way to handle correctness proofs, but we don't yet have a clue how "correctness" proofs can be usefully handled.

Tendentious, Unfounded, and Needlessly Rude

All three of the above are true, because sometimes that's the best way of sharpening a discussion.

> cherry-picking in your assessment that CS has had no effect on industry

I do cherry-pick that argument, but I am willing to defend it. There is a case that CS is both a perfume industry ("It looks like math, therefore we are relevant") and has a banking mentality ("I am being paid well, therefore I am worth it").

EE doesn't bother too much with philosophy though I don't doubt they recognize that doing some work on correctness is worth it.

If CS only has a marginal impact on industry and PhDs don't increase their value then the field is doing something wrong. I think that would be unheard of in physics. Okay, they have fundamental theorists too as a cheap manner of doing experiments but at least they know and recognize it doesn't have industrial relevance.

There is a strong case for seeing CS as "just another engineering discipline" which should pay off in the short run, where PhDs hobby algorithms together around data-centres and what-not instead of pursuing lofty mathematical goals.

:-(

I'm sorry you see my remarks in this light. I, in my turn, perceive the aggression to be entirely on your part; I was enjoying our discussion until... well, I've noted what it looks like to me, and you're claiming attacks and rudeness on my part. It seems we're not going to get anywhere further of interest here, which makes me very sad.

Change :-( to :-)

Man. I am talking about myself. Claiming that "CS has no impact" and that applying category theory to cryptography is "teaching a pig to fly with applied philosophy" is rude and won't buy me kudos anywhere.

You're polite and I enjoy this discussion.

:-)

Okay. I guess I'm muddling things by being hypersensitive.

I've got mixed feelings about category theory, myself. I find some of the basic concepts — as opposed to mathematical details — quite enlightening (which oddly reminds me of James Clerk Maxwell advocating doing calculations using Cartesian coordinates and then expressing relations using quaternions to clarify the concepts). I find most actual category theory... unhelpfully esoteric, and am reminded of Felix Klein's recommendation that "a mathematical subject not be considered exhausted until it has become intuitively evident". Those few bits of category theory I've managed to get an intuitive handle on, I somewhat approve of, yet most of it still comes across to me as "generalized abstract nonsense" (the phrase "applied philosophy" does make me smile).

I somewhat recall it's been suggested science has produced nothing of interest, all advances really coming from engineering. I suspect this may be drawing an artificial line between the two; and it also worries me because I really believe in the value of basic research, which I feel we neglect at our peril. But most of category theory still looks to me like generalized abstract nonsense.

There does seem to be an extraordinary contrast between the high-powered math, which I've done (when I had to), and serious hacking, which I've also done (when I had to). And yet... they still feel to me like somehow they're doing the same thing.

I remember conversations with a dbms architect, many years ago, who suggested iirc complexity theory was about the only theoretical CS of any practical use, inasmuch as being broadly aware of the complexity classes could be helpful. Deadlock, I recall they said, is a non-problem that theorists make a huge deal of; it practically never happens, and if it does a sysop can reset the system or just kill a process.

80% of CS PhDs should have an increased industrial value

If I value category theory as an engineer: Another attempt at foundational mathematics by being deliberately vague about it. Yah. Maybe you should do that in the math department, or have your butt kicked as an engineer.

I recently saw a John Baez talk where he postulated a "hypothesis" about n-categories. A hypothesis, since he had no clue what such a category could possibly be. Okay...

Yah. Seems to me category theorists enjoy discussing the mathematical feelings they develop. Like I said: they're usually trying to figure out the true meaning of the color pink. Great science.

(I think Baez is that good that he can get a feeling to work. But I can't care much about applied category theory. It belongs in the math department.)

'foundational mathematics'

My interest in category theory is as an aid to thought, not a basis for it. But, the term 'foundational mathematics' in the above stirs my thoughts concerning suitable mathematical foundations, which relates to what I see as the primary conceptual contribution computer science has to offer to mathematics...

I'm wary, in this sort of discussion, of the distinction between "should be done better" and "should be done". I'm in favor of research, nearly full-stop, and I'm not inclined to advocate shutting down someone's research merely because I think it unpromising. But I do find some lines of research unpromising, and thus think some kinds of research could be done better.

Which seems remarkably akin to the basic distinction between traditional mathematics asking whether something can be done, and computer science (at its best) asking how cleanly it can be done. (Imagine getting a program to work, versus getting it to work efficiently, versus expressing it clearly so it's maintainable.) To adapt a joke that's been on my mind lately: a mathematician and a computer scientist are each presented with a wood stove, wood, matches, a pan, and a hand pump, and asked how they'd go about boiling water. They both produce much the same procedure: pump water into the pan, put the pan on the stove; put wood in the stove, light it with the matches; wait for the water to boil. Then they're asked how they'd modify the procedure if the pan was already full of water. The computer scientist says, just skip the step about filling the pan. The mathematician says, dump the water out of the pan, thus reducing it to a previously solved problem. Some computer scientists get too much like traditional mathematicians.

When you say 'foundational mathematics', though, I realize that to me, mathematics should clarify thought, and consequently I don't trust a mathematical foundation that's less clear instead of more. This, to me, is why computer scientists would be likely to identify with a mathematician like David Hilbert whose attitude is similar to that of William of Ockham: cut to the chase! Though, when I talk about some computer scientists getting too much like traditional mathematicians, I note that type theory claims to trace its lineage back to... Russell and Whitehead.

This is why I see computer science having something to offer to mathematics: mathematicians, too, should care how clear their reasoning is.

Physicists know how to protect their field

They usually march the philosophers and abstract mathematicians out the front door. Though they do allow for shades of gray and liberties sometimes; if you're that good.

The proliferation of abstract mathematical models, flawed for solving basic questions since they're hindered by fundamental theoretical results but pursued anyway since it's math; the experiments with esoteric forms of math which one should steer clear of; the abundance of dead, irrelevant, unfollowed mathematical exercises; the non-development of programming languages in favour of a fixation on getting math to work; the fact that industry is constantly ahead of everything and all of CS is reduced to explaining what industry develops.

CS is not ahead of the game, CS is infected by a prolongation of the Hilbert program, a playing field of abstract mathematics, and a deplorable number of ill attempts to keep flawed irrelevant mathematical models alive.

I think I am looking at an underdeveloped engineering field with too many mathematicians reigning free to develop their hobbies.

You seem to state more philosophy to explain to me that that is a good thing. Foundational, and most of, CS is by and large irrelevant; it's a perfume industry. You pay a high price for someone to give you pretty, but irrelevant, documents.

paradigms

> You seem to state more philosophy to explain to me that that is a good thing.

Though we aren't simply agreeing, my position isn't in all respects diametrically opposite to yours. I wouldn't call what I'm saying 'philosophy' (though that's partly about word usage), and to whatever extent I'd agree with your description of what's going on, I don't claim it's a good thing.

> I think I am looking at an underdeveloped engineering field with too many mathematicians reigning free to develop their hobbies.

Actually, I find myself more ambivalent than I started out regarding the science-versus-engineering question, but more to the point, I'm realizing as we discuss this that the situation fits something rather stronger, and more specific, than "underdeveloped": we're talking about a preparadigm field, in the Kuhnian sense. You can bemoan how chaotic a preparadigm field is relative to a paradigm one (like physics), but bemoaning it won't solve the problem; devising a successful paradigm is the only thing that'll do that. (Of course, physics is pushing somewhat into the realm of "philosophy" atm, because lately it's something of a post-paradigm field, with people casting around for a new paradigm and coming up with things like string theory.)

A paradigm for multidisciplinary engineering

I wouldn't know, probably like any economist I am only good at predicting the past. And LtU colors my vision too.

But my bet is that the over-emphasis on the role of fundamental math will go out of the window at some point (possibly in favor of simple applied math); people simply want results, and it's a nice thing to do anyway. Dijkstra wasn't a fool, but even he could not predict how wide the scope of computing would become.

To me, it doesn't make much sense to do too much fundamental research in any form. Buy a lab. Hobby around. Keep it simple and change scope often. Work a lot with the other fields. That is what I would do.

Obsession

> But my bet is that the over-emphasis on the role of fundamental math will go out of the window

A paradigm, according to Kuhn, allows rapid advancement because researchers within it aren't distracted by fundamental questions. The paradigm doesn't necessarily answer such questions, but it does delineate which questions are to be asked. This puts me in mind of the other current example I know of a pre-paradigm field: memetics. I've argued (on my blog) that memetics research is obsessed with defining what a meme is, when in ordinary biology the definition of gene was a late development, not a starting point. The idea is that if we had a viable paradigm to guide memetic research, it wouldn't even try to define what a meme is, normal research would proceed without people obsessing over the lack of such a definition, and presumably at some point in the far future, after we'd learned lots and lots about memetic organisms, we'd work out what sort of definition of meme would have most explanatory power for our understanding of memetic organisms. Likewise, the fundamental-math obsession in CS may also be due to a lack of a viable paradigm to delineate normal computer science research.

You're more high-brow than I am

I am sorry but the way I look at it we're still fixing Dijkstra's punch card problem. It was simply annoying to punch holes and find out that didn't work.

Fundamental science is fine to do, but given the scope of the field (we did progress beyond punch cards) it makes much more sense to educate pragmatic applied mathematicians in the theoretical part of CS than anything else.

I figure what we need is the

I figure what we need is the CS equivalent of an Isaac Newton, to discover the laws of motion and law of gravitation and differential calculus. But we'd better not hold our breath waiting for this genius to show up, 'cause we don't actually know how many years, decades, or centuries it might take. So this isn't a practical plan of action, just an aid to understanding why the problem exists.

It seems as if you have a plan of action, but I'm not entirely sure what it is, specifically.

No Plan

> It seems as if you have a plan of action, but I'm not entirely sure what it is, specifically.

No idea. Uhm. My plan of action is to observe over the Internet whether the current scheme of narcissism will survive?

I need to backtrack to see how we ended up with this conversation anyway.

Popcorn-worthy

Heh. That offers some entertainment value. :-D  Alternatively, if amongst us we could come up with laws of motion and gravitation and a differential calculus, we could show 'em up.

Intuitionistic Logic + Lambda Calculus

Some random thoughts: I have been reading the same book on category theory for several years now and can't really get past about page 75, as every time I pick it up I find it unintuitive and have to backtrack to re-understand what is going on. Computational trinitarianism (http://ncatlab.org/nlab/show/computational+trinitarianism) shows the mappings between logic, category theory and type theory. Are these then meta-categories, and can we map concepts between them with meta-functors?

Reading through the different approaches, the logic approach seems simplest and to make the most sense. We know logic and types are the same thing (and these programming types seem different from 'type theory' types, so I think they are actually a branch of logic). We also know that negation causes big problems, and recursive types need to be handled with care; again this is more obvious in logic, as the unsoundness is easier to express in a comprehensible way.

So I vote for intuitionistic logic as the fundamental system. It seems to work well for proof assistants (Coq, Twelf, etc.) for expressing theories. This all ties in nicely with Prolog-like logic languages and theorem provers. Prolog is a flawed theorem prover (with unsound unification, an incomplete search, and incomplete inference), but fundamentally Prolog-like theorem provers search automatically for the proof, whereas Coq and other proof assistants have an assisted manual search. In both, the proof is a set of reduction steps (effectively a lambda-calculus expression) that proves the theorem (Coq uses type theory, but as seen above this can be mapped to logic as in Prolog).

So lambda-calculus would be the fundamental laws of motion, and intuitionistic logic would be the differential calculus (sort of).
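For what it's worth, here is a minimal Haskell sketch of the propositions-as-types reading mentioned above (my own illustration, not taken from any of the cited sources): implication is the function arrow, conjunction is a pair, disjunction is Either, falsity is Void, and negation is "implies falsity". The proofs are just ordinary lambda terms.

```haskell
module CurryHoward where

import Data.Void (Void)

-- Negation of a proposition: "a implies falsity".
type Not a = a -> Void

-- Modus ponens: from A -> B and A, conclude B. The proof is application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Conjunction commutes: A /\ B -> B /\ A.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A De Morgan direction that is intuitionistically valid:
-- not (A \/ B) -> (not A /\ not B).
deMorgan :: Not (Either a b) -> (Not a, Not b)
deMorgan k = (k . Left, k . Right)
```

The classically valid principles with no corresponding lambda term (double-negation elimination, say) are exactly the ones missing here, which is one way of seeing why the logic that falls out of the lambda calculus is intuitionistic.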

Resonance

We already have those tools and they haven't produced the sort of results I'd expect to ensue if somebody hit on the resonating natural structure of the field. I'm inclined to think the tools we have on hand are asking the wrong questions. So I'm looking for different questions; if I'm wrong, then the right questions are already being asked, and things will sort themselves out, with negative consequences only for me (having wasted my time and effort on a misguided quest); but if I'm right, the field really needs somebody to be looking for different questions.

What would you expect?

Sounds interesting, what sort of results would you expect?

Naively, it would seem the ability to prove mathematical theorems in a formal way from the ground up using logic and lambdas was a fairly strong result. Perhaps the objection is that this solves mathematics and not computing? But isn't computing just mechanised mathematics?

As for different questions, what are we trying to achieve? I think abstraction is a key concept, as is abstractive power. A good abstraction has natural structure, and is simple but powerful. I have a hint that this is something you are interested in?

My feeling at the moment is that these things are related. Stepanov's "Elements of Programming" shows how algorithms can be treated in a mathematical way, and builds up key higher-level concepts like containers, iterators, ranges etc., starting with mathematical concepts like power series, rings, groups etc. Stepanov's treatment of this "Standard Library" stuff is the best I have seen, and so often this gets left out of language design. This relates to the earlier discussion because I think the algorithms themselves are best expressed imperatively with value semantics (this seems to lead to the most readable algorithm descriptions) and the 'generic plumbing' by a declarative logic language (as this leads to the most readable code, where we are only concerned with the what and not the how). Not only does this let us express the "Elements of Programming" in a natural way, it lets us prove things about them, and have automated derivation of implementations for some cases, as we have the same natural structure as the proof assistants and automated theorem provers.
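As a tiny illustration of that Stepanov-style treatment (my own sketch, not lifted from Elements of Programming itself), here is the generic 'power' algorithm in Haskell: repeated squaring over any associative operation, so one definition covers exponentiation, multiplication-as-repeated-addition, matrix powers, and so on. The names, and the assumption that the operation is associative and n >= 1, are mine.

```haskell
module Power where

-- power op x n computes x `op` x `op` ... `op` x (n copies of x, n >= 1)
-- by repeated squaring, for any associative op.
power :: (a -> a -> a) -> a -> Integer -> a
power op x n
  | n == 1    = x
  | even n    = power op (op x x) (n `div` 2)
  | otherwise = op x (power op (op x x) (n `div` 2))

-- The same algorithm specializes to several familiar operations:
--   power (*) 2 10 == 1024   (exponentiation)
--   power (+) 3 7  == 21     (multiplication as repeated addition)
```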

Expectation

My (entirely personal) expectation is that twenty years from now, for any program for which I can write a specification, I can write the program in such a way that the compiler checks that it matches the spec -- no bugs. I think we're going approximately in the right direction for this to become true.

(Not to say that John is wrong: I think his instance of Pascal's Wager is spot on, and that whoever is genuinely interested in other approaches should be given the means to look for them.)
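In the small, something like this exists already; here is a minimal Haskell sketch (my own, using GADTs and type-level naturals, not anyone's production methodology) in which the type of append is the specification and GHC checks the implementation against it:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}
module VecSpec where

data Nat = Z | S Nat

type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- A list whose length is tracked in its type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- The type is the spec: the result length is the sum of the input
-- lengths. An implementation that drops or duplicates elements in a way
-- that changes the length simply does not type-check.
append :: Vec n a -> Vec m a -> Vec (Add n m) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)
```

Of course, length is a very shallow specification; the open question in the comment above is how far this scales to the specifications we actually care about.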

Get that out of your head

There are trivial problems for which it is known that this is completely intractable. If this were possible it would have been solved in the eighties; nothing fundamental has changed since then, and the entire field is blocked by fundamental results. You're asking for something which will solve cryptographic puzzles.

SAT Solvers

SAT solvers will solve crypto puzzles (given enough time), and answer-set programming (and AnsProlog) provides a very nice interface to SAT/SMT-style solving for all NP problems. So you can at least express these problems nicely, even if solving them is hard. The thing with SAT solvers is that although they don't speed up something like integer factorisation of the product of two primes, they do achieve a significant speed-up on a large chunk of NP problems. The key is that NP-hardness is about the worst case; many interesting and useful instances are average or easy cases.

Answer-set programming is a form of logic programming, and the SAT/SMT solver is doing a proof search as a theorem prover, so this all ties in nicely with the framework I outlined above, and is the kind of thing I am thinking of for the meta-language part of my project.
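For readers who have never looked inside one, here is a toy DPLL solver in Haskell: a minimal sketch of the textbook algorithm (unit propagation plus backtracking), nothing like the clause-learning or survey-propagation solvers discussed elsewhere in the thread. Clauses and literals follow the DIMACS convention.

```haskell
module MiniDPLL where

import Data.Maybe (mapMaybe)

-- A literal is a non-zero Int (DIMACS style): positive for a variable,
-- negative for its negation. A clause is a disjunction of literals,
-- a formula a conjunction of clauses.
type Lit     = Int
type Clause  = [Lit]
type Formula = [Clause]

-- Make a literal true: clauses containing it are satisfied and vanish,
-- and the opposite literal is removed from the remaining clauses.
assign :: Lit -> Formula -> Formula
assign l = mapMaybe reduce
  where
    reduce c
      | l `elem` c = Nothing
      | otherwise  = Just (filter (/= negate l) c)

-- Plain DPLL: propagate unit clauses, otherwise split on the first
-- literal of the first clause. Returns a (partial) model of true literals.
solve :: Formula -> Maybe [Lit]
solve f
  | null f     = Just []            -- no clauses left: satisfiable
  | any null f = Nothing            -- empty clause: conflict
  | otherwise  =
      case [l | [l] <- f] of
        (u:_) -> (u :) <$> solve (assign u f)
        []    -> let l = head (head f)
                 in case solve (assign l f) of
                      Just m  -> Just (l : m)
                      Nothing -> (negate l :) <$> solve (assign (negate l) f)

-- Example: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
--   solve [[1,-2],[2,3],[-1,-3]]  ==  Just [1,-3,2]
```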

Factoring

Ask them what the upper bound is on the number of bits they can factor in a multiplication.

Number Field Sieve

The number-field-sieve is still the fastest factorisation algorithm as far as I know, but did you miss the part about many problems being solved fast by SAT solvers? The thing is you don't have to be able to prove everything, just some useful things. Much like type systems will not type every program, but they are still useful if they type 'normal' programs.

Maybe I will try some factoring with the SAT solver I wrote, although it's using roughly decade-old technology - I am in the process of updating it to use multi-step look-ahead, which happens to be the fastest kind of SAT solver for factorization problems... clause learning with restarts does not perform as well for that class of problems.

Spacetime is O(N^2)

As far as I know. Probably they went from, and I need to heavily guesstimate here, 14 bits to 27 in recent years.

I like SAT, I know the field somewhat so there is no need to explain. It has an application domain in EE, at some point they might prove an FPU correct, or abstractly correct by induction. Then you can toy with it in protocol verification but it is mostly completely useless for program correctness.

And now we're doing program correctness again because of the closed scope of the field. You can also use a SAT solver very well to design games around, solve planning problems, do language semantics experiments, simulate sports games, calculate through casino bets, etc.

See the problem? Narrow mindedness forces SAT to be applied to protocol verification, useless dead end without pay off, instead of applying it to a broad scope, where it will pay off.

The good thing about SAT is that at least it is an applied engineering problem with a mathematical twist.

Factoring. 14, 27, or many more bits?

Bleh. Now I want to know. Anyone got a reference to this concrete problem being solved by a SAT solver?

Going to do the experiment myself

But I lost a number of old sources regarding this. Will take me another month, or two, to program it all again.

Anyone else interested?

(Hmm. I forgot a lot of things. The complexity of multiplication is lower than quadratic but I used naive multiplication in my own experiments. Didn't run them on a SAT tool though. And an experiment was already done in 1997 which set the record then at 40-50 variables. Maybe they broke 100 bit RSA by now, no idea.)

Hm

Which "fundamental results" are you referring to? Note that I'm willing to write the program myself (along with some correctness arguments such as some type annotations or proof scripts), and the automation only need to *check* that the specification is met. I'm not asking for the program itself to be guessed.

The biggest unknown is our ability to come up with specs for enough useful programs. Although this has been done for many interesting programs (typically algorithms and compilers), it is also something we don't know how to do for many interesting programs (typically GUI), and there are some important aspects of programs we don't know very well how to specify and reason about (typically memory usage).

Uh? Can't you guess most of them?

Proving is undecidable and only feasible with human-computer interaction at the moment. Most decidable decision problems are in some subclass of NP. SAT is restricted to toy models over relatively small state spaces.

If I do an interactive proof, I am a sloppy machine instructing a strict machine through hoops. Maybe if you can advance AI to make intelligent machines you can trick the machine into doing a proof automatically. But that hasn't happened yet, despite all 'advances' in AI.

Or you can prove P=NP. Personally, I have the feeling that it might be true, and apparently so does Knuth. But I also don't see why.

Survey Propagation

The survey propagation algorithm can solve industrial-sized SAT problems (millions of variables), so saying it's restricted to toy models in small state spaces seems inaccurate.

There are also goal-oriented techniques for answer-set-programming that look interesting.

Millions of variables correspond directly to the circuit

It depends on the structure of what they feed it. If I assume 20-bit multiplication then that gives 400 * c variables; let c be around 5, then say roughly 2000 variables per multiplier circuit. Write a small program like a tiny matrix multiplication, or a small loop, add some comparisons, and you have your few million. Just a number of multiplications.

Mind you, I would like to see it concretely but I estimate this is about right. And maybe the program is something else or far more trivial than this.
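Written out, the back-of-envelope estimate above looks like this (the constant c, variables per bit-product cell, is a pure guess, as in the comment):

```haskell
-- An n-bit schoolbook multiplier has on the order of n*n bit-product
-- cells; assume roughly c fresh CNF variables per cell. c is a guess.
multiplierVars :: Int -> Int -> Int
multiplierVars n c = n * n * c

-- multiplierVars 20 5 == 2000, i.e. roughly 2000 variables per
-- multiplication; a thousand or so multiplications (a small matrix
-- multiply, say) already yields a couple of million variables.
```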

Bleh. Randomized Sat instances and Clause densities

I read some of the survey propagation work. So far it's all random SAT instances, which I don't think buy you anything, and clause densities, which I don't think buy you anything either.

Intel is screaming for people who can prove an FPU correct; everything else is window dressing. Even if you can only do that with an abstract model where n=10 (I don't know EDA that well), that's good enough, since it then very likely works for n=32 too if you only need to change two constants.

I am not buying this story about survey propagation at all unless they throw it against hard-core industrial problems. It might do better than what we have now, but I want to see it.

(On a positive note, they might have shown that randomized SAT instances are a worthless metric, which is what I assumed anyway.)

Knuth on SP

Knuth's comments on survey propagation were "Success ... isn't guaranteed. But hey, when it works, it's sometimes the only known way to solve a particularly tough problem."

It really behaves like belief propagation in a Bayesian network, and I think what it is doing is equivalent to getting pseudo-satisfaction by looking at the big picture. You then use something like the WalkSAT algorithm to finish it off. Knuth was solving random 3-SAT in the "difficult" region with 800,000 to 1,000,000 variables and 4M+ clauses on a PC.

There is no difficult region

It has been my hypothesis ever since I started reading about SAT that I can fine-tune any k-SAT generator to generate either trivial or intractable problems exactly at any threshold. I cannot prove it to you, but I don't think it exists.

There are underconstrained and overconstrained problems but you cannot assume that a combinatorial problem can be approximated by random generated instances.

Matchsticks

God. I cannot prove it to you but the mental model you should have is that these people throw a number of matchsticks on a large heap in the hope it encodes a combinatorial problem.

Will that ever encode a combinatorial problem? Of course not. But you observe that here and there some small combinatorial problems emerge, and if the sticks overlap not too much and not too little it becomes harder. Hence the enormous number of variables and clauses they can handle.

But it won't solve factoring.

(Oh. Hmm. It's probably actually easy to prove.)

Problems other than factoring

Factoring is just one problem, and it has a fairly narrow use in breaking cryptography. I can think of very few programming constraints that would result in a factoring problem. Most simply require algebraic re-arrangement or Gaussian elimination to solve. The case of N = P * Q, where both P and Q are unknown, seems unlikely. For example, in array addressing you will know the stride and the start address as well as the offset. Few pre/post checks will be NP-hard, and not all NP problems are as hard as factoring. Not all factoring is as hard as factoring the product of a pair of primes; we can find a pair of factors for an even number very quickly.

Factoring naturally emerges in model checking

Factoring, or rather, multiplication, naturally emerges when you translate a program, or specification, to a propositional formula. (I should write down an example for myself, though.) That is, if you assume a bit representation for numbers; you can also try and start reasoning over abstract algebraic representations of mundane arithmetic.

But anywhere you write down an abstract equation "x * y = z", where you're going to search the entire search space, you're stuck with the observation that the digital circuit representation of multiplication has timespace complexity O(n^2) (that's what I remember at least); you cannot go lower than a quadratic representation. And a DPLL solver running over a quadratically sized problem, thought to be somewhere in an NP class, severely limits anything you'll be able to do with a model checker over a pragmatic specification language.

The only bright light in the field is that primality testing is in P whereas factoring is in NP. That gives a very small hope that at some point the two classes might turn out to be equal. (But I probably lack the intuition for why they must be separated.)

(I am reading a paper on the exponential BDD representation of a 14-bit multiplier. That cannot quite be true somewhere, since you know^1 there is a quadratic representation, but as soon as you start searching the representation, yah, it blows up in your face.

1. Multiplication to circuit quadratic, circuit to NAND linear, NAND to 3SAT linear, 3SAT to BDD linear. Unsure about the last step, I have an assumption somewhere that by the introduction of extra variables you can keep the translation linear, no idea. Ah. The elimination of variables, something I don't always assume, messes with the last step since that's state space exploration.)
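To make the 'multiplication to circuit to CNF' step concrete, here is a Haskell sketch of the standard Tseitin encoding of individual gates (my own example; the literal numbering is arbitrary). Each gate output gets a fresh variable and each gate becomes a constant number of clauses, so the CNF for an n-bit schoolbook multiplier is quadratic in n, in line with the estimate above.

```haskell
module Tseitin where

type Lit    = Int
type Clause = [Lit]

-- Clauses asserting o <-> (a AND b).
andGate :: Lit -> Lit -> Lit -> [Clause]
andGate a b o = [[-a, -b, o], [a, -o], [b, -o]]

-- Clauses asserting o <-> (a XOR b).
xorGate :: Lit -> Lit -> Lit -> [Clause]
xorGate a b o = [[-a, -b, -o], [a, b, -o], [a, -b, o], [-a, b, o]]

-- A half adder on bits a and b: s = a XOR b, c = a AND b. A schoolbook
-- multiplier wires up O(n^2) such cells, which is where the quadratic
-- circuit (and CNF) size for "x * y = z" comes from.
halfAdder :: Lit -> Lit -> Lit -> Lit -> [Clause]
halfAdder a b s c = xorGate a b s ++ andGate a b c
```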

model checking

I don't see where factoring comes into model checking? Got any examples? What about using model elimination instead?

Maybe you'll get lucky

Is a strategy too.

F-star

Have you looked at what they can do with f-star?

Never heard of it.

Never heard of it.

F-star link

http://research.microsoft.com/en-us/projects/fstar/
There is a tutorial here:
http://rise4fun.com/FStar/tutorial/guide
with an interactive online execution environment to try things out in.

Ah

I thought you meant model extraction in relation to the general problem of satisfiability.

Yah. Seen it

Not my thing. I like minimal stuff. Even in the case of SAT I directly go to NAND to see whether I can understand, possibly improve, what they're doing in the field of 3SAT.

Lots of TODOs. I doubt they can get it to work but it's so complex that I cannot even figure out what they're doing exactly.

I'm sure you have, since it

I'm sure you have, since it was posted to LtU (and here).

Yeah well

I am on LtU, or other fora, until someone kicks me off from time to time, or I lose interest. I don't read everything. (Followed the link, seen it, forgot the name.)

Found Knuth on a Satisfiability Shootout

He won the race on Randomized k-Sat instances but didn't appear anywhere in the industrial challenges. Link is here.

It's a hard problem.

Hm

I'm fine with providing proofs myself as long as it is not too much work (current systems for certified programming are not usable enough to meet this bar, from my perspective); yes, with human-computer interaction. I want automated theorem proving to help with the easy cases (which are many).

You could say I'm interested

You could say I'm interested in abstraction, yes. (Which reminds me, I still owe an answer on the last question there.)

If I could say exactly what question we ought to be asking, I'd be asking it. But, as a starting point, I'd like to better understand the deep dynamics of this. Seems like Parnas was trying to get at some of this stuff but his model somehow didn't have enough breadth.

Made me laugh...

It made me laugh, but perhaps it should not have. Personally I find the answer is zero tolerance: as soon as it starts to look unwieldy, that tells me I have the wrong abstraction, so I throw it all away and start again with a different abstraction, building on the knowledge of the first attempt. Sometimes the correct abstraction is not obvious, in which case I tend to suspend development, work on a different project, or just do something completely unrelated. Normally after a few nights' sleep something pops up that gives me a new lead.

I think of programming much like Sherlock Holmes (or better: Auguste Dupin) solving a case, and perhaps this is some kind of deductive method? There is an elegant abstraction for this problem, I just have to follow the clues to find it.

Zero tolerance

As I understand it, part of the point is that the guy in the cartoon had zero tolerance too; that's why he started over. It seems as if the point of abstraction is, or should be, that eventually you get a given element right and never again have to redo it; over time, the number of parts you never have to redo increases, without bound. In the long run, all programming is bottom-up. But this sort of thing keeps happening, to folks who set out to have zero tolerance. Perhaps the reason it happens isn't because we didn't try hard enough, but some other fundamental dynamics of the system, which we need to understand better, and find better ways to cope with.

finding abstractions

> that eventually you get a given element right and never again have to redo it; over time, the number of parts you never have to redo increases, without bound

It seems to me that the "number of parts" will quickly grow beyond any human's working memory, and "finding" the right abstraction (and recognizing it) will be very little easier than it was before.

Zero tolerance is not very useful in the long run.

all programming is bottom-up

Bottom up is nice. But I'm also interested in top-down methodologies.

One possibility: developers glue together ambiguous 'spaces' of abstractions, which represent many possible programs that can be easily implemented. Then (alongside normal updates) we iteratively refine this space with support of fitness heuristics, software tests, and an IDE. Hopefully, the 'spaces' of abstractions can be pretty high level, like pseudo-code.

In such a system, we never seek the perfect abstraction; instead, a lot of rework and sweeping changes in the implementation details are shifted to the compiler to meet new higher-layer types or constraints.

The trouble with tribbles

> It seems to me that the "number of parts" will quickly grow beyond any human's working memory, and "finding" the right abstraction (and recognizing it) will be very little easier than it was before.

I agree that's a potential problem; excessive proliferation of parts should be considered one of the things that can effectively bound abstractive radius. I'm not convinced it's an essential (rather than accidental) problem in the near term, though; if the number of parts really blows up that fast, I take the blow-up as a sign we're not doing abstraction right. If, of course, it can be done right; a theoretical result that says certain things can't be done would be of great interest... if it were sufficiently compelling. (A conservation law of some sort? But I suspect we don't even have a name yet for what's being conserved.)

> Bottom up is nice. But I'm also interested in top-down methodologies.

Recalling again my favorite P.J. Plauger quote: Top-down is a great way to redesign a program you already know how to write.

> developers glue together ambiguous 'spaces' of abstractions, which represent many possible programs that can be easily implemented.

Interesting. I don't pretend to entirely follow; but this is a very familiar stage of an idea, where one has it oneself (to some extent) but hasn't yet licked the separate, and often even more difficult, problem of how to explain it to others.

I suspect, though, that at its essence this is still just abstraction, perhaps with a shift of focus putting it in a more concentrated form, but, still subject to the same dynamical laws — whatever those are.

if the number of parts

> if the number of parts really blows up that fast, I take the blow-up as a sign we're not doing abstraction right

The number of parts blows up fast simply because there are lots of little problems to solve, lots of problem-specific or business-specific parts to model and integrate.

I think this aggregation is essential, but not necessarily a problem with good search and visualization tools. Can we automatically recognize when someone is reinventing an existing component? Can we make it easy to find relevant parts in a given context?

> at its essence this is still just abstraction

I think there's a significant difference between ambiguity and abstraction. But there may be a relationship between the two, and either can be useful when leveraged in an appropriate context.

All abstractions leak

The fundamental problem in CS is that everything comes at a cost. Energy, running time, memory consumption. If it weren't for that, all abstractions would be safe. It's the reason most things converge to C, and C modules expose their innards. Energy, time, memory.

But don't tell a philosopher that.

Wikipedia: Leaky Abstraction

Cost, abstraction, and types

I agree costs are a problem with abstractions. They're also a problem with types. I suspect types may be irredeemable, but I'm unconvinced of the same about abstractions.

Seems you're (over?)using the term philosopher in a somewhat nonstandard way.

I also suspect the 2002 date in the Wikipedia article may be a fable.

Only Einstein can discuss God

I find that a healthy and pragmatic attitude.

It's the same.

I can see this working, but I don't think it contradicts anything. I can see a search system that takes a specification and tries combining existing abstractions to produce a solution. Each successful attempt could be given a complexity number and the search could attempt to minimise the complexity number for the specification, hence finding the best available abstraction.

This would be mechanising the exact manual process I was describing as zero-tolerance. Perhaps optimal abstraction (with current available abstractions) would be a better term?

Stepanov on Redoing

Stepanov suggests that algorithms are never 'done', and that you continuously improve them. They should be published so that the best implementation is available for use and the community can improve them. I tend to agree: it's never done, you can always do better, although there are diminishing returns.

Also, even bottom-level abstractions can change; a new feature can require changing a list to an array, for example. My point applies: design the system so that changes like this are possible (using interfaces, for example) and make the change. Sometimes this can be so fundamental that restarting is the only option. I don't believe there is any way around this. It's like replacing a clockwork watch with a digital watch; nothing is reusable apart from maybe the strap.

Generic programming / parametric polymorphism seems the best way of doing this to me. However I do think the 'sausage machine' plumbing would be better done in a declarative language.

Familiar approaches

My intuition says this starts out with some more-or-less-valid points and slowly goes awry by assuming that familiar approaches are the only ones possible. This would predictably overlook any approach that violates an assumption shared by all the familiar ones; and the sort of alternative I'd hope to find probably would violate some assumption shared by all the familiar approaches.

I don't even make it beyond page one

Without immediately completely rejecting it. And then my head refuses to parse the bullshit.

We're still fixing Dijkstra's punch card problem!

I couldn't keep that gem to myself any longer.

Not sure CS is that

Not sure CS is that lobotomized and disconnected from practice. See the TAP stuff for example, which looks at both the program code and the data produced by the program. The "punch card problem" is an obsession with the code and with the code alone, because it was once expensive to work with feedback. It might still be expensive occasionally, but the pressure is mostly gone. The discussions about "live programming" that have taken place on LtU in the past months have attempted to push the consequences to the opposite end. I also don't think that "platonic engineering" covers that well, except when you consider bits or bit sequences as platonic bodies, something I might accept.

Nothing wrong with that, or a narrow-minded exercise

There is nothing wrong with testing and proofs; someone should do it. But the testing field is 'flawed' by a number of well-known blocking complexity results, leading to endless rehashes of the same theory and checks on very small algorithms which will not terminate, and the proving field is 'flawed' by a lack of good tools and the cost of transcribing proofs in a formal form, leading to frustrating experiments where nothing can be proven since the investment is too high because anything can bite you. Long-term goal...

But there is a case that, because of the notion that CS should be math, the long-term goal of formal correctness, the idea that you should do 'sciency' stuff, and the narrowing of professors' scope through peer pressure, CS is doing a narrow-minded, self-serving exercise where money is wasted on good people who could have developed prototypes or leading-edge products for industry. And these people likely aren't even better than a well-trained test engineer, even if they know the theory; which, in an industrial sense, they often don't.

The only useful tool I am marginally aware of which made it out of this field is QuickCheck, and some variants. And they have been going at this field for at least thirty years. Thirty years of expensive conferences equals one hardly used tool.

Somewhere, it makes more sense to buy a radar and study the migration pattern of doves around your house with Matlab and generate pretty pictures about it than to continue this mostly self-serving industry. I'd rather see the mathematicians involved apply math to complex data sets than this.

(To be honest. The entire field of academic testing, in a naive manner, fits into one book. If you know how to describe a language over finite domains, and you know how to translate it to statements over simple automata, and you know how to translate that to a propositional formula, then you're mostly done. Then there are the extra blocking complexity results. Anything they are doing is often just a complex rehash of that. Someone forgot to write the book and shelve it.)

What I meant was something

What I meant was something like DASH not modeling a program as a finite automaton and then performing an exhaustive test - with the complexity constraints you mentioned.

Design == mind stuff

> It seems to me there is nothing else quite like CS (rather justifying the existence of CS as a subject of study): it's at the intersection of mind-stuff with the physical world, and you lose overall perspective on it if you try to ignore either half of that intersection.

While I understand where you're coming from, I don't think CS is quite as unique as you think in that regard. All engineering is at the intersection of mind stuff and the physical world. All designs start as mind stuff (otherwise they wouldn't be designs, they'd already exist). Often they are largely manipulated as mind stuff for most of the project. Eventually they are realized in the physical world. Some mind stuff becomes a physical bridge. Some mind stuff becomes a physical aircraft. And some mind stuff can be realized as either pure application-specific hardware or as software running on general purpose hardware. Same mind stuff. Different implementation. Turns out that it's a lot cheaper and easier to turn general purpose hardware into a specific device via software than it is to create an application-specific circuit to do the same thing though, so that tends to happen more often.

To the extent that CS mind stuff differs from other engineering mind stuff, it's largely that CS mind stuff tends to focus on discrete systems and complex nonlinear behaviors. Although arguably a lot of digital hardware design has the same focus (so it's perhaps no surprise that we now have languages like VHDL and Verilog that permit hardware design by what amounts to programming), as does complex process control (see "discrete event dynamic systems"). But given the way everybody wants to put a microcontroller in everything to create complex behavior these days, it's a good bet that many other fields of engineering are going to be working a lot more with that discrete-system mind stuff in the near future.

Designing mind-stuff

I was trying to articulate a somewhat different distinction. It's not a question of whether the design is mind-stuff, but whether the thing being designed is mind-stuff. Moreover, it would seem that yet another layer is added when one is designing a programming language, which is designing mind-stuff for use in designing mind-stuff — casting a curious light on my sometime claim that all programming is language design.

Software is a design... which is why its mind stuff

We'll have to agree to disagree, I guess. I see it as a distinction without a difference. What you design is a behavior ("mind stuff"). How you realize that behavior is not, and cannot be, "mind stuff", else it would have no impact on the physical world. This is essentially the point marco was making earlier. Actual software running on an actual microprocessor is no more and no less "mind stuff" than an actual circuit design implemented as an actual collection of transistors. The designed program may be "mind stuff", but then so is a circuit design until it is actually turned into hardware. There are Esterel programs that can be realized as hardware, software, or a mix of both. If there's a difference, I suppose it might be that the software gets converted from "mind stuff" to physical reality on the fly at the time of use, rather than prior to use.

It's perhaps worth keeping

It's perhaps worth keeping in mind the context of my remark about mind-stuff. The question was why deep math (such as Hilbert programs) comes into CS, and I was suggesting that the act of software construction is deeper in math than most engineering disciplines because the stuff being engineered is more purely mathematical — more Platonic, more abstract — than that of other engineering disciplines. Although I've since noted in the discussion that CS is also a preparadigm field (and Kuhn does note, iirc, that research in a preparadigm field is largely occupied with arguing about fundamental questions of the definition of the subject, which distract from the sort of straightforward objective progress enjoyed by a paradigm field), I still maintain that the stuff being engineered in CS is more Platonic, more abstract, more purely mathematical. As you say, one is designing behavior — but if I'm designing an airfoil, I'm not designing behavior, I'm designing a physical thing in order to induce behavior. Verily, "software gets converted from 'mind stuff' to physical reality on the fly at the time of use, rather than prior to use." I'm suggesting this can make a profound difference in how the designer thinks about what they're doing — especially when, as inevitably happens with large software systems, much of their time and effort has to be devoted to managing the software itself.

Funny

> I know the industrial value of a type theorist PhD: He gets hired as a systems administrator. True story.

It's funny; one of my good friends has a PhD in an aspect of type theory, and he works as an SRE (sysadmin) at Google.

wrong place

(The UI on this software has some real ergonomic problems.)

Commitment Phase

The Stack Exchange mentioned in the OP has reached the commitment phase. It needs 200 people. If you're interested, please participate.

Examples?

I didn't follow this proposal at all after the initial posting here. If you did, would you care to give links to some interesting discussion that happened there?

The StackExchange proposal

The StackExchange proposal process involves:

  1. asking good questions, as determined by votes for questions. Each person can ask a few questions and cast a few votes.
  2. getting a bunch of people to commit to participate for the beta
  3. starting the beta; allowing questions to be actually asked and answered

Since we're in phase 2 at the moment, we haven't reached the point of 'interesting discussions'. You can browse the example questions, though - there are only 53 of them.