Simplicity

I'm on a campaign, I'm joining a revolution, and its guiding maxim is this...

"Simplicity is the Price of Reliability, a price which I'm now fully prepared to pay."

The full quote comes from "The Emperor's Old Clothes", C.A.R. Hoare's 1980 ACM Turing Award lecture:

"The price of reliability is the pursuit of the utmost simplicity. It is the price the very rich find most hard to pay."

As more features go into a language, threading in particular, I become less able to write reliable programs, since the language's compiler / interpreter itself becomes ever more complex.

So I'm on the hunt for some other language to be my prime tool... and threading is at the top of my list of "features not to have".

I've learnt my limitations... unlike most programmers, I know I'm not smart enough to code reliably in C / C++ like languages. (I didn't say I'm less smart than most programmers, merely that unlike most programmers I know I'm not smart enough.)

The code implementing Ruby's interpreter is, ahh, um, threaded through with support for threading...

Common Lisp has simplicity of syntax, but is a horrendous sprawl of semantics.

Scheme is near the top of my list, but I suspect a touch more expressiveness in semantics may make large programs simpler.

Scala appeals... but building it on top of the JVM strikes me as a very, very bad start.

I'm begging for suggestions.


With such reasoning, you

With such reasoning, you should stay away from languages with GC. And maybe lambdas. And perhaps Turing completeness. And hope it never gets popular and someone branches it with a version with even 1970s-era optimizations. And lordy, don't run it on an x86.

Perhaps an abacus that you yourself made is the best bet. Anybody else wanting to use it can make their own -- we only assume a wood supply and some other things :)

[This was largely in jest -- I'm actually wondering about what it means to guarantee a simple implementation for a simple semantics. For every translation to a lower-level machine/semantic model, what's the property?]

Why exclude GC?

...unless it were in favour of a linear logic language like linear lisp.

How about this for a thought experiment....

Consider a reasonable major commercial software product. Say a man-decade's worth of software.

Now create some metric, perhaps an index of several metrics (lines of code, maintainability, understandability, defect density, defects), applied to the SOFTWARE AND THE SOURCE FOR THE LANGUAGE AND LIBRARIES THAT IT IS WRITTEN IN, ASSUMING THAT IT IS A "FULLY BOOTSTRAPPED" language (i.e. a language implemented in itself).

What then would be the best language?

I bet under that "360 review", you wouldn't exclude GC'd languages.

360 Review

... would also need to include the implementation of the OS facilities that the "commercial software product" leverages or expects, along with external tools (e.g. databases).

I do agree that GC would be a reasonable part of the simple language. Garbage-collectors can be very simple. Complexity is added to GC mostly to achieve performance or other useful properties (versioning or transactions, for example). This can be a useful tradeoff given the amount of code often spent fighting for performance in commercial software products.
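To underscore how simple a collector can be, here is a minimal mark-and-sweep sketch in Ruby over a toy heap (ToyHeap and Node are hypothetical names invented for this sketch; a real collector walks actual object memory, and adds complexity mostly for the performance reasons above):

    # A toy heap with explicit roots; collect() is plain mark-and-sweep.
    class Node
      attr_accessor :refs, :marked
      def initialize
        @refs = []
        @marked = false
      end
    end

    class ToyHeap
      attr_reader :objects, :roots

      def initialize
        @objects = []
        @roots = []
      end

      def allocate
        node = Node.new
        @objects << node
        node
      end

      def collect
        # Mark: everything reachable from the roots.
        stack = @roots.dup
        until stack.empty?
          node = stack.pop
          next if node.marked
          node.marked = true
          stack.concat(node.refs)
        end
        # Sweep: keep only marked objects, then reset marks for the next cycle.
        @objects.select!(&:marked)
        @objects.each { |n| n.marked = false }
      end
    end

    heap = ToyHeap.new
    a = heap.allocate
    heap.allocate             # garbage: nothing will ever reach it
    a.refs << heap.allocate   # reachable through a
    heap.roots << a
    heap.collect
    puts heap.objects.size    # => 2 (the orphan was swept)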

I agree...

...one of the advantages of Ruby is that much of the external tooling and libraries are themselves written in Ruby, and hence much simpler than C plus external tools. Much of the pain that does occur happens when you are forced into talking to external tools like databases.

Alas, the world optimizes (sometimes prematurely) and drops down into C... or must communicate with the huge burden of legacy code. And then things usually start to hurt a little.

I've seen extensions to GC

I've seen extensions to GC for memory accounting/quotas, extensions for distributed GC, and I can see how a GC extension can be used for orthogonal persistence, though I haven't come across an actual implementation or paper. I've never seen anything relating to transactions or versioning though. Have any links handy?

GC must often integrate with

GC must often integrate with other features in order to achieve them simultaneously. (This follows from the general principle that features orthogonal to a user become cross-cutting to the implementation.) I.e. the GC must be aware of versioned pointers if it is to collect unreachable versions while avoiding spatial growth in the number of maintained pointers. Transaction-aware GC allows the GC to effectively leverage the transaction log to quickly identify possible GC targets... and then to GC the transaction log.

I did not mean to imply that GC was achieving transactions all on its lonesome!

And I can't think of any papers, though I've read more than a few regarding GC with versioned pointers.

Right, the runtime,

Right, the runtime, including GC, always requires cooperation with the compiler. I just hadn't come across any such GCs, though I may have skipped them given I probably wasn't interested in such things at the time.

You obviously want to use

You obviously want to use Forth, right?

Anyways, to quote Einstein: "Make everything as simple as possible, but not simpler." Some complexity is good, a sea of complexity is not good.

NOT!!

I think the parent comment raises an interesting point; "simplicity" often seems to be defined in terms of minimizing something -- but what? Some like to minimize the number of moving parts, others prefer to minimize the number of layers of abstraction.

Perhaps I never hung out with the cool kids who "got it", but I finally gave up on FORTH (and at least one other similar language) due to the insistence on purely operational definitions. Defining all constructs in concrete terms ("it makes the computer do the following things") may seem simple, but ultimately leads to problems:

  1. any omitted detail (intentionally or not) becomes a source of hidden side effects which can render futile one's attempts to reason about the behavior of code, but
  2. attempting to make all details explicit usually:
    1. locks one to a particular set of implementation decisions,
    2. clutters one's understanding by mixing levels,
    3. makes the implementation more brittle, or
    4. all of the above.

Forbidding abstraction hardly seems an adequate response to the problem of "leaky abstractions".

Forth... almost.

It does some things right.
* Very simple syntax.
* Very simple execution model.
* Very simple threading model.

Alas, I think Joel is right. Something like Scheme's "tower of numbers" is a simplifying factor: it makes programs more analyzable.
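Ruby, to its credit, borrows a similar idea: integers never silently overflow and rationals are exact, so the arithmetic behaves like the mathematics it claims to model. A small illustration:

    puts 2**100   # => 1267650600228229401496703205376, exact, no overflow
    puts Rational(1, 3) + Rational(1, 6)    # => 1/2, exact
    puts Rational(1, 10) + Rational(2, 10) == Rational(3, 10)   # => true
    puts 0.1 + 0.2 == 0.3   # => false: floats leak the machine into the maths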

So if I were to go all postfixy... I'd go Joy, or Factor, or Cat, or maybe K.

If only we knew what that means

…to quote Einstein: "Make everything as simple as possible, but not simpler."

Nice phrase, which I have always had problems interpreting in the context of programming languages. Apparently it's not only me. Two languages whose design, according to their respective authors, is motivated by that same quote are Oberon and C++. If both are examples of ‘just enough complexity', then so is any other language.

Simple vs Simplistic

All too often I see people pursue 'simple' and achieve 'simplistic'. Simplistic is failing to capture the essential complexity of the problem domain or solution domain (correctness, concurrency, distribution, parallelization, utilization, efficiency, safety, security, robustness, resilience, etc.).

You can't avoid essential complexity by pretending it doesn't exist; you'll only end up building frameworks and hacks to work around a language missing essential features. That isn't simple. That's a side-effect of being simplistic.

If John Carter's objection is directed towards the shared-state process model for concurrency called 'threading', I certainly agree with the objection to it. There are models of concurrency that are far more manageable, secure, safe, and optimizable, such as dataflow concurrency, complex event processing, and actors-based concurrency.

But if the problem domain is concurrent, there's no avoiding concurrency. If you try, users will just end up hacking some awful process model around the language, much like POSIX C/C++ did, then fighting to improve performance through one sprawling framework after another (shared memory, DLLs, message queues, etc.)

And, John, perhaps you should consider giving up mutable state for simplicity, or at least making many [explicit] uses of it (i.e. for caching, whether that caching be for performance or disruption tolerance) unnecessary.

Dataflow programming seems to be a solution that allows a great deal of both concurrency and automatic caching.
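A single-assignment dataflow variable, in the style of Oz, is small enough to sketch. Here is a minimal version in Ruby (DataflowVar is a hypothetical name, not a library class): reads block until the variable is bound, and binding twice is an error.

    # A single-assignment variable: readers block until bind() is called.
    class DataflowVar
      def initialize
        @mutex = Mutex.new
        @cond  = ConditionVariable.new
        @bound = false
      end

      def bind(value)
        @mutex.synchronize do
          raise "already bound" if @bound
          @value = value
          @bound = true
          @cond.broadcast         # wake every waiting reader
        end
      end

      def value
        @mutex.synchronize do
          @cond.wait(@mutex) until @bound
          @value
        end
      end
    end

    x = DataflowVar.new
    reader = Thread.new { puts x.value + 1 }   # blocks until x is bound
    x.bind(41)
    reader.join                                # prints 42

Note the irony: the sketch leans on a mutex and a condition variable underneath, which illustrates the general principle noted earlier: features orthogonal to a user become cross-cutting to the implementation.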

I think I can propose a simple distinction....

If I find the source code for a language's compiler / interpreter / support libraries strewn with support for concurrency... I reject it.

Let the OS manage concurrency at the process level. Ruby and glibc sources are now befouled with streams of code that works... 99.999% of the time. Except I need to hit it with 20 or more 4-core boxes running at 3 GHz for weeks. I.e. I'm suffering a stream of extremely hard-to-trace, very low probability bugs... that hit me too often to accept. After tracking these bugs for several years, I'm now convinced that this approach is a dead end.

The fix rate for them is very slow, because it is very hard to create clear and simple test cases, and usually even harder to create "obviously right" fixes.

Not so simple a distinction

The distinction you propose between 'Language' and 'Operating System' is a rather weak one. In many ways, one may consider the 'abstract machine' upon which a language is designed to run or leverage to implicitly be part of the language.

The heuristic you propose amounts to rejecting languages whose abstract machines require a translation layer (implemented in compiler or interpreter) above what (coincidentally) happens to be a 'host' abstract machine (what you call an 'operating system') with regards to IPC and concurrency. I.e. you'll disfavor languages with portable concurrency and process models. If this policy of yours is extended, you'll also disfavor languages that have different persistence models than use of filesystems, and so on.

Unfortunately, there is no guarantee that the OS-layer process concurrency and inter-process communications models are any 'good'. There is a panoply of communications mechanisms in what you're probably considering the typical OS: environment variables, shared memory file mappings, fifos, sockets, command line parameters, etc. Add persistence to that and you get more mechanisms: files, init.d, cron. One can reasonably argue that these are part of the OS communications language, and that some of them are workarounds for performance issues poorly met by earlier, more 'simplistic' designs. Weaknesses of the Windows/Unix OS designs for IPC and persistence include:

* poor type-safety
* unnecessary serialization
* complex (fragile) handshake protocols for version management
* complex state maintenance to track callbacks and connections per-process
* low-efficiency multi-cast
* ill-defined multi-process persistence
* potential CSP deadlocks for IPC call cycles, not readily detected
* complex frameworks needed for inversion-of-control
* order-of-startup, shutdown and other process life-cycle challenges
* distribution, disruption tolerance, security, and so on.

The sheer number of mechanisms creates even more complexity and more workarounds. It also tosses "simplicity" out the window, should you deign to directly support OS concurrency and persistence mechanisms from within another language.

To say the OS approach works well 99.999% of the time would be somewhat optimistic. When faced with problems of persistence, distribution, disruption tolerance, multi-cast, resilience (automatic regeneration), consistency (i.e. intentional cascading failures to avoid half-limping multi-process programs where fail-fast is useful), graceful degradation, or concerns for performance (efficiency and utilization), safety, security, etc., I'd be hard-pressed to agree the OS approach 'works' even 90% of the time. I suspect I'd be spending far more than 10% of my time 'working around' the OS, implementing complex frameworks atop these communications, greenspunning myself more efficient applications-layer persistence and inter-process communications.

In any case, saying that some ill-defined 'external' abstract machine shall be responsible for concurrency simply delays essential complexity and doesn't take much advantage of in-language opportunities for safe, high-performance concurrency and parallelization.

And the distinction between a host OS and a language can only be made insofar as the programs written in the latter are independent of the former... which usually requires an abstract machine be defined for the language.

Language vs OS... food for thought.

Thanks for your long, thoughtful, and thought-provoking post.

By the time I got 3/4 of the way through it... you had me convinced... Yes, the OS and the IPC and the I/O mechanisms stink. futex is a nightmare gone wild, select/poll is scary indeed. Yes, there must be a better way... But then, after I finished reading it and contemplating more, I became unconvinced.

Most of what I do is drive a rich ecosystem of other tools and packages through their hoops, and the OS is the habitat that supports that ecosystem. If there were a replacement for Linux that was written in a saner language than C/C++, and had a comparably rich ecosystem, I'd jump at it. But since I don't know of such a thing... I'm stuck writing code that's merely another fish in the big pond of Linux applications.

So why does the OS provide such an important boundary for me? Why do I feel it is the true gateway to and owner of concurrency?

Because it owns the hardware. That _is_ what an OS is: the owner and final arbiter of access to hardware. Virtual memory, scheduling, networking and IPC are all irrevocably and (quite rightfully) the domain of the OS.

I can and do write highly concurrent programs in languages with no reference to concurrency... relying entirely on the OS to do its thing. OK, it is not very fine-grained concurrency... but that's OK. It's robust and simple. And the hardware, as mediated by the OS, provides me with some very strong guarantees indeed.
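In concrete terms, that style looks something like this in Ruby (a minimal sketch, POSIX-only since it relies on fork): no concurrency constructs in the language at all, just OS processes and a pipe.

    # Four workers, each a separate OS process; the pipe is the only
    # shared channel, and the kernel does all the scheduling.
    children = 4.times.map do |i|
      reader, writer = IO.pipe
      pid = fork do
        reader.close
        writer.puts((1..1_000_000).sum * i)  # the "work", isolated in its own process
        writer.close
      end
      writer.close
      [pid, reader]
    end

    children.each do |pid, reader|
      puts reader.read
      Process.wait(pid)
    end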

OS is abstract machine

A virtual machine or hypervisor can run underneath an OS, and an OS can time-share with other OS's on the same machine (e.g. via dual-boot). Some hardware (e.g. network printers) and services (e.g. distributed filesystems) can be arbitrated by many OS's and owned by none of them.

Your argument seems to have you targeting whichever abstract machine you see as being at the 'lowest' layer. Would you favor languages targeted directly for the hypervisor if your machine was running one?

Maybe you should skip the arbitrary abstract machine and favor a language that targets bare-metal. There are many good arguments for doing so... and following your logic, it should be even more "robust and simple".

Last I looked, hypervisors weren't simple...

...they were quite complex and devious things that were either an OS plus hairy stuff, or hairy stuff that was hosted by an OS.

Shudder... have you actually taken a close look at what the likes of Xen does to make things work? Uurgh! Simplicity it is not. God! They are cunning, devious and horrible beasts.

..skip the arbitrary abstract machine and favor a language that targets bare-metal

That's more or less what the likes of C/Bliss/Forth do. There is a certain simplicity that arises from "I know this add is going to take one instruction cycle". Alas, that simplicity is trumped by the complexity of...

* It's going to do the add modulo 2^(8 × sizeof(unsigned int)) for some implementation-defined value of sizeof(unsigned int), and no problem I have is actually represented by such a weird mathematical entity (see the sketch below).

* There is a hell of a lot of fine print about caches and alignment which means that "will take only one instruction cycle" is actually a massive oversimplification of reality.
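The first point in concrete terms, simulated in Ruby (assuming a 4-byte unsigned int, which C does not promise):

    # C's unsigned add, simulated: arithmetic modulo 2^32.
    def uadd32(a, b)
      (a + b) % (2**32)
    end

    puts uadd32(0xFFFFFFFF, 1)   # => 0, silently wrapped
    puts 0xFFFFFFFF + 1          # => 4294967296, what the mathematics says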

OS isn't simple

You have not previously objected to OS's for their complexity. Your argument above should have been made earlier against your own suggestion for targeting the OS's model with the commercial software suite. Or perhaps you can reasonably argue that there is a formal and significant distinction in complexity between OS vs. virtual machines and hypervisors?

It is easy to create an abstract machine above the host OS for the language to target that is simple even to the point of being simplistic. If simplicity is the goal, it seems to me the most effective means of achieving it is to build a concurrency model into the language and its interpreter / compiler / implementation.

Mutable state...

And, John, perhaps you should consider giving up mutable state for simplicity, or at least making many uses of it (i.e. for caching, whether that caching be for performance or disruption tolerance) unnecessary.

What worries me is that often this approach seems to be "you're too dumb to do concurrency (yup I agree with that bit)....but trust us, the compiler writers...we're smart enough to do it right even though our task is harder than yours."

That last bit worries me.

Anyway, did you have any concrete languages / compilers / interpreters in mind when you mentioned that?

Erlang, Oz, Clojure,

Erlang, Oz, Clojure, LabVIEW (G), Max, etc.

It isn't about the compiler-writers, John. It's about the languages. Compiler-writers can't help with poor concurrency models, except to ensure the bad concurrency models are implemented efficiently and to specification. (Efficiency = doing something right. Effectiveness = doing the right thing.)

In any case, your approach seems to be: "you're too dumb to do concurrency.... but trust us, the OS writers... we're smart enough to do it right even though our task is harder than yours".

That last bit worries me. ;)

[Though I'd honestly rather trust the guy who is smart enough to do it right BECAUSE his task is harder than yours. It takes a lot of thinking and learning to figure out good-by-many-metrics ways to accomplish tasks, and people don't do this thinking and learning unless the task they have is sufficiently challenging that they discover a need to build tools to build tools.]

Do you have any concrete OS's in mind that do a good, consistent, composable job with concurrency? By my analysis, Unix, Linux, and Windows should be tossed right away. They do a poor job with concurrency, then do a poor job hacking around the weaknesses with things like shared-memory file mappings and XML schema. They also do a poor job with persistence and synchronization.

Hmm...

Erlang, I like the principles... but the syntax is pig ugly. Also the Erlang and Oz attitude of "concurrency is really hard, so let's make it really easy to do lots of it, so there are no obvious deficiencies" worries me.

Clojure fascinates, because it's like Scheme crossed with Erlang. I really liked RScheme... but then it went unmaintained at one point.

Yes, I agree wholeheartedly that it comes down to the language design, not the implementation... but I still argue that if the core of the "eval" implementation needs to be sprinkled with support for concurrency, the language design itself is wrong.

Do you have any concrete OS's in mind that do a good, consistent, composable job with concurrency?

No, I don't know of any.

Linux does a very usable job, but alas, they have stalled and spun their wheels for about the last ten years, progressively making it more complex in the broken attempt to make the broken pthread model work according to the broken standard.

Do you have any in mind?

Alas, Googling for "Max" turns up the world. Do you have a URL for it?

Max/MSP/Jitter is a

Max/MSP/Jitter is a suite of dataflow-based multimedia languages. Look up 'Max (software)' on Wikipedia.

re: Erlang's "no obvious deficiencies"

It would be neat if somebody checked all the available Erlang code to see how much of it uses timeouts to deal with / avoid deadlocks (as opposed to a system that somehow statically ensures deadlocks can't happen?!).

Erlang

Erlang systems use timeouts for general robustness. They are good for breaking unexpected deadlocks, but also for dealing with network outages, machines overloading, external systems getting wedged, and so on. Remember that Erlang's philosophy is all about being robust in the presence of errors.

Timeouts vs Robustness

I was talking to one of our senior designers here at work when he said a curious thing...

If he could control one aspect of what goes into a (real-time embedded) system, it would be timers.

In previous projects he had found that too often timers were used as a sticking plaster over unreliable subsystems. Timers to control retries down in the comms layer: OK. Timers elsewhere... look hard at them. Often, instead of retrying, they should just fail noisily and expose rather than hide the bug.

After he said that, I recalled an instance where I was on the trail of a flaky bit of code... so I did my usual trick... check every return code. Every one.

Aha! Some buggy code was causing a TCP connect to fail literally nine times out of ten, but a timer and a fast retry were masking it, so no one had noticed it for years (except for a general impression that the connects were slow, compute-intensive and flaky).
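A sketch of the discipline I mean, in Ruby (connect_with_noisy_retry is an illustrative name): every failed attempt is logged loudly before any retry, so a nine-in-ten failure rate shows up in the logs instead of hiding behind a timer.

    require 'socket'

    def connect_with_noisy_retry(host, port, attempts: 3)
      attempts.times do |n|
        begin
          return TCPSocket.new(host, port)
        rescue SystemCallError => e
          warn "connect to #{host}:#{port} failed (attempt #{n + 1}): #{e.message}"
          sleep 0.1   # brief pause before retrying, but never a silent one
        end
      end
      raise "connect to #{host}:#{port} failed after #{attempts} attempts"
    end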

In defense of timeouts (not timers)

In Erlang code I've seen and maintained, we use timeouts to noisily crash processes that are taking too long.

In your example, the first failed TCP connect would bring down a server process, which would result in a whole lot of logged messages; the process would then be restarted by a supervisor. So you'd have the best of both worlds: a controlled retry to keep the application running, and enough debug information to track down the cause.

Silently retrying something, on the other hand, goes against Erlang design philosophy, which is to crash hard on the first error and let supervisors deal with recovery.
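A minimal sketch of that philosophy in Ruby (supervise is an illustrative stand-in; real OTP supervisors add restart strategies, intensity limits, and supervision trees):

    # The worker crashes loudly on the first error; the supervisor logs
    # the crash and restarts it, up to a limit.
    def supervise(max_restarts: 5)
      max_restarts.times do |n|
        begin
          return yield                 # worker finished normally
        rescue => e
          warn "worker crashed (restart #{n + 1}): #{e.class}: #{e.message}"
        end
      end
      raise "giving up after #{max_restarts} restarts"
    end

    attempts = 0
    supervise do
      attempts += 1
      raise "transient failure" if attempts < 3   # crash hard on the first error...
      puts "succeeded on attempt #{attempts}"     # ...and let the supervisor recover
    end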

And lordy, don't run it on an x86....

Believe me... that is a major headache for me. I'm running on a flock of Intel Core 2s... and whether that flock of insanely complex hardware is churning exactly right gives me ulcers to think about...

...well, strangely enough, it's mostly not the hardware that's biting me. It's the layers of libraries and tools, the layers and layers of software, that don't seem to give me enough 9's of reliability in high-load, high-concurrency situations.

Lua

I wonder how you feel about Lua? Seems like they've gone for the same trade-off, at least. I guess the main question is whether their choices regarding semantics are to your taste...

Lua - And one data structure to rule them all...

Well, it worked for Lisp and cons-pair-based lists. So I guess Lua tables are a pretty generic mechanism, especially coupled with first-class functions. I can't help wondering whether Lisp / Scheme's "one data structure to rule them all" isn't a better choice than a hash table.

Either way, it's an interesting choice.

Curiously enough, in the days when I did Perl I found it trivial to spin endlessly complex data structures out of arrays and hashes...

...but then found I couldn't maintain them. In Ruby I've learned, as soon as I start doing that, to make (another) class. Much more maintainable.
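The shift in miniature (a hypothetical example):

    # Before: an anonymous tangle, Perl-style; every caller must know the shape.
    user = { name: "jc", roles: [{ name: "admin", since: 2007 }] }
    puts user[:roles].first[:name]

    # After: the shape gets a name, and the knowledge lives in one place.
    class Role
      attr_reader :name, :since
      def initialize(name, since)
        @name, @since = name, since
      end
    end

    class User
      attr_reader :name, :roles
      def initialize(name, roles)
        @name, @roles = name, roles
      end

      def admin?
        roles.any? { |r| r.name == "admin" }
      end
    end

    puts User.new("jc", [Role.new("admin", 2007)]).admin?   # => true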

And the answer is.....

Currently Lua, Factor, and Cat are looking the best.

Pity that, with the exception of Lua, they haven't shown up in The Great Game yet.

I think I'll spend a few evenings playing with Factor.

URL

The game to which you allude is kept up-to-date here.

The Great Game?

This kind of comparison should be taken with a grain of salt. It does not compare programming languages at all. It has two main defects: (1) execution time is an implementation property that says very little about a language, and (2) the comparison depends strongly on what programs are compared, and typically these are chosen to be appropriate for sequential imperative languages.

Consider concurrency: as dmbarbour says, if the problem domain contains it, then the language should support it. Erlang and Oz do support concurrency in a nice way, and it sounds like you should learn about it before saying things like this:

Erlang, I like the principles... but the syntax is pig ugly.

Also the Erlang and Oz attitude of "concurrency is really hard, so let's make it really easy to do lots of it, so there are no obvious deficiencies" worries me.

The first remark is irrelevant and the second is wrong. If you want to have a serious discussion, please try to understand the replies to your posts and don't dismiss them with flippant remarks. Many people on LtU have a lot of experience and if you listen to them, you can learn a lot.

OK, so I was a tad too flippant...

...sorry.

Too deeply hidden under the flippant remark about the syntax was the thought that writing code to analyse / refactor / compute global properties of / rewrite Erlang code is very hard compared to Scheme / Factor / Joy.

I really do like the "no mutable state" principles of Erlang that make it easier to do concurrency right... Too often people make the assumption that, because they don't actually invoke "pthread_create", they don't have concurrency issues.

Every state machine in a program is a mini-thread of concurrency with a home-grown scheduler... subject to all the same issues of concurrency. Except, since they tend to work deterministically in lock step, usually only a very small portion of the state space is explored in typical operation.

However, a small code change can easily expose a new portion of state space and new bugs. These bugs are often just "thread races in drag", but are not thought of as thread races because people employ magic thinking... "I don't have threads or concurrency, I have state machines."
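The point in miniature, in Ruby: two "state machines" driven by a hand-rolled scheduler. There are no threads anywhere, yet the outcome depends entirely on the interleaving.

    balance = 100

    withdraw = Fiber.new do
      ok = balance >= 100     # check...
      Fiber.yield             # ..."scheduler" runs the other machine...
      balance -= 100 if ok    # ...then act on a stale check
    end

    fee = Fiber.new do
      ok = balance >= 10
      Fiber.yield
      balance -= 10 if ok
    end

    # This scheduler steps each machine to its yield, then finishes both:
    [withdraw, fee].each(&:resume)
    [withdraw, fee].each(&:resume)
    puts balance   # => -10: an "impossible" overdraft, a thread race in drag

Run each machine to completion before starting the next (a one-line scheduler change) and the answer is 0: a small code change exposing a new portion of the state space, exactly as above.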

I.e. Erlang's principles will help make simpler programs more correct as well.

However, I'm not in the least bit convinced that "making it easy to do concurrency Right" is sufficient license to "use massive concurrency especially without as much hardware support as I can get".

Yes, often the problem domain does contain concurrency. What I'm arguing is threading is usually not the simplest or best mechanism for handling that concurrency.

In several cases that I have had the chance to analyse down to the lowest detail... the claim "We must use native threads" has come down to "we have underspec'd the hardware and designed the system poorly, violating several sound design principles, so now we need a backdoor to rescue us". (Well, those were in the non-trivial cases. The trivial cases were "Microsoft says threads are cool, so we must have some") (The trouble with such analyses... is there are always only going to be very few. Nobody can afford the time it takes to pick apart and truly understand many large complex systems.)

As for "The Great Game": yes, it is only a game. It cannot be a definitive answer, hence the name. However, as an "order of magnitude" answer... it's not bad. I certainly wouldn't get fussed over any difference in "The Great Game" that is less than a factor of 2. But I'm quite prepared to believe there is some fire under any "factor of 10" smoke.

Execution time isn't purely an implementation artifact. There are hard limits to optimization based on how much the compiler can statically infer about the code.

Which is why "Clean" does surprisingly well, and well, I don't think Ruby will ever get close. Sure Ruby can get a _lot_ faster with a better implementation, but I bet it will _never_ catch up to Clean because you simply cannot infer enough about the parameters to and return values from a function.
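One concrete reason why, in Ruby:

    # Nothing in this definition pins down what + means: it is resolved
    # per call from the runtime class of a. A compiler cannot, in general,
    # infer enough here to compile it the way Clean compiles a monomorphic
    # equivalent.
    def add(a, b)
      a + b
    end

    puts add(1, 2)           # Integer#+  => 3
    puts add("foo", "bar")   # String#+   => "foobar"
    p    add([1], [2])       # Array#+    => [1, 2]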

Factor is pretty slow at the moment... but I suspect in the long run it has the potential to be one of the fastest.