Dynamic Languages Symposium Program Online

The program for the Dynamic Languages Symposium at OOPSLA is online.

The three invited talks look especially interesting. After the long nuclear winter caused by Java, it seems that we are finally entering a period of programming language renaissance.

Au Contraire

I'd say that Java's outstanding contribution is legitimizing garbage collection and virtual machines within mainstream enterprises. Perhaps without Java we'd be in a different, darker winter.

IDEs

Its simple (and verbose) syntax and lack of a C preprocessor have also led to an improvement in mainstream IDE technology.

and, and, and...

I'd say that Java's outstanding contribution is legitimizing garbage collection and virtual machines

and standardized cross-platform distribution formats

and dynamic loading

and application containers

and (shared-state) concurrent programming.

Now before anyone jumps down my throat, obviously all of these predate Java by at least two decades. What Java did was legitimize these technologies to the point where they could be used by average developers in an enterprise environment. This is no small thing. Those complaining about Java mostly don't remember the dark, C-infested days that came before.

You say that...

and (shared-state) concurrent programming.


You say that as if it's a good thing... ;)

EDIT: Added smiley.

I think most of us associate

I think most of us associate standardised cross-platform distribution formats with virtual machines anyway? Albeit usually something more clunky than jar files.

java innovation

Java has some good points, but I think you're giving it a little too much credit.

Dynamic loading of shared libraries has been around since the 80s on Unix. It became mainstream on desktop PCs around the time of Windows 95, which offered a bunch of ways to use DLL files to load code at runtime. Software companies writing databases, games, etc., did not wait for Sun's blessing to start using dynamic loading.

Concurrent programming has been around pretty much since the first time-sharing machines crawled out of the primeval muck. Anyone who's ever written a UNIX kernel knows that concurrency is a huge source of Heisenbugs in programs. The pthreads library made concurrency a little bit more palatable to the masses, but nobody ever really reduced the danger of concurrent programming. (Flame-retardant suit on) I don't think Java's concurrency model is really any safer or better than pthreads'.
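
For anyone who hasn't written both, here's roughly what the monitor-based model being compared to pthreads looks like. This is a minimal, made-up bounded-buffer sketch (the class and field names are mine, not from any library):

    // A minimal sketch of Java's monitor-based model: every object carries an
    // implicit lock plus a wait set, much like a pthreads mutex paired with a
    // condition variable. Illustrative only.
    class BoundedBuffer {
        private final Object[] items = new Object[16];
        private int count, head, tail;

        public synchronized void put(Object x) throws InterruptedException {
            while (count == items.length) wait();   // block while full
            items[tail] = x;
            tail = (tail + 1) % items.length;
            count++;
            notifyAll();                             // wake any waiting readers
        }

        public synchronized Object take() throws InterruptedException {
            while (count == 0) wait();               // block while empty
            Object x = items[head];
            head = (head + 1) % items.length;
            count--;
            notifyAll();                             // wake any waiting writers
            return x;
        }
    }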

Concurrency is actually a pretty good thing to be researching now. Some people believe that the computers of the future will be massively parallel. Perhaps a new kind of programming language will be required to really exploit this power. People who write Verilog or VHDL "code" (yes, I know these are not technically programming languages) may be getting a foretaste of things to come.

Concurrent programming languages

Perhaps a new kind of programming language will be required to really exploit this power.

A pretty wide range of different concurrent programming languages have been developed over the years. Languages like E, occam, Erlang, and Oz provide concurrency constructs vastly better and safer than Java's monitor-based model. Occam in particular was explicitly developed to take advantage of massively parallel arrays of transputer chips, and supported extremely fine-grained parallelism (so much so that every block of statements must be explicitly identified as sequential or parallel). Not that I'm saying that improvements couldn't be made. Just curious as to what kind of "new" features you would want to see beyond those that already exist in concurrent languages.
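
For contrast with Java's monitors, here is a rough sketch of the channel-style discipline those languages encourage, approximated in plain Java with a SynchronousQueue standing in for a rendezvous channel. This is an illustration only; occam or Erlang express it far more directly, and the class name here is made up:

    import java.util.concurrent.SynchronousQueue;

    // Rough approximation of a CSP-style rendezvous channel in Java: the
    // producer and consumer share no mutable state, only messages.
    public class ChannelSketch {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<Integer> channel = new SynchronousQueue<>();

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) channel.put(i);   // blocks until taken
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) System.out.println(channel.take());
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            producer.start(); consumer.start();
            producer.join(); consumer.join();
        }
    }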

My current hobby horse

what kind of "new" features you would want to see

Proof, in or with the given language, that the concurrent code you just wrote is or is not broken. Commercial-quality tools to help with the proving. Tools to generate batteries of automated tests. At the very least, extensively tested and documented libraries so that it all doesn't have Heisenbugs. All wrapped up in packages that won't freak out "main-stream" programmers.

(I think it is a lot to ask, but I think it is a good thing to hope for.)

"Main-stream" programmers

Starting to get wildly off-topic here. But anyway...

All wrapped up in packages that won't freak out "main-stream" programmers.

These would be the same "main-stream" programmers who don't tend to want much to do with the kind of powerful type systems that would let them prove really useful properties about their code? The same ones that are generally downright resistant to the extra annotations describing intention required to make something like SPARK/Ada or JML work? I agree that adding features for proving the absence of concurrency bugs would be useful. I'm just skeptical that it will become accepted by "main-stream" programmers, when they resist adding the necessary features to do the same for sequential code. In that sense, perhaps E has the best approach: its concurrency model is explicitly designed to avoid the possibility of deadlock.
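
For anyone who hasn't seen what those "extra annotations describing intention" look like, here is a tiny, invented JML-style example. The contract lives in comments and is checked by external tools (static verifiers or runtime assertion checkers), not by javac itself:

    // Invented example of the kind of JML annotations under discussion.
    public class Account {
        private /*@ spec_public @*/ int balance;

        //@ invariant balance >= 0;

        //@ requires amount > 0 && amount <= balance;
        //@ ensures balance == \old(balance) - amount;
        public void withdraw(int amount) {
            balance = balance - amount;
        }
    }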

As it is, getting "main-stream" programmers to try a language that supports anything other than thread-based shared-state concurrency has been something of an uphill battle. Returning to the topic of Java for a moment, there was a push at one point to get the well-tested (and formally verified) JCSP library of CSP primitives for Java adopted as part of the Java concurrency utilities. This was rejected in favor of the present util.concurrent package in part because that package presented a set of abstractions (i.e. those based on threads with shared-state) more familiar to "main-stream" programmers.

programmers

It's not really programmers who resist new languages... it's economic factors. I know a lot of programmers who would be happy to use something like OCaml or Scheme if that became feasible. I myself wrote a compiler in OCaml, and I was very happy with the expressive power of the language and the type system.

I don't think proofs are the answer, any more than they are in other fields of engineering, and for much the same reasons. There are too many details to consider, and too little time to spend forever polishing an abstract mathematical model of your program. Proofs are very helpful for designing algorithms and heuristics, but don't try to use a screwdriver when a sledgehammer is what you need.

I do think various model checking and artificial intelligence approaches will be successful: tools which reason about high-level programs in Bayesian ways.

The Food and Drug Administration allows a certain weight of insect parts to be present in each pound of flour. Think about that next time you are eating your bowl of cereal. You could come up with a way to eliminate insects from flour entirely, but I doubt you would see much interest on the part of farmers.

Proofs

I don't think proofs are the answer, any more than they are in other fields of engineering, and for much the same reasons.

I think that largely depends on what you mean by "proofs". If you mean proof via manual formal deduction, then I agree that it's unlikely to be workable for "mainstream" programming. But automatic proof via (for example) a type system doesn't seem beyond the realm of possibility. It just lacks acceptance. For whatever reason, developers seem to prefer a test-based approach even when it takes the same kind of effort and detailed specification as something based on static analysis but delivers less confidence (see some of the recent work on "behavioral" unit testing).
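
As a mundane but concrete instance of "automatic proof via a type system" that developers already accept without thinking of it as proof, consider Java generics. A trivial, made-up sketch:

    import java.util.ArrayList;
    import java.util.List;

    // The compiler guarantees, without any tests, that this list never holds
    // anything but Strings; a whole class of ClassCastExceptions is ruled out.
    public class Names {
        public static void main(String[] args) {
            List<String> names = new ArrayList<>();
            names.add("Ada");
            // names.add(42);              // rejected at compile time, not at run time
            String first = names.get(0);   // no cast needed
            System.out.println(first);
        }
    }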

As for what other engineering disciplines do, in my experience (mostly control systems, trajectory design, and circuit design) engineers do spend lots of time polishing a detailed mathematical model of the design, and then run extensive simulations on it (assuming the model isn't simple enough to permit a closed-form solution). The analogous situation in the software world would be developing detailed models suitable for model-checking (simulation being less useful due to the highly nonlinear nature of most software systems). Perhaps your experience has been different.

Yes, this is true. I have

Yes, this is true. I have several friends who are engineers (civil, electrical, etc.), and they don't just build a system, test it, and hope their tests covered everything. Okay, so they may not mathematically prove their designs before they build, but they utilize design methods that have been developed over the years and have a solid mathematical foundation, even if the engineers don't use that mathematical foundation directly. A civil engineer may not solve systems of PDEs to calculate the stresses on a bridge design, but they use design methods that are built on understanding (and possibly even solving) the mathematical models of stress. CAD programs like AutoCAD and SolidWorks have numerical solvers (finite element analysis, for example) to help engineers discover whether their design is adequate. More seasoned engineers will look over the design to see if it looks sound. A chemical engineer looks at mathematical models describing heat flow to determine if their plant design is adequate.

The amount of rigor in engineering fields puts most software development to shame, which is really a shock when you consider that, of any field in engineering or science, programming is the most similar to mathematical formalism. Of any discipline, we should be the ones who are most able to use formal design and verification in our design and verification process.

Engineering software

Okay, so they may not mathematically prove their designs before they build, but they utilize design methods that have been developed over the years and have a solid mathematical foundation, even if the engineers don't use that mathematical foundation directly.

Quite so. There's a reason that engineering degree programs are places where people who are "good at math and science" are often directed to go. It's the same reason that those degree programs require their students to take lots of math classes, and teach additional math in their own classes. Once you get to grad school, a lot of "engineering" classes pretty much are math (perhaps with a few applications thrown in for good measure). There are a lot of textbooks out there that carry some variation on the title "Engineering Mathematics". Engineering is very firmly rooted in mathematics.

Of any discipline, we should be the ones who are most able to use formal design and verification in our design and verification process.

I've often had the same thought. But I think there are several things that have prevented the software community from really embracing formal mathematics.

One of the problems is culture: software "engineering" as often as not grew up outside of engineering schools, and even when it did grow up inside engineering schools, the math that most engineers are taught simply isn't applicable to software (how many "Engineering Mathematics" texts include sections on logic, formal languages, set theory, or category theory?), so there isn't really a tradition of applying mathematics to software. Crack a software engineering textbook, and you'll likely find entire chapters devoted to things that are more project management than engineering. Crack a textbook on (for example) spacecraft engineering, and you'll find little or no project management, but lots of information on orbit mechanics, attitude dynamics, structural stress calculations, and solar array illumination computations.

Another problem is awareness of the mathematical techniques that are available. Traditional engineering is primarily rooted in continuous mathematics and differential equations (which aren't all that useful for software design). That's what engineers get taught. In fact, that's what most people (even non-engineers) think of as math; they simply don't realize that there are other mathematical techniques out there that are useful for understanding software. What's required is a change in the way math education is carried out. The Beseme project is an attempt to make such a change.

Agreed

The things you raise always give one pause for thought. Everybody can hypothesize all day long about the underlying reasons for the lack of acceptance. What we really need is for somebody with resources (time, in academia; or money, in industry) to do some real usability research and figure out the pain points.