Programming: 50, 100 years from now

Since LtU is a little slow these days, I thought I'd ask for half-serious opinions about how 'systems' may be 'developed' in the far future.

There are some things today which seem destined to be replaced...eventually. One of them is the keyboard: a physical board with keys that have letters, numbers and symbols pre-printed. I don't know how, and I certainly don't know when, but there have to be better ways of interfacing with a computer. In terms of programming, the decline of the keyboard would have to mean the decline of text-based languages. Could syntax wars be a thing of the past (in the future)? Modern architects sit in front of a two-dimensional screen with a mouse, a keyboard and sometimes another, 3D-enabling, device (at least that's the image I have of them). It looks like they all want to just reach into their screens and stretch a building wider with their two hands. Wouldn't programmers be able to accomplish more if they didn't have to remember whether they need to type toString or ToString?

At what level of abstraction might the majority of 'developers' of the future work? Today we have many assembly language programmers, but they are certainly not a majority. With Java, .NET and scripting languages, most developers don't even do explicit memory management. SQL 'developers' can usually get away without having to worry about algorithms, the underlying database, ifs, fors and whiles. Even in functional programming, pattern matching in Haskell seems more productive than car/cdr in Lisp. List comprehensions reduce (eliminate?) the need to use fold/map/filter in the programs we write today. I can't imagine what the (far) future holds.
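
To make that last comparison concrete, here is a minimal Haskell sketch (the function names are made up for illustration):

-- Squares of the even numbers in a list, written three ways.
squaresOfEvens :: [Int] -> [Int]
squaresOfEvens xs = [x * x | x <- xs, even x]                -- list comprehension

squaresOfEvens' :: [Int] -> [Int]
squaresOfEvens' = map (\x -> x * x) . filter even            -- map/filter

squaresOfEvens'' :: [Int] -> [Int]
squaresOfEvens'' = foldr (\x acc -> if even x then x * x : acc else acc) []  -- fold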

What might PLT theorists of the far future work on? Might there still be PLT theorists? (Do we still have geometers?) My fairly shallow understanding is that there are already established limits to how expressive a programming language can be (the incompleteness theorems, the halting problem, ...?).

Will there still be 'programmers' or 'developers' (hence the quotations)?

How much more do we need to learn before we can write the 'software' for a holodeck system? What do we need to learn to write the holodeck?

(I originally titled this "Programming: 100, 200, 500 years from now." But I have a hard time imagining how we might work 20 years out much less 200 years out...perhaps others can do better)

50 years from now

50 years ago people were programming in Lisp, Fortran, or Algol. Plus ça change...

In 1960Mumble I was using

In 1960Mumble I was using Algol 60 and Lisp 1.5; now (I'm retired and can use whatever I want) I use Scheme and Ada.

Is this almost the same or very different??

Web

I am afraid not much will change.

The problem lies not in the academic world, but in the commercial sector, which does not dare to choose better programming languages than Algol (all the modern mainstream languages are Algol variants). Not only are managers afraid of change, but programmers are too: they fear that by choosing a better programming language, they will be made redundant.

I have been trying to get my colleagues to check out Haskell and Eiffel as examples of better programming languages. I have hit a wall: not only is the resistance great, but the discussion ends up in flame wars (I am not the calmest person in discussions, but the other side does not move an inch!).

Therefore I really doubt we are going to see any real progress soon. As for using means other than the keyboard, the only alternative I see is voice recognition. The paradigm of commanding the computer with devices cannot really be changed, since it is humans who know what they want, and not computers. Most 'command' devices are ineffective compared to the keyboard, and that is why the command line interface thrives (imagine building an application by pointing at keywords/functions with the mouse! You could barely write 50 lines of code in a day!)...

For most users voice has

For most users voice has less reliable bandwidth overall than the fingers, FWIW. There are experiments with mouse-based systems built on a predictive-text-style scheme which I'm told can go pretty fast when you're practiced - I imagine a variation using a thumbpad or the like would work too.

The academic bubble

It would be easy for me to blame everything on Sun and Java, but I know that most businesses aren't interested in developing software for the sake of developing software.

I'd be interested in specific business cases that you used in your argument to your colleagues for using something like Eiffel or Haskell.

Academics need to do a better job of presenting compelling business cases for the switch to "better languages".

I think we're starting to see a thaw in the Java-inspired "stagnation" though - maybe not a move to something "better" like Haskell or Eiffel, but to languages like Ruby and possibly (though probably not) to an old-timer like Smalltalk (if Croquet gets some traction with its 1.0 release), and maybe to something like Arc, if Paul Graham pulls a rabbit out of the hat.

73 points in the first half

Academics need to do a better job of presenting compelling business cases for the switch to "better languages".

That sounds backwards to me. After all, it's the technology companies that claim to be good at making money out of existing technologies. If academics, who claim to be good at primarily discovering new knowledge, are also supposed to give business advice, then something's wrong somewhere.

Cut out the middle man

That sounds backwards to me. After all, it's the technology companies that claim to be good at making money out of existing technologies. If academics, who claim to be good at primarily discovering new knowledge, are also supposed to give business advice, then something's wrong somewhere.

You have three general entities here. You have the vast majority of businesses, for which software development is just a means to an end. Then you have the middle men like Sun, Microsoft, IBM...who take the research they've done, and the research academics have done, and produce something sanitized from all of that for "the masses". And then you have the academics who are feeding the Microsofts and Suns.

So the problem, as many people (around here at least) see it, is that the general business community is just buying up whatever Sun and Microsoft package up for them (.NET/Java). So "the goal" would be to do an end-run around these middle men and bring something primarily academic (or at least something different and more productive) to these general business organizations.

But let's try a couple of examples. One is Ruby on Rails, which is bringing Ruby into the mainstream now. The key was that RoR presented a tangible business case. Just handwaving about how productive Ruby (and dynamic languages in general) is was not enough.

Another example is Linux. It did an end run around and destroyed proprietary Unix (for the most part).

So the goal for something like Haskell or whatever academic language you want to throw out, is for the academics or non-academics that use it to come up with tangible business cases for its use. The middle men (Sun and Microsoft) want you to buy their packaged up solutions.

Of course there are shades of gray in all of this, but if some academic is living in a bubble and doesn't have any business sense, or doesn't make an effort to engage the business community or even the more business-oriented MIS departments, then it'll just have to wait until someone else packages it up.

Does Haskell become another Lisp?

Does Haskell become another

Does Haskell become another Lisp?

I wonder about this comparison. At least Lisp was somewhat successful in the business world in the 80s, with AI and (expensive) Lisp machines. The Common Lisp standard was driven in large part by company requirements.

I don't yet see a large market segment for Haskell or a Haskell successor. Maybe security-critical applications where rigour and proof techniques are prescribed by customers. Since we are doing unbiased prognostics here, I would expect that the small domain of "specification languages" will be merged with pure functional languages. Although I do not expect that this drags the masses into the Haskell business, we might experience a surprise? But are there any indications? No reliable IDEs, no "Learning Haskell", "The Haskell Cookbook" or "Haskell in a Nutshell" in O'Reilly bookstores. No RoR new-age hipsters. Just a couple of CS students and PhDs talking about type theory and monads. A good indication would also be a "Haskell practitioners conference" where the category theory guys would have to stay at home.

Indications

My favorite revision control system, at least for personal to medium-sized projects, is Darcs, which is written in Haskell. And then there is, of course, Pugs. The latter page has a link to Learning Haskell, a 254-slide presentation by the inimitable Audrey Tang, complete with comparisons between Perl and Haskell.

I don't think Haskell's adoption curve outside of academia will be anything like that of any other language we've ever seen, but I think the above examples are signs that it has a broader role to play, that it's not playing yet.

wow!

admirable and instructive set of slides about Haskell in Mozilla's XUL format! :)

gotta love that lady...

Not a luddite, just an idiot...

This may paint me as a bit out of touch, but... how in the hell am I supposed to view that slide presentation?

rmalafia did say "mozilla" ...

Any recent enough mozilla-based browser should be able to handle Audrey's presentation. It looks great in Firefox.

Wow... how user friendly.

So I have to download yet another browser to view these slides? Whatever. Bandwidth and disk space are more or less cheap these days.

* trundles off to get firefox *

OK, so I tried to open the file I had downloaded earlier in firefox and was greeted with a whole lot of nothing... some navigation looking bar at the top of the window and no slides. I guess that not only do I have to download a new browser to go through the slides, but I also have to be online to view them? No thanks.

I know I've gone on here and elsewhere about publishing for the web vs. publishing for print (and this was touched on a bit in the discussion about The Problem With Parsing), but this seems to be an even more egregious sin. It's publishing for a very small subset of the web -- those with mozilla based browsers. A horrible publishing decision - and that's without even touching on the clear readability issues with these slides - at least if one wants this information to be widely read.

don't be so harsh!

at least, you are now able to view the web with a more up-to-date rendering engine rather than through some 2001 antique. :)

why don't you try to simply click on the link rather than opening the file you downloaded? i don't know why it didn't work, but following the link should be fine.

yes, she could just go with some clever html+css presentation alright, but i wouldn't expect that from a perl/haskell fan... :)

Spare the rod, get lousy documents...

why don't you try to simply click on the link rather than opening the file you downloaded?

Two reasons: When I clicked on the link the first time, it downloaded the xul file. Why should I bother to revisit the link when I supposedly have a local copy of the content? Also, it's occasionally nice to view a document when you are, say, on an airplane, and the 35,000 foot cruising altitude puts your wireless card a wee bit out of range of the hotspots on the ground.

yes, she could just go with some clever html+css presentation

Oh, I was thinking something more along the lines of a pdf -- far more portable, actually printable (unlike these slides), and available for offline reading on a number of devices with a number of different readers. And preferably it would be a pdf generated with readability in mind, and not the unfortunate typesetting stew that made up those slides.

And since I'm already dangerously off-topic, I'm using Safari as a browser, and its rendering engine is as "up-to-date" as gecko is.

sheesh

Ok, man, i downloaded the xul file just to see what was going on. It turns out you just download the xul when you click on the link in a xul-unaware browser, but the xul only works with the associated javascript file:

http://www.pugscode.org/osdc/takahashi.js

It was just a matter of some source-code inspection. Download it too and save it to the same directory as the previously saved xul. You'll also want the associated css file as well:

http://www.pugscode.org/osdc/euroscon.css

though it's by no means necessary...

She could have just packaged the javascript and css inside the xul file itself.

"Why should I bother to revisit the link when I supposedly have a local copy of the content?"

because it's so easy to visit a link?

"Oh, I was thinking something more along the lines of a pdf -- far more portable, actually printable (unlike these slides), and available for offline reading on a number of devices with a number of different readers."

I'm not sure a pdf is as portable as html. And, with the right css, you can have pretty good printing (page breaks as well) and nice fonts. But, then again, she chose xul, not html+css... xul is for GUI building AFAIK...

"And since I'm already dangerously off-topic, I'm using Safari as a browser, and its rendering engine is as 'up-to-date' as gecko is."

great then!

Aim higher than mediocrity

As we have gotten so far afield, this will be my last post on the subject, unless it actually gets steered back to the realm of programming languages...

It was just a matter of some source-code inspection.

And you really think that this is a reasonable way to distribute a document? It

  • Requires the reader to use a specific piece of software that probably isn't on their system, while an alternative program is. So you're requiring the user to install redundant software.
  • As it is, the slides are not readable offline. Unless, of course, you are willing and knowledgeable enough to do "some source-code inspection."
  • The slides are not printable.
  • The slides have no coherent sense of typesetting, and would actually require a conscious effort to make them less readable.

About the only thing this distribution method has going for it is the relatively small size. However, we could do something similar by distributing a *TeX file, which would require about as much work to view by the end user as these slides. Of course, there are obvious reasons why people stopped distributing their papers and slides as tex files, many of which are shared by this XUL format.

because it's so easy to visit a link?

Granted, it's not hard to visit a link. However, launching firefox, typing in lambda-the-ultimate.org, waiting for the page to load, clicking on the discussions link, clicking on the link for this thread, searching for Audrey, then clicking on the link... well it is more involved than double-clicking the icon in my inbox directory.

I'm not sure a pdf is as portable as html.

Well, readers for both formats are nearly ubiquitous. However, PDFs have the advantage of being a single, self contained file, perfect for offline reading (and easy annotation with modern readers). In order to work with an HTML file offline, one needs to download all of the associated images, all of the pages that are linked to in the case of multipage documents, and so on -- and then there's the matter of things like the css and javascript files that need to come along as well...

And, with the right css, you can have pretty good printing (page breaks as well) and nice fonts.

I suppose. In theory. But I have yet to see a page with a print css file that results in anything more than a passable result. Sure, it's great that we actually get things like page breaks and links printed in black, not light grey. But there's a lot more to good typography than that, and you just don't get more than that -- if you get anything at all...

Of course, I'd be willing to forgive the sins of the distribution if the content is good enough to overcome the problems of getting to it. And these slides just don't seem to be. They're not bad, per se, just not particularly instructive. They certainly aren't a very good method of "Learning Haskell" -- one would be far better off here with something like Two Dozen Short Lessons In Haskell (which, sadly, hasn't been updated in almost 9 years), or something else along the lines of Jason Hickey's Introduction to OCaml (PDF) (only for Haskell) which offer exposition as well as examples.

please

"And you really think that this is a reasonable way to distribute a document?"

I didn't agree with it being in xul, but i didn't have a problem with that. I understand html+css would do a pretty good job, while being portable and readily available. Html documents are composable and can have other formats embedded, like css. So, if i were to distribute one such document, i would package everything together in a single file.

Beats me why she decided to restrict herself to xul, which is used to render GUIs on the fly via an xml description, like the user interface of Firefox itself.

"* As it is, the slides are not readable offline. Unless, of course, you are willing and knowledgeable enough to do 'some source-code inspection.'"

which is not that difficult, since it's just a plain text file with some hrefs...

"* The slides have no coherent sense of typesetting, and would actually require a conscious effort to make them less readable."

really? Perhaps i'm too used to a text-only command shell, cause I didn't find it half as bad... but then again, Mac users tend to be a lot touchier about design...

"a *TeX file, which would require about as much work to view by the end user as these slides."

oh yeah, sure. Because viewing a TeX file is just as difficult as downloading Firefox and clicking on a link...

"Granted, it's not hard to visit a link. However, launching firefox, typing in lambda-the-ultimate.org, waiting for the page to load, clicking on the discussions link, clicking on the link for this thread, searching for Audrey, then clicking on the link..."

See that little integrated Google search box in the upper-right corner of Firefox? Type "learning haskell" + ENTER, then look at the 7th link down... or better, "learning haskell audrey"...

"They're not bad, per se, just not particularly instructive. They certainly aren't a very good method..."

They are very instructive for a programmer with a long imperative programming past and no idea whatsoever of functional programming...

Tolerate experimentation

I have done exactly the kind of thing Audrey did with these slides - namely, experiment with a format which seemed promising, and use it in a presentation. They are slides, after all, i.e. they were presumably associated with a live presentation. I don't think there's any moral imperative to create slides in the most widely distributable possible format.

As for the content, I mentioned those slides partly because they were targeted at people already familiar with Perl; I was pointing to them not as the perfect introduction to Haskell, but as an example of a non-academic context in which Haskell is being introduced to programmers. Whether you like XUL, or these particular slides, isn't really relevant. People like Audrey push the envelope in all sorts of ways, and that should be valued, not subjected to endless trivial criticism.

Audrey's presentation is for in person presentations.

Audrey created these presentations to show to people who came into the room to see the talk in person.
The presentation wasn't designed with single-file distribution, printing, etc. in mind.
The typesetting is perfect for the Takahashi presentation method Audrey used.
Audrey's presentations in various countries have inspired many people to learn Haskell and FP, which is obviously a good thing.
When you look at the context and purpose of these presentations, your complaints don't fit.

EuroHaskell 2004 and #haskell

Björn Bringert and I organized EuroHaskell 2004 where the motto was "less talk, more code!"
Anders Carlsson came up with a variety of marvelous signs. The category theory guys didn't stay home, but they did try my unicycle. (That's John Hughes, author of 'Generalizing Monads to Arrows', on the unicycle; I'm the guy with the ponytail.)
Björn came up with a wonderful t-shirt design as well. (Knowledge of Slashdot, South Park, and type theory required to get the joke.)

There are 'Learning Haskell' books but they're mostly free online books or wikibooks.

If you want to see more indications of activity in the Haskell world, drop by the #haskell irc channel on irc.freenode.net. The channel averages 170 users or so. The discussions range from teaching newbies to deep type theory and category theory.

Arc

and maybe to something like Arc, if Paul Graham pulls a rabbit out of the hat

I'm surprised this is the first I've heard of Arc. It's nice to see a change in focus: looking more at the language itself instead of blaming Lisp's failure on "Blub". Even so, Arc looks like a dead project, which would explain why I never heard of it :)

Arc isn't dead

Paul Graham always says that Arc isn't dead. He says he's working on it, but just hasn't released anything yet (and might not release anything for several years). It's like a seed: alive inside, but dead-looking. And it's just sitting there frustratingly at the moment.

I believe a millionaire with

I believe a millionaire with a deep passion for Lisp and lots of time on his hands can eventually produce some significant contribution. Graham is the author of "Hackers and Painters", and i believe he thinks of himself as some kind of Michelangelo working on the Sistine Chapel... a true work of art, not the kind of thing to be rushed.

He's not competing with Microsoft or Sun for the hearts and minds of the young generation of client programmers... it's just his take on Lisp.

Startup School

He was asked, "When will Arc be finished?" at Startup School in November. His response was, "Next question?", followed by much laughter from the audience. So I think it's fairly safe to assume that Arc is dead, at least for the foreseeable future.

As of 13 hours ago...

And more info, as of last night:

http://reddit.com/info?id=38vr.

So yes, basically dead, but he may start working on it again this summer.

Specific business cases

Our company develops two kinds of products: a) business applications (web, desktop) in the cellphone billing domain, and b) realtime defense applications (radar consoles etc.). The business department uses VB/Java, and our department uses C++/Java.

Keyboard is a dense input device

While there is a large learning curve, I do not see the keyboard being replaced entirely. People are visual creatures. We do well scanning a keyboard looking for odd symbols, particularly when we have been trained in ten-finger typing. I personally can type faster than I can talk.

I get frustrated with voice mail. It works, but not well. The interface is serial. There is no way to skip the unimportant parts to get to the important parts. There is no way to go back (other than rewinding so many seconds and hoping you land on the right spot) when you realize that the unimportant stuff you didn't pay attention to is critical to the message.

There are also privacy issues. When I speak, people can overhear me. You cannot carry on a private conversation on a crowded bus; someone will overhear it. But you can keep people from reading over your shoulder. There are tools to tell what I'm typing, but they are much harder to deal with.

Touch screens do show promise, but with them you have to learn to draw the symbols, so I'm not convinced the learning curve is shallower than a keyboard's. A keyboard is faster than drawing, when you know both well. However, if you only have time to learn one, learning to draw the symbols is more useful than learning to type.

One other disadvantage of the keyboard is the limited range of symbols. My keyboard does not have a sigma symbol. I have no clue how I would write "Sum from 1 to 10 of f(x)", even though my computer has the ability to display it using the proper symbols (including placing the range correctly above and below the sigma). I would love to have a handwriting recognition pad just for inputting such things, but on the whole I believe I can write faster with a keyboard.
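
(For what it's worth, markup like LaTeX is one keyboard-only workaround: the whole sum is typed as plain ASCII and rendered with the proper symbols, for example

\sum_{x=1}^{10} f(x)

renders as a sigma with the range placed above and below it.)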

So in the future I think computers will have touch screens and voice recognition by default, and many people will get by with just these. However, those who need to input a lot of data will continue to learn the keyboard.

Most devices will have a monitor of some sort as an output device. I don't think smell or taste will ever catch on, except in specialized industries.

Keyboard-less keyboards (projection / virtual)

As computing devices shrink, I think some sort of projection-style keyboards could catch on, once a few more usability experiments are done. Since the patterns projected as fake key labels are all software, this also neatly solves the key labeling problem for non-ASCII characters, like those used in Fortress [aha! an LtU-related bit :-)]. Also see this related half-bakery discussion.

keyboards?!

I'm a keyboard man myself, but you guys seem to be forgetting that in the near future either people will be talking to computers or exercising some aerobics in virtual visual interfaces like the one Tom Cruise interacts with in Minority Report...

Programmers and analysts in the future? What for, when you have more advanced intelligence available than our own?... more likely, we'll serve as batteries...

I wonder what side-effect free programming will be like in the light of quantum computing...

Well, someone has to program those easy to use interfaces

I always figure that making computers simpler to use amounts to adding more complexity on the software construction side. And even if you're not in on the core development side, there are going to be lots of DSLs to master for those who are intermediaries.

Well, rmalafia's point seems

Well, rmalafia's point seems to be that if you become a computer yourself, you won't need any particular tools for programming them anymore; just as nobody is employed to connect the synapses of your brain.

This is a rather depressing perspective. Either mankind will make progress in the domain of autonomous synthetic bioware, or we will still be programming in C in 50 or 100 years.

but we have lots of people

but we have lots of people who program our minds (teachers, psychologists, managers?). They just do it in a fuzzy high-level language that no one understands really well. But you could see these occupations merging with that of the programmer. And perhaps the programming would happen over a direct neural interface.

This is an interesting topic

This is an interesting topic with a very wide scope. I remember Michel Foucault also insisting on techniques of body discipline that were invented in 19th-century schools, prisons, factories and armies. Neural programming on that level may be simpler than the convenient SF themes like para-telepathic communication and re-wiring of the neural cortex. It wouldn't even have to be invasive.

While I consider the "we'll

While I consider the "we'll serve as batteries" part to be utterly silly (are pilots "just batteries" now that autopilots have been around for ages? Does a wrench render an engineer unnecessary? I doubt I'll live to see the 'more advanced intelligence' in the form of a computer, unless of course people soon figure out that immortality thing which has been just around the corner for centuries...), check this out if you're interested in innovative user interfaces, it's really cool: http://mrl.nyu.edu/~jhan/ftirtouch/

Actually...

I don't know about you guys, but I'm ordering an Optimus Keyboard...

I know I'm taking something off-topic further afield, but hey, it's a cool-looking keyboard :).

Seriously, I think this is targeted at gamers, but it's aesthetically pleasing and it does what you're describing. It's been a bit of vapourware, but they are intending to market it, so I'm hopeful.

Don't toss your money

Wait 'til it's beyond the preorder stage. If that device ever sees the market, I'll eat my hat.

and future PLT research?

The keyboard issue will take care of itself as we start getting screens that recognize touch. I love the keyboard and the command line - they can be extremely convenient - and that's exactly why I'm curious about what might replace them.

Anyway, my original question was intended to be more general. What kinds of problems might the theorists of the future face? Is it possible that, as computers become more and more ubiquitous, we will be forced to discover higher and higher abstractions, until end-users don't need a programmer to add features? Perhaps some system architects will put some basic constraints on a system (so a user can't have their system delete all other systems), and everything else will be controllable by users?

Take a look at this picture, and read its caption :). Do any of us have any hunches as to which of our ideas today will seem silly? My own guess is that most developers will not be 'typing source code.' However they 'code,' I also suspect they will not be worrying about the layout of their data on disk (or even the existence of a disk)...similar to how few people today have to worry about memory management.

[OT] That picture...

Is a doctored photo of an early 1960s-era US nuclear submarine operation console set. It is a total forgery; I can't believe anyone would think that the TV set and teletype of the era were actually present in that scene, especially given the differences in lighting. It's a really bad cut and paste job on photos and captions - I guess people are fooled because you can't find other pictures of that online. I trained on one of those consoles a decade ago (yeah, they don't throw much out) and can name everything on all of those panels and how to use them. It is in no way useful as an actual example.

[OT] There goes my wallpaper

Is a doctored photo of an early 1960s-era US nuclear submarine operation console set. It is a total forgery; I can't believe anyone would think that the TV set and teletype of the era were actually present in that scene, especially given the differences in lighting. It's a really bad cut and paste job on photos and captions - I guess people are fooled because you can't find other pictures of that online.

What a shame, I have been using that picture as my desktop wallpaper for the last couple of days. ;-)

Still, it made me think. I wonder how long it will take before we can buy a "personal computer on a chip" solution, i.e. some Intel chip with a few gigabytes of RAM, a few gigabytes of flash, and wifi/bluetooth built in. How long before such a chip would drop into the $20 range?

When it comes to buying stuff, I am thinking about getting a solid-state hard drive, or IDE/compact flash, in the coming months for my laptop. (A bit too costly though, and I am also not really in the mood to configure linux to run on solid state.)

I trained on one of those consoles a decade ago (yeah, they don't throw much out) and can name everything on all of those panels and how to use them.

Say what! Now we also have people here who know how to handle nuclear submarines? Only on LtU.... ;-)

Paintable Computing

Or another term "ubiquitous computing" (and just to toss it out, "amorphous computing"). See Paintable Computing.

Responding to a fairly particular part of the above reply: what we'll see more of is a lot of very cheap simple computers that are everywhere (the resulting computing capacity would be enormous and even enormous per dollar). Already today computers are all over the place and in many places people don't expect. In the future it will be this way to an even more ridiculous degree. (Computers in the fabric of your clothing, in your bloodstream, in the paint on your walls and cars, in the structural materials of your buildings and vehicles, sprinkled about on your lawn and in parks, ...) This is the pre-nanotechnology technology.

To talk about the interface side (of the general discussion): no one seems to have mentioned one of the obvious interfaces (albeit this may require more technological advancement than one can expect in the given timeframe); namely, thought. Certainly one can think faster than one can type, and such an interface is private and intuitive. There is some science (I don't have any handy links, though) that suggests we may be able to do a fairly crude form of this in the not-too-distant future. We won't (unfortunately) be able to just "think" what we want, but we may be able to associate certain thoughts or actions with certain inputs. It wouldn't work quite this way (as it wouldn't be dense enough), but think of associating the character 'a' with the thought of an apple.

As a completely different aside, eye-tracking technology and a system like Dasher (which I originally heard about from shapr) might be a viable interface when combined with simple HUD glasses*. It would provide reasonable speed (better than many people can type) and privacy, while being more portable. This, however, isn't really futuristic technology; adoption would be more a cultural thing.

* Incidentally, this should be highly effective for programming languages, as it would be very easy to accurately predict the continuation of some chunk of code (syntax-wise).
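
To make the footnote concrete, here is a toy Haskell sketch of the kind of next-token prediction such a system might lean on (a simple bigram frequency model over code tokens; the sample corpus is made up):

import qualified Data.Map.Strict as M
import Data.List (sortBy)
import Data.Ord (comparing, Down(..))

-- Count how often each token follows each other token in a token stream.
type Bigrams = M.Map String (M.Map String Int)

train :: [String] -> Bigrams
train tokens = M.fromListWith (M.unionWith (+))
  [ (a, M.singleton b 1) | (a, b) <- zip tokens (tail tokens) ]

-- Suggest likely next tokens, most frequent first.
suggest :: Bigrams -> String -> [String]
suggest model tok =
  map fst . sortBy (comparing (Down . snd)) . M.toList $
    M.findWithDefault M.empty tok model

main :: IO ()
main = do
  let corpus = words "if x then y else z ; if p then q else r"
  print (suggest (train corpus) "then")   -- the tokens seen after "then"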

Touch screens

If you had a touch screen, you could easily program it to display a visual keyboard that you could type on. You wouldn't get any tactile feedback, but you also wouldn't have any problem reprogramming it for different layouts and character sets, like the projection keyboard above.

Personally, I want an iPod-sized nanocomputer with a roll-out touch-sensitive e-ink display. The technology for that should exist within the next couple of years; I've seen demos on the web of all the individual technologies, so somebody just needs to put them together. Any enterprising hardware engineers in the crowd?

Oooh, I love games! Pick me! Pick me!

In the fifty-year timeframe:

After the São Paulo Mistake, in which 450 people died and US$100B worth of damage occurred due to a heap overflow attack on Brazil's payments clearance system, and the Acera Debacle, in which 137 died and 1.4 million cars were recalled due to a race condition in the Ford Acera's automatic transmission firmware, legislation was passed essentially saying that liability-limiting software licenses were contrary to the public good, and unenforceable. After that, everything changed.

With damages due to software errors and security holes actionable against the developers, coding for reliability and security became key issues. Essentially all coding moved to managed runtimes, as insurance for code created for direct execution became first extraordinarily expensive and then all but unobtainable. Since failing to follow "industry-standard best practices" could easily land you in front of an unfriendly judge, dynamically typed languages quickly fell from favor, while design-by-contract, effects and aliasing annotations, and similar safety-oriented practices became much more common. Sales of static and dynamic code analysis tools soared, as development houses furiously attempted to hold down the costs of insurance-mandated code reviews.

On the languages front, explicit transactions, first-class monitoring and security constructs, and type systems with aliasing and effect annotations all moved mainstream, while explicit pointers, implicit coercions, and unhygienic macros met an unmourned demise.

I don’t know about the

I don’t know about the automobile industry, but other industries are already regulated by government requirements. A friend of mine develops software for navigation systems in airplanes. According to him, there are all sorts of government guidelines regulating the software they produce. Every time a piece of code changes (regardless of the type or size of change), a lot of documentation must be written (along with the obvious stringent testing), just to satisfy government regulations.

I don’t know about the automobile industry, but the "liability-limiting software license" only exists for shrink-wrapped consumer software (a la Microsoft Word). Companies buying specialized software just wouldn't go for it. Ford is still liable for the behavior of its automatic transmissions, so it isn't going to buy a product from a company that doesn't indemnify its software. A large financial firm has a lot to lose if the custom software regulating financial transactions is hacked. Big companies care about buying software that is indemnified, or else they are legally up a creek when something breaks.

Consumer-level products don’t need the same sort of stringent requirements as these “life critical” systems. Dynamic languages will find a place in the less critical markets of consumer and business software (the sort of software that doesn’t control critical systems in planes, trains, or automobiles, or financial transactions). If the software breaks, it may be annoying, but the risk of breakage will generally be worth the benefits gained by using a more dynamic language and agile development methods.

What would a world where all software is liable for failures look like? Software on the whole is much more robust; however, the price of robustness is not small. Small software firms, because they can no longer afford to stay in business, either fold or, if they are lucky, are bought out by larger firms. The legal fees, along with the increased cost of a longer, more thorough development process, drive the price of consumer software way up. As a result, the number of pirated copies of software increases while legal sales decrease, placing further burdens on software corporations, which are now unable to pay developers the wages they used to earn. Soon all software developers are required to get expensive liability insurance to protect them in case they ever make a mistake and are sued, just like doctors (and despite all efforts to the contrary, mistakes are still made; they’re just rarer than they used to be). The number of software developers decreases dramatically, due to the decreased net pay and increased stress. Open-source and free software quickly dies out under the wave of lawsuits over damage claimed to be caused by such software, regardless of where the actual fault lies.

No, such rash decisions are just as bad as, if not worse than, the original problem. (This brings up another point: all decisions made in the name of the public good should be eyed very suspiciously, because it usually means some group of people will get screwed; the dictatorship of the majority, and all that. There’s a win-win decision somewhere, we just need to find it.) Should developers of software for life-critical systems be liable for errors in their software? Yes, just the same as a civil engineer is liable for errors in a bridge design. However, that liability need not, and should not, extend to the developer of a non-critical application, just as an electrical engineer is not liable when an electronic toy or game breaks.

Those safety regulations do

Those safety regulations do exist for security-critical software as well. Some companies already make their living from those obligations. A nice portrait of the British company Praxis is freely available from IEEE Spectrum.

A note...

I wasn't recommending that state of affairs, just offering a somewhat whimsical prediction.

Usually in discussions like these, people focus on language ideas coming out of academia (or currently popular with early adopters), rather than thinking about exogenous financial forces that historically have had more impact on the mainstream programming language landscape. Just trying to inject something into this discussion beyond navel-gazing, really.

Very good point.

Usually in discussions like these, people focus on language ideas coming out of academia (or currently popular with early adopters), rather than thinking about exogenous financial forces that historically have had more impact on the mainstream programming language landscape.

Yes, I think this is true. For us language people, looking at economic forces helps us understand why things are the way they are, when there are languages out there that are clearly better than the mainstream.

Verification/Model Checking

Since failing to follow "industry-standard best practices" could easily land you in front of an unfriendly judge, dynamically typed languages quickly fell from favor, while design-by-contract, effects and aliasing annotations, and similar safety-oriented practices became much more common. Sales of static and dynamic code analysis tools soared, as development houses furiously attempted to hold down the costs of insurance-mandated code reviews.

An interesting prediction, but IMHO I'd rather see verification, instead of testing, as the future. In fact, verification tools are already widely applied to life-critical systems. See for example some recent applications of the SPIN model checker.

I somehow have the feeling that a big part of the programming community has no idea about the state of verification today, and therefore keeps pushing languages like C which are harder to verify than, e.g., side-effect-free languages.

Furthermore, language usage often seems to be an emotional affair rather than a matter of rational analysis. Just have a look at the flame wars people start on Slashdot if you make the mistake of mentioning Java somewhere (though I do not mean to imply that Slashdot represents a significant number of professional/academic programmers).

Depends on secondary education

I've a fascinating book, Capitalism & Arithmetic: The New Math of the 15th Century. It includes a nice modern translation of the Treviso Arithmetic of 1478. It is one of the first textbooks on how to do arithmetic with Arabic numerals and is, in a sense, a university-level text. Capitalism & Arithmetic goes to great lengths to emphasise that arithmetic was once a difficult and advanced subject. I pluck out this anecdote related by Tobias Dantzig:
There is a story of a German merchant of the fifteenth century, which I have not succeeded in authenticating, but it is so characteristic of the situation then existing that I cannot resist the temptation of telling it. It appears that the merchant had a son whom he desired to give an advanced commercial education. He appealed to a prominent professor of a university for advice as to where he should send his son. The reply was that if the mathematical curriculum of the young man was to be confined to adding and subtracting, he perhaps could obtain the instruction in a German university; but the art of multiplying and dividing, he continued, had been greatly developed in Italy, which, in his opinion, was the only country where such advanced instruction could be obtained.
I find it easy to imagine a historian 500 years from now relating this anecdote:
A civil servant of the 21st century had a son whom he desired to give an advanced education in computer programming. He appealed for advice and was told that if the curriculum was confined to object-oriented programming and functional programming he could obtain the education at many universities, but the art of meta-programming was too hard for those who took up computing at the age of 18, and instruction was seldom available to undergraduates.
His listeners all laugh, because in 2500 one learns object-oriented and functional programming from ages 11 to 16, and meta-programming and meta-object protocols from ages 16 to 18.

Implicit in all discussions about the merits of programming languages is the fact that programmers grow old and die and must be replaced. So young persons must learn to program, and the requirement that new programmers be able to reach a reasonable degree of proficiency in a reasonable amount of time trumps all other considerations.

Lurking behind this presupposition is another, subtler, presupposition about what is and is not in the secondary school curriculum.

Currently, our primary schools teach arithmetic and our secondary schools attempt to build knowledge of algebra and trigonometry atop this foundation. Clearly, university teaching in physics and engineering depends on this prior preparation. Indeed, it is hard to see how our modern, technological society could have come into existence had arithmetic remained an advanced subject and students continued to travel abroad to study it.

What does this tell us about the future of programming language design? My own story is of learning to program a Tektronix desk calculator at age 10, and later an HP65 at age 12. If you start young, writing code becomes part of your nature. So when I encountered defmacro in Common Lisp, writing code that writes code seemed like a natural next step. I can easily imagine that someone who learns programming as an adult struggles just to learn to type in programs, and is only ready to try writing code that writes code 10 years later.

I see the future of programming language design being determined by the future of secondary education. If computer programming gets into secondary schools and migrates down the age range, then advanced techniques can be taught at university and advanced languages are possible. If computer programming remains a university subject, then languages will be constrained by what can be mastered in 3 or 4 years from a standing start, that is, something like Java.

The pensions crisis points the way to a third possibility. If people start living longer, in the sense of remaining healthy for longer, then a university course could be longer and students could learn advanced languages as adults.

Hoare's anecdote re long division

When I was at IBM Watson one summer back in '84 or '85, Tony Hoare (Sir Tony?) visited to talk about developing correct code from specs. In responding to criticisms that these techniques were too hard for programmers to learn, he related how he had come across something in the Oxford archives to the effect that (historically) division was considered too difficult for undergraduates. Of course, now (and even back in the mid-80's :) ) we can teach the long division algorithm to kids in 3rd grade, basically due to the highly structured combination of algorithm and standard layout used.

If anyone knows of a written version of this anecdote, I'd be pleased to hear about it (I couldn't get email through to Sir Tony himself: maybe Oxford filters stuff from the Colonies).

(And I'll pick up my library's copy of Capitalism & Arithmetic in case it appears there.)

Asimov's story "The Feeling of Power"

Also relevant is the science fiction story by Isaac Asimov "The Feeling of Power", available free on-line at http://www.themathlab.com/writings/short%20stories/feeling.htm.

Hoare

(Sir) Tony Hoare has retired from Oxford, and is now with Microsoft Research's Cambridge Lab. You might try contacting him through there.

Contacting Tony Hoare

Yes, I knew about the move: I looked all over their website (both then and just now), and never found a way to contact him directly. It has the feel of a deliberate attempt to shield them from unsolicited contacts, which makes sense. In the end I tried a couple of variations based on Simon PJ's address, as well as one of the old Oxford ones from a paper, but never heard back. (That doesn't mean the mail didn't go through, which is one reason I suppose I haven't tried harder.)

Copy, paste and edit a bit

Programmers currently learn that programming by copying code around and editing it to work is very, very bad! We have all been conditioned by education and experience to fear duplication and refactor it away. But copy-and-paste programming is the most natural way to do things. It's the method that beginners naturally understand. It's the way that spreadsheets work.

The problem is not with the programmers, it's with the tools: they don't support the way people really think about programming computers.

The whole point of computers is to automate away mental drudgery. Future programming environments will do the same for the mental drudgery involved in programming itself, allowing "naive" copy-and-paste programming at the UI while keeping track of relationships behind the scenes.

Almost exists already...

There are already commercial products that will search for duplicate code and help you abstract it away with automated refactorings. They are currently fairly clunky, but still eminently serviceable for production work. What you are describing is basically raising the game a bit for these tools.

I guess I want more than

I guess I want more than just raising the game of clunky tools. I want to not even think about needing those tools.

At the moment, my IDE (IntelliJ) can discover duplication and refactor it away into shared methods. Great. But I still have to decide to remove duplication.

Would it be possible for the tool to detect duplication, manage copy-and-paste operations, and keep track of differences as I edit pasted code?

What would the effect be on design? Removing duplication is a design decision. It would be hard to make a tool that worked in a copy-and-paste-and-edit-a-bit style, removed duplication to produce well-designed code under the hood, and then somehow let the programmer learn the new design to get a better understanding of the domain they are working in.

50, 100 years might be enough time to invent something like that...

I guess I'm not understanding

Right now, IntelliJ will let you search for duplicated code. It's fairly bright about it, letting you set parameters for how small or large the duplicate chunks you are looking for should be, and which differences between reported duplicates you wish to ignore, so that you can find "duplicates modulo abstraction". It does this by creating a database of semantic hashes, and only doing structural comparisons when the hashes match. Right now, it only checks for duplicate expressions or self-contained statement sequences (which can be abstracted as methods), but it would be fairly straightforward to make it work for finding duplicate methods or even entire classes.
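
To illustrate the hash-then-compare idea in miniature, here is a rough Haskell sketch; it is nothing like IntelliJ's actual implementation, and "normalization" here is just replacing identifier tokens with a placeholder:

import qualified Data.Map.Strict as M
import Data.Char (isAlpha, isDigit)

-- Replace identifier tokens with a placeholder, so copies that differ only in
-- local variable names normalize to the same key ("duplicates modulo renaming").
normalize :: [String] -> [String]
normalize = map anon
  where
    anon t | not (null t), isAlpha (head t), all isIdentChar t = "ID"
           | otherwise                                         = t
    isIdentChar c = isAlpha c || isDigit c || c == '_'

-- Group code fragments (as token lists) by their normalized form; any bucket
-- with more than one member is a duplication candidate.  A real tool would key
-- on a compact hash of the normalized form and fall back to a structural
-- comparison when hashes collide.
duplicateCandidates :: [(FilePath, [String])] -> [[FilePath]]
duplicateCandidates fragments =
  [ paths | paths <- M.elems buckets, length paths > 1 ]
  where
    buckets = M.fromListWith (++)
      [ (normalize toks, [path]) | (path, toks) <- fragments ]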

Unfortunately, there's a usability issue. IntelliJ only exposes all of this as a batch process. You request a duplicate report, and it gives you one. What if, instead, it maintained the semantic-hash database in sync with the project's code base, searched for duplicates against this database in real time as you were editing, highlighted potential duplicates in your edit window, and offered you appropriate "quickfixes" to abstract them out? IntelliJ is already interactively highlighting several hundred other coding errors or design weaknesses and offering automated fixes. You could think of this as just one more.

So my question is: Would that fulfill the use case you're envisioning? I don't really see the value of maintaining information about cut-and-paste "lineage", as opposed to finding abstractable duplications. I don't much care where my design flaws came from, all I need to know is where they are.

Nothing here seems like rocket surgery, given the current state of the art. The IntelliJ guys could probably code it up in a month, if they thought that enough of their market had the high-end quad-processor development boxes it would take to make this a pleasurable user experience.

I'm sure that the

I'm sure that the implementation technology is getting there, although the refactorings that IntelliJ can suggest are currently quite limited. It cannot pull out classes and interfaces, work out where to introduce delegation and so forth.

However, the user interface is not right. The most successful programming environment in the world, Excel, is designed around copying code. What it doesn't do is find and introduce abstractions. But people don't find thinking about abstraction very easy. It takes programmers decades to become good at it, and it warps their thinking about other things in the process. It's not that non-programmers aren't good at abstraction; it's that they actively don't like it. They want to work with the concrete.

Now, we all know as programmers that working with the concrete becomes increasingly tedious and error prone until the code is unmaintainable and has to be discarded.

So what I'm envisaging is a programming environment based around copy-and-paste that then introduces new abstractions to the programmer/user when they find that they need them. I'm thinking of the programming tool really being an iterative learning tool, a way of exploring and understanding the domain that you're writing programs for without lots of up-front analysis and design that delays delivery of actual functionality.

The system might work in a question-and-answer style. If the tool finds it hard to keep track of all the duplicated logic you could explore the abstractions that it has found and it would then ask you to name them and name the relationships between them. You could then select abstractions that you feel comfortable working with and the user interface to your program would morph to allow you to work with those abstractions. While working, you could morph between different levels of abstraction and the tool would reconfigure abstractions to reflect changes you've made at different levels and illustrate those reconfigurations with some whizzy graphical goodness. You could have different abstractions on the go at once, giving you different viewpoints on the system, and could morph between them.

I know I'm being vague but I have trouble foretelling tomorrow morning, let alone 50 years from now.

Programming by intention?

I'm pretty sure that what you are getting at maps to a subset of Generative Programming/Programming By Intention. Worth checking out if you haven't, yet.

And by the way...

I'm sure that the implementation technology is getting there, although the refactorings that IntelliJ can suggest are currently quite limited. It cannot pull out classes and interfaces, work out where to introduce delegation and so forth.

Funny you should mention that. I'm the implementer of most of IntelliJ's code inspections, and many of its small refactorings. Now that JetBrains has opened up some more API, I'll be adding global analyses to suggest classes and interfaces that need to be split or merged, methods and fields that need to be moved to a subclass or merged into a superclass, methods that should be inlined, delegation that should be introduced or removed, packages that need to be merged or split, and a bunch of other goodies. Exciting stuff. Should ship in late summer, I'm guessing.

That sounds fantastic! You

That sounds fantastic! You guys really know how to produce a good development tool.

Will any of this end up in Resharper? How do you share technology between the two products?

Thank you

But I should note that I don't actually work for JetBrains. My work was done as an open-source plugin to IDEA, which was merged into the product a couple of years ago.

Will any of this end up in Resharper?

I can't say anything about that, other than that it would certainly be cool, and I'd love to do it.

Copy-and-paste programming

I swapped a few emails with Fowler years ago on the subject of code duplication. Everyone I know does it. It's a very common operation in real world refactoring of code. And yet he didn't want to add it to his canonical list of refactorings. A typical scenario: a class appears to be serving two separate purposes. You duplicate it and then remove the parts from each that no longer serve a purpose. You could, instead, write two new classes from scratch - but that is more error-prone. It's far easier to test at every stage when you duplicate the class and delete functionality that has become redundant as a result.

Re copy-and-paste

Everyone I know does it.

I (generally) don't do it. On the contrary, I aggressively look for duplicated snippets of code and remove the duplication.

You duplicate it and then remove the parts from each that no longer serve a purpose.

That is unlike the cases of duplication that I see dozens of times a day. What I see is that code is duplicated and then some parts of the duplicate are changed. Often it is just partial alpha conversion to account for differently named local variables. I've never seen a case of (what I would call) duplication, where parts of the original were deleted after the copying.

What you seem to be describing, I would call "splitting". The smell you are describing is a violation of the "Single Responsibility Principle" or the "Interface Segregation Principle" as described by Robert Martin.

Frankly, I view code duplication as a strong indicator of incompetence (inability to form abstractions) or immaturity (does not understand the maintenance burden) or both.

Code duplication is an intermediate step...

...on the way to "splitting". Often you can't make a clean split because some piece is still shared between the two 'children' (which may or may not itself be refactorable later), so you can't literally 'split' the code. Instead you duplicate and remove redundancy. I agree that duplication as a final step is likely to be a sign of incompetence (or simply of not yet having finished the job).

Look at subtext

Look at subtext (subtextual.org)
Read the three papers (including the one recently published by the author) and the discussion about it here on LtU.

partly interesting

I have seen that presentation before (perhaps an older version). I like the idea of not having to work with textual source code (again, precisely because existing methods of 'coding' are so good, whatever replaces them should be very exciting). Unfortunately, I don't like the solution very much. It looks like coding in Subtext will take longer and be more difficult to understand than just typing out code in a nice editor.

Well, it's very early days

Well, it's very early days yet, and Jonathan Edwards admits that he's working on the implementation and semantics and that the UI is work still to be done.

If in a hundred years, or

If in a hundred years, or even fifty years, there are no artificial minds of at least human-level intelligence, I will be severely disappointed. When you have artificial minds, there is no longer any need for humans to do manual programming; manual programming is just a stepping stone.

Why bother with artificial

Why bother with artificial minds? If they are going to be minds of human ability we might as well just use humans.

nah!

we'll be too busy in a perpetual vacation in dreamland... let the machines think for (and crop) us! ;)

Artificial Minds will be smarter

IMO there'll be only a brief moment when an artificial mind is merely as able as a human mind. After that it'll evolve itself and become smarter. A couple of days later it'll solve the dynamic vs. static debate and all programmers (and LtU) will become obsolete ;)

I suspect that the future of

I suspect that the future of software development lies in software tools creating machine code from complete computational models, rather than from PL instructions (functional or imperative) that are created from the model by the developer, as it is now. That would mean traditional programming languages would disappear altogether, and creating software would become an exercise in formalizing computational models (imagine a flowchart, perhaps) which would then be interpreted by a "compiler" that creates the necessary implementation-level machine code.

Of course, one could imagine that the method of formalizing a computational model could not be standardized and different vendors could have their own way of defining the computational model...

People have been saying this

People have been saying this for years. Back in the late 80's it was the CASE tool folks. Now it's the MDA folks. In a few years it will be the graphical DSL folks. However, they all run into the same problem: if the model is detailed enough to be turned into a program, the modelling language is a programming language that just happens to have a graphical representation, and you're not modelling any more, you're programming.

Good point!

However, I think the line between modelling and programming is arbitrary; ultimately the complexity of the model is irrelevant, as you would continuously be modelling ever-finer sub-elements of parent elements without getting caught up in any sort of syntax or side effects. The graphical representation can hide the implementation details of the sub-elements, and modifying the sub-elements does not affect the overall architectural flow of the model.

Hmm. I just realised that I've been talking about FP in a graphical way :eek:

On the gripping hand...

If we are to accept the Star Trek computational environment, both object orientation and functional programming will have been proven wrong, because everything will be contained in the good ol' subroutines... :-D

"Ah cannah do it, cap'n, ah doont 'ave the stahk speece!"

What about experience-based programming?

It would be interesting if programming could be reduced to writing some code and letting the computer extract the rest of the program from previous experience. For example, a phone-book application could be written in a few lines of code, like this:

phone = {first-name, last-name, phone-number}
phone-list = list of phone
ui = {{new phone}, {edit phone}, {save phone-list}, {load phone-list}}

Then a compiler could infer from past experience that the application has a UI with 4 actions (new phone, edit phone, save phones, load phones) applied over a list of phones.
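
As a rough, hypothetical sketch in today's terms, what such a compiler elaborates might amount to a record type plus a derived set of actions, something like the following Haskell; the names and the file-based persistence are invented for illustration:

-- Hypothetical elaboration of the three-line spec above.
data Phone = Phone
  { firstName   :: String
  , lastName    :: String
  , phoneNumber :: String
  } deriving (Show, Read)

type PhoneList = [Phone]

-- The four UI actions the compiler is imagined to infer.
data UIAction
  = NewPhone Phone
  | EditPhone Int Phone          -- replace the entry at a given index
  | SavePhoneList FilePath
  | LoadPhoneList FilePath

apply :: UIAction -> PhoneList -> IO PhoneList
apply (NewPhone p)      ps = pure (p : ps)
apply (EditPhone i p)   ps = pure (take i ps ++ [p] ++ drop (i + 1) ps)
apply (SavePhoneList f) ps = writeFile f (show ps) >> pure ps
apply (LoadPhoneList f) _  = read <$> readFile f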

Ruby on Rails is not far from that

Of course, you have to put in more work to build an *attractive* phone UI that is fully Web 2.0 buzzword compliant.