Scalability

From Yearning for a practical scheme - "I don't want to load a SRFI..."

I do wonder about the scalability of CL or Scheme along the number-of-coders axis. When you have 20+ programmers working on code that was originally written by a separate group of 20+ programmers... The limitations of a language like Java might actually be a benefit in these

This is an issue close to my heart, and one that I think LtU under-discusses. First of all, a few confessions: I am a Haskell aficionado, and hate Java with a passion for the mindset it promotes, where coders become nothing more than LoC mincers. It saddens and angers me that people are wasting time learning it, because there is nothing in Java (or C#) that is new, and what is old in them is botched and bungled. That said, however, I have started to appreciate them in a different capacity.

I've recently started a position as a lead and architect, and so also as a manager. After floundering for a bit, I started to think like a manager (much to my regret). The business reality is that most "programmers" (and I use the term loosely) are not LtU material. They probably didn't see the symbol lambda during their university educations. Furthermore, asking them to understand the intricacies of CPS transforms and endo-hylo-cata-ana-expi-ali-docious morphisms would be futile. They wouldn't be interested either. Most programmers are just Java monkeys; this arises from the fact that there are more people willing to take the salaries the industry offers than there are people who can competently (by our standards) program. Doubtless, if they did not work in the industry, the industry as a whole would produce better work (though not more work; more on this in a minute). The wages would also be higher... which would then attract them right back. The fact is that we count competence in terms of elegance and a feeling that a piece of code will last -- a Mona Lisa of the pyramids. The customers of our industry, however, do not hold us to such high standards. They are happy to spend less time looking for their data. They are really happy when you can save them even the smallest bit of time. They will also be satisfied with a poorly written piece of shit that passes basic muster. This is the reality, and no amount of Mary Poppins impressions will change that.

These problems plague all fields, engineering especially. There is one main respect in which we stand out: we deal with more complexity than any other field. Other industries struggle with structures more than 20 or 30 layers deep, while we brush those off and imagine towers so tall that we can't imagine any further. We then build them, and see how we can build even taller. I estimate (also known as pulling numbers outta my ass) that we deal with 100 times more layers of structure than any other field. In a way, it can be said that software engineering is all about complexity management.

Which brings me back to Java. It, as a language, sucks. Its one redeeming feature is that it is a lowest common denominator: I could get my English-major friend to learn it. At first glance, the language itself is poorly equipped to manage complexity. Surely the fact that you need tens of files just to equal a page of Haskell already represents a huge LoC count, not to mention a readability cost? Yes, in my opinion. But that's also not where Java shines. Indeed, Java is less about the language and libraries and so on, and more about the processes built around them -- everything from the consistent presentation of documentation to the universal training schemes that exist. These extra-language features have made it popular. It is also, AFAICT, the first language to be successfully pushed commercially. These things we cannot ignore.

The first lesson that I learnt doing project management is that a project is less about the technologies, even less about the product, and almost all about the team. The people that make up the project are the project. If you are lucky enough to have a crack team of PL specialists who are also experts in the problem domain, then you will succeed. But you're never going to get that. At best, you will get 10 Java-monkey-level people, and 1 who can grok the difference between structural subtyping and nominal subtyping (not in those words, however). This is where the Java processes become invaluable -- they've laid out all the correct procedures that you, as the lead, should follow to get everyone to work together. You draw some UML diagrams, etc., and just generally follow the road. At the end of the day, if it fails, you can simply say that as an industry, we weren't ready to take on the challenge, and be absolved of personal responsibility. Furthermore, it also helps you with planning -- how many packages, how many components, how many classes == how many man-hours == an easy calculation of projected costs and timescales. These are the things that the customers care about, and it is a rare customer that cares even about "maintenance" costs, where you'd have some leeway to sneak something good in. Oh, and the worst thing is that most won't even care about that -- they'll ask about risk, which forces you to justify any and all choices, especially ones perceived by the local "expert" as outliers.

On these fronts, Java stands unrivalled (except for C#, which for the moment is almost exactly the same). It makes economic sense. It also makes no sense in any other way. If I get a Lisp guru to hack up some code, the resulting poetry will be unutterably beautiful, and it will take another, equal guru forever to read it, all the while exclaiming at the great use of macro transformers. It becomes a single-person pursuit. There is nothing wrong in general with single-person endeavours, but there are times when you need to get something done quicker than one person can, and then you need more people to help.

Thus we are back at the beginning. We have unimaginable complexity, and we have people who can't comprehend it. We do need formal processes to help us. So far, the only successful industrial-scale ones have been extra-language. I wonder if it would be possible to bring them into the language -- in other words, to have the language incorporate a framework to help with the workflow? I've personally tried to control the complexity by using industry-standard protocols to set out the edges of components, hoping that each component can be done by one person. I have a feeling that there is always going to be a gap between those who can see the system as a whole and those who will be working the trenches; I find that unsatisfactory. I also don't have any better ideas.


The subject's a bit OT for LtU

The subject's a bit OT for LtU, but I think one thing that we do observe here is that different languages have different strengths. Thus I believe that as a project grows, it will probably stretch the abilities of the "one language" that it's written in.

The natural solution that I see is explicitly architecting the project to use multiple languages from the start. A second language creeps into a project from the bottom up pretty often anyway ("Boss, I wrote this little stuff in Perl"). If we plan ahead instead, we can keep both the Lisp gurus and the bare-metal hackers happy at the same time.

The important thing here is to make sure that the languages play nice, which is the architect's job. AFAIK, F# and C# were designed to play nice in the .NET environment, but I don't have any experience with them. I think in the long run it is too inefficient to have everybody code in the one project language, so the interoperability costs will be worth it.

Although off topic for LtU, it does help

Although off topic for LtU, it does help those of us who have been "enlightened" understand one reason why Java/C# are so popular and hence be a little more patient. Maybe the managers do "get it" and we're the ones that don't.

On the other hand, it would be great if there was a way to successfully create a team that could handle a more "elegant" language such as Haskell, Lisp, or O'Caml.

Interesting article

If this is off topic... where does one go to discuss the practical application of programming languages? Recommendations? I'm not only interested in languages for their inherent value and the things I can learn from them, but I'm also interested in "where the rubber meets the road", which is very much what this article was addressing.

I found this article quite interesting; in any case, it is yet another quality bit of data on why certain technologies are chosen over others.

You sure can discuss the practical

You sure can discuss the practical application of programming languages on LtU. However, LtU is not really an independent discussion group. It is intended to be related to the weblog (home page items), or for short discussions by regular members of the community (which sometimes become lengthy, I know).

So the topic of this thread isn't a problem, but if you don't see how to relate it to LtU threads, and none of the regulars appears here, then I guess it's time to move to comp.lang.misc or wherever.

At the moment, I think the thread can live -- but personally, I think many of the ideas here were discussed previously (even on LtU), and reading the previous discussions might be helpful for this one.

Eiffel?

IIRC, the Eiffel language was designed, in large part, to be a "compilable" object modelling notation, rather than "just" a programming language. One of its strengths, according to its fans (I have no strong opinions on Eiffel, pro or con), is that one can use Eiffel "notation" to document an OO design, and then simply add implementation onto what exists. Contrast this with UML, where one produces a bunch of paper (possibly assisted by some CASE tool), which must then be translated (often by hand) into the source code of whatever language.

While Eiffel might offer some advantage in not requiring the UML-to-Java translation step, there are several issues I see (with tool support in general, not just Eiffel):

1) The common confusion of design/implementation and interface/implementation. In many SW development processes, a "design" (or "architecture"--the concept goes by many names, often with overlapping and ill-defined meanings; a mess which I will sidestep for now) is produced, often by a combination of architects, domain experts, and other folks assumed to be qualified to produce such a design (they may not be so qualified, but that's a problem no product or process will correct). The important point is that "designs" are often a first level cut at mapping the user domain to the software domain.

Firmly within the software domain we have the concept of "interface" and "implementation"; one being the exposed boundary of a library/object/component/class, the other being the hidden details. Encapsulation and abstraction are two powerful weapons of the software engineer (no matter what language is used; though some do it better)--especially in projects consisting of more than one developer.

The two concepts are *not* the same. In some cases, an algorithm (part of the implementation of some subsystem) may in fact be a fundamental part of the user-level design. Conversely, header files and interface definitions almost always contain additional elements which don't directly model anything in the user domain, but are essential in expediting the work of programmers. Many pathological management anti-patterns (such as the fear of adding classes) are often the result of confusing the two concepts, thus placing constraints on the software implementation not dictated by the functional design.
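
To make the distinction concrete, here is a hypothetical Java sketch (the Account example is mine, not from the comment): charge and owedCents come from the user-level design, while debugDump models nothing in the user domain and exists purely to expedite the programmers' work.

```java
// Hypothetical example of the design/interface distinction described above.
interface Account {
    void charge(long cents);   // from the functional design: "a customer can be charged"
    long owedCents();          // from the functional design: "we can see what is owed"
    String debugDump();        // implementation aid only: appears in no user story
}

class LedgerAccount implements Account {
    private long owed = 0;
    public void charge(long cents) { owed += cents; }
    public long owedCents() { return owed; }
    public String debugDump() { return "LedgerAccount{owed=" + owed + "}"; }
}

public class DesignVsInterface {
    public static void main(String[] args) {
        Account a = new LedgerAccount();
        a.charge(1500);
        System.out.println(a.owedCents());  // prints 1500
    }
}
```

Forbidding everything the functional design doesn't name would outlaw debugDump -- exactly the kind of constraint the confusion above produces.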

2) A secondary issue, of course, is the feedback loop. Designs sans implementation, whether done entirely on paper or a whiteboard, or computer-assisted, are inherently untestable (let alone amenable to formal verification in any significant sense). Yet many processes assume that the design, coming ex cathedra from the local architect (who is presumed to be correct based on his position), is infallible, and that it's the poor programmer's job to reconcile the implementation with the design. No improvement to languages/tools will fix this. The various agile processes can help, of course; but the incremental nature of agile processes may not be appropriate for very large-scale projects.

Summarizing, tools may help; but processes and procedures are a key part of the equation as well. As of yet, the processes programmers and their managers should take in turning requirements into code are still an area of active research (not to mention a good deal of suspicion, salesmanship, snake-oil, and superstition).

2) A secondary issue, of course

2) A secondary issue, of course, is the feedback loop. Designs sans implementation, whether done entirely on paper or a whiteboard, or computer-assisted, are inherently untestable (let alone amenable to formal verification in any significant sense).

This may well be the case, but if it is, it must be fixed. A system-level designer must put their design in the terms of the implementation, and should provide some testing and verification at some level (i.e. to ensure that they are getting what they ordered). A good strategy might be to simulate the system in a fast prototyping language -- not the implementation language. If one isn't sure of the system itself, it is a mistake to move on to implementation.
Edit: I would like to amend this statement somewhat. If you are going to try to implement something you don't fully understand (i.e. hack it out), do it on the side and don't confuse that with implementation. Such efforts might well lead to good understanding and be the basis of good projects.

An interesting article appeared

"Thus we are back at the beginning. We have unimaginable complexity, and we have people who can't comprehend it. We do need formal processes to help us. So far, the only successful industrial scale ones have been extra-language. I wonder if it would be possible to bring them into the language?"

An interesting article entitled Self-Repairing Computers appeared in the June 2003 issue of Scientific American. One link given there: Berkeley ROC. I would like to suggest that these techniques can also be applied to the debugging and testing phases of development. It is just one idea.

Hypothetical proposal: instead of assembling a large, new software system and sitting back to watch it crash, put it together as a recoverable system with built-in fault diagnosis. This is theoretically possible using the ROC techniques mentioned above. Another possibility might be to use a language such as my own language, called Lewis. Lewis has never been used in this way but may have all the necessary features. If not, perhaps it will give you some ideas.

You write: The business reality

You write:
The business reality is that most "programmers" (and I use the term loosely) are not LtU material.

Sure, most programmers don't live within a thousand miles of you either, but that doesn't stop you from having several co-workers who live a short drive away, does it?

I assume you mean something more specific: the reality of your business is that most programmers that you work with are not LtU material. How did that come to be?

I can think of three major possible reasons:

  • Your hiring policy discriminates against highly qualified people (because you think they're hard to get along with, likely to be unhappy there, or whatever);
  • You don't know how to find them with a reasonable amount of effort when you're searching for new employees; or
  • You aren't willing to pay them as much as they expect some other company would.

None of these three is some sort of immutable reality that everyone has to merely figure out how to cope with. Consider the third. Presumably if this is true of your company, it's because someone has bought into the Mongolian Hordes approach before hiring even started: they think that ten minimally-skilled programmers each earning $75k/year will collectively be more productive than three more-skilled programmers each earning $250k/year. (I'm using Silicon Valley figures here. Adjust as appropriate.) Is that true? Well, if you're managing a team built on that assumption, it doesn't matter if it's true or not; you have to work with the people you've got and get the best results you can.

That doesn't mean that being preceded by a Mongolian Horde ideologist is some kind of universal human condition among software managers, though.

I'm out of time; maybe I'll comment more later.

Hiring policy

Argh... I know this is off-topic for LtU, but I can't resist replying to this one:

Your hiring policy discriminates against highly-qualified people (because you think they're hard to get along with, likely to be unhappy there, or whatever)

I was invited to an interview for a programming position (they contacted me after seeing my "job wanted" ad) a couple of months ago. I think that the interview went very well and they also told me (both in person and in an e-mail) that the interview left a positive picture of me and they were impressed by my experience. Nevertheless, they told me (in the same e-mail) that they can't hire me because they don't have sufficiently demanding work for me and they can't bring in a new programmer due to their project schedules at the moment. Sounds fairly reasonable?

Well, about a month later the same company started advertising for a "demanding" programming position. The requirements, qualifications, experience, etc. that they call for in their advertisement match me almost perfectly. I know how this sounds, but I'm not imagining things; the job description is virtually identical to the one I held (for 5 years, successfully) before I went back to the university to continue my studies. I asked them whether the situation had changed. The reply I received said that they had decided not to hire me due to my "interest in fairly theoretical subjects, research, and interest in getting a PhD" and couldn't reconsider.

Duh. Next time I just have to be silent about my interest in PLT.

Mongolian Hordes

Actually, it's a little more complex than you're assuming.

"they think that ten minimally-skilled programmers each earning $75k/year will collectively be more productive than three more-skilled programmers each earning $250k/year. (I'm using Silicon Valley figures here. Adjust as appropriate.) Is that true?"

No. It is not true. And I think, surprisingly, that no software manager thinks it is true. They do think, however, that the Mongolian Horde will do enough to get by on, and they find it hard to justify the extra spending. When you are in numbers as low as 3 or 4, it is hard to scale linearly in productivity; each person brings expertise and experience, which overlap, and overlap increasingly. The horde has no expertise to overlap. They just pile it on, and rely on the architect/manager at the top to take up the slack. Said person more often just wants to live, and not risk their career on "interesting" choices. Perhaps the trick is to eliminate the architect/manager and replace them with a small team of crack experts? Or is that just XP?

The "Problem"

Is that there is no problem at all. Software engineering is just that... engineering. What it is *not* is art. That's why you hire artists to design very expensive buildings and architects to design strip malls. You simply cannot afford to hire PL divas to write your average business software. The software itself isn't worth the cost of a handful of Scheme experts writing a program in half the lines it would have taken Java programmers. In the same way, your average Midas dealer is not going to hire the pit crew of a Formula One team to change the oil in their customers' cars, no matter how qualified the mechanics. It's all simple economics. The market can only support a limited number of high-quality programmers, and the rest can fend for themselves or settle for a mediocre job. Anyone who thinks the software industry is somehow unique in this area should try working a different job for a while. It's got nothing to do with complexity and everything to do with the almighty dollar. The average software produced by the software industry is no worse than the average car produced by the auto industry or the average house produced by the construction industry. It's just that people who happen to be highly qualified in their field tend to look down their noses at people who are only good enough to produce mediocre work. It's true of car designers, clothes designers, chefs, and programmers.

Agreed

Actually, you hire architects to design both your fancy skyscraper and your garden-variety strip mall; but the architect you choose for your fancy skyscraper will likely be a different one (and a well-known artiste, which is what you were getting at) than the competent-but-not-famous architectural firm that will be designing the strip mall.

That said, there is a strong opinion among "star" programmers that the most economical way to design software is to hire a small team of such stars (I'll avoid the word "diva" because that term implies arrogance and unprofessionalism as much as it implies greater skill), rather than a larger team of junior programmers. The problem, as you state, is that there aren't enough star programmers to go around--they're a relatively scarce resource. Most projects can't wait while such a team is assembled. Plus, most HR departments (and even many programmers!) have a hard time telling the difference, especially after the typical job interview. After all, people don't put "code monkey" or other such pejoratives on their resumes.

The other point to be made is that in many cases mediocre work is *acceptable*. While the mechanics at the dealership will likely be better at changing your oil than the folks at Jiffy Lube, and everyone has heard horror stories about Jiffy Lube installing the wrong oil filter, etc., most oil changes there are uneventful.

In many cases, the difference between a star and an average programmer is not the quality of the code (the end user only cares, "does it work?"), but the chances of success.

Prove that it's worth money

I hesitate to respond to a "thread non grata." Especially considering my previous history on ocaml-biz, and the people who might come out of the woodwork to lambast me for anything I have to say, valid or not. But the problem statement cries out for response. Thus I'll be brief.

If you can demonstrate to someone with funding that your HLL provides a competitive advantage, then you can get your HLL used. But you have to prove it, it's your burden. If you can't, if it's really all a bunch of academic theory about what supposedly "nicer" languages are about, then nobody with the power to cause change is going to pay attention to you.

Success stories

We have posted several examples of companies taking advantage of FP to gain a competitive edge in the past.

Not always...

Joel Reymont wrote a blog post about his Erlang OpenPoker server that is demonstrably superior to existing Java solutions.

All of the vendors I have spoken with are having scalability and fault-tolerance problems. None of the vendors are willing to switch to Erlang due to being heavily invested in Java.

[edit: fixed the url]

Attribution of causality

Your second link just points to LtU, so I can't follow that. But in any event, has anyone demonstrated that the languages make the difference in the products, and not the programmers? Obviously, people willing to learn Erlang have a little more motivation and patience than people who might have gone to a tech school for a few years or just learned Java on their own.

Language acceptance

Joel writes about "barriers to switching" to a new product. The main idea: if there are 10 barriers and you eliminate 8 of them, you aren't that much better off. Eliminating the 9th gets you halfway there, and once you eliminate the 10th, you become the market leader.

I think this applies to languages, too.

It's not just about team work. At the beginning of my most recent project, the company allocated just one developer: me. When we were choosing a language, we were faced with something like the following checklist:

  • has an HTTP server library (servlets style)
  • has an embedded database library (we don't want to deploy a standalone DB as well)
  • can play MIDI and MP3 files
  • can talk to the serial port

...And so on. Eventually we eliminated everything but Java and Python, and then chose Java. If I remember right, the particular deciding factor was "IDE availability and quality". (We could also have chosen C++, but time-to-market was really critical, and I'm not a template wizard.)

OCaml was eliminated pretty early on. Ruby and Haskell, my two current (maybe) favourite languages, weren't even discussed.

For the record, I haven't regretted the choice of Java. It's not a bad language - if you stick to the style: hide big implementations behind small interfaces, and use raw SQL instead of object models.

LFSP all over again?

I am not sure I understand the argument in the original post, but it seems to me that the idea at its core is the LFSP/LFM meme.

Very interesting discussion.

It saddens and angers me that people are wasting time learning it, because there is nothing in Java (or C#) that is new; and old in them are botched and bungled.

Having nothing new in it is not a good reason for not adopting a programming language. What counts is not only the number of innovations, but also the style of execution: a strong corporation, a central site, a strong foundation library, professionally written documentation. Java excels in those things.

The business reality is that most "programmers" (and I use the term loosely) are not LtU material. They probably didn't see the symbol lambda during their university educations.

I am one of them. During my Bachelor's degree years, I was unaware of lambda calculus and functional programming languages. I came across ML in my Master's degree, but there was very little time to work on FP; I had to work in my spare time in order to understand it. I have been playing with Haskell lately, in order to have a more complete picture of the FP work.

Most programmers are just Java monkeys

I would say that most programmers are just BASIC monkeys, and when they learn Java, they think it is the most advanced programming language.

Which brings me back to Java. It, as a language, sucks.

It's an advancement over C++ though, thanks to garbage collection, enforcement of OOP (which means better code structure), portability between platforms (less work to port an app to different OSes), and a good API that covers most people's needs. Most average programmers can write quite complex Java programs quite quickly that work correctly in most cases... no other mainstream programming language succeeded in that before Java.

Surely the fact that you need tens of files just to equal a page of Haskell already represents a huge LoC count?

Although functional programs are shorter, and Haskell programs even more so, I don't consider Haskell a good language when it comes to linguistics. Maybe that's just me coming from the imperative world, but I have a hard time navigating large Haskell programs. That's the reason I have proposed a more conservative FP language, even one with curly brackets! (blasphemy(!!!) :-)).

For the average programmer, it is very, very difficult to learn lazy evaluation and lambda, and to get used to the style of Haskell, all at the same time. I've tried it with the Hamming numbers problem: although most of my colleagues understood the concept, the depth of the problem was not revealed to them until they tried it in Java. Haskell has quite a lot of hidden depth, which is not a bad thing, but not exactly a motive either.
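
For reference (my own sketch, not anything from the thread), the exercise in question is to generate the Hamming numbers: the ascending integers of the form 2^i * 3^j * 5^k. In Haskell the classic solution is a short lazy merge of `map (2*) hamming`, `map (3*) hamming`, and `map (5*) hamming`; a Java rendering has to spell out the merge machinery by hand, which is exactly the hidden depth being described:

```java
import java.util.ArrayList;
import java.util.List;

// Hamming numbers (1, 2, 3, 4, 5, 6, 8, 9, 10, 12, ...) via Dijkstra's
// three-pointer merge. Nothing here is lazy -- which is the point.
public class Hamming {
    public static List<Long> first(int n) {
        List<Long> h = new ArrayList<>();
        h.add(1L);
        int i2 = 0, i3 = 0, i5 = 0;  // index of the next value to multiply by 2, 3, 5
        while (h.size() < n) {
            long x2 = h.get(i2) * 2, x3 = h.get(i3) * 3, x5 = h.get(i5) * 5;
            long next = Math.min(x2, Math.min(x3, x5));
            h.add(next);
            if (next == x2) i2++;  // advance every pointer that produced the
            if (next == x3) i3++;  // minimum, so duplicates are skipped
            if (next == x5) i5++;
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(first(10));  // [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
    }
}
```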

a project is less about the technologies, even less about the product, and almost all about the team.

These are the things that the customers care about

Well, I am not a manager, but I have already figured that myself. Java has a very simplistic model that is understandable by everyone, from the upper managers who do not know anything about FP to the lowest programmers who also do not know anything about FP.

(except for C#, which for the moment is almost the exact same)

The .NET API is not as good as the Java API. For example, on the GUI front, the .NET API follows Windows conventions: widget events are sent to their parent forms, instead of being 'signals' to the outer world. Model-view-controller apps cannot be easily done with C#, even though C# has an 'event' construct. Much of the code is still written inside forms, as in VB6, thanks to that little detail.

I have a feeling that there is always going to be a gap between those who can see the system in the whole, and those who will be working the trenches; I also find it unsatisfactory. I also don't have any better ideas.

I have, but who listens to a lowly geek programmer? :-) The best way to manage complexity is to have true componentization, since object orientation simply isn't good enough. In order to provide true component-based programming, we need higher-order programming, just as in FP, and a way to combine ready-made software as in electronics, through input and output signals (as I said in this discussion).
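
As a hypothetical illustration of that idea (the names and wiring are mine, not the poster's), higher-order functions already let you treat components as values and solder one component's output to the next one's input:

```java
import java.util.function.Function;

// Sketch: components as first-class values with typed inputs and outputs,
// combined like electronic parts rather than through inheritance hierarchies.
public class Components {
    // Two ready-made "components": each maps an input signal to an output signal.
    static Function<String, Integer> parser = Integer::parseInt;
    static Function<Integer, Integer> doubler = x -> x * 2;

    // Higher-order combination: plug the output of one into the input of the next.
    static Function<String, Integer> pipeline = parser.andThen(doubler);

    public static void main(String[] args) {
        System.out.println(pipeline.apply("21"));  // prints 42
    }
}
```

The point is not the trivial arithmetic but that `pipeline` is itself a component, built without either part knowing about the other.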

All in all, thanks for raising such an important topic. I think some LtU members have a hard time understanding the human issues involved in programming, although they excel in FP. I don't say that in a bad way: they focus on what interests them more. But in order to make FP succeed, work is needed on the issues you mention. Java has shown the way forward in that...

A few corrections to your comment

Although this is a tangent, I just wanted to correct some information on C#, to prevent the spread of misinformation about the language as it is defined in the spec.

For example, in the GUI front, the .NET API follows Windows conventions: widget events are sent to their parent forms, instead of being 'signals' to the outter world.

I'm not sure what you mean when you say that "events are sent to their parent forms". Events in C# (and .NET in general) follow a publisher/subscriber model. You use the overloaded += operator to subscribe to events in C#, and the AddHandler keyword in VB.NET. The "event" keyword is nothing more than a delegate with visibility modifiers. Delegates themselves are nothing more than type-safe method references (function pointers, in C terms).

Model-view-controller apps can not be easily done with C#, even though C# has an 'event' construct.

I'm not sure why MVC would be any harder to implement in C#. And what does the 'event' keyword have to do with MVC?

Much of the code is still written inside forms, as in VB 6, thanks to that little detail.

A Windows Form is nothing more than a class file coupled with a "resource file", an XML file that holds some basic settings information. No GUI program logic goes into the resx file; it's all defined in the class, so I'm not sure what you mean when you say "code is written inside forms".

You can have a fully functional GUI program running without ever using the Visual Studio Designer. For example, this is all you need to get a Windows application:

using System;
using System.Windows.Forms;

public class HelloWorldForm : Form {
    [STAThread]
    public static int Main(string[] args) {
        Application.Run(new HelloWorldForm());
        return 0;
    }

    public HelloWorldForm() {
        this.Text = "Hello World";
    }
}

Since F# now has a top level, you can pop up windows on the fly from the REPL. Windows Forms follows the Java Swing container model much more than VB6's. In fact, having done VB6, Java Swing, and Windows Forms, I'd say Java Swing and Windows Forms are like brothers, and VB6 is like a distant cousin.

I'm not sure what you mean

I'm not sure what you mean when you say that "Events are sent to their parent forms". Events in C# (and .NET in general) follow a publisher/subscriber model. You use the overloaded += operator to subscribe to events in C#, and the AddHandler keyword in VB.NET. The "event" keyword is nothing more than a delegate with visibility modifiers. Delegates themselves are nothing more than type-safe method references (function pointers, in C terms).

Event notifications are sent to a widget's owner form.

I'm not sure why MVC would be any harder to implement in C#.

Because .NET widgets are not views of models. For example, a CheckBox is not a view of a boolean model.

And what does the event keyword have to do with MVC?

Because MVC depends on the publisher/subscriber model, which in turn is made easier by the existence of 'event': models expose events, views bind themselves to those events, and controllers trigger those events by modifying the models.
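
That arrangement can be sketched in a few lines of Java (the class names are hypothetical, chosen to mirror the CheckBox-as-view-of-a-boolean example above): the model publishes change events, the view subscribes, and the controller only ever touches the model.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Model: holds state and publishes change events.
class BooleanModel {
    private boolean value;
    private final List<Consumer<Boolean>> listeners = new ArrayList<>();

    public void onChange(Consumer<Boolean> listener) { listeners.add(listener); }

    public void setValue(boolean v) {
        value = v;
        for (Consumer<Boolean> l : listeners) {
            l.accept(v); // notify all subscribed views
        }
    }

    public boolean getValue() { return value; }
}

// View: renders the model; it never mutates the model directly.
class CheckBoxView {
    String rendered = "[ ]";
    CheckBoxView(BooleanModel model) {
        model.onChange(v -> rendered = v ? "[x]" : "[ ]");
    }
}

// Controller: translates "user input" into model updates only.
class ToggleController {
    private final BooleanModel model;
    ToggleController(BooleanModel model) { this.model = model; }
    public void userClicked() { model.setValue(!model.getValue()); }
}

public class MvcDemo {
    public static void main(String[] args) {
        BooleanModel model = new BooleanModel();
        CheckBoxView view = new CheckBoxView(model);
        ToggleController controller = new ToggleController(model);
        controller.userClicked();
        System.out.println(view.rendered); // view updated via the model's event
    }
}
```

Note that the view never calls the controller and the controller never calls the view; the model's event is the only channel between them.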

It's all defined in the class so I'm not sure what you mean when you say "code is written inside forms"

I mean that code for a button click is written inside a form, because the button widget sends a click event to its parent.

Here is the code that is created with one form and one button from VS 2003:

using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;

namespace WindowsApplication1
{
		/// <summary>
		/// Summary description for Form1.
		/// </summary>
	public class Form1 : System.Windows.Forms.Form
	{
		private System.Windows.Forms.Button button1;
		/// <summary>
		/// Required designer variable.
		/// </summary>
		private System.ComponentModel.Container components = null;

		public Form1()
		{
			//
			// Required for Windows Form Designer support
			//
			InitializeComponent();

			//
			// TODO: Add any constructor code after InitializeComponent call
			//
		}

		/// <summary>
		/// Clean up any resources being used.
		/// </summary>
		protected override void Dispose( bool disposing )
		{
			if( disposing )
			{
				if (components != null) 
				{
					components.Dispose();
				}
			}
			base.Dispose( disposing );
		}

		#region Windows Form Designer generated code
		/// <summary>
		/// Required method for Designer support - do not modify
		/// the contents of this method with the code editor.
		/// </summary>
		private void InitializeComponent()
		{
			this.button1 = new System.Windows.Forms.Button();
			this.SuspendLayout();
			// 
			// button1
			// 
			this.button1.Location = new System.Drawing.Point(56, 88);
			this.button1.Name = "button1";
			this.button1.Size = new System.Drawing.Size(120, 64);
			this.button1.TabIndex = 0;
			this.button1.Text = "button1";
			this.button1.Click += new System.EventHandler(this.button1_Click);
			// 
			// Form1
			// 
			this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
			this.ClientSize = new System.Drawing.Size(292, 273);
			this.Controls.Add(this.button1);
			this.Name = "Form1";
			this.Text = "Form1";
			this.ResumeLayout(false);

		}
		#endregion

		/// <summary>
		/// The main entry point for the application.
		/// </summary>
		[STAThread]
		static void Main() 
		{
			Application.Run(new Form1());
		}

		private void button1_Click(object sender, System.EventArgs e)
		{
		
		}
	}
}

Although the button has a click event, VS wires that event to a method on the form. And since this is the default, I've seen many apps where controller code is written inside the 'button1_Click' method.

Java Swing and Windows Forms are like brothers

They aren't, really. Did you know that constructing a control object does not actually create the underlying control, as in MFC? Here is a piece of documentation from MSDN:

The CreateControl method forces a handle to be created for the control and its child controls. This method is used when you need a handle immediately for manipulation of the control or its children; simply calling a control's constructor does not create the Handle.

You can find it here: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemwindowsformscontrolclasscreatecontroltopic.asp

I think there's an excellent solution: build a stack of turtles!

Layer your programming and applications teams. Hire a few Haskell guys to write DSLs and other tools for the Java guys. If some of your Java guys become dissatisfied with their understanding and pay rate, encourage them to train themselves up to Haskell quality.

This already happens to me with web development. I write scripts in Python, and other people call that code from inside HTML pages. Those other people are programming just as much as I am, just not at the same level.

This results in multiple layers, but as long as each layer understands that the layer 'above' them counts as a customer with needs to be met, it works out fine.


I think this is the Languages for Smart People / Languages for the Masses discussion, but people haven't yet stratified the solutions.

Personally, I think there should be languages simple and unsubtle enough that the average McDonald's employee is able to use them to assist parts of their work, and languages subtle and complicated far beyond Haskell. All of those areas have problems that can be simplified with the appropriate programming languages.


One example that just occurred to me is Perl6 implemented on top of Haskell. Someone who can write Perl6 code may never understand Template Haskell or Generalized Algebraic Datatypes, but they can surely spot a problem or inconsistency in Perl6 and ask someone else to fix it.

Good!

I think there should be languages simple and unsubtle enough that the average McDonald's employee is able to use them

Yep! Languages that don't involve explicit computation. SQL, CSS, .htaccess. No explicit computation == no subtle bugs.

The hard part is factoring those languages out, and wrapping them up so that no loose ends stick out.

What if SQL were, e.g., a mini-language on top of Haskell? The users would have to deal with Haskell's error messages. Better to leave SQL in string form and write specialized parsers -- heavyweight, yes, but this gets us some portability across host languages and databases.

My term for this phenomenon is "macro system". If system A is embedded in system B and can't be fully understood (errors, corner cases etc.) without first grokking system B, I call system A a "macro system". A simple example: C preprocessor macros. An even simpler example: when you construct a Regexp object in Java, you need to escape backslashes, because Java string literals use backslashes as escapes too... so, how do I match a literal backslash?
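
To spell out the backslash puzzle: the regex engine wants the two-character pattern \\ to match one literal backslash, and each of those two backslashes must in turn be escaped in the Java string literal, so the source code ends up with four.

```java
import java.util.regex.Pattern;

public class BackslashDemo {
    public static void main(String[] args) {
        // A string containing exactly one backslash character.
        String oneBackslash = "\\";
        // Four backslashes in source -> two characters in the string
        // -> one escaped backslash for the regex engine.
        System.out.println(Pattern.matches("\\\\", oneBackslash)); // prints "true"
    }
}
```

Which is exactly the "macro system" problem: you cannot write the pattern without simultaneously reasoning about the host language's own escape rules.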

In summary, layering may be good, but layering with macro systems (and embedded DSLs) is very hard to get right. Better go all the way and separate the DSL from the host language.

HaskellDB is SQL on top of Haskell.

The HaskellDB library for Haskell is a database 'unwrapper', meaning that it exposes more of the set theory behind databases... I wrote a long description of the good points of HaskellDB, but I'll just put that off-topic bit into my blog and stick to the subject...


I think SQL makes an excellent mini-language on top of Haskell, but I agree that doesn't make life easier for non-Haskell languages.


For the layered systems I described, I did mean DSLs that appear to be completely different languages. Look at Perl6 on top of Haskell for example.


I wonder if your point about explicit computation implies that dataflow programming might be easier for non-professional programmers? I'll have to think about that.

End user programming

Kimberley Burchett summed it up pretty well:

avoid computation

avoid state

avoid indirection

Maybe it's dataflow. I think not. My personal hunch is that end-user programming should avoid the notion of time; Excel, CSS and SQL are very "static" artefacts. Time is too complex even for professional programmers.

As for HaskellDB, yes, I'm well aware of that. The main point is, it doesn't make life easier for non-Haskell programmers.

I *heart* HaskellDB

*Further OT warning*

As shapr might already know, I love HaskellDB. To me, it is what SQL should have been. I still have some issues with HaskellDB, but those are more difficult to solve and would involve serious engineering; in particular, extending the types and functions into the database layer directly. I've always thought that the biggest problem with SQL was the complete dominance that it exerts over you. You think you're using it to solve a problem, but you end up reshaping your problem to fit it instead.

I like it too

But I suspect that's the Haskell in me talking; making DSLs or combinator libraries in Haskell is so much fun that I think everyone should do it. In particular, it helps you to find the fundamental pillars on which your entire system/problem field rests.

I have two issues with the concept:

Writing a good DSL is hard. Really hard. Much harder than coming up with a good library (in the sense of other languages). A DSL is embedded, and furthermore, it needs to be extensible; the L bit of the name is a clue that it needs to do things the designer didn't think of, and that's a piece of art that I would trust very few people in the world to pull off.

Using a (good) DSL is easy, too easy. Okay, that's just baiting and self-indulgence, but I will explain. I personally think that a DSL is usually created to do one thing, and one thing well. Most DSLs have fairly narrow uses, and it is obvious that they are designed with certain use patterns in mind. The problem is that the effort doesn't pay off. I can get lots and lots of people to use a DSL; there are plenty of people capable of being a MacDonalds employee. However, I don't have enough real problems to solve that I can use them. But solving these problems will need one, or even two or three, DSLs -- at which point hiring one guru per DSL isn't economically sensible if I only need one Macki-D employee to use them to solve the problem. There is a numbers inversion. The tools are so good that everyone can use them; but they cost too much to get up and running, compared to piling on the "certified" programmers.

The case of Perl6 is far enough in the other direction (and, let's face it, it was never going to get started in Perl5...) that it is worth the effort. But, for instance, I need something to help with planning pharmaceutical QC testing -- I'm not sure that the market would be large enough to make it worthwhile.

On DSLs

Most DSLs have fairly narrow uses, and it is obvious that they are designed with certain use patterns in mind. The problem is that the effort doesn't pay off. I can get lots and lots of people to use a DSL; there are plenty of people capable of being a MacDonalds employee. However, I don't have enough real problems to solve that I can use them.

What about web applications? In my opinion, web applications are a great domain for the use of DSLs. They are typically short-lived and yet abundant. You most certainly wouldn't want an expert programmer writing web applications, because experts are too rare and too expensive. Seems like a domain where the layered approach would work rather well.

Now, I'm young and fairly ignorant, but I'm sure there are other domains where using a layered approach, as Mr. Erisson proposed, could be very beneficial.

The most successful DSLs are seldom recognized as such.

For example, in the domain of web programming--HTML and its follow-ons.

It's a DSL which has been so successful that it is seldom recognized as such. It is one of the key fabrics of the Web; it's one that many non-programmers (folks who don't write code in general-purpose languages) can use, and it's also successful as a target for higher-level authoring tools.

There are many, many, many other successful DSLs out there, but many of them seem to be ignored by the PLT community, especially those whose business is DSLs and metaprogramming. Why?

The vast majority (indeed, almost all) of successful DSLs out there are standalone DSLs. Such languages have no dependency on a host language--and can be implemented in any language you like. Many are commonly implemented in things like C/C++ or Java--languages considered to have poor meta capabilities--using standard PLT design techniques. In other words, they're implemented in the same fashion as a general-purpose language compiler: from scratch. None of 'em, except possibly for lex and yacc, are intimately tied to any host programming language--and none of them are typically implemented using macros, metaprogramming, or any of the other PLT features advocated for DSL construction. Virtually all of them are implemented using a separate standalone interpreter--their implementation consists of a front-end parser and a semantic engine of some sort. And parsers are FTMP a solved problem in computer science, which we've long known how to automate the construction of.

Of course, there may be a large number of successful language-hosted DSLs out there in industry and academia; but many of them are proprietary--and intended for programmers, rather than for Web authors and such. For languages intended for domain experts rather than programmers, dependence on (or tight integration with) a host programming language is probably a disadvantage.

A second attribute of DSLs which are popular among non-programmers is that they're declarative--which is why, I suspect, that many people don't consider their use to be "programming". But that topic is largely orthogonal to how the DSL is implemented.

While the theoretician and PLT geek in me thinks languages with advanced metaprogramming capabilities are cool, the practical industrial programmer in me realizes that nothing requires me to use my primary development language for DSL construction. So the lack of good meta-capabilities in, say, C, is not necessarily a disadvantage--I can implement the parts in C that need to be implemented in C for whatever reason, and do my DSL in something else. Whether a convenient HLL (Perl is a popular choice in industry for this sort of thing), a standalone macro processor (a good one, like m4), or tools like lex/yacc--there are lots of choices.

Double Standard

hate Java with a passion for the mindset that it promotes, where coders become nothing more than LoC mincers. It saddens and angers me that people are wasting time learning it, because there is nothing in Java (or C#) that is new; and old in them are botched and bungled...Most programmers are just Java monkeys...Which brings me back to Java. It, as a language, sucks. Its one redeeming feature is that it is a lowest common denominator. I could get my English majoring friend to learn it. At first glance, the language itself is poor as hell to manage complexity.

If we were to s/Java/Scheme|Haskell|Erlang/, would this post have seen the light of day? I'm no Java apologist, but this is the kind of double standard that annoys some people. If the above comments count as "technical criticism" as opposed to "rhetoric", then perhaps I should find a different forum to read.

So ask for details.

You have a good point: what do you like or dislike about Java? Why would Haskell, OCaml, Scheme, or any other oft-mentioned LtU language be a better replacement for anyone?



I'll post my own answer soon, I just have to finish some other stuff first...

My personal reasons

It is basically that it actively stops me doing anything someone else didn't think was a good idea. C++ is almost the same, before the generics/template metaprogramming people showed us the way. Lisp and Haskell obviously excel in this area, but then they are considered too abstruse by the average programmer. With great power comes great responsibility, as a wise man once said. In my experience, responsibility is the hardest thing to master. Code which (one hopes) will do more greatness than one can imagine at the moment carries proportionally more responsibility. Flexibility of the kind Lisp and Haskell offer gives with one hand and takes away with the other. I guess a central question I have is: how do we find a middle ground? Or even a gradient that will be comfortable? It may be useless for us to discuss this; if you can understand monads and transformers, you probably can't feel the difficulty gap between recursion and iteration. We can, however, come up with tests and criteria, so that sampling "the masses" will come back with a conclusion. Otherwise, in these empty philosophical wranglings we will be no better than those before Galileo, who decided the laws of physics by which ones sounded more probable.

Missing the point?

The technical criticisms of Java are well-understood by most of us. What David was pointing out, is that your post is full of flamebait phrases like:

* X "sucks"
* X (programmers are) "LoC mincers" and "monkeys"
* X is "botched and bangled"
* X is a "lowest common denominator".

None of those comments says anything constructive or useful; and the first two are rather inflammatory--especially the suggestion that most industrial programmers are "monkeys". (The industrial programmer might retort that FP advocates are "eggheads" and such; that would be equally inflammatory).

Your post does include valid technical criticisms of Java as well (as well as some personal observations that, while strictly anecdotal in nature, do not rise to the level of flamebait).

At any rate, there still remains the fact that most industrial programming shops do not consider things like Haskell, ML, or Scheme to be suitable tools for their development (whether in-house, or for software which is sold to others). Much has been said about why this state of affairs exists, ranging from accusations of widespread industrial incompetence to accusations of poor-quality tools from the "academic" community. And as has been mentioned several times in the past, there is quite a bit of mutual distrust between academia and industry in our discipline, far more than is healthy. Rants about "code monkeys" and such don't help matters much.

I should note that virtually every online forum in which programming is a topic, from LtU to slashdot to cliki to c2 to comp.object, will contain similar rants. It seems to be a curse of the discipline that the vast majority of programmers consider themselves, like the children in Lake Wobegon, to be above average. And in all of these forums, it is often proposed that tool choice is an excellent way to separate wheat from chaff--the only point of disagreement is which tools are wielded by the masters, and which by the "monkeys".

I think I missed my own point...

As David points out earlier, there is a divide between the costs of Scheme divas and engineering projects; and, as you point out, many wish for there to be a way to separate the classes. I disagree with David on that point, and I agree with your point (that many propose so), but disagree with them (as perhaps you yourself do, if I read your implicit criticisms correctly).

David points out that you hire artists to build "expensive buildings" and engineers to build strip malls, in the same way that you get Scheme divas to build expensive and pretty software and Java engineers to build business software. However, I feel that there is a subtle and important distinction. In the former, the aesthetics are just that -- aesthetics. They don't affect the purpose of the building, which is presumably to house something. They are just status symbols and sociological features. In software, there is a subtle, but I think, important link. In opposition to David's point, software _is_ complex; I stand by my contention that it is more complex than other engineering fields. Part of this complexity is the need to change, to flow with the shifting demands of customers and markets. When changes are needed, an understanding of the system as a whole is required, and understanding something so complex is difficult, if not impossible. So things like "elegance" subtly encode important features of the system, like orthogonality or symmetry. The link is thin, but I think the intuition of an "artiste", as David calls them, is the finest gauge of the dynamic potential of a software system that we have. So the first question is: does LtU have any idea whether such a measure can be established with something more scientific than the wax and wane of the divas?

On the differentiator between the masses and the geniuses: I think there is a divide, but it is porous. I went through PLs via C, asm, Prolog, Java, C++ (which, incidentally, I thought was an improvement over Java), Haskell, Python, and then just everything under the sun. No one stands still. Everyone is a dynamic entity that flows through the world. The supposed divide appears to me to be, then, a sort of rite of passage to "adulthood"; the question is then whether this is optimal. I think that it is not. Something like Java constrains a person's development; my anecdote is getting a friend to understand C++ destructors after lots of Java experience. It took several months for him to see how destructors are useful for more than mere memory management. At the risk of sounding like a ponce, I hate Java because I think it kills the birds that could fly. You never know how far a person can go without letting them try. The barrier of a language change, especially the first one, is mostly syntactical, and is enough to put many off; therefore I think it is most important that there can be a gradient within a single language, one that fosters development of the developer. So the second question to LtU: is there such a language? If not, what is missing from existing languages such that there could be one?

Re: software is harder

"In software, there is a subtle, but I think, important link. In opposition to David's point, software _is_ complex; I stand by my contention that it is more complex than other engineering fields."

Hear, hear! So maybe we should be looking for super smart and experienced and balanced developers who at the same time have the smarts to stick to e.g.: simple Java? That way things don't get out of hand, and there's a chance that the system can be maintained by mere mortals in the future.

Absolutely not

My entire point is that we need tools to help us manage the complexity, not that we all need a career change short of an IQ hike. This is looking from the point of someone who finds understanding usual software systems difficult, and so would be a willing client of such a technology. Limiting the workforce is limiting the number of problems that you can tackle at the same time as an industry; and it would not be feasible anyway. Maybe it is a little overbroad to state that software engineering is more complex than all other engineering fields; rather it is more complex in one specific way, and that is the layers of structure in a typical project. The very non-physicality of software means that we are not constrained by physical limits (in a practical way, short of exceeding the entropy limit of the universe). We naturally build on top of existing, tall structures, and we do so as much as possible. I would like my language to help me do that, and not be a pain that I constantly work around.

Engineering complex systems

I stand by my contention that it is more complex than other engineering fields. Part of this complexity is the need to change, to flow with the shifting demands of customers and markets. When changes are needed, an understanding of the system as a whole is required, and understanding something so complex is difficult, if not impossible.

Have you ever been involved in design projects in other engineering fields? I have. And I can state with certainty that other fields certainly have to deal with changing customer and market demands. Whole-system understanding is also needed in those fields. Gaining that understanding is hard. I'd argue that in some ways it's harder than many software projects, since other fields have to deal with complex, implicit, non-orthogonal interfaces (such as thermal conduction paths within an electrical circuit) that simply don't exist in a software context (or can be designed out by an "artiste").

Getting this somewhat back on topic: I don't believe that there is anything inherently more complex about software than other engineering fields, but I do think that poor language design can make things more complex than they need to be. Providing language constructs that permit separation of concerns, compositional construction of larger components from smaller ones, and abstraction facilities that really hide internal implementation details make the management of complexity much easier. Is language design all of the solution? Of course not - even mature engineering disciplines struggle with the management of complexity. But it can help a lot.

...therefore I think it is most important that there can be a gradient within a single language, one that fosters development of the developer. So the second question to LtU: is there such a language? If not, what is missing from existing languages such that there could be one?

Well, I believe that DrScheme steps through several different "language levels", permitting a step-by-step learning process.

A couple of observations

I think at this stage it is accepted that languages can help manage complexity, but also that they are not the whole solution. Other aspects of the problem, like SE methodologies, requirements engineering, etc., aren't dealt with at length on LtU, due to our focus, but are obviously also important.

Second, while solving the problem of complexity still eludes us (since "there's no silver bullet"), researchers are actively trying to find better approaches and tools. I don't think that the fact that many ideas are being explored by experimental tools implies that academia is out of touch with reality. Many of the good ideas later turn up in industrial-strength tools. But this is, of course, a process, and it takes time. Taking a snapshot isn't enlightening: you shouldn't compare what researchers are doing today with current languages. It's more instructive to see what researchers did last decade and how it influences current-day tools. The recent discussion of C# 3.0 is a case in point, as are many other interesting projects we discuss that are along the continuum between pure research and applied tools (e.g., C++ metaprogramming, JMatch, etc.).

Finally, I don't think the things to study are "paradigms" (FP vs. OO etc.) but rather more specific subjects that affect the management of complexity. I suggest searching the LtU archive for things like "module systems" and "module languages", expressive type systems (e.g., GADTs), etc.

Complexity

There is one main aspect in which we stand out, which is that we deal with more complexity than any other field.

I hear this assertion a lot. I've yet to see any real proof that it's true. You're honestly trying to tell me that the majority of software out there is "more complex" than a modern super-scalar processor? Or that it's more complex than a communications satellite? Or an office building? I'm willing to stipulate that some software is more complex than the above. But when I consider the complexities of structural design, plumbing, AC, lighting, electrical conduits, etc., of a modern office building, I have a hard time seeing how most software is more complex. Similarly, the complexity of a modern spacecraft design is equal to, or greater than, that of many software products. So I'm at a loss to see where these assertions of greater complexity come from.

Is, for example, MS Office complex? Sure. But significantly more complex than other large engineering endeavours? I doubt it.

Complexity??

I would like to take a wild guess at what is going on here. In other forms of engineering, design and implementation are separate. This is necessary because the idea (the thing as it should be) is easily separated from the thing itself (i.e., the office building). Engineers wouldn't dare to implement anything without complete drawings of what they are going to build. This normally covers every piece of wire, what lugs to use on the ends, and even nuts and bolts. Engineers traditionally do this on a drawing board, but nowadays have wonderful computer-aided tools.

It seems that software engineering has not evolved to the point where it can do the same thing. You can't sit down and design something that has a high probability of working. I suspect that this has something to do with software itself: the idea and the implementation get completely mixed up. Using the above analogy, this might be addressed by better "languages" and abstractions. Electrical engineers work with a definite set of things such as resistors, capacitors, wires, transistors, etc., and they have a language for how these things go together. The same is true of other engineering fields. But software languages don't do this. It is a bit of a puzzle actually.

Abstractions

Electrical engineers work with a definite set of things such as resistors, capacitors, wires, transistors, etc., and they have a language for how these things go together. The same is true of other engineering fields. But software languages don't do this. It is a bit of a puzzle actually.

I think you may be on to something there. And perhaps that's why Java is so popular for "large scale" projects: it provides a defined set of standard components, and a language for combining them. The "language barrier" in moving from Java may have less to do with syntax, and more to do with the fact that other languages have different components, with different semantics, and different ways to combine those components. Some "components" are fairly standard across many languages (while loops, for loops, etc), some are standard across e.g. "functional languages" (map, foldl, foldr), but higher-level constructs are less well-standardized. I suspect that the patterns movement may help with this some, as will the gradual adoption of ideas from functional languages into more "mainstream" languages.
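
As a concrete data point on that adoption: since Java 8, map and fold live in the standard library as stream operations, so the "functional-language components" mentioned above are now standard Java components too. A small sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FoldDemo {
    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4);

        // map: square each element
        List<Integer> squares =
            xs.stream().map(x -> x * x).collect(Collectors.toList());

        // foldl: a sum expressed as a reduction with an initial accumulator
        int sum = xs.stream().reduce(0, Integer::sum);

        System.out.println(squares); // [1, 4, 9, 16]
        System.out.println(sum);     // 10
    }
}
```

The vocabulary differs (map/reduce rather than map/foldl), but the components and the way they compose are the same.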

Not a puzzle

It's not puzzling at all. There's nothing surprising or mysterious about Java's popularity if you have had to work in the coding trenches for any amount of time. People building Chevys go to the parts bin, grab a handful of standard-size bolts and nuts, and crank away at a manufactured car body. People using Java go to java.sql.* or javax.* or com.favlibauthor.* and grab a handful of standard components and crank away at their manufactured business app.

People building Ferraris fire up the machine shop and build custom components to exacting specifications, putting everything together by hand. The product is a very fine, very fast, very high quality, and very expensive car. People using Haskell or Erlang or O'Caml fire up their algebraic types and their higher-order functions and build many things from scratch because there isn't a convenient yet mediocre-quality library to do it for them. The result is beautiful, powerful, high-quality code that is also very expensive. C++ is somewhere in between. You typically build things like Camaros and Mustangs with it...they aren't always the prettiest cars on the road, but with some tweaking and getting your hands fairly dirty, you can make them pretty darned fast.

Popularity has little to do with fundamental capability and a lot to do with standardized components. Nobody wants to reinvent wheels any more. The average programmer really doesn't give a rip if you can transform a loop into a foldr. What he cares about is whether you can perform a file transfer in two lines with a prefab FTP library, or pop an embedded browser with a minimum of effort. This is part of the disconnect between academia and industry. Writing a loop takes seconds, minutes if it's really tricky. Writing a library takes weeks or months.

Still a puzzle?

Haskell doesn't have all the libraries Java has, that's true.

But it's fifty-two lines for the tutorial version of the embedded browser and eight lines for a simple FTP transfer.


Maybe the problem is that people don't know which libraries exist?

That's soon to be fixed too, there's an apt-get style tool for Haskell that works, but isn't finished.


I won't claim to be the average programmer, but I am self-employed and self-taught. I do not have formal CS education, but I have gotten paid for a couple of Haskell contract jobs so far, and I expect to get paid for more in the future. I believe that the Haskell/Erlang/OCaml approach is the best way to deliver value to my customers, and I am putting my money where my mouth is.

Actually

That is quite possible.

I must admit that, thanks to this discussion, I'm coming to revise my prior judgement -- that is, my belief in my own superiority w.r.t. Java programmers.
Of course, in my vanity, I still consider myself superior. But now it's because I have experience with several mindsets :)

More seriously, I'm thinking of current university students, who are being taught nothing but Java. I concur that Java is important, for pragmatic reasons, one of them being the notion of "components" you underline. If they leave university without knowing that any other set of components exists, they stand the chance of remaining code monkeys. Not that Java is bad, mind you, just that they will lack scientific culture.

Beating the Averages

If you haven't already, I suggest reading Paul Graham's article, Beating the Averages.

Jim