Does Visual Studio Rot the Mind?

A long and interesting rant by Charles Petzold.

Obviously this is mostly about the IDE side of things (seeing as VS is an IDE).

Some of the features that VS provides are intended to overcome the huge size of the standard libraries, and you might argue this isn't really a language issue. At some level this is indeed a valid argument. However, I think we should pause every once in a while and wonder whether better programming language abstractions might make modern programming - with GUIs, XML, etc. - easier, and eliminate some of the need for huge libraries. In fact, one can argue that LINQ (Cω) is a step in this direction, as regards data access.

It is also worth noting that when IDEs influence the way programming is done, they influence the way languages are used, and thus influence the design space. Programmers demand new language features partly as a response to their experience with the language as it is used in practice.

Finally, the impact on teaching and learning programming shouldn't be overlooked. Students naturally want to produce cool GUI applications and use VS. If this makes it harder to introduce them to different programming techniques and languages (and I think it does), this can be highly problematic.

Here are some choice quotes from Petzold (who also said that the whole history of new programming languages... for Windows has involved the struggle to reduce the Windows hello-world program down to something small, sleek, and elegant):

To get IntelliSense to work right, not only must you code in a bottom-up structure, but within each method or property, you must also write your code linearly from beginning to end — just as if you were using that old DOS line editor, EDLIN. You must define all variables before you use them. No more skipping around in your code.

If we select a new project type of Windows Application, for example, and give it a name and location on a local drive, Visual Studio generates sufficient code so that this project is immediately compilable and runnable... Somehow, we have been persuaded that this is the proper way to program. I don’t know why. Personally, I find starting a program with an empty source code file to be very enjoyable.

This is the file in which Visual Studio inserts generated code when you design your form. Visual Studio really doesn’t want you messing around with this file, and for good reason. Visual Studio is expecting this generated code to be in a certain format, and if you mess with it, it may not be able to read it back in the next time you open the project.

This bothered me because Visual Basic was treating a program not as a complete coherent document, but as little snippets of code attached to visual objects. That’s not what a program is. That’s not what the compiler sees. How did one then get a sense of the complete program? It baffled me.

If Visual Studio really wanted you to write good code, every time you dragged a control onto your form, an annoying dialog would pop up saying “Type in a meaningful name for this control.” But Visual Studio is not interested in having you write good code. It wants you to write code fast.

Redesign the IDE for FP?

I can't help but think that some of the difficulty of 'shoe-horning' a functional language into modern IDEs stems from this inherent problem: they abstract the wrong elements of a language. Trying to force linear program construction on a lazy language might just be asking for trouble. I can't say I have a wealth of experience using FP outside a text editor, but what I have done was never enjoyable.

"Forcing"

Trying to force linear program construction on a lazy language might just be asking for trouble.

When Petzold is saying that IntelliSense "forces" you to write code linearly, he isn't being literal. IntelliSense is the system in VS for providing context-sensitive information and auto-completion. All he seems to be saying in that particular part of his rant is that IntelliSense can't offer auto-completions of variables that don't yet exist. :)

Predictive Types, Good Guesses

His real problem seems to be IntelliSense's aggressiveness. I would certainly expect to have free variables in some way highlighted by my IDE; IntelliSense doesn't highlight them, it tries to stop them.

Eclipse is less aggressive, using ctrl-space (rather than just space) for autocompletion. I use the Eclipse feature sparingly, mostly after the field/method reference dot.

On a slightly different tack, I don't see why the type of an object couldn't be inferred in many cases, particularly when an explicit allocation or cast is present. The IDE could insert a declaration with a single click. (I also don't see why the IDE couldn't run a points-to analysis and tell me the potential types of a variable. Okay, maybe I do see why.)

Already exists...

On a slightly different tack, I don't see why the type of an object couldn't be inferred in many cases, particularly when an explicit allocation or cast is present. The IDE could insert a declaration with a single click. (I also don't see why the IDE couldn't run a points-to analysis and tell me the potential types of a variable. Okay, maybe I do see why.)

I'm pretty sure this already exists in Eclipse, and I know it exists in IntelliJ IDEA. Look for something called "Introduce Variable" and make sure you've got it bound to some decent key combination. It's been a long time since I've had to manually type a local variable declaration when working with Java. So long that I no longer really see the point of putting static type inference in a language, rather than into the IDE, but that's another story.

At least part of the point fo

At least part of the point for me is being able to change a couple of statically declared types and watch all the changes propagate through, where I've left it to the typechecker.

Just wait...

I expect that what you're describing will be a standard automated procedure in refactoring IDEs within about 3-5 years.

Being unable to write code th

Being unable to write code that doesn't compile will take away all the fun of programming...

Surprisingly, no.

I'd have thought so too, but I don't get more than one compile failure a month nowadays, and it hasn't lessened my enjoyment at all. You're right that a type change is about the only way I get uncompilable code. The only other way I can think is via changes to exception signatures, which can produce unreachable code at locations distant from the change. That's why I figured type migration was on some tool vendor's short-list for automation (possibly mine).

Which means...

You should translate all exceptions to RuntimeException, and get rid of those pesky exception signatures. ;>

Bottom-up structure

As of VS 2005, the IDE doesn't "force" you to write code from the bottom up. You can write a call to a method that doesn't exist yet, and with a couple clicks, the IDE will infer its return type and parameter types and write a method stub for you.

same as Eclipse

same as Eclipse

Misses the point

Here's a different viewpoint... you don't need a computer to program; most of the exercises in Knuth can be attempted with paper and a pencil. Programming is about selecting algorithms and designing data structures, and this is something that no IDE can (AFAIK) help with. IDEs help with the 'plumbing' involved in complicated APIs, but they do not appear to help in any way with programming. (With regard to the comment above about inferring types - should this not be a feature of the language and not the IDE?) You could even argue they hinder the development of good debugging skills by reducing the need to think through the problem...

This suggests that as tools for working programmers IDEs are useful, but in teaching programming they don't really help. Perhaps if a language could be designed with a very small API, whilst retaining functionality, the IDE would be redundant.

I suppose this depends on your views regarding whether you are teaching programming as a vocational activity, or as an academic study.

I agree with that, and I thin

I agree with that, and I think that essentially what this is leading to is that the writer seems to be blaming people for not learning a language. The problem with this is that VS is not designed to teach, it's designed to help. I certainly don't feel that it "rots" one's existing PL knowledge, but it would be fair to say that it lowers the level of knowledge that one needs to get a working system, and this could discourage people from learning in the first place.

yes, but is this any diffrent

Yes, but is this any different from a PL? A PL makes it much easier to get a working system than writing in pure machine code. And it certainly discourages people from learning about machine code. Isn't this the point of abstraction? To not have to know things?

Programming is programming no

Programming is programming no matter what language you use. You must select the correct algorithm and structure the data no matter what language you use. If a language makes reading/understanding the algorithm easier, or makes constructing a data-structure easier (or makes the structures more readable) - then it helps make programming easier. Higher level languages make this easier by abstraction (variables are easier than register allocation and memory addresses for example).

Do IDEs make any of this easier? I don't see how an IDE helps you choose an algorithm or design a data structure.

What an extremely odd view...

Programming is about selecting algorithms and designing data structures, and this is something that no IDE can (AFAIK) help with.

I would guess that less than 5% of an average working programmer's time is spent selecting algorithms or designing data structures. For junior programmers, I'd guess it was more like 2%. Programming as a profession is no more "about" algorithms and data structures than surgery as a profession is "about" choosing which scalpel to use.

IDEs help with the 'plumbing' involved in complicated APIs, but they do not appear to help in any way with programming

Good IDEs automate what can be automated, and do so with fastidious attention to usability. The usability issue is a lot more difficult than the automation side, particularly if the IDE targets professional programmers. The article's issue seems to boil down to a handful of places where Visual Studio's authors committed some fairly bad usability bugs. Why he seems to tag all IDEs with these bugs when more modern IDEs than Visual Studio don't exhibit them is an issue to take up with him.

IDEs help with the 'plumbing' involved in complicated APIs, but they do not appear to help in any way with programming.

What you refer to as 'plumbing' is a large part of what professional programmers actually do for a living. The even larger part carries the dismissive title of 'maintenance', and that's where modern IDEs really shine, with completion, navigation, automated refactorings, layout, highlighting, error analysis, test support, and build support. "Programming" as you are using the term needs none of these, but to someone actually trying to make a living maintaining large systems these can be amazingly powerful tools.

I suppose this depends on your views regarding whether you are teaching programming as a vocational activity, or as an academic study.

For teaching, I would suggest it is more that much of the teaching of programming teaches the wrong things.

Vocational...

As I said, it depends on whether you see programming as a vocational activity. I would argue that it is never a good idea to train people for particular jobs (as the jobs may well change before people have even finished the course).

The point of education is to teach people to think independently and be capable of learning on their own. You want to give them the tools so that no matter what job they get they can identify what they need to know and learn about it independently.

In this case I would expect people to be taught how to program in an abstract sense, so that they can pick up whatever language and APIs are required on the job.

But I guess this is slightly off topic... so I will agree that if you want to teach programming as a vocation then IDEs should be taught... as they are what people actually use.

However, to teach people to program in a 'future-proof' way, the IDE is largely irrelevant to the academic understanding of programming.

A final point - If one only spends 5% of one's time designing software then I don't have much confidence in its quality. After discussing the requirements of the customer, do you just sit down at a keyboard and bash out API plumbing? What about multi-threading, deadlocks, workflow (for the customer), internal data-structures, objects (UML?), database tables? All these things need designing, and in my opinion this is the real work involved in programming. Once everything is specified, and the architecture established, this implies the algorithms and data structures have been designed and chosen. The final stage of implementing the specification on the chosen platform using the selected APIs is really just a mechanical transformation (often left to junior programmers)...

An average programmers workload...

A final point - If you only spend 5% of your time designing software then I don't have much confidence in its quality.

First, that's unnecessary and tendentious. Consult the posting guidelines before trying that again.

Second, here's what goes on in the other 95% of an average programmer's professional day that isn't spent on algorithms and data structures:

Unit test development. Functional test development. Integration test development. System test development. Performance and load test development. Test data creation. Test execution. Reviewing your teammates' code. Helping your teammates review your code. Reading and evaluating legacy code. Defect analysis. Defect resolution. Defect tracking. User interface design. Vendor evaluation. Vendor integration. Build automation. Test automation. Installation/deployment automation. Configuration management. Documentation. Training and professional development (if you're lucky). Requirements gathering. Requirements analysis. Prototyping. Refactoring. Tool selection. Tool development. Protocol design. Schedule estimation. Porting. Recruiting (again, if you're lucky). And finally, and most importantly, communication. Communication with customers, communication with your teammates, and communication with your management.

When I put it that way, I can't help but think that my 95% estimate was low.

Now you can dismiss all these as merely "vocational" in nature if you want, but I would assert that doing so has had an incredibly pernicious effect on our industry. It's not so much that the skills behind these activities don't get taught in schools. All that means is that they get taught in workplaces instead. That raises costs and defect rates, but only by some constant factor. The really pernicious part is that these activities barely get studied academically, and studies of other aspects of computing are weakened by ignoring effects in these areas. Programming language studies in particular have been horribly stunted by ignoring these topics, in favor of data structures and algorithm implementation.

Bringing this back on topic, a lot of what good modern IDEs provide is support for some of that 95%. I fully expect that IDEs will be doing even more of this over the next few years. We are starting to see commercial IDEs with communications tools built in (code-aware chat, shared editing and debugging sessions, team time tracking), and I would be very surprised if we didn't see issue tracking, requirements tracking, and code review tools built into IDEs take off soon.

It's not so much that I believe that teaching should be done with an IDE, but rather that these aspects of the profession should be taught and studied academically, and that the need for doing these things should inform other parts of the study of computer science. It's not like the underlying necessity for doing those things is going to change anytime soon.

The final stage of implementing the specification on the chosen platform using the selected APIs is really just a mechanical transformation (often left to junior programmers)...

I begin to expect you're just trolling. That sort of development structure has been out of style for nearly twenty years, mostly because no one ever really got it to work. The keywords to google for are "iterative development".

I certainly don't dismiss tha

I certainly don't dismiss that long list of stuff you gave... I think that's all important, and should be taught (and I don't see where I said it shouldn't)... But it's not necessarily programming... perhaps systems integration or something. I think there has been a lack of understanding here. I originally maintained that the majority of programming is something that is not "plumbing" and therefore something that IDEs cannot help with - you suggested that "plumbing" takes up 95% of a programmer's time... I suggested that IF you only spend 5% of your time on activities that are not "plumbing" (i.e. require creative thought) then the programs are likely to be bad... (I did not mean to imply that the programs you write are bad, merely that if what you seemed to be suggesting were true then they were not likely to be good)... but then you list a whole range of activities which require thought and are not really helped much by the IDE (AFAIK) - these activities fall within the "non-plumbing" activities, and as far as I can tell you strengthen my case by listing them...

I am perfectly happy for you to define programming how you like - this is just how I would describe the act of programming. What you are describing is more like the broader topic of software engineering. I feel you have taken issue with my use of the word "programming" without addressing the point (that IDEs only help with a limited sub-field of the discipline, and in no way help with the creative parts that require thought, although they obviously do help automate repetitive mechanical tasks. But would we expect anything else without invoking some kind of strong AI in the IDE?)

I am certainly not dismissing anything as "vocational" - vocational training is very important - and not something to be regarded as inferior.

I merely meant these techniques (like unit testing etc.) should be taught in a generic way that does not tie the student to any one company's product.

As for trolling, not at all... However I should point out that I come from a hardware VHDL background, and as such a mistake in rolled-out code can cost over half a million dollars - maybe if more of the software industry had that kind of penalty associated with bugs they would find a way of making design by specification work... (yes, there is still iterative design, but each iteration must be tested and found reasonably correct before the masks are made - of course mistakes still happen, like the Pentium division bug)... There are also some markets for software where specifications are essential (control systems for nuclear power plants or rockets)...

I guess we just come from different programming backgrounds.

Speaking of limiting...

Maybe you didn't mean to be dismissive, but calling what Java or C# programmers do "plumbing" or "proprietary" or "a limited sub-field of the discipline" sure sounds like it, considering what a large part of the industry it is. This isn't criticism based on experience like Charles Petzold's article. It sounds more like excuses to avoid learning something you haven't tried.

Yes, using an IDE exclusively can be limiting. But so is using text editors such as emacs exclusively. I'd expect a well-rounded CS major to have used either C# or Java with a modern, refactoring IDE, and also to have programmed in languages for which no IDE is available. If you've only done one or the other, you're missing out.

Very confused

But it's not necessarily programming... perhaps systems integration or something.

And yet, it's the vast bulk of what working programmers do, and a lot of it is what modern IDEs are designed to support. I'm not sure what value there is to limiting the term "programming" to mean "data-structures-and-algorithms". Outside of academics, there's basically nobody who spends their time doing "data-structures-and-algorithms". Indeed, outside of academics the whole "data-structures-and-algorithms" part of programming is viewed as basically a solved problem, for all but the most rare and complex cases. On the other hand there's a boatload of people who do all of the things I listed, do them creatively and well, and do so with the help of modern tools. They call themselves professional programmers.

but then you list a whole range of activities which require thought and are not really helped much by the IDE (AFAIK) - these activities fall within the "non-plumbing" activities, and as far as I can tell you strengthen my case by listing them...

Some require creative thought, some don't (I'm not the one who referred to them dismissively, or called them "plumbing"), and many of them are helped by modern IDEs. In particular, I would suggest that all of the test development tasks, defect analysis, code review, understanding legacy code, refactoring, build automation, configuration management, and prototyping are all strongly supported by current IDEs. At least for Java, some of these activities (refactoring, legacy code uptake, possibly defect analysis) have support in current IDEs that is so good that professional developers who do not use IDEs are at a serious disadvantage. Current industry trends are that inter-team communication, defect tracking, and requirements analysis will be strongly supported by IDEs in the next three-to-five years.


I merely meant these techniques (like unit testing etc.) should be taught in a generic way that does not tie the student to any one company's product.

I don't think anyone here would suggest otherwise.

There are also some markets for software where specifications are essential (control systems for nuclear power plants or rockets)..

Sure. It's worth noting, though, that developers in those markets spend even less time than average doing "data-structures-and-algorithms" work, and even more of their time doing the stuff I listed. For that matter, they use tools with levels of intrusiveness and power (and expense) that make modern IDEs look like EDLIN.

I have used IDE's and find so

I have used IDEs and find some of the code generation facilities useful (like intelligent copy and paste that renames variables or identifiers as you copy). I use defect/issue tracking - but it's not built into the IDE, nor does it need to be, as I am capable of having 2 different applications open at once.

And yes, a lot of programming is plumbing (and I do it too)... Have you never been frustrated by the amount of boilerplate and sheer typing that must be done sometimes to do even a simple thing? (If you think Java/C++ is bad, you should try programming in VHDL one day.)

I want to open a window that has a button that prints "hello world" when you click it - or something similar. You can imagine how the program works almost instantly - but it would take you much longer to type all the boilerplate necessary. This is what an IDE helps with, although I often use stand-alone code generators.

I want to model a customer's business process in UML, and specify the information to be associated with each object. This requires talking to the customer and thinking... (this is systems analysis, not programming - but either way the IDE cannot help here). Really, object modelling and selecting design patterns encompass both the algorithms and data structures part... Design patterns give you both a structure for the data and an inherent algorithm (often for accessing the data rather than some computation).

I need to fix a customer issue with the software, so I have to read and understand the code, and find the error. This is probably the second major area where most people find IDEs useful (you can use the symbolic debugger), but often I find well-placed conditionally compiled debugging statements and assertions are faster to use, provided the source already has the debugging statements and test cases. In fact, I would argue that a good test suite which allows modules to be tested in isolation, against either programmer-picked corner-case data or statistical testing, is better, and does not require an IDE. Also, the real work here is in choosing the test data-set, which AFAIK no IDE can help you with (although there are some code analysers that can pick out some (not all) of the 'corner-cases').

As for inter-team communication, defect-tracking and requirements analysis being current industry trends - when haven't they been important? These are things people should have been doing all the time - I know my company has been doing these things for years. But these are really software-engineering disciplines, and not programming. Software engineering, like all engineering, consists of the core discipline (in this case writing software, or programming) plus all the "engineering" elements (requirements, costing, scheduling, documentation, accountability).

It appears I am not alone in this, for example look at:

http://en.wikipedia.org/wiki/Computer_science

and

http://en.wikipedia.org/wiki/Software_engineering

Note that "programming" is only in four of nine subject areas in computer science (Maths, Theory, Hardware, Architecture, Programming, Information Systems, methodologies, applications, history, legal), which is itself only one of four fields in software-engineering (computer science, project management, engineering, application domains).

To summarise my position:
- A lot of programming is plumbing - and that's what an IDE helps you with... I have yet to see an IDE make a useful contribution to my documentation, test-data set creation, understanding of a team-mate's code, or any of the other things you listed. Your point that these form part of a software engineer's workload is not disputed... nor have I ever suggested that any of these activities should be regarded as plumbing, but you continue to reply as if I had. You would be much better off trying to show that an IDE can help with these "non-programming" elements of software-engineering. For example - how can an IDE help me in a code review... can it understand the code for me?
- An IDE cannot help you with the design aspects of programming - which is the challenging bit anyway...
- Programming is only part of what a "programmer" does for a living, and the rest, like requirements analysis and defect tracking, is not currently part of IDEs, nor should it necessarily be... I have no problems keeping a browser pointing at our web-app based issue tracker whilst programming.
- For education, the plumbing part is uninteresting; you want to teach people the challenging part of program design - which is the bit IDEs don't help you with. You also want to teach all the "professional" skills associated with an engineering discipline.

Outdated...

I'm thinking that a part of the issue is that you and I are comparing different things. I think your experience is with IDEs a step behind the technology curve.

Older "second generation" IDEs did attempt to do a lot of code generation. This turned out to be fairly useless, as the generated code was never that good, the generators never that easy to use (particularly in maintenance), and the end result never fit well into existing build systems. Also, vastly more of a professional programmer's time is spent reading code or maintaining existing code than writing code, so tools that help you write new code were never going to be all that important. This soured a bunch of people on the whole IDE concept, understandably.

In the last couple of years, "third generation" IDEs have instead turned to creating tools for code reading and navigation, and manipulation of existing code. This has turned out to be an enormous win, because they are targeting the actions that professional programmers actually spend time doing. The pre-eminent examples are IntelliJ IDEA and Eclipse, but others are quickly following.

For example - how can an IDE help me in a code review... can it understand the code for me?

No, of course not. Instead, they can provide a whole lot of tools to help you understand the code and its changes. Things a modern IDE can help with when doing code review include:

--dependency navigation, allowing you to easily see where the changed code is used and to trace to any code it uses. The good modern IDEs let you do this sort of thing with a single keystroke.
--code differencing, so that you can see what changes were actually made.
--code history and annotation, so that you can see how the code has evolved.
--static code analysis, validating that standards are followed and common bugs are avoided. In the new IDEs, this sort of thing is done continuously in the background, and results displayed either in the editor or as a batch report. It's shockingly powerful.
--duplication analysis, so that you can find out if the code you are reviewing was cut-and-pasted from somewhere else, and thus should probably be refactored.
--test coverage analysis, so you can get a sense of whether the code under review is inadequately tested.

Going forward (1-3 years from now), IDEs will do the following to assist code review:

--test tracing, so you can see just what test cases are affected by the modified code, and quickly run them.
--requirements tracing, binding code segments to requirements, so that you have more context about what changes the reviewed code is implementing
--communications tools, so that you can schedule and track review processes.

Code review without a modern IDE is like searching for information in a dusty library. Code review with a modern IDE is like searching for information with Google. You can probably find the same stuff, it's just going to take a lot more effort.

Let me look at these points:

Let me look at these points:

--dependency navigation, allowing you to easily see where the changed code is used and to trace to any code it uses. The good modern IDEs let you do this sort of thing with a single keystroke.

Keyword searching for the function name does most of this. Besides which, there are stand-alone dependency tools.

--code differencing, so that you can see what changes were actually made.

I use CVS to store code, and CVS provides version histories and code differencing.

--code history and annotation, so that you can see how the code has evolved.

CVS again.

--static code analysis, validating that standards are followed and common bugs are avoided. In the new IDEs, this sort of thing is done continuously in the background, and results displayed either in the editor or as a batch report. It's shockingly powerful.

Plenty of stand-alone code analysis tools are available... and they are very useful.

--duplication analysis, so that you can find out if the code you are reviewing was cut-and-pasted from somewhere else, and thus should probably be refactored.

Covered by code analysis tools...

--test coverage analysis, so you can get a sense of whether the code under review is inadequately tested.

There are stand-alone test coverage tools too...

--test tracing, so you can see just what test cases are affected by the modified code, and quickly run them.

This seems a nice feature.

--requirements tracing, binding code segments to requirements, so that you have more context about what changes the reviewed code is implementing

Comments can be used for this.

--communications tools, so that you can schedule and track review processes.

We already have such tools not part of the IDE.

But in the end I don't associate most of these things with IDEs (nor do I think things have to be done that way).

CVS provides the source code management from the command line and is very useful. Web-based issue tracking and bug-tracking is perfectly usable, and I am not sure I see the advantage of integrating into a single package. You simply put the issue or bug tracking number in a comment against changes in the code, and also refer to the tracking number from the CVS changelog.

There are plenty of stand-alone static code analysis tools (Coverity, KlocWork, PolySpace, Purify). I think you will find these tools do more than any single IDE's built-in analysis.

To me an IDE would help bringing all these things in one place - but the disadvantage is you are tied to the tools in the IDE... Whereas using stand alone tools you have the whole range available to you. Either way these things are not the exclusive domain of IDEs.

I don't really know what to conclude from this... I am quite happy using separate issue-tracking software and stand-alone coding tools... And I have probably been using these techniques for longer than they have been available in IDEs. It seems IDEs are playing catch-up with state-of-the-art programming techniques. So rather than outdated, the tools I have been using are ahead of the curve, and will probably continue to be until the rate of progress in new tools slows down.

The argument in favor of separate tools is you can pick the best of each class of tool, the argument in favor of IDEs is that all the tools are well integrated.

Perhaps the need for a VS (or Eclipse) style IDE is a symptom

I would suppose the ultra-IDE has evolved in response to the ever-increasing size of the libraries involved in programming with (.NET, Java, MFC, ...). That's a point that the author of the article touched on.

Perhaps it's a symptom of using languages that are not expressive enough. The IDE languages tend to be very verbose with an eye toward creating large, safe libraries. A lot of the verbosity is there in order to forcibly set the structure of the code in such a way as to document it and allow it to slot into (and use) large API libraries.

I'd be inclined to say that it misses the point and that a language that requires an IDE has proven itself as too bulky.

I know that sounds like a snotty rant, but I've certainly done my time on both sides of the fence as a working programmer, and while the vast libraries of the IDE languages can be great, I've found that working with smaller languages allows me to develop solutions in far less time.

Small languages?

I'm just curious as to what you mean by "working with smaller languages allows me to develop far faster."

Smaller in what sense? Assembly language is about as simple as it gets yet I wouldn't say it makes me more productive.

What language(s) are you talking about?

I was trying to stay away fro

I was trying to stay away from naming names in order to avoid the language holy war thing.

I currently have active projects in Ruby, Erlang, Haskell, and Java. I think Ruby is a pretty good example of small and expressive when compared to Java, C#, or C++. The syntax is far lighter which is what I meant by "small". As for expressiveness that can be argued until the cows come home.

OK, so rants are meant to be

OK, so rants aren't meant to be taken too seriously, and the article itself is more balanced than the heading makes out. But even so...

I recently switched from Emacs to Eclipse for Java programming. While Emacs is arguably an IDE itself, I didn't use any Java-specific extensions apart from the syntax colouring that comes by default. On the other hand, I was pretty good at using Emacs - teaching it to do repetitive tasks, moving blocks, sorting rows etc.

I'd rather not go back.

Eclipse is one of the reasons why I'm actually enjoying Java programming these days (for the first time in years I'm working on a personal project in Java rather than a functional language).

It makes a whole pile of common Java-related tasks trivial. Rearranging the package-level structure of Java code used to be a nightmare because of all the explicit package names in import statements. Eclipse renames all those automatically. Eclipse also manages imports for me, generates getter/setter methods (useful with toolkits that expect beans) from fields, constantly recompiles and indicates errors, integrates aspect oriented code, and simplifies access to cvs.

Along with better toolkits, more emphasis on reflection, and aspects, that makes Java worth using. I can spend a larger fraction of my time thinking about what I should do, and less time tediously doing it.

I want to address one criticism in particular from that rant - the idea that IntelliSense forces bottom-up programming. While I can't speak for VS, that is not my impression with Eclipse/Java. The auto-completion is useful for libraries, which already exist. For my own code, I don't care whether it works or not, since I have a pretty good idea of what my own code is going to do. Also, I can sketch things out at a high level and Eclipse - either through marking errors or more explicitly via "Todo" tags which can be added to code - keeps track of where I need to fill in the gaps (and, as others have pointed out, it will also help fill those gaps in, generating empty classes, stubs, etc).

The holistic view

Hands up whose world view needs redefinition to account for a programming environment with Java at its center being desirable to program in for someone with knowledge and taste? :-)

Where is The Programming Environments Weblog? Update: In a fit of excessive ambition I made one.

Count me in!

At the very least his LtU editorship needs to be reviewed; I'm personally in favour of a good tarring-and-feathering (and possibly a running out of town on a rail). ;-)

I have discovered that, now I'm doing most of my programming in Scheme, I've started doing little side projects in Scala and O'Caml just to expose me to different ideas. Scala doesn't really have good Emacs support, so it looks like I might end up using the beast that is Eclipse. But actually using Java...*shakes head*

Java == simple

In a good way. Part of the reason Eclipse is so powerful is because there are so many things Java *can't* do, and so many concepts that *can't* be expressed in Java. On the other hand, the C++ plugins are relatively lame. That's because you can make C++ syntax mean almost anything, which means the IDE really can't make any assumptions at all. I'm fairly sure the same kind of thing is true for FP langs. Since many FP langs don't have many of the "function words" that imperative langs have (like begin/end block symbols, end-of-statement symbols, etc.) the syntax is stripped down to just the words that have actual meaning. That's great for writing concise programs, but it also makes it difficult to impossible for the IDE to give you much help. All the redundancy is stripped out of the language, and that makes automation all but impossible. The thing I like about Java is that I can take 2 seconds looking at code and pretty much tell you what is going on. For denser languages, I need to look at a lot more context to understand what the code is doing. Less code is better, but extremely less code can actually raise the maintenance cost by increasing the burden of comprehension.

To make a hardware analogy, consider that programs are stored on a disk, and reading and understanding them is like reading the disk. By putting fewer bytes on disk, there are fewer bytes that can be corrupted, and programs can be read (understood) more quickly. We can put fewer bytes on disk by performing compression (higher-level abstractions). However, we pay for that space savings with computation. At some point, the savings from having smaller programs is offset by the computational burden of decompressing the programs. I fear that FP langs are quickly reaching this limit. The density of some FP code is astoundingly high. Unfortunately, that density creates a strong gravitational field that can ultimately collapse into a black hole of understanding.

Sparse code can suffer from spaghetti organization, which is exacerbated by verbosity. But dense code can suffer from weight, when the cost of reading a single line can be as high as reading entire functions in a lesser language (which is indeed because the single line replaces entire functions in a lesser language). The problem is that abstractions are built up by experience. The reason most humans can't do high-level mathematics is because they don't have the intuition for it. And they don't have the intuition for it because they haven't been exposed to the foundational mathematics needed to build that intuition. The same kind of problem exists in PLT. Theorists who have immersed themselves in lambda calculus and all the theoretical languages with nice proof properties have built up a strong base of intuition by which it is easy to relate to high-level abstractions. But your average programmer lacks this base of experience, making it all but impossible to harness a modern FP at its nominal potential, let alone its full potential.

Java is like doing everything with algebra, whereas Haskell is like doing everything with topology or real analysis. Yes, it is nice to write elegantly small programs. But it is nicer to be able to crank out hundreds of lines of code that solve your problem without having to think too hard or wonder if your code is correct. There are definitely tools and abstractions that would make certain problem solutions easier to express in Java. On the other hand, if you add too many of them, it would lose its simplicity.

I don't believe it.

How do you support these statements?

In my personal experience as someone with no formal education in computer science, dense programs take less time to read, and are easier to understand because everything is right there in front of you.
For me, it's slower and more work to browse many files to find the bits of an algorithm and then sew it all back together in my head.

There, that's my personal anecdote. Is the above your personal anecdote?



Which abstractions does the average programmer have and lack? Which abstractions are too hard for this mythical average programmer? I want details rather than vague claims.



How do you compare the simplicity of Java and Haskell? Do you use cyclomatic complexity of resulting code?

Simplicity

Well, syntax, for starters. The syntax highlighter in Java is very good because Java syntax is pretty simple. One of the problems with Haskell is that function invocation uses whitespace as an operator, so it's not always obvious looking at code whether an identifier refers to a function or a variable. In Java, all functions are called with operator(), so there's no possible ambiguity. The verbosity of the language tells you exactly what you're looking at. For instance, here's a Haskell snippet from a tutorial:

traceFamily s l = foldM getParent s l
  where getParent s f = f s

Trying to figure out what functions get called where in this code is no trivial matter compared to the equivalent Java:

public <T, U> T traceFamily(T s, U l)
{
    T getParent(T s, U f) { return f(s); }
    return foldM(getParent, s, l);
}

It isn't an exact translation, of course, but the point is that function definition looks just the same as function application, and function application looks just the same as an argument list. That's because the whitespace operator is being heavily overloaded. ;> In particular, the whitespace in the Haskell example replaces the braces, the parens, the commas, and the semicolons in the Java version. You have to know a lot about Haskell syntax to know that "foldM getParent s l" represents a function invocation (of sorts), but "getParent s f" represents a function definition. And if you take those two expressions out of context, they look identical. In the Java version, they are not identical, because the definition includes argument types.

That is not to say that the Java version is *better*. But I *do* claim that it is *simpler* because you need to know less about the context to parse any given substring of the code. That's because Java is more verbose and uses function words (like parens, braces, etc.) to make the grammar explicit and less context-sensitive. Looking at the Haskell version, it's not immediately obvious how many types are involved in the code. You have to infer what types are involved by looking carefully at the code. In the Java version, you can see that exactly two types are involved. So manifest typing makes more work at write time, but less work at read time (and since most Java programmers spend a lot of time reading code, that's pretty important).

I personally find function application using whitespace to be more difficult to read than function application using explicit operators (i.e., operator()). Not more difficult as in: "Oh, no! This code is unreadable!" Just more difficult as in: "Ok, I have to find the beginning of this list to find the function name, now I have to see whether this is the left or right side of an = to tell if it's a definition or invocation, etc. etc.". Of course, all the overloading and such is quite natural in Haskell, because of the ubiquitous nature of functions and the support for higher-order functions. But that abstraction exacts its cost in what you might call "brain-dead readability". Frankly, I'm quite surprised that anybody might disagree with the claim that Java is simpler to parse than Haskell. I'm sure a comparison of the respective compilers would be highly instructive.

Probably just me...

...but I find neither the Haskell nor the Java versions particularly readable without thinking about 'em. But just glancing at them, I can't see that the Java version is any more intuitive than the Haskell version. Both require knowledge of the PL used for the encoding of the algorithm.

Identifiers in Haskell always

Identifiers in Haskell always, *always* refer to variables, free or otherwise. Those variables may or may not be bound to functions, or for that matter to references to mutable cells in the appropriate monad. As a result, all you have to do is go look up what that identifier's bound to in that scope if you really need to disambiguate. The thing that tells you that something is/should be a function is the fact it's being used as one! Java's lvalue/rvalue distinction is every bit as tricky in practice.

Once you've dealt with the layout rule, Haskell's no harder to parse than Java.
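
A minimal sketch of that point (hypothetical names):

-- 'double' is an ordinary identifier bound to a function value;
-- nothing in the binding syntax marks it out as special.
double :: Int -> Int
double = \x -> x * 2

-- It is the application position that tells you 'double' is a function:
result :: Int
result = double 21   -- 42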

Types, Abstractions

In the Java version, they are not identical, because the definition includes argument types.

Actually, the definition of that particular Haskell function that you pasted does in fact include the argument types. This is from example3.hs of the Nomaware Monads Tutorial. The full version of that source looks like:

traceFamily :: Sheep -> [ (Sheep -> Maybe Sheep) ] -> Maybe Sheep
traceFamily s l = foldM getParent s l
  where getParent s f = f s

So, argument types can optionally be present in Haskell and many people do include the type signature. You could also include an explicit type signature in the where clause for getParent if you wish for more verbose type signatures. This is just a matter of style.
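
For instance, a sketch of the more verbose variant, assuming the tutorial's Sheep type and Control.Monad's foldM are in scope:

traceFamily :: Sheep -> [Sheep -> Maybe Sheep] -> Maybe Sheep
traceFamily s l = foldM getParent s l
  where getParent :: Sheep -> (Sheep -> Maybe Sheep) -> Maybe Sheep
        getParent s f = f s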

The problem is that abstractions are built up by experience. The reason most humans can't do high-level mathematics is because they don't have the intuition for it. And they don't have the intuition for it because they haven't been exposed to the foundational mathematics needed to build that intuition. The same kind of problem exists in PLT. Theorists who have immersed themselves in lambda calculus and all the theoretical languages with nice proof properties have built up a strong base of intuition by which it is easy to relate to high-level abstractions. But your average programmer lacks this base of experience, making it all but impossible to harness a modern FP at its nominal potential, let alone its full potential.

You were speaking of a "black hole of understanding" and the inability of mere mortals to understand the abstractions used in FP because of their "astoundingly high density."

I asked which abstractions the average programmer has, and which require immersing yourself in lambda calculus.

And I'd like some details rather than vague claims. How do you know which abstractions the average programmer has and which they don't?

I'm willing to believe this is your opinion, but I don't see any support for anything past that.

Haskell

Actually, the definition of that particular Haskell function that you pasted does in fact include the argument types.

Yes, but my point about context still stands. In C-like languages, you gain a lot of information in the most local part of the program. The types are right next to their arguments. In Haskell, you need to map the type signature to the argument list in a way that isn't immediately visually obvious. So the language is more "global" in a syntactic sense.

I asked which abstractions the average programmer has, and which require immersing yourself in lambda calculus.

Well, higher-order functions, for one. I dare say less than 5% of all programmers even know what one is, let alone how to use it. That may not be the way it ought to be, but I wager it's the way it is. You can't even do trivial lambda calc without using higher-order functions. Then there's the specific case of using higher-order functions to operate on collections. I could easily find thousands of C++ programmers, for instance, that don't use the algorithms in the STL. Given that C++ and Java make up the bulk of the world's programmers, that's a pretty significant number.
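
For reference, about the simplest example of the kind of higher-order use in question - a sketch using the standard map:

-- 'map' is a higher-order function: it takes the squaring
-- function below as an ordinary argument.
squares :: [Int] -> [Int]
squares = map (\x -> x * x)

-- squares [1,2,3] evaluates to [1,4,9]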

I'm pretty sure your average Java coder would take quite a while getting his head around lazy evaluation, considering most of the mainstream non-FP langs don't support it very well (or at all). Thus, comprehensions/algebraic types are out of cognitive reach. And should we even bother trying to explain continuations to them? When they are used to explicit flow control and hopping from if to while to return to switch? Maybe you don't have to work with programmers who are "mere mortals", but I do. And you would probably be surprised who gets hired to actually write code. I can think of several programmers that I would probably never attempt to explain list comprehensions to.

I could go on and on, but I would just be tediously listing all the features of your typical FP lang. So you could write the list as easily as me. I would like to see language designers go sit in an office with a team of ASEs and SE1s and just watch them code for a while to get some idea of what the coding trenches look like and what they are up against (as far as getting adoption for their language). They would probably go home crying.

Modern Haskell supports type annotation on arguments

In C-like languages, you gain a lot of information in the most local part of the program. The types are right next to their arguments. In Haskell, you need to map the type signature to the argument list in a way that isn't immediately visually obvious. So the language is more "global" in a syntactic sense.

Most Haskell implementations will let you include type declarations for individual arguments. I believe this is the usual way to do things in ML as well.

traceFamily (s :: Sheep) (l :: [Sheep -> Maybe Sheep]) = foldM getParent s l
  where getParent s f = f s

An argument could also be made that separate type signatures are easier to read, because they put all the type information in one place, rather than mixing it with other code. (Note: I am not necessarily making this argument. I'm just saying that it's plausible.)

C header files, Lazy if, Python list comps, another anecdote

Yes, but my point about context still stands. In C-like languages, you gain a lot of information in the most local part of the program. The types are right next to their arguments. In Haskell, you need to map the type signature to the argument list in a way that isn't immediately visually obvious. So the language is more "global" in a syntactic sense.

Seems to me that C header files put the function signature far away from the code. That looks even more global than Haskell to me. What do you think?

I predict that anyone able to handle C header files would have no trouble using Haskell-style type signatures.

I dare say less than 5% of all programmers even know what one is, let alone how to use it.

The question I originally asked was, how do you get these estimates? Are these numbers just guesses off the top of your head, or do you have some actual process that you use to arrive at the figure of 5%? If you have a process, I'd like to know how it works. Then I can either use it if it's better than what I know, or ignore it if it's not.

I'm pretty sure your average Java coder would take quite a while getting his head around lazy evaluation considering most of the mainstream non-FP langs don't support it very well (or at all). Thus, comprehensions/algebraic types are out of cognitive reach.

Every language supports lazy evaluation at some level, otherwise both branches of an if would always be executed. If you explain non-strict evaluation from that sort of viewpoint, it's not hard at all.
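
For example, because Haskell is non-strict you can define 'if' yourself as an ordinary function - a small sketch that makes the point:

-- Only the branch that is actually selected ever gets evaluated.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

-- Safe, even though the 'div' branch is undefined for d == 0:
safeDiv :: Int -> Int -> Int
safeDiv n d = myIf (d == 0) 0 (n `div` d)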


Also, what does non-strict evaluation have to do with learning list comprehensions or algebraic datatypes?


Python and Haskell both have list comprehensions, and collect: from Smalltalk might count as well.
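
For comparison, a trivial Haskell list comprehension:

evens :: [Int]
evens = [ x | x <- [1..10], even x ]   -- [2,4,6,8,10]

The Python spelling is nearly identical, which is rather the point.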

Either I'm missing some steps in your logical process, or you're making this up.


On the other hand, I explained Haskell to six C++/Java coders that were my coworkers. Five who didn't care about anything except going home at 5pm never even learned Java very well. The one who really enjoys programming picked up the ideas just fine, and after a few weeks of spare time reading, is able to write OCaml and Haskell. I know this is only anecdotal evidence, but it implies to me that there are many commercial C++/Java programmers who would be able to use languages like Haskell, Erlang, OCaml, etc to their full potential.

Sounds like lack of experience

I personally find function application using whitespace to be more difficult to read

I know exactly what you mean. That is exactly what I thought when I started programming in Ocaml. In my case it just turned out to be lack of experience in the language. My bet is that you'll get over it after you've written enough code in FP languages. Just reading code is not enough, because then you are trying to figure things out and it is more difficult than making things up.

Anyway, sounds like you would prefer an Algol-style syntax as in SML:

fun traceFamily s l =
   let fun getParent s f = f s
   in foldM getParent s l
   end

The fun keyword arguably makes function definitions stick out.

Puzzled

Since many FP langs don't have many of the "function words" that imperative langs have (like begin/end block symbols, end-of-statement symbols, etc.) the syntax is stripped down to just the words that have actual meaning. That's great for writing concise programs, but it also makes it difficult to impossible for the IDE to give you much help.

So the syntax is concise, and obviously the compiler understands the program. What exactly is the IDE's problem? Is the syntax ambiguous?

All the redundancy is stripped out of the language, and that makes automation all but impossible.

????????

The thing I like about Java is that I can take 2 seconds looking at code and pretty much tell you what is going on. For denser languages, I need to look at a lot more context to understand what the code is doing.

But that's you, not the IDE, nor someone who's been writing in those languages for a while.

The density of some FP code is astoundingly high. Unfortunately, that density creates a strong gravitational field that can ultimately collapse into a black hole of understanding.

You can write pretty compact code using point free style for example, but the language doesn't force you to do so (it's a style). Similarly, I had problems reading some other people's C++ code because of their style. What you mention is actually a perfect opportunity (e.g. for a Haskell IDE) to assist and translate that section into a more verbose form if the programmer desires.
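
For instance, here is a small sketch of the kind of translation such an IDE might offer, using hypothetical names:

-- Point-free (compact) style:
countWords :: String -> Int
countWords = length . words

-- The same function expanded into a more verbose, pointful style:
countWords' :: String -> Int
countWords' s = length (words s)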

But it is nicer to be able to crank out hundreds of lines of code that solve your problem without having to think too hard or wonder if your code is correct.

Again, that is your comfort level and preference. However, would you be even more comfortable using a simpler language (than, say, Java by dropping its OO facilities, for example) and cranking out twice as many lines?

Syntactic simplicity

So the syntax is concise, and obviously the compiler understands the program. What exactly is the IDE's problem? Is the syntax ambiguous?

It has to look at more of the code to parse any given substring of the program. But the real problem is not that the IDE has a hard time parsing the program. The problem is that the IDE can't make code look different that isn't different. For instance, in Java, you have methods, static methods, variables, constants, and all other kinds of beasts, which Eclipse is more than happy to highlight in different ways. Since Haskell is much more uniform, the code will also look more uniform. So the visual cues that give you "landmarks" in the code, so to speak, are simply not present in Haskell.

As another example, I always highlight symbols with a different color than identifiers. This makes functions and function lists stand out. It also makes it easy to tell function definitions from invocations. In Haskell, to highlight functions, you would need to do something like color the function name differently from its arguments. Even that will not let you escape the fact that Haskell is designed to look uniform across the code.

The point being that Java makes things that are different look different, whereas Haskell makes many things that are different look very similar (function definition vs. invocation, for instance). That syntactic uniformity places a greater burden on semantic analysis of the code.

However, would you be even more comfortable using a simpler language (say, Java with its OO facilities dropped) and cranking out twice as many lines?

No. My point was that there is an abstraction limit at which going higher requires more clock cycles than you save, and that Java is closer to the boundary than Haskell (or simply that Haskell is above the boundary for most programmers). And perhaps it's not abstraction, per se, that is the limit. Perhaps it's syntactic parsimony that I am addressing. For instance, it may be that I find Scala more readable than Haskell, even though both support fairly abstract mechanisms compared to Java.

Consider that many words in English are redundant or are used redundantly. Or even that many letters in words are redundant. For instance, you can take the vowels out of most words and have comprehensible text. Let's try it: Fr nstnc, y cn tk th vwls t f mst wrds nd hv cmprhnsbl txt. Naturally, very short words suffer the most. And the word 'of' is used in myriad contexts, yet texts would be equally comprehensible without it. So the way I see it, verbose languages like Java are like English, with plenty of redundancy to make them easier to read, whereas Haskell is more like a compressed English with all the rdndncy sckd t f t. You can understand it, but it takes some mental decoding to do so.

More of the same

The point being that Java makes things that are different look different, whereas Haskell makes many things that are different look very similar (function definition vs. invocation, for instance).

No, Haskell doesn't make things that are different look similar, because they are not different. Even you say above "...that the IDE can't make code look different that isn't different." That's the FP paradigm shift you need to go through (first class means first class). Also, how could you fail to tell function definitions from function applications? You just need to look at the "=".
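A small invented illustration of the point:

twice :: (a -> a) -> a -> a
twice f x = f (f x)       -- definition: bare parameters to the left of '='

seven :: Int
seven = twice (+ 2) 3     -- application: no parameters, just arguments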

No. My point was that there is an abstraction limit at which going higher requires more clock cycles than you save, and that Java is closer to the boundary than Haskell (or simply that Haskell is above the boundary for most programmers).

Maybe you are right, but this is again a claim for which you cannot possibly provide supporting evidence.

FPs & IDEs

Part of the reason Eclipse is so powerful is because there are so many things Java *can't* do, and so many concepts that *can't* be expressed in Java. On the other hand, the C++ plugins are relatively lame. That's because you can make C++ syntax mean almost anything, which means the IDE really can't make any assumptions at all. I'm fairly sure the same kind of thing is true for FP langs.

The latter assumption is incorrect, which rather undermines your other conjectures. I'd agree that Eclipse's power in dealing with Java has a lot to do with the restrictions of Java — but if you've been paying attention, you'll have noticed that formally-inspired programming languages also have all sorts of restrictions (in fact, I seem to recall you've complained about some of them!)

The difference is that the restrictions in formally-inspired languages are there for less ad-hoc reasons — they're often there precisely because they make the language more amenable to analysis, which provides rich possibilities for tool support. There's a lot that could be done in IDEs for these languages — the main reason you don't see much of that is that the commercial support isn't there. "Part of the reason that Eclipse is so powerful" is that IBM allegedly sunk $20 million into its development before it was released as open source.

If you want an example of the kinds of things that are possible, take a look at PLT's DrScheme environment and how it handles macros — it's fully aware of them, so, for example, it recognizes user-defined binding forms, and can draw graphical arrows from variables bound by macros to their uses in ordinary code, just as it does for built-in binding forms. I'm not trying to advocate macros here (that's a separate discussion); my point is just that when language features are designed in a disciplined way, tools are better able to exploit them.

The thing I like about Java is that I can take 2 seconds looking at code and pretty much tell you what is going on. For denser languages, I need to look at a lot more context to understand what the code is doing.

Are you sure that's not a function of the amount of experience you have with whichever "denser languages" you're referring to?

Less code is better, but extremely less code can actually raise the maintenance cost by increasing the burden of comprehension.

Now you're just making stuff up. In this case, it's hardly necessary to ask for evidence, since it's doubtful you have any.

We can put fewer bytes on disk by performing compression (higher-level abstractions). However, we pay for that space savings with computation. At some point, the savings from having smaller programs is offset by the computational burden of decompressing the programs.

If you're going to argue by analogy, you at least need some kind of evidence of applicability. In any case, this is contradicted by the experience of many. Programmers get to choose which abstractions they use to solve a problem. To paraphrase Einstein, use the most abstract construct you need to solve a problem, but no abstracter.

I fear that FP langs are quickly reaching this limit.

Again, this is based on how much experience, with which languages? It's easy to mistake one's inexperience with a particular language or paradigm as being a shortcoming of the language or paradigm.

Besides, you miss an important point: Java is following C++ down a road which involves adding complexity to deal with particular problems — most recently, Java added generics to address limitations such as its lack of support for typed collections. When you compare these new features to their equivalent in FP languages, the FP language approaches are actually cleaner and more tractable, so the entire sub-premise of alleged complexity in FP languages appears suspect.

Theorists who have immersed themselves in lambda calculus and all the theoretical languages with nice proof properties have built up a strong base of intuition by which it is easy to relate to high-level abstractions. But your average programmer lacks this base of experience, making it all but impossible to harness a modern FP at its nominal potential, let alone its full potential.

Something similar was once true of object-oriented languages. Of course, the purest OO languages, like Smalltalk or Self, never fully made it into the mainstream, and what we're left with instead are much grittier hybrids, like C++, Java, Python, Ruby (listed in order of decreasing grittiness ;-). I have little doubt that something similar will continue happening with FP features, as we've already seen happen with features like higher-order functions, tail calls, and first-class continuations.

As I mentioned above, many FP features are particularly good when it comes to enabling tool support, so the success of IDEs like Eclipse is a very positive sign for formally-inspired languages.

Yes, it is nice to write elegantly small programs. But it is nicer to be able to crank out hundreds of lines of code that solve your problem without having to think too hard or wonder if your code is correct.

That's subjective. But what's even nicer is being able to write elegantly small programs without having to think too hard or wonder if your code is correct. All you really need to achieve that is experience with the language in question.

It's not just you and the compiler

For practical coding, it doesn't really matter what you or I or other people in this forum (whom I assume are fairly advanced programmers) can read. Readability is a social issue. I could probably handle Haskell, but if I wrote code in Haskell, I'd have to teach Haskell to the other people on my team who will be reading and maintaining the code. No thanks! It's enough work teaching Java.

I even avoid a few Java constructs (such as anonymous inner classes) just because I've found that others have trouble with them. It's less concise, but giving an inner class a name seems to help, so I'm not going to argue.

actually...

"It's enough work teaching Java."

...it's a lot more work. :)

I'm always astounded that people actually find it easier to learn a huge syntax with lots of twists and an immense API than to just learn how to deal with functions as if they were any other value, plus a few very handy functions for applying such higher-order functions to large collections...
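For the curious, the "few very handy functions" are things like map, filter, and the folds; a small Haskell sketch with made-up data:

salaries :: [Double]
salaries = [30000, 45000, 60000]

raised :: [Double]
raised = map (* 1.1) salaries               -- a 10% raise across the board

bigTotal :: Double
bigTotal = sum (filter (> 40000) salaries)  -- total of the larger salaries

These few idioms replace a large share of the explicit loops one writes in Java.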

General refutations for general claims

I'm certainly not suggesting that it's practical for team members to suddenly start programming in a language entirely unfamiliar to the rest of their team. I was responding to claims about things like the comprehensibility of FP languages in general, and their potential for tool support.

env, man!

Cool! How can I contribute to your new blog? I tried emailing your bluetail address and it bounced. Do you have more current contact info available?

email, man!

luke@member.fsf.org will reach me and I'm curious to hear what you have in mind! I must warn you though that I anticipate this being a low-volume blog of things that I've managed to look at myself and nothing on the scale of LtU :-)

Disappointed

I've not enjoyed using VS6 myself, but the article didn't really live up to its title for me.
I don’t need to remember anything any more. IntelliSense will remember it for me. Besides, I justify to myself, I may not want those 60,000 methods and properties cluttering up my mind. My overall mental health will undoubtedly be better without them, but at the same time I’m prevented from ever achieving a fluid coding style because the coding is not coming entirely from my head. My coding has become a constant dialog with IntelliSense.
This just means that at the time of writing the code, the writer is not familiar with what exists in the library, but he can select relevant-sounding methods and types from pull-down menus and completions, and proceed without studying the documentation. It will compile and link. He admits that he doesn't want to study 60,000 methods, but then complains that this style isn't fluid; and "fluid", imho, is a word much like "expressive".

His conclusion:

So I don’t think IntelliSense is helping us become better programmers. The real objective is for us to become faster programmers, which also means that it’s cheapening our labor.
It's a long way from not making us better to rotting our minds. And yes, if most of what you do is connecting pieces with the help of IntelliSense, your value goes down. I don't think that is IntelliSense's fault.

Somehow, we have been persuaded that this is the proper way to program. I don’t know why. Personally, I find starting a program with an empty source code file to be very enjoyable. I like typing the preliminaries and then the main function or the Main method. The time when I can really use some help is not when I’m starting a program, but when I’m trying to finish it. Where is Visual Studio then?
VS should help as often as possible. However, there are many more project starts than project finishes, and it's much easier to help with starting one. Moreover, Emacs doesn't help you finish, either. So I don't think this is a negative for VS.

What is a negative for VS is the fact that the whole thing is (of course) built from use cases that they thought I'd follow, and the ones they forgot about ended up frustrating me: e.g. not being able to change the produced binary from a DLL to a static lib, or not being able to automatically remove an ATL interface that was inserted automatically, etc. But none of these rot the brain. And I assume they got better with the use cases in the following versions.

But for the Windows programs that I later wrote for MSJ, and for those in the first edition of Programming Windows, I couldn’t use this Dialog Editor. I couldn’t use it for the simple reason that the output of the Dialog Editor was ugly.
I'm sorry, but code that is meant to be published in a magazine is a tiny, tiny fraction of all the code written. Again, why does this rot one's mind? Did the outputs of flex/yacc rot my mind?

This bothered me because Visual Basic was treating a program not as a complete coherent document, but as little snippets of code attached to visual objects. That’s not what a program is. That’s not what the compiler sees. How did one then get a sense of the complete program? It baffled me.
I think that's just event-based programming for you (disclaimer: I've done very few UIs with VB6). Just clump the little snippets together and you have, well, the little snippets clumped together, which is what I would have written if I had written it in another editor.
If Visual Studio really wanted you to write good code, every time you dragged a control onto your form, an annoying dialog would pop up saying “Type in a meaningful name for this control.” But Visual Studio is not interested in having you write good code. It wants you to write code fast.
Yes, this can lead to bad practices, but I still don't see why this would rot one's mind. Otherwise, all the functions that I have neglected to comment properly would haunt me forever.
Whether a particular object is defined as a field or a local variable is something that we as programmers should be thinking about with every object we create. A label that has the same text during the entire duration of the form can easily be local. For a label whose text is set from an event handler of some other control, it’s probably most convenient to store as a field. It’s as simple as that. But Visual Studio doesn’t want you to think about that. Visual Studio wants everything stored as a field.
This is probably the first section I've seen that is relevant to the title, as VS writes bad code on your behalf; but VS should not be your first or only teacher of the language's best practices.
What’s appealing about this project is that I don’t have to look anything up. I’ve been coding in C for 20 years. It was my favorite language before C# came along. This stuff is just pure algorithmic coding with simple text output. It’s all content.
What a true shocker that coding a solution to a toy number-theory problem takes no IntelliSense or drag & drop. So apparently an algorithm and a GUI app are developed quite differently. Could it be because the two are indeed different problems?

The gist of the complaints: a modern GUI app is a lot of code, and the megabytes of framework code you get from Microsoft, together with the tool you use to generate the app, are meant to unburden you. They make an effort, and it obviously has its flaws. The final code is maybe not as elegant as the toy number-theory solution, but it was not going to be even if you had started everything from scratch in plain C.

I can surely see how dissatisfying it can be, but I can't see why VS would rot one's mind.

More of a nostalgia rant

As you probably know, Petzold basically wrote the bible of Win32 API programming and is a longtime C hacker. He probably knows the Win32 API inside and out, and is a bit overwhelmed by the sheer volume of classes and methods available in .NET. C is a simple language, he's very familiar with it, and if you're just coding up some algorithms and using printfs, then C is a decent tool for those who have used it a lot.

IDEs are just a tool to take away some of the menial drudgery that a computer happens to be good at. If you don't want intellisense, just turn it off.

The "mind rot" argument

is one of the more tired cliches in computer science. From Dijkstra's infamous complaints against Cobol, BASIC, and all the other languages that weren't Algol-60 :), to longstanding academic complaints about the teaching of Java, to Naggum's numerous diatribes against Perl and C++, to this particular article--the premise that exposing some people to a given technology or tool will "ruin" them, and make them mentally incapable of using more sophisticated, powerful tools--is almost always nonsense. I'm aware of zero evidence for the proposition.

On the other hand, I'm aware of numerous examples (including myself) of persons who "suffered" exposure to such allegedly mind-rotting stuff as BASIC (not VB, but the old GOTO-happy 8-bit micro kind) in our formative years, yet seem to have suffered no ill effects from it.

Claims that technology X "rots the mind", and all the variations thereof, manage to make "Considered Harmful" rants seem like high quality scholarship in comparison. :)

Note: My comments are directed to rots-the-mind articles in general; not to this paper in particular.

Weak-form mind rot argument

Like Sapir-Whorf, the mind-rot argument can be interpreted in both a strong and weak form. And like Sapir-Whorf, the strong form — that exposure to "X" will lead to an incurable incapacity — is hardly worth discussing. But weaker forms can be more interesting.

In the case of Petzold's Visual Studio rant (which I rather enjoyed, partly because I lived through some of the same changes he describes), the argument essentially boils down to the fact that tools like VS are crutches [edit: or can be], and crutches can be harmful as well as helpful. Not only is VS a crutch, but it's one which may encourage a particular style at the expense of other styles. But here's the crux: if you'll grant for the sake of argument that this can have a mind-rotting effect, then that effect is going to be strongest when you don't recognize that it's there at all. But if you're aware of it, you can take steps to mitigate it — in the same way that television can rot the mind, but you mitigate that by being selective about what you watch.

But in general, I agree with you: rhetorical excess rots the mind.

Crutch?

An interesting comment, and one which I think lays bare a longstanding bias of many in this forum, and in the PLT community in general.

What ought to be the goal of PLT design, and computer science in general? Among other things, to improve the productivity of programmers--to enable us to write better code faster, and to better facilitate the process of translating customer requirements into instructions for the machine(s).

PLT is a VERY important part of this. The progress which has been made in the past 50 years or so has been key to enabling the digital world we live in today.

But, PLT and programming languages (formal ways of specifying algorithms to a computer using some sort of abstract machine, whose semantics a computer can duplicate in one of several ways) are only a part of the programming ecosystem. Other tools, such as IDEs and refactoring browsers, are also key components.

Unfortunately, some in the PLT community can forget this; which leads to a tendency to view productivity enhancers which are outside the domain of PLT proper, as "crutches".

The Java world is a case in point. Many PLT folk view Java in a negative light--I often criticize it. It has several design flaws which I think inhibit its usability as a high-level language, and I'm not even considering things like "no type inference", which are outside the scope of its requirements, on my mental list. While writing quick-and-dirty programs in Java is certainly tractable, I've seen numerous attempts to build large apps in the thing founder on the rocks. It doesn't scale well--as a high-level source language.

But Java+IntelliJ IDEA, or Java+Eclipse--that starts to become another story. Java has also become a favorite target of many higher-level languages--it's fast replacing C as an intermediate language. And maybe that is where Java's strengths lie--as an intermediate language?

It's often trendy to dis higher-level programming abstractions implemented outside the primary development language as "hacks", greenspunning, etc. And in some cases, such efforts are indeed "hacks". But I see little wrong with use of extra-language tools, whether forms-based programming environments, high powered IDEs/browsers, code generating wizards (wizard meaning automated agent, not programming guru), "editor metaprogramming", and other ways of automating the authoring and maintenance of code. There are many things which are very hard to represent as linear streams of characters (a representation most programming languages are still limited to), but much easier to deal with out-of-band.

So--is use of a sophisticated IDE on top of a mediocre language a "crutch"? Or is it just a different--but perfectly valid--way to get around?

Crutches compensate for handicaps

So--is use of a sophisticated IDE on top of a mediocre language a "crutch"? Or is it just a different--but perfectly valid--way to get around?

What I wrote was more general than I intended. I didn't mean to say that IDEs or even VS in general are purely crutches. I use Eclipse with Java myself, and I used to use VS for C++. I also find the code comprehension and visualization features of DrScheme very useful. I'd like to see more powerful IDEs — I don't think what Eclipse and VS do today is all that impressive compared to what we know is possible from historical systems such as Genera and higher-end commercial systems like TogetherJ (re-christened Together by Borland, in a misguided imitation of Microsoft's strategy of naming products using the most generic names imaginable).

My comment about crutches was intended to refer more to Petzold's focus on the alleged consequences of a feature like Intellisense. Any feature or tool that encourages a certain way of working to the detriment of other ways that are also important, at least has the potential for being a crutch. I think Intellisense can reasonably be considered such a feature — allowing your coding style to be governed by Intellisense would probably be a suboptimal development strategy. But if you're aware of that potential, you can guard against it, and the accompanying mind-rot. :)

Having hopefully clarified my position, I can now address your question. I think that almost by definition, "use of a sophisticated IDE on top of a mediocre language [is] a crutch". In that case, the crutch allows you to function in the face of a handicap, namely the mediocre language. All else being equal, why wouldn't you choose a less mediocre language to use an IDE with?

A good question...

...whose answers are well known. Sometimes, the good language doesn't have a good IDE; while Eclipse plugins, emacs modes, and such probably exist for all languages with some level of support, many are better than others.

And then there are the many social and industrial aspects of programming language choice--the network effects that lead industry to deem only a small subset of the universe of languages acceptable are well-known. IDEs are frequently used as justification for this sort of thing, though many such claims based on IDE availability are shockingly ill-informed.

An interesting question for you: Which would you rather have--Java and Eclipse, or (insert favorite PL here) and vi? I realize the choice is artificial (most programming languages have better than dumb text editors available), but it is an interesting point of discussion.

For me--it depends on what I'm doing, and who I'm doing it with. My own project, I'd be more likely to go with the advanced language and the crappy environment. If I'm working with a team, on a typical industrial application, then I'd be more inclined to use the nice environment and the not-so-nice language.

Crutch or Power Tool?

I think that almost by definition, "use of a sophisticated IDE on top of a mediocre language [is] a crutch". In that case, the crutch allows you to function in the face of a handicap, namely the mediocre language. All else being equal, why wouldn't you choose a less mediocre language to use an IDE with?

So is using a sophisticated power tool with mediocre fasteners a "crutch"? Would the solution be to use better fasteners? So that we can use hand tools? Should we develop self-pounding nails so that we can build houses with a hammer instead of a nailgun?

Take code completion in Eclipse, for instance. Eclipse will happily close open braces for you, place semicolons, and auto-indent. Is the solution to design a language that doesn't need braces, semicolons, or indentation for readability? I'm sure some would say yes. But block-structured languages need something to delineate the blocks, and a tool that closes those blocks for you automatically is going to help no matter what. If that's a crutch, then I'll gladly admit to being handicapped.

What about refactoring? Should we try to design languages that don't need to be refactored? A language that is so advanced you always pick the right names the first time? I'd like to see that language! How about tooltip help on things like argument lists? Should we define a language where all functions only take one argument, and the type of the argument is spelled out in the function name? Should we make a language where you can only define and call 10 functions, so that you can memorize their names and never have to look up their descriptions?

I think "sophisticated IDE" and "sophisticated language" are orthogonal concepts. I don't think using the former says anything about the latter.

On crutches...

Using a sophisticated power tool with mediocre fasteners is a crutch if it means that you're encouraged to use mediocre fasteners even where they're not appropriate, and you suffer negative consequences as a result. The solution is to use the fasteners that are appropriate to the job, and not allow your tool to lead you to make inappropriate choices.

I'm not suggesting that every benefit that an IDE provides is a crutch, only features that while useful in one dimension, may lead you to make inappropriate choices in others. Refactoring can be such a feature — e.g. the idea that you'll later refactor the crappy code you're writing now can be taken too far — but that doesn't mean that refactoring support is bad in general, or shouldn't be provided.

I think "sophisticated IDE" and "sophisticated language" are orthogonal concepts. I don't think using the former says anything about the latter.

I agree, but I was responding specifically to Scott's question about whether a mediocre language with a sophisticated IDE might be a "different, but perfectly valid way to get around", in which case the mediocrity of the language has been specified as a given, and is not implied by whether the IDE is sophisticated (i.e., it's orthogonal, as you say.)

Hm, do you know anywhere you

Hm, do you know anywhere you could get more info on Genera?

Demo movies

Have been made by Rainer Joswig, who still owns a machine, and posted them on CLiki.net. I also recommend the CLIM summary I made on another site, which links to further information. CLIM is the portable successor of the user interface manager that formed the framework for the Genera user environment.

Exactly...

Couldn't have put it better. I will go one step further, though and ask a question that tends to set some programming language types frothing at the mouth:

What language features would help optimize developer productivity, assuming that all developers use (or will use) a state-of-the-art-for-2005-or-better IDE?

Some possibilities I've thought of:

Design-By-Contract. Explicit pre- and post-conditions become much easier to maintain, and more powerful, when using an environment that continually runs static analyses in the background. It's not just that the conditions are easier to validate, but rather that those contracts can then be used in other evaluations. (See the sketch after this list.)

Module and packaging specifications in the language, rather than specified extra-linguistically in build files. Again, these are easier to maintain and can be leveraged for more tasks, assuming you have a modern IDE system available.

Explicit in-language versioning constructs.

Features to avoid: textual inclusion, macros, dynamic scoping, any sort of pre-processing. The benefits these provide might have been worth the tradeoff in a pre-IDE language, but they weaken the analyses available to an IDE too much to be worthwhile.
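To make the Design-By-Contract item concrete, here is a rough, hand-rolled Haskell sketch; the requires/ensures helpers are invented for illustration, and a real DbC facility (with IDE support) would discharge many of these checks statically rather than at runtime:

requires :: Bool -> String -> a -> a
requires ok msg x = if ok then x else error ("precondition: " ++ msg)

ensures :: (a -> Bool) -> String -> a -> a
ensures p msg x = if p x then x else error ("postcondition: " ++ msg)

-- Integer square root with an explicit contract
isqrt :: Int -> Int
isqrt n =
  requires (n >= 0) "argument must be non-negative" $
  ensures (\r -> r * r <= n && n < (r + 1) * (r + 1)) "result is the floor root" $
  floor (sqrt (fromIntegral n) :: Double)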

The lessons of Smalltalk

One interesting lesson that we in the PLT community should always remember is the lesson of Smalltalk.

From a PL theory perspective, of course, Smalltalk isn't very interesting these days. It's got a brutally simple programming model, a trivial type system, and a somewhat unusual (though nice once you're used to it) syntax. It's got a decent set of standard libraries. By most modern standards, its environment is (arguably) crude and clunky; its dependence on the environment has probably limited its prospects for growth more than anything else. (Especially in the 80s and early 90s, when green screens were still prevalent in corporate data centers and on corporate desktops--when computers could even be found on desktops.)

Yet, despite all that, Smalltalk is defended and praised by its advocates with a religious fervor that makes even Lispers blush. :) Why is that?

The most likely explanation is its seamless integration with its development environment. That, coupled with a few other traits of Smalltalk, made it an incredibly fast rapid-prototyping environment--certainly faster than anything else available until recently. The language itself was nothing terribly special, but the whole package was wonderful. Smalltalk fans still claim that modern IDEs are not as seamless as Smalltalk environments were.

One trend that has occurred over the years is that the modern IDE has become more Smalltalk-like in one key way. Early IDEs were little more than glorified editors, with a proprietary build manager and graphical debugger bolted onto the side. You make a change and hit "build"; the IDE saves it to disk, invokes the offboard compiler on the saved file, and parses and displays the compiler's output in the "build" window. Modern IDEs, on the other hand, have the whole toolchain (compilers, code analyzers, etc.) built in. Unlike Smalltalk environments, the modern IDE can easily operate in a program-source-as-collection-of-text-files environment; like Smalltalk environments, it isn't constrained by it.

Macros in IDEs

Features to avoid: textual inclusion, macros, dynamic scoping, any sort of pre-processing. The benefits these provide might have been worth the tradeoff in a pre-IDE language, but they weaken the analyses available to an IDE too much to be worthwhile.

I beg to differ about macros in IDEs. As I mentioned elsewhere in this thread, take a look at how PLT's DrScheme handles macros. It's fully aware of them and is able to treat them just as though they were built-in language constructs, e.g. showing binding arrows between variables bound by macros and their usages in regular code, or other macros. The right macro system, module system and IDE can be a fantastic combination.

Interesting...

DrScheme seems to have its act together on navigation for macros, which is a very neat trick. I can't tell from the documentation whether it has any support for automated refactoring or static analyses, though, which strike me as much harder. For instance, extracting a section of code as a separate function becomes much more difficult if that code were to contain a macro, as would interactively answering questions which required dataflow analysis. At least, that would be a problem for the sort of C-like macros which allow arbitrary text replacement. I don't recall enough about Scheme macros to know what the problems would be.

Thanks for the pointer

Refactoring with macros

DrScheme has some limited refactoring, in the form of variable renaming, which also takes macros into account, of course. However, I believe that the lack of other refactorings has much more to do with development resources and DrScheme's role as a PL teaching tool, than any fundamental barriers.

I don't see the problem in principle with automated extraction of code that contains macros, since the presence of macros and their dependencies is known statically. If code being moved contains a macro that's imported from another module, the system would of course have to make sure that the necessary module is imported in the target module. However, all that information is already known to the macro system, which is how the IDE is able to draw arrows from a usage of a macro to the import expression for the module which defines it.

Ditto for dataflow analysis: obviously, to do dataflow analysis, you have to expand macros internally, but that's not a significant problem. DrScheme's soft typing tools (previously MrSpidey, now/eventually MrFlow) have to do some such analysis, and afaik the issues they've had in that area have much more to do with Scheme's latent types and higher-order functions, than with macros.

(If I'm missing some serious limitations on refactoring in the presence of Scheme macros, I hope someone will jump in and explain.)

At least, that would be a problem for the sort of C-like macros which allow arbitrary text replacement. I don't recall enough about Scheme macros to know what the problems would be.

C-like macros and Scheme macros bear very little similarity to each other. The Scheme macro system in question, syntax-case and its subset syntax-rules, "understands" the lexical structure of code. Think of it as a code comprehension system that happens to be able to spit out transformations of the source code. In combination with a module system which can deal with macro dependencies and phase issues, manipulating code containing macros is relatively straightforward. (Although whoever works on those "straightforward" aspects of DrScheme is probably either laughing or cursing me right now...)

Refactoring in the presence of hygienic macros

(If I'm missing some serious limitations on refactoring in the presence of Scheme macros, I hope someone will jump in and explain.)

No, I don't think you are; I agree with your analysis. Any system for hygienic macros (syntax-rules, syntax-case, syntactic closures, or whatever else) already tracks the kind of environment binding information that you'd need to support more general refactorings. It would be a lot harder to support properly in the presence of procedural macros but even then I suspect you could figure something out that would work as expected say 90% of the time.

I program mostly on Unix-deri

I program mostly on Unix-derivative platforms, and I remember a few years ago having to do a project on Windows. It happened to be a COM component in C++. Now, through my XPLC project, I have extensive knowledge of COM, read the whole COM Specification a few times, and so on. I thought this would be easy.

What I discovered is that the lower levels of the Win32 API are so convoluted and awkward that the tools are almost required to be effective at all in that environment. For example, when setting a string value in the registry using the raw Win32 API, you need to pass in the length of the string, even though it is specified as a NULL-terminated string; so you need a temporary char* in order to strlen the string first, instead of just using a string literal. Or you have grotty APIs like FormatMessage, which takes something like 6 parameters, compared to the single parameter of POSIX's strerror (which is no less capable).

Using the autogenerated code is almost required in order to not go crazy.

I guess it's as much of a crutch as a car or a train is when you have a thousand kilometers to travel? Why is everything thousands of kilometers apart, though, is an interesting question...

Bad Habits


C      YES I AGREE.  I STARTED OUT PROGRAMMING IN FORTRAN ON A TOPS-10 S  000100
C      YSTEM AND OTHER SIMILAR MACHINES.  ALTHOUGH, I MUST ADMIT, I STIL  000200
C      L HAVE SOME BAD HABITS I PICKED UP FROM THOSE OLD DAYS, BY AND LA  000300
C      RGE I BELIEVE I HAVE OVERCOME MOST OF THE WORST ONES.              000400

My favourite piece of Python humour :-)

Egads!

Don't you know it will require 4 punchcards just for that comment? The prodigal and wasteful ways of modern computer science have infected you to the bone.

:)

That's OK...

I'll just use a cart to trundle the cards from the keypunch room over to the card reader. And don't try to tell me my trusty cart is just a crutch - how else are you going to move that many cards around??

Do what the pros do...

... hire a caddie.