Small Time Languages
started 3/12/2003; 8:45:25 AM - last post 3/21/2003; 1:54:08 PM
Brent Fulgham - Small Time Languages
3/12/2003; 8:45:25 AM (reads: 3693, responses: 57)
Small Time Languages
Michael Vanier is certainly too humble to post a link to his own thread on the Lightweight Languages Mailing List, so I'll do it for him!
This thread discusses a couple of interesting concepts:
- Why does Visual Basic hold mind-share over such a large group of programmers?
- Why does anyone think graphical "programming languages" will make the hard parts of programming any easier?
- Of course, the usual discussion of statically typed languages versus dynamic typing, versus a combination of the two.
- Finally, a fascinating sidebar on the dangers of institutional inertia, in which a promising optional type-declaration system for Python never got off the ground due to an inability to agree on the "perfect" solution.
There's a very rich set of issues in this thread to pique discussions at LTU!
Posted to general by Brent Fulgham on 3/14/03; 4:57:12 AM
Noel Welsh - Re: Small Time Languages
3/13/2003; 2:18:35 AM (reads: 2662, responses: 1)
Two interesting things I've got out of the discussion on VB:
- The debugger matters
- Limiting the language can be a good thing
The first point is interesting because I've reached the programming level where I hardly ever use a debugger, and so have forgotten how useful it is to new users (though IIRC the VB debugger didn't deliver events in the same order as a running application!)
The second point is what makes VB good (you can round-trip changes between code and the GUI designer) but also so frustrating. It would be interesting to study how these things could be made incremental, so there is an escape hatch to a more powerful language when you need it. This is how Scheme DSLs are usually defined: you have the DSL abstractions that most people will use exclusively, but full Scheme is always available when you need a more expressive tool. This is one reason that Scheme macros are better than Haskell monadic interpreters!
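The DSL-with-an-escape-hatch pattern described above isn't limited to Scheme; a minimal Python sketch (all names here are invented for illustration) shows the same shape, where declarative helpers cover the common cases but any plain function can serve as a rule:

```python
# A tiny validation "DSL": declarative helpers cover the common cases.
def min_length(n):
    return lambda value: len(value) >= n

def matches(alphabet):
    return lambda value: all(c in alphabet for c in value)

def validate(value, *rules):
    """A value passes when every rule accepts it."""
    return all(rule(value) for rule in rules)

# The escape hatch: a rule is just a function, so the full language is
# available whenever the declarative helpers run out of steam.
def no_repeated_chars(value):
    return len(set(value)) == len(value)

print(validate("abc123", min_length(4), no_repeated_chars))  # prints True
```

Most users would only ever touch `min_length`-style helpers; the point is that dropping down to the host language requires no change of tool.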
Noel Welsh - Re: Small Time Languages
3/13/2003; 2:19:52 AM (reads: 2633, responses: 0)
The other thing I'd be interested in knowing is whether VB's advantages translate to the web. When I last used VB it was all about Win32 GUI design, and ASP was the MS-approved way of doing web page development.
Isaac Gouy - Re: Small Time Languages
3/13/2003; 5:48:20 AM (reads: 2620, responses: 2)
I've reached the programming level where I hardly ever use a debugger
How did you achieve that? What are you doing instead? TDD?
I still can't break the habit of checking what I just wrote by stepping through in a debugger.
As a "language issue" -
There are many ways to understand programs. People often rely too much on one way, which is called "debugging" and consists of running a partly-understood program to see if it does what you expected. Another way, which ML advocates, is to install some means of understanding in the very programs themselves.
Robin Milner, foreword to The Little MLer
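To make Milner's point concrete in a dynamically typed setting, here is a hedged Python sketch (the `average` function is a hypothetical example) of installing "means of understanding" in the program itself via assertions; ML would express the same constraints statically through its type system:

```python
def average(values):
    # The precondition lives in the program itself, so a misuse fails
    # loudly at the call site instead of surfacing later in a debugger.
    assert values, "average() needs a non-empty sequence"
    assert all(isinstance(v, (int, float)) for v in values), \
        "average() needs numbers"
    return sum(values) / float(len(values))

print(average([1, 2, 3]))  # prints 2.0
```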
Ehud Lamm - Re: Small Time Languages
3/13/2003; 6:49:30 AM (reads: 2690, responses: 0)
I don't want to start a debuggers thread, but I 100% agree and 100% disagree with you here:
I've reached the programming level where I hardly ever use a debugger -- Right on. Just today I remarked that I don't understand why students use the debugger for one of our very small exercises. They should simply think about the code.
forgotten how useful (the debugger) is to new users -- No way. The debugger is detrimental in learning how to program. People should work on their specification and design skills. Debuggers are important when you start working on systems that are too big to understand in one sitting, written by several people over a long period of time, and that cross language boundaries. Experts working on such systems can make use of debuggers. Beginners should not be exposed to them!
Ehud Lamm - Re: Small Time Languages
3/13/2003; 7:10:31 AM (reads: 2652, responses: 0)
Is this preface available online somewhere? I'd like to add this to the quotations page.
Isaac Gouy - Re: Small Time Languages
3/13/2003; 7:43:27 AM (reads: 2631, responses: 2)
They should simply think about the code
But thinking is so hard! ;-)
Maybe they haven't learned to trust the techniques they have for reasoning about code?
debugger is detrimental in learning how to program
Don't you find that having a concrete representation of a computation (debugger) gives some students more confidence in their ability to reason about the computation?
Mea culpa, it's from the Foreword
Ehud Lamm - Re: Small Time Languages
3/13/2003; 9:43:13 AM (reads: 2674, responses: 0)
Sure. And it is our job as educators to teach them how to reason about code, and how to write code that can be reasoned about.
Instead we teach them C++ and give them access to a debugger...
Ehud Lamm - Re: Small Time Languages
3/13/2003; 12:47:31 PM (reads: 2658, responses: 0)
Don't you find that having a concrete representation of a computation (debugger) gives some students more confidence in their ability to reason about the computation?
This is a good point. I think it does tend to give them more confidence, but doesn't really make them understand the computation any better.
And confidence without knowledge is much deadlier than knowledge without confidence...
Michael Vanier - Re: Small Time Languages
3/13/2003; 10:43:34 PM (reads: 2612, responses: 1)
Michael Vanier is certainly too humble to post a link to his own thread...
Actually, I didn't start the thread, but I was trying to redirect it into talking about end-user programming and some of what I consider to be logical fallacies related to it. Unfortunately, nobody has commented on those aspects of my original post, which I take to mean that everyone agrees 100% with what I said and all of my arguments were so self-evidently correct that there is no need for any further discussion ;-) ;-)
In a way, that's true; I was preaching to the choir on the LL1 mailing list. However, there are many people who work on so-called end-user programming, and I don't think the negative side of this is generally appreciated. That negative side is (in brief) that no matter how slick the GUI or how unintimidating and English-like the language syntax, issues of algorithm design and program design are simply *hard* and cannot be dodged no matter how hard you try.
Ehud Lamm - Re: Small Time Languages
3/14/2003; 1:02:53 AM (reads: 2629, responses: 0)
We discussed end-user programming in the past, and this was the general opinion.
I think that in some respects this is too strong. When I think about end-user programming I think about DSLs, and domains that don't involve clever algorithmic tools. And I've seen such tools (in various organizations) that were put to good use and had great ROI.
Johannes Grødem - Re: Small Time Languages
3/14/2003; 9:35:43 AM (reads: 2456, responses: 0)
Noel: Regarding round-trip changes, I think the libglade approach works fairly well. It generates interface files (XML, ahem) which the application then loads; you just have to set up a table that maps events to your functions, and query a GladeXML object for the widgets you need.
from gtk import glade  # PyGTK's libglade bindings
widgets = glade.XML("myinterface.glade")
my_widget = widgets.get_widget("my_widget")
Etc.
Isaac Gouy - Re: Small Time Languages
3/14/2003; 11:55:21 AM (reads: 2424, responses: 0)
And it is our job as educators to teach them how to reason about code, and how to write code that can be reasoned about. Instead we teach them C++ and give them access to a debugger...
What can I say...
From the books and development tools I've peeked at (and without any experience of the language), my wish as a freshman would be to learn how to think about software by learning Scheme.
Anton van Straaten - Re: Small Time Languages
3/14/2003; 4:07:25 PM (reads: 2371, responses: 0)
On point 2 in the main topic, "Why does anyone think graphical 'programming languages' will make the hard parts of programming any easier?", in Wouter van Oortmerssen's thesis about his graphical tree rewriting language Aardappel, there's a section 2.4 entitled "Graphical Languages" which discusses some related issues, including problems with graphical languages - such as the difficulty of representing complex graphs without degenerating into spaghetti.
I've used graphical dataflow languages that have worked quite well - the interface for setting up visualizations in IBM's OpenDX visualization tool is a good example - a graphical interface is used to set up data transformations to massage input data into the final form needed for visualization. In Wouter's thesis, he observes that "Graphical languages are slowly becoming more popular, though not as general purpose languages, but usually as simple, domain-specific languages." The OpenDX graphical interface does in fact translate to an underlying text-based DSL.
I think this provides a promising model for sophisticated application user interfaces: provide a DSL that lends itself to graphical programming, and you might be able to teach people to use it without getting bogged down in the details of programming text-based languages. On a much more restricted basis, we do this kind of thing with GUI interfaces to e.g. config files.
None of this "makes the hard parts of programming any easier", but it could make for more powerful programs with richer, less restricted user interfaces.
Dominic Fox - Re: Small Time Languages
3/15/2003; 2:38:02 PM (reads: 2261, responses: 0)
When I have to code VB, which is more often than not, I use the debugger as a matter of course. Write, then compile to P-code and execute: that way, anything the compiler picks up on (glitches in syntax, a misspelled variable name, an improperly terminated string constant, etc.) can be corrected as the program executes; trivial errors that are only caught at runtime (like incorrect field names when you're accessing the contents of an ADO recordset pulled back from a database) are similarly instantly accessible to remedy at the point - during a trial execution run - at which they first occur.
This has next to no significance vis-a-vis my willingness or ability to reason about programming; it's simply a highly convenient way of straightening out empirical flaws in the code I've actually written. To put it another way, there is absolutely no necessary connection between being able to design an algorithm right first time and being able to spell a variable name right first time. VB's IDE can't help you with the former (and the language itself can positively obstruct you if you're trying to do anything even remotely sophisticated), but it can save you a lot of time when it comes to the latter.
Chris Rathman - Re: Small Time Languages
3/15/2003; 5:29:50 PM (reads: 2258, responses: 1)
Two assumptions are implicit within the discussion:
1). VB is an inferior programming language
2). VB programmers are inferior in their reasoning capabilities and rely on unnecessary crutches (debuggers, etc...)
While I'd agree with proposition #1, I'd probably also state that all programming languages have their flaws and limitations - even those considered by some to be beyond reproach (Scheme, Haskell, ML, etc...). Maybe for the problem domains with which one is presently concerned there are a variety of reasons to use a language other than VB, but quantification is fairly hard to come by, with most evidence being anecdotal or based on authority.
The second assumption is probably true as well, though I don't see how eliminating the debugger and relying on pure reasoning really improves the life of the average programmer. Yes, some programmers are lazy and throw code together with little thought or reason. And yes, it would be helpful to be more critical in the early stages of development. But the idea that a debugger is unnecessary is a bit more than I would accept.
To give an example, let's say we were speaking of designing analog or digital circuits in the field of electrical engineering - a field with much more discipline than software engineering will ever hope to attain. Even with that level of discipline, you don't see engineers frowning on reliance on oscilloscopes or logic analyzers. They are tools with which every engineer should be well versed.
Debugging is a feedback process. You make "guesses" about what response a given stimulus will produce, and you test that theory. It's much more efficient in the long run to have educated "guesses" based on logic and reasoning, but even those guesses, no matter how well reasoned, must be tested and examined. Last I checked, the number of variables involved in many, if not most, programming endeavors is massive, and many of these variables have to do with unknowns that extend well beyond the algorithmic expression within any programming language. For example, operating systems and libraries have a multitude of versions with quirks galore. A programmer must not rely on rationality alone, but also discover (stumble upon) many a situation that is seemingly irrational.
Ok, enough about that. The debugger may be a selling point for making VB "easy" to use, but I think it misses the main point of why VB is popular. There are basically two reasons that I see: forms design and database interfacing. VB brings forms design to the front and center of the programming experience. Events - the main difficulty of forms processing - can be handled with little to no thought. This isn't really a "programming language problem", but if you don't build an environment for building forms that is as intuitive and easy to use as the VB "environment", then you can speak of the limitations and quirks of the VB programming language all day long and remain totally oblivious to the main problem that most VB programs attack (hint: you spend less time on algorithms in this environment than on the design of user interfaces). The second aspect, database interfacing, may not be as obvious, but a significant percentage of VB programs are concerned with two primary tasks: reading/updating a database and presenting this information visually to an end user.
Anyhow, to make an already long story not quite so long, saying that VB programmers could write better algorithms in Scheme, Haskell, etc. is about like saying that people who use spreadsheets could write better algorithms in Scheme than they can in Excel. You can try to reason how superior you are to those poor "Excel" scripters, but I don't accept that the programmer who uses a fine functional programming language is intellectually superior to the lackey who uses Excel. They both use tools to solve problems. Attention to the details of the toolset is important, but the goal is problem solving. (Of course there are problems peculiar to programming as a design discipline where such attention is extremely important - but we are comparing different problem sets.)
As regards whether the VB forms paradigm helps with web programming, the answer is a definitive no. Microsoft has tried various things, such as ActiveX and the newer ASP forms processes. ActiveX failed for a number of reasons, including security. I've done some work with the new forms in ASP and have come away unimpressed. The problem is that web programming has a number of constraints that are the primary problem for any web application (statelessness, limited bandwidth, restrictions on client installation). Any web app must deal front and center with these issues; neither forms design nor algorithms is the primary constraint you have to deal with.
Isaac Gouy - Re: Small Time Languages
3/15/2003; 6:47:10 PM (reads: 2232, responses: 0)
Let me make clear what I think about these issues:
2) VB programmers are inferior in their reasoning capabilities and rely on unnecessary crutches (debuggers, etc...)
A long time ago, I was a runner. I learned not to make judgements about other runners' abilities from their current appearance - they might just have run slowly the length of the street; they might just have run 30 miles, fast.
Knowing that someone uses ye olde VB now tells us little.
Knowing that they've only ever used ye olde VB, suggests there are things they won't have learned about, because the language doesn't support them.
That doesn't tell us anything much about their potential; but it does tell us something about the size of learning step they will need to make, the habits they'll need to break.
1) VB is an inferior programming language
IMO this kind of statement doesn't say much until we fill in the context:
- compared to what other language?
- for what purpose?
the idea that a debugger is unnecessary
I don't think that's at all what's being said.
Even someone like me managed to write a few hundred lines with pencil & paper, have it transcribed to punched cards, and then run correctly first time - which was a great surprise at the time ;-)
You don't need a debugger to learn to think about programming, or to complete small programs. And relying on a debugger at that stage may mean you don't learn to think about the problem in other ways.
I agree with your comments on VB's popularity - VB made it easy to do the most common tasks on many projects. It was useful!
Chris Rathman - Re: Small Time Languages
3/15/2003; 8:56:00 PM (reads: 2232, responses: 1)
A long time ago, I was a runner. I learned not to make judgements about other runners' abilities from their current appearance - they might just have run slowly the length of the street; they might just have run 30 miles, fast. A very good analogy!
Knowing that someone uses ye olde VB now tells us little. Well, statistically speaking, it does usually convey a certain amount of information about the kinds of projects the person is likely to be engaged in. Unlikely to be cutting edge in terms of challenging problems in computer science, nor likely to be performance based. More likely custom programming for business processes, typically in a client/server framework - at least that's been my experience with VB and its practitioners.
Knowing that they've only ever used ye olde VB, suggests there are things they won't have learned about, because the language doesn't support them. Agreed. Part of the problem with programming - not just with VB but with any language - is developing a comfort level with the tool. This results in trying to use the language for programming chores for which it is suboptimal. Programmers will wrestle with the limitations of a language rather than switch to a new tool. New languages always imply a certain learning curve, at least for those who haven't been exposed to more than a single language.
That doesn't tell us anything much about their potential; but it does tell us something about the size of learning step they will need to make, the habits they'll need to break. Very much agreed. Having inherited VB code in the past that had to be maintained and extended, I can say that many of the VB programmers I've worked behind have some very annoying habits. For starters, scattering code across a series of asynchronous events often results in a maze of code and forms where the program is not thought out as a coherent whole. But this has more to do with the programmer than with the language itself.
I guess from a programming language standpoint, I'd speculate that good designers/programmers have a desire to gravitate to languages that are good. But economics tends to impinge on this process (when the choice is between economically thriving in a suboptimal language and intellectual desire to maximize one's capacity, the money wins out with a lot of people). On a macro level, the reverse process plays out in that companies will use languages with which they can hire talent at the lowest rates the market will bear - which usually means the languages which a mass of programmers are willing to tolerate or pursue.
I don't think that's at all what's being said. After rereading the comments, I'd say I probably misconstrued the discussion. My frustration is really not with the current thread, as most participants on LtU are quite cognizant of the issues and frustrations of programming languages. I have been dormant on these fora for what seems like forever - a year or a year and a half - owing to working two jobs.
Jumping in, I'm not really trying to defend VB as a language. As Ehud said some time ago, VB is not so much a language as it is a product. I think that to understand its success, you have to overlook its obvious flaws as a programming language and look at the type of problems it is used to solve. It's really not hard to find languages which are superior to VB, but as a product that tries to provide programming language capability on top of forms, the set of competitors is much smaller.
You don't need a debugger to learn to think about programming, or to complete small programs. And relying on a debugger at that stage may mean you don't learn to think about the problem in other ways. I must admit that I came into programming at the juncture in history where the switch was made. I seriously doubt that I would be a programmer today had we stuck with punch cards. Getting that Apple II was a liberating and eye-opening experience that changed my whole perception of the act of programming. Along the way, I have worked with tools that required an inordinate amount of time in the compile-run-recompile cycle - a 6809 embedded system that I programmed had a turnaround time of about two hours. Such turnaround did make me give much more upfront consideration to programming, as a couple of bad decisions could easily eat away some serious time.
Yet, I don't know that I'd recommend having today's programmers expose themselves to such resource constraints. There's got to be an easier way to teach valuable programming lessons, short of deprivation. :-)
Ehud Lamm - Re: Small Time Languages
3/16/2003; 1:58:28 AM (reads: 2238, responses: 0)
VB programmers are inferior in their reasoning capabilities and rely on unnecessary crutches (debuggers, etc...)
I for one didn't mean to imply any such thing.
As I said, there are uses for debuggers -- they are just not the best tool for beginners, who write simple programs.
I think all programmers (me included) should improve their reasoning skills. And programming languages should be designed to make reasoning easier, without making programming that much more cumbersome.
Ehud Lamm - Re: Small Time Languages
3/16/2003; 2:04:54 AM (reads: 2248, responses: 0)
Yet, I don't know that I'd recommend having today's programmers expose themselves to such resource constraints. There's got to be an easier way to teach valuable programming lessons, short of deprivation. :-)
One thing I notice over and over again is that for small programming tasks (the kind you find in introductory courses), using the debugger wastes much more time and energy than simply looking at the code (this is true even for beginners). And that's before you count the hours spent learning how to use the debugger.
Time and again I see that when a student learns the art of testing, design and incremental development, they realize they use the debugger less and less.
They started using it because at some point in time an instructor thought it easier to throw them at the debugger instead of explaining to them what the code actually means. In ninety cases out of a hundred the reason was that the instructor didn't have a clue either...
Chui Tey - Re: Small Time Languages
3/16/2003; 1:19:23 PM (reads: 2160, responses: 0)
In practice I have found a debugger indispensable, especially when gluing components together. Components are never bug-free, and it is often simply not an option to switch components (e.g. one is already used in a few other sub-projects, or no alternative has the same feature set, or cost rules it out); with no source for the component available, a debugger helps finesse a workaround.
This becomes even more critical in an event-based OO world, because bugs may exhibit themselves irregularly when timing is involved. For instance, I once had to work with a component that cleaned itself up between events, leaving me holding invalid pointers. I'm sure it could be debugged using lots of "print" statements, but a debugger helps meet deadlines. :)
Software is a craft and feedback is important. Just as the potter learns when the clay is too wet or too dry, a software developer needs to "get to know" the components he is dealing with. No one in a commercial setting would try to improve the performance of a piece of software by reasoning alone, without the help of a profiler; just as no one in a commercial setting should try to debug a program without a debugger.
Rudla Kudla - Re: Small Time Languages
3/17/2003; 12:04:38 AM (reads: 2143, responses: 3)
In the company where I work, we use VB as the primary development language, so I know VB very well. In my experience, there are three excellent things in VB:
1. Debugger
2. Editor (auto complete)
3. COM support
The debugger has been discussed already.
The way auto-complete in VB works is... magical. I think it is a strongly underestimated feature. It relieves the programmer from remembering exact variable, method and class names, etc. It is very important when using new libraries: if the classes and methods are reasonably named and designed, you don't have to study the documentation. Just write the object name and you see the list of methods, right where you need it. Write the first letter or two and the correct method is there. Start writing arguments and you see their names and types. You just have to experience this to really understand its strength. Sometimes it feels like the code writes itself.
The other thing is VB's excellent support for COM (in fact, if I remember correctly, the automation parts of COM were originally designed by the VB authors). And COM is the core of Windows programming. There is no other programming language with such seamless and excellent support for COM. For example, you can watch COM objects in the debugger: it calls all the methods for retrieving object properties for you and displays them. Of course this is not so special - all debuggers can do this for the structured data types of their respective programming languages - but a VB class is a COM class.
Excellent COM support also means extensibility. When necessary, experienced programmers can develop functionality in more advanced languages (C++) for less experienced programmers.
Ehud Lamm - Re: Small Time Languages
3/17/2003; 8:25:54 AM (reads: 2134, responses: 2)
Michael Vanier - Re: Small Time Languages
3/17/2003; 2:10:21 PM (reads: 2132, responses: 1)
I know that they defend the practice of debugging by using print statements in the code, something that is often ridiculed. The reason is that print statements are effectively a form of program trace which executes every time the code passes through a location.
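As a rough illustration of the print-statements-as-trace idea, here is a minimal Python sketch (the `traced` decorator and the `gcd` example are invented for illustration):

```python
import functools

def traced(fn):
    """Wrap fn so every call prints a line on entry and exit --
    in effect an always-on program trace, like sprinkled print statements."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print("TRACE: %s%r" % (fn.__name__, args))
        result = fn(*args, **kwargs)
        print("TRACE: %s -> %r" % (fn.__name__, result))
        return result
    return wrapper

@traced
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

gcd(12, 8)  # the trace shows the call and its result without a debugger
```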
Ehud Lamm - Re: Small Time Languages
3/17/2003; 2:15:37 PM (reads: 2167, responses: 0)
Great! That's what I like to do (ridiculed or not).
Anton van Straaten - Re: Small Time Languages
3/17/2003; 2:55:57 PM (reads: 2053, responses: 0)
Print statements or other logging/trace mechanisms are "better" than debugging because, assuming the information you need was logged, it's much quicker to find it and at least narrow the source of the bug down significantly. Debugging with interactive debuggers can be a big time-waster - a kind of self-imposed make-work. Think of it as having a good program trace, but going through it by displaying one line on the screen at a time, and having to issue a variety of different commands to move between lines (step over, step into, run till return, etc.)
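A minimal sketch of the logging alternative, using Python's standard `logging` module (the `apply_discount` function is a hypothetical example):

```python
import logging

# Configure once; the resulting "trace" can be scanned far faster than
# stepping through the same code line by line in a debugger.
logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("billing")

def apply_discount(price, rate):
    log.debug("apply_discount(price=%r, rate=%r)", price, rate)
    discounted = price * (1 - rate)
    log.debug("-> %r", discounted)
    return discounted

apply_discount(100.0, 0.15)
```

Unlike ad hoc print statements, the log level can be turned down in production and back up when an intermittent fault needs tracing.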
Of course, there are times when there just doesn't seem to be an alternative to using a debugger. But in my experience working with other programmers, in probably 90% of cases, the debugger is used as a crutch, even by quite experienced programmers. As anecdotal evidence of this, I can't count how many times someone has started to describe a problem to me, and then figured out the answer with little or no help. That happens because they're forced to think about the problem in order to describe it. Inexperienced programmers often can't describe their problem clearly, which *is* the problem - they don't understand what they're trying to do.
Isaac asked how to achieve "the programming level where I hardly ever use a debugger" that Noel described. One way to do this, which I still try to practice, is when you find yourself "needing" to use the debugger, ask yourself why. What is it about the program's behavior you don't understand, and why don't you understand it? What information are you missing? In some cases, you may not be able to answer that until after you've found the bug. Then you can ask what you can change to prevent that sort of bug in future: perhaps better exception handling (e.g. more granular), or logging of certain information. There might be times when a program redesign would be the only solution, and that's not always possible - but that doesn't negate the fact that you're using the debugger to compensate for problems in other areas.
Re teaching of beginners, denying them a debugger seems like a no-brainer. If you let them routinely use a debugger, they're not going to think about how the program works; and if they don't think, they won't learn anything except bad habits. I notice that the HTDP folk like Matthias Felleisen say similar things.
Of course, none of this is an argument against providing good debuggers in commercial products. People like them - but people like cigarettes and junk food, too, it doesn't mean they're good for you.
Isaac Gouy - Re: Small Time Languages
3/17/2003; 7:50:32 PM (reads: 2024, responses: 0)
the practice of debugging by using print statements in the code, something that is often ridiculed
Maybe it's best not to take too much notice once folk start ridiculing something? If they are unable to articulate what the downside is, maybe they just don't know... maybe it's just their idea of fun.
logging "better" than debugging
Logging is invaluable for tracing intermittent faults and debugging large multi-tier systems. Logging complements the use of interactive debuggers and all the other techniques that make up a software developer's toolset.
the debugger is used as a crutch
Anton, it seems like you're suggesting that these programmers have some deficiency that they make up for by using a source code debugger. What is this deficiency?
As anecdotal evidence
I imagine that most of us have had this experience. The phenomenon we witness even has a common name: tunnel vision. People get stuck down a particular line of thought, and having to describe the broader context, in order to explain the problem to someone else, allows them to exit the tunnel.
This anecdote says nothing about how useful interactive debuggers are.
when you find yourself "needing" to use the debugger, ask yourself why
Redundancy. It's a separate technique to check for mistakes. (I will already have checked the reasoning, and checked the code I wrote, and there'll be test cases. To err is human - check, check, and check again.)
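The "check, check, and check again" discipline can be sketched with a few plain test cases in Python (the `median` function is a hypothetical example):

```python
def median(xs):
    """The function under test."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

# Each case is an independent check on the reasoning: the expected
# answer is derived by hand, separately from the implementation.
assert median([7]) == 7
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
print("all checks passed")
```

The test cases are a redundant, second path to the answer; a mistake has to slip past both the reasoning and the checks to survive.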
People like them - but people like cigarettes and junk food, too, it doesn't mean they're good for you.
Well, at least it sort-of rhymes ;-)
Some folk dislike vitamins and health-food, it doesn't mean they're bad for you.
Dominic Fox - Re: Small Time Languages
3/18/2003; 8:13:50 AM (reads: 1984, responses: 0)
I find some of the pedagogical assertions made here rather questionable. There are myriad reasons why a student will choose not to think about something that requires thinking about; the thoughtfulness or thoughtlessness of the individual student is surely a significant factor.
I've programmed predominantly in one version or another of BASIC for the past twenty years (I'm now twenty-eight); if Dijkstra had been correct about the inevitable effects of exposure to this language on the minds of impressionable youth, then by rights my keyboard should now be swimming in drool. However, and contrary to the wise prognostications of my esteemed elders, neither resigned tolerance of crass language design, nor habitual use of a debugger, nor prolonged subjection to OO marketing hype has altogether succeeded in reducing my brains to mush.
Anyone who's any cop as a programmer will find ways to think outside of the particular environment they're programming in (good VB programmers will often experience this impulse as a kind of intense yearning). I've posted to LtU before about the anxiety that use of compile-time type-checking will make coders complacent about the correctness (in broader terms) of their code, and it seems to me that the notion that using a debugger will make coders forget to reason independently about program logic and flow of execution is a bogeyman of a very similar stripe. There is simply no reason why it should: compile-time type-checking reduces busywork, and so does the ability to debug code, mid-execution, at the exact point where a run-time error occurs. 90% of the time such errors have nothing to do with flawed reasoning about programming, and everything to do with glitches like mistyping a field name; the other 10% of the time you normally have to stop the debugger and go back to the drawing-board in earnest anyway.
|
|
Anton van Straaten - Re: Small Time Languages
3/18/2003; 10:15:52 AM (reads: 1974, responses: 0)
|
|
Anton, it seems like you're suggesting that these programmers have some deficiency that they make up for by using a source code debugger. What is this deficiency?
I'm saying they're indulging in an activity that in many cases is less necessary than they think it is - and less productive. I've done it myself, and I see it often in programmers I've worked with or supported. The reasons for that can be diverse. The "deficiency" might simply be that they are unaware that they might be capable of solving the problem in more effective ways. You gave one possible explanation:
The phenomenon we witness even has a common name: tunnel vision. People get stuck down a particular line of thought, and having to describe the broader context, in order to explain the problem to someone else, allows them to exit the tunnel.
Fine. But I'm saying that using a debugger when you're stuck in a tunnel vision mode is not usually the most efficient way to exit that mode - if anything, that's probably the time you're most likely to abuse the debugger and waste time with it.
I'm not saying debuggers are never useful - although, I've worked in environments without a debugger, e.g. embedded and server-side, and so I can say that a debugger is never essential. (That's another way to reduce one's debugger use!)
Your anecdote would apply just as well if the subject was mathematical proof.
It would, except that mathematicians, when struggling with a proof, don't immediately jump to letting a machine trace through it line by line. If they did, I expect the same basic issue would apply.
Re Dominic's concerns about mush-brained BASIC programmers, I'm not making any such claims. FWIW, I've done a fair amount of VB work myself, back when there weren't many other viable languages for Windows development; and I still use VBA from time to time.
I'm also not saying that there are never valid applications for a debugger - just that there's a certain kind of debugger abuse that I've seen many programmers indulging in.
compile-time type-checking reduces busywork, and so does the ability to debug code, mid-execution, at the exact point where a run-time error occurs. 90% of the time such errors have nothing to do with flawed reasoning about programming, and everything to do with glitches like mistyping a field name
Just to be clear, I'm not really talking about a situation where an error occurs, and your development environment drops you into the debugger on the line which errored. In that environment, if you can immediately tell what the problem is, if not from the error message itself then by inspecting a few variables, there may not be anything terribly wrong with that. In this situation, you're not using a stepping capability at all, so in a sense, all the debugger is doing for you is providing a snapshot of the error's termination environment - the call stack trace, variable values, etc.
However, there's a sense in which this can be a substitute for good exception handling, since in an ideal system, an error would be trapped and thrown upstream so that it can be put in context, i.e. instead of seeing "invalid name: foo", you get a message like "error while retrieving widget 563 - invalid name: foo". Of course, for an error like an invalid fieldname, you typically don't need that context. But in general, always relying on your language/debugger to provide all information about error context, as opposed to handling exceptions within the application, could be an example of what I'm talking about - this could lead to time wasted trying to figure out error context within the debugger, perhaps restarting the application and debugging from some earlier point, rather than simply determining the nature of the bug from an intelligently designed exception report.
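As a sketch of that "error with context" idea, in Python (the names WidgetError, load_widget, and fetch_field are hypothetical, purely for illustration):

```python
class WidgetError(Exception):
    """Application-level exception carrying context about the failed task."""

def fetch_field(record, name):
    # Stand-in for a lower-level lookup that fails with a bare error.
    return record[name]  # raises KeyError if the name is wrong

def load_widget(widget_id):
    record = {"id": widget_id}  # stand-in for a real database row
    try:
        return fetch_field(record, "foo")
    except KeyError as e:
        # Trap the low-level error and re-throw it upstream with context,
        # instead of relying on a debugger session to reconstruct it.
        raise WidgetError(
            f"error while retrieving widget {widget_id} - invalid name: {e}"
        ) from e
```

The caller then sees "error while retrieving widget 563 - invalid name: 'foo'" in the report, rather than a bare lookup failure with no indication of what was being attempted.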
This relates to the common situation I'm talking about: more extended debugging sessions where there's an upstream problem of more or less unknown origin. So if you're going to solve it with the debugger, you have to start at a point before the trigger problem occurs, and trace through the program until you detect something out of the ordinary. This can be quite time-consuming, and more often than not, it is not the most efficient way to resolve the problem.
As I mentioned, sometimes the cause of errors that require a debugger is outside of your control. For example, I usually try to ensure, in any significantly sized system, that things like fieldnames are statically checked. VB statically checks very little, so that's probably not an option. So perhaps VB does require heavier use of a debugger - that wouldn't surprise me at all. Perhaps it's no coincidence that this debugging discussion comes up relative to VB.
TDD combined with good exception handling and reporting might mitigate this. If you find that you use the debugger to solve a large proportion of errors, even of the small kind mentioned above, then you probably don't have a system that's very "agile" in the sense of supporting major refactorings with some degree of comfort that you'll be able to detect problems throughout the system early.
the other 10% of the time you normally have to stop the debugger and go back to the drawing-board in earnest anyway.
I'm saying that for many programmers I've encountered, the proportion is more like the other way around. I've also observed that the better programmers use debuggers much less often, and more effectively. And the worst rely on them almost exclusively because they can't really reason about programs. So I see a worst-best continuum there that's at least somewhat correlated to debugger use.
Finally, on the pedagogical issue:
There are myriad reasons why a student will choose not to think about something that requires thinking about; the thoughtfulness or thoughtlessness of the individual student is surely a significant factor.
Sure - and a good teacher should guide them to learn about the subject, help them to think about it in valid and useful ways, learn good habits, and maximize their potential. None of this is achieved with *routine* use of a debugger in a "let's step through every line of this program so we can see what it does" way - not to mention that this isn't likely to be the most productive use of class time.
|
|
Dan Shappir - Re: Small Time Languages
3/18/2003; 11:11:21 AM (reads: 1980, responses: 0)
|
|
My personal first encounter with VB came with version 1.0 of that product (and product is indeed the proper term here). At that time I found VB amazing for one simple reason: creating Windows applications was so damn easy! The alternative at the time, using C/C++ and Win16 was so painful that VB seemed to be magic.
I didn't do any additional VB till version 4 or 5, and by that time the scene had changed quite a bit, but VB, with the possible exception of Delphi, remained the easiest way to do Windows. Add to that the items already mentioned, such as event-driven (UI) development, powerful database support with data binding, and a very friendly environment with auto-complete, etc.
To summarize my viewpoint with regard to VB: it's a great environment, and it's a shame Microsoft chose the BASIC language to drive it. I guess it's because BASIC holds a special place in Bill Gates' heart...
|
|
Dominic Fox - Re: Small Time Languages
3/18/2003; 1:50:46 PM (reads: 1976, responses: 2)
|
|
A small reference footnote: here is some VB code for executing an SQL query against a database, returning a recordset, and accessing the value of a single field of a single record:
Public Function GetCustomerName(oConnection As ADODB.Connection) As String
Dim oCmd As New ADODB.Command
Dim oRS As ADODB.Recordset
With oCmd
Set .ActiveConnection = oConnection
.CommandText = "SELECT TOP 1 * FROM Customers"
.CommandType = adCmdText
Set oRS = .Execute
End With
GetCustomerName = oRS![Customer_Name]
End Function
If we have mis-typed Customer_Name as CustomerName (although a consistent field naming scheme should help us to remember that we are using underscores as separators), then this will not be caught until run-time.
|
|
Ehud Lamm - Re: Small Time Languages
3/18/2003; 1:58:59 PM (reads: 2025, responses: 1)
|
|
People interested in this should check the recent reference about System R.
Static SQL has, for ages now, been compilable so that errors such as these are caught, as well as allowing for optimizations. Obviously, most systems also support dynamic SQL.
I think VB is great for some things (it is, indeed, the easiest way to produce Windows GUIs I know of).
But when it comes to SQL, I recommend you compare the above code with the equivalent SchemeQL (full disclosure: I never did manage to get SchemeQL to work with an Access database).
|
|
Dan Shappir - Re: Small Time Languages
3/19/2003; 12:43:16 AM (reads: 2052, responses: 0)
|
|
I'm all for VB bashing, but in this case the criticism is unfair.
As Ehud pointed out, most systems support dynamic SQL, and that is exactly what this sample is. Writing oRS![Customer_Name] is just shorthand for oRS.Fields("Customer_Name"). Thus, it's not surprising that the field name was not checked at compile time.
However, you don't have to work this way with VB. Instead you can use the Data Environment to model your code directly from the SQL schema (what Ehud termed Static SQL). This environment contains many amenities such as an interactive query builder.
In fact, you can create DB frontends in VB without ever touching the keyboard using the Data Environment and Data Binding. In such cases you obviously won't encounter runtime errors because of field names (unless someone changes the names behind your back).
I think such errors are much more likely to occur when doing direct DB access in ASP. For that reason Microsoft has added a Data Environment and Data Binding to ASP.NET.
|
|
Isaac Gouy - Re: Small Time Languages
3/19/2003; 9:20:58 AM (reads: 1923, responses: 0)
|
|
I'm all for VB bashing
My appetite for bashing seems close to sated for this month ;-)
Noel Welsh stated: "Limiting the language can be a good thing" (although it's nice to be able to escape from a DSL to full Scheme) and in the LL1 discussion Daniel Weinreb suggested an 80:20 rule - most VB users only need to use a small fraction of VB to complete their tasks.
Could it be that Visual Basic holds mind-share over such a large group of programmers because the VB product group do continuous market research? They know what sort of things the VB product is used for and they make it easier to do those things.
Given the enormous investment (into QA, documentation, ... not just marketing) that goes into programming products, maybe the surprising thing is that niche programming languages are as popular as they are.
|
|
Anton van Straaten - Re: Small Time Languages
3/19/2003; 12:50:28 PM (reads: 1919, responses: 0)
|
|
Sure, VB succeeds because it makes certain things easy. The problem is that the things it makes easy are mostly in the realm of things like its IDE, creating user interface, etc. All things that can be made easy by throwing money and market research at it, as you say.
That would be fine if VB were a limited purpose tool, but it isn't. Once you use it to easily create a set of GUI forms, you're stuck with VB for a lot of other things too, things that it's not as well suited for. Yes, you can implement other things in other languages and use them from VB with COM, but that raises its own set of messy issues, and certainly doesn't simplify matters.
What VB(6) *doesn't* make easy is implementing complex, interrelated abstractions - to paraphrase someone's metaphor on the LL1 list, VB abandons you just about as soon as you achieve liftoff and get out of ground effect. I think you'll find most VB programmers who know almost *any* other language will agree with this; or at least agree that there are frustratingly many things they wish they could do, but can't easily do in VB. (BTW, much of this criticism may change with VB.NET, which is a whole different language, semantically.) From this viewpoint, VB can be seen as a disservice to the programmers who use it, since it may be holding them back from making full use of their capabilities.
The reason for these weaknesses, even given all the money that's been thrown at VB, is that you can't just fix a language by adding features identified by a list of marketing bullet points. For example, you can't just say "OK, let's add implementation inheritance to VB" without requiring a major redesign of big pieces of the foundation (exactly what has been done with .NET). Designing a good general purpose *language* (as opposed to a decent scripting system for GUI forms) isn't easy.
Which is why it's not really surprising that niche languages can still thrive - because they work better in many areas that can actually matter more, or they do more with less complexity, less code, less busywork. .NET will probably change this whole picture quite substantially, most likely for the better, but the .NET semantic model still leaves plenty of room for smarter languages to excel.
The real question to ask about what VB has or hasn't done right is how successful it would be if it came from a company other than Microsoft. Based on some of its earlier incarnations (who here remembers VB 3.0?) it would have been much less likely to succeed in a free marketplace - its survival was in large part due to being the sanctioned solution from the company with the monopoly on the platform.
|
|
Isaac Gouy - Re: Small Time Languages
3/19/2003; 2:06:12 PM (reads: 1893, responses: 0)
|
|
That would be fine if VB were a limited purpose tool, but it isn't.
Anton, you also say there are a lot of other things that VB is not well suited for - doesn't that say VB is a limited purpose tool?
One of the things we can become good at is knowing where the sweet-spot is for different technologies, and choosing between them appropriately.
|
|
Anton van Straaten - Re: Small Time Languages
3/19/2003; 2:51:11 PM (reads: 1889, responses: 0)
|
|
My point is that although VB should be limited to certain tasks that it's good at, like designing GUI user interfaces, in practice, a tool that does only this is utterly useless. You need to have some functionality behind those interfaces, however that is achieved. VB is not good at achieving that part. As such, the sweet spot for VB as a programming language is very limited. All the people who use it in ways that go beyond that sweet spot are being made to suffer in ways they may not even realize.
Dominic's code above is a good example (no offense to Dominic, who's already described his use of VB with a "muffled sob", iirc). I'd say, why not use an object/relational mapping layer so that you can simply say:
cust = Customer.load(id);
cust.getName();
...without having to ever write any code specifically for retrieving the customer name from the database? VB6 doesn't lend itself to this sort of thing, so you end up with ten times as much code as you need for a given task. What seems so easy when you start laying out a form, ends up costing you on the back end, which often matters more.
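A minimal sketch of such a mapping layer, in Python with an in-memory SQLite table standing in for the real database (the class, column, and method names are hypothetical, not any particular library):

```python
import sqlite3

class Customer:
    conn = sqlite3.connect(":memory:")  # stand-in for a shared connection
    conn.row_factory = sqlite3.Row      # rows become name-addressable

    def __init__(self, row):
        self._row = dict(row)

    @classmethod
    def load(cls, cust_id):
        # One generic query replaces per-field retrieval functions
        # like GetCustomerName above.
        row = cls.conn.execute(
            "SELECT * FROM Customers WHERE id = ?", (cust_id,)
        ).fetchone()
        return cls(row)

    def get_name(self):
        return self._row["Customer_Name"]

# Demo schema and data.
Customer.conn.execute("CREATE TABLE Customers (id INTEGER, Customer_Name TEXT)")
Customer.conn.execute("INSERT INTO Customers VALUES (1, 'Acme Ltd')")

cust = Customer.load(1)
print(cust.get_name())  # Acme Ltd
```

The point is that retrieval logic lives in one place; no code is ever written specifically for fetching one field of one entity.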
One of the things we can become good at is knowing where the sweet-spot is for different technologies, and choosing between them appropriately.
Sure. But the whole question about the mindshare for VB relates to questioning the size of its sweet spot - is it really the sweet spot for all the applications it's used for? I'd say no, not by a long shot.
|
|
Dominic Fox - Re: Small Time Languages
3/19/2003; 3:59:47 PM (reads: 1887, responses: 0)
|
|
It was while I was building an object/relational mapping layer in VB that the deficiencies of the language most pungently impressed themselves upon me. Consider: you have a whole bunch of entities which the database represents as rows in tables, and which you want to represent as objects with accessor methods not only for their properties but also for collections of other entities that would be returned by a query based on the parent object, e.g. Customer.Orders, which returns a collection of Order objects derived from the resultset returned by SELECT * FROM Orders WHERE CUSTOMER_ID = <id of the customer>. (We'll gloss over questions like caching, sharing updates between different objects tied to the same entity, and other such issues.) The code for one of these objects is going to look a lot like the code for most of the others, so you really want to share some common stuff around - but there's no implementation inheritance, no higher-order functions, basically no way of specialising the behaviour of a "base" object except writing a wrapper that delegates to it, and you have to write the same delegation code over and over again...
I swear I could have done the whole thing in Python in a tenth of the time.
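For comparison, a sketch of that sharing in a language with implementation inheritance (Python; the Entity base class, Order, and the schema are hypothetical, just to illustrate the shape of the solution):

```python
import sqlite3

class Entity:
    """Shared base class: every table-backed object inherits this machinery."""
    table = None  # overridden per subclass
    key = "ID"

    def __init__(self, db, row):
        self.db = db
        self.row = dict(row)

    def children(self, cls, fk_column):
        # One generic method builds any child collection; no per-class
        # delegation code has to be written over and over.
        sql = f"SELECT * FROM {cls.table} WHERE {fk_column} = ?"
        rows = self.db.execute(sql, (self.row[self.key],))
        return [cls(self.db, r) for r in rows]

class Order(Entity):
    table = "Orders"

class Customer(Entity):
    table = "Customers"

    @property
    def orders(self):
        # The Customer.Orders accessor from the post, one line per subclass.
        return self.children(Order, "CUSTOMER_ID")

# Demo schema and data.
db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row
db.execute("CREATE TABLE Orders (ID INTEGER, CUSTOMER_ID INTEGER)")
db.executemany("INSERT INTO Orders VALUES (?, ?)", [(1, 7), (2, 7), (3, 8)])

customer = Customer(db, {"ID": 7})
print(len(customer.orders))  # 2
```

Each new entity is a few lines of declaration; in the VB6 version described above, every one of those accessors would be hand-written delegation code.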
|
|
Isaac Gouy - Re: Small Time Languages
3/19/2003; 6:19:25 PM (reads: 1871, responses: 0)
|
|
designing GUI user interfaces, in practice, a tool that does only this is utterly useless
Limited, yes. Utterly useless, no.
It's quite reasonable to use VB for a rich client, keep the business functionality on servers, and connect them together through a messaging system.
questioning the size of its sweet spot
Which gives us at least 2 approaches, increase the size of the sweet spot, or choose a more appropriate technology.
Dominic, how would you have done this before VB got objects? Do you think it would be similar to how it might work out with PHP?
|
|
Dominic Fox - Re: Small Time Languages
3/20/2003; 12:43:04 AM (reads: 1881, responses: 0)
|
|
PHP does have classes, also a fairly direct link into the MySQL API (which I've always found slightly weird, but maybe that's just me). I don't know if I'd want to build a large library of little PHP classes for use in a server-side scripting context, though; but this may just be a prejudice of mine about PHP, that it's best suited for small-and-light implementation of stuff that doesn't really need to scale. It's a similar scenario with ASP; I've tended to define a class or a couple of functions for standardised data access, but used "raw" recordsets manipulated by global functions to represent data entities. On the other hand, I haven't worked on anything that large using either technology.
VB without objects just sounds like a total loss to me...but I guess you could have done something with a set of modules full of global functions and UDTs and passed handles around in place of object references - much like the Win32 API.
|
|
Noel Welsh - Re: Small Time Languages
3/20/2003; 7:15:40 AM (reads: 1872, responses: 0)
|
|
How did you achieve that? What are you doing instead? TDD?
I still can't break the habit of checking what I just wrote by stepping through in a debugger.
When a project is working the way I like it to, I'm doing TDD. It's almost the same thing as stepping through with a debugger, just with the actions saved and rerun automatically.
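As a sketch of that idea in Python (the function under test is hypothetical): each assertion is a check you would otherwise have made by eye while stepping through, saved so the whole sequence reruns automatically.

```python
def parse_version(s):
    """Tiny function under test: '1.2.3' -> (1, 2, 3)."""
    return tuple(int(part) for part in s.split("."))

def test_simple():
    # What you'd verify by inspecting locals in a debugger, made permanent.
    assert parse_version("1.2.3") == (1, 2, 3)

def test_single_component():
    assert parse_version("10") == (10,)

# Rerunning the suite replays every saved "inspection", every time.
test_simple()
test_single_component()
```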
|
|
Isaac Gouy - Re: Small Time Languages
3/20/2003; 7:34:43 AM (reads: 1851, responses: 0)
|
|
without objects
Maybe I've been working with objects for long enough to be open-minded to other approaches - just to see if they might offer better solutions.
Philip Greenspun is often celebrated for Greenspun's 10th rule. However, his writings (1998) seem more pragmatic than that would suggest:
"Yet nobody uses Common Lisp for server-side scripting. Is that because Java-the-hype-king has crushed it? No. In fact, to a first approximation, nobody uses Java for server-side scripting. Almost everyone is using simple interpreted languages such as Perl, Tcl, or Visual Basic... My computer science friends would shoot me for saying that Tcl is as good as Common Lisp and better than Java. But it turns out to be almost true."
I find it refreshing to be reminded that it can be reasonable to model everything in the database and avoid OO and O/R layers.
|
|
Anton van Straaten - Re: Small Time Languages
3/20/2003; 8:21:00 AM (reads: 1831, responses: 0)
|
|
designing GUI user interfaces, in practice, a tool that does only this is utterly useless
Limited, yes. Utterly useless, no.
If you reread my post beyond the part quoted in italics, you'll see I wasn't saying that VB was utterly useless - I was saying that any tool that does only interfaces, with no ability to provide functionality behind them, "however that is achieved", is utterly useless (except possibly as a demo creation tool). What I said about VB was that "the sweet spot for VB as a programming language is very limited".
It's quite reasonable to use VB for a rich client, keep the business functionality on servers, and connect them together through a messaging system.
Being aware of numerous projects that have used this approach, having consulted on some of them, and having helped clients move away from VB to avoid the problems that this approach entails, I strongly disagree that it is "quite reasonable" to use VB in this way. Isaac, have you actually used VB this way yourself, or are you just speculating?
An immediate example of why it's not "reasonable" would be the O/R mapping example discussed above. VB bound controls, even from third-party sources, tend to be designed to be bound directly to a database. If you want to bind your controls to an object-based source that comes from outside of VB, you're talking about some significant custom work, and abandoning standard VB idioms. So the assumptions and restrictions of VB start to infect the supposedly more powerful back end.
Which gives us at least 2 approaches, increase the size of the sweet spot, or choose a more appropriate technology.
The latter is a good idea. :) Still, as I've said, I suspect that VB.NET will increase the size of the sweet spot quite a bit - but that's because it's really a new language, with a strong syntactic resemblance to VB, on top of a completely different semantic core. The breadth and depth of the change can be seen in the difficulty of migration between VB6 and VB.NET.
"Old" VB is being consigned to the trashcan of history, where it belongs (it'll be interesting to see how many users refuse to abandon it). Trying to increase the size of its sweetspot, without a complete redesign, would have been misguided.
This is actually one of the traditional fates of a product like VB - it starts out as a quick fix to an immediate problem (creating Windows GUIs), accumulates cruft as its weaknesses are shored up, and eventually has to be thrown out and replaced, even if migrating from old to new requires significant work.
PHP is quite similar. It only thrived because it provided a quick and accessible solution at a time when there weren't many easy solutions to the problem it addressed. It's likely to go through something similar to VB - it'll either need major revamping, or it'll eventually be replaced by more powerful tools that have become equally accessible.
Tools like VB and PHP have been designed with accessibility as a primary goal. They succeed at that goal. That doesn't mean that their underlying technology should be admired or emulated. Their underlying technology merely represents the line of least resistance to the designers at the time, and that expedient choice usually haunts such tools, and limits them even in the areas in which they're supposed to excel.
|
|
Anton van Straaten - Re: Small Time Languages
3/20/2003; 8:36:09 AM (reads: 1819, responses: 0)
|
|
I find it refreshing to be reminded that it can be reasonable to model everything in the database and avoid OO and O/R layers.
Ha! I dare you to try this. You can do it, but the cost can be staggering. Prepare to devote your career to busywork - rewriting the same SQL over and over in slightly different ways (where is SchemeQL when you need it?), working around the lack of any central place to put reusable logic (stored procedures aren't enough), I could go on but I've already written enough in this thread. I've been there, and done that, and burnt the t-shirt in a desperate attempt to forget the horror.
It's worth noting that since Greenspun wrote that in 1998, he sold/lost control of his company and the code was all converted to Java, and Java is now rather dominant for the sorts of applications Greenspun's company implemented. What Greenspun, along with others, hit on was a quick and dirty way to get stuff done fast. It wasn't maintainable or sustainable, and history since then has proved that.
I'd be the last person to argue that current mainstream technology is ideal. But that doesn't mean that older, more primitive mainstream technology is preferable.
|
|
Isaac Gouy - Re: Small Time Languages
3/20/2003; 8:40:09 AM (reads: 1805, responses: 1)
|
|
Isaac, have you actually used VB this way yourself
Yup. Been there, done that, love my MOM.
It only thrived because it provided a quick and accessible solution at a time when there weren't many easy solutions to the problem it addressed.
Let's go further "X only thrived because..."
Providing quick and accessible solutions is a good thing.
We'd like tools that make simple tasks quick and easy, and make difficult tasks possible. Too often, it seems that we are forced to choose one or the other.
working around the lack of any central place to put reusable logic (stored procedures aren't enough)
"All modern RDBMSes provide for the execution of standard procedural languages within the database server... pioneered by Oracle with PL/SQL and then Java..." Greenspun version 2?
sold/lost control of his company
Now there's an interesting story ;-)
Java is now rather dominant
Java and .Net both a disaster: research
written enough in this thread
We should have some pity on LtU readers ;-)
|
|
Brent Fulgham - Re: Small Time Languages
3/20/2003; 9:23:40 AM (reads: 1796, responses: 0)
|
|
I'm not sure that it's valid to draw the conclusion that Greenspun was wrong about Tcl (and Java) from what happened to Ars Digita. He was quite open in admitting that the development of Java tools was a purely marketing move (along with an abandoned port to MS IIS and SQL Server) so they could compete in bids where RFPs explicitly stated Java was a requirement. It was not done for technical reasons.
It's interesting to read the (probably one-sided) impressions of Eve Anderson on the rise and fall of Ars Digita, RIP. But that's more a story of social dynamics, Machiavellian boardroom maneuverings, and hubris than the technical issues we discuss here.
If you look at the design of ACS (now only available as OpenACS), you will see that it has a quite strong Object Oriented design, implemented on top of Tcl. In fact, I would argue that this fusion of styles is probably far superior to a naive implementation one might create in Java or C#. The combination of functional and OO styles makes the code and corresponding data model quite clear. In contrast, the OO layer used in many Java-based platforms can be obfuscating and convoluted. As we have discussed here before, the OO paradigm does not bind in an intuitive manner to relational databases.
Perhaps ACS is just a great design using lower-quality tools, and perhaps my bad attitude towards Java and C# is the result of having to maintain really bad designs with good tools. But I know which software has been easier to use and maintain in my experience.
Oh, and Ars Digita turned a profit every year until they started using Java. Then they entered a tail-spin as they became less responsive to customers, began fighting the venture capitalists funding the conversion (which was only for marketing purposes, remember), and the rest is history. (But of course, that is nothing more than a mean-spirited anecdote!) :-)
|
|
Anton van Straaten - Re: Small Time Languages
3/20/2003; 10:48:44 AM (reads: 1791, responses: 0)
|
|
Providing quick and accessible solutions *is* a good thing, but we should recognize when better alternatives have become available, use them, and throw away the useless old junk! We're so often forced to choose one or the other, because once people have their hands on a quick fix, they tend to defend it to the death, regardless of its flaws - they even try to make virtues out of its flaws. That's not the way to achieve progress.
The headline of the article Isaac quoted ("Java and .NET both a disaster") is rather misleading - a typical journalistic troll, really, since the headline doesn't follow from the contents of the article. The article says that there has been a high percentage of unsuccessful "large scale" Java implementations. I can think of numerous reasons for that - the large scale alone is one - and none of them is an argument for going back to 1998 and using Tcl to script databases. Note that the Gartner report itself is not recommending some alternative solution.
Regarding having pity on LtU readers, I agree. I'd rather be discussing innovative ways that the problems we've been discussing are being or have been addressed, rather than disagreeing on just exactly how much VB, PHP, or Java sucks (whether on their own or relative to each other).
So in that vein...
Isaac mentioned going back to using more or less direct access to relational databases. That could actually work very well, if relational databases had more power built into them, i.e. were not restricted as much as they currently are by SQL. In that area, functional languages make a lot of sense, as SQL itself demonstrates. We need a powerful functional language within the database, though, one which provides good abstraction capabilities in addition to good querying capabilities. Without that, the "semantic gap" between the database and the next tier is never going to be any more manageable or less complex than it is now. O/R mapping is just an attempt to graft abstraction capabilities on top of the database, that really belong inside it, and a lot of the complexity and restrictions we have to deal with in that environment arise from that fundamental hack.
Postgres was actually an interesting step in a similar direction, but I think that picking the O/R direction as it did was (in hindsight) a mistake. Many of Stonebraker's goals mentioned in The Design Of Postgres (1986) could be much more easily and cleanly achieved with a functional approach, than with O/R - not necessarily because a functional approach is so much better, but because it's more suited to dealing with relational data without an impedance mismatch.
Some of the most successful and long-lasting languages embedded in larger systems are functional in nature, and this seems to arise as a fairly natural consequence of the job they have to do. SQL is one; others are the spreadsheet languages found in e.g. Excel, as covered by a Peyton Jones paper previously posted here (User-defined functions in Excel).
The specific application for ML was mathematical proofs (more or less), but even lacking such a specific goal, general purpose functional languages constrain themselves by attempting to be formally sound. Mainstream, general purpose languages acknowledge hardly any constraints, except what's possible or impossible - and the results, not surprisingly, are less than impressive.
Back when SQL and Excel were first designed, knowledge of what functional or formally-based languages were capable of, and how to achieve that efficiently, wasn't widespread (to say the least). A redesign of these systems today could obviate a lot of the things we currently do with more general purpose languages, and simplify many systems dramatically. This will happen eventually, because it's the only direction in which real improvement can be achieved - we're not going to get anywhere by designing ever-larger APIs to compensate for foundational weaknesses.
|
|
Ehud Lamm - Re: Small Time Languages
3/20/2003; 11:02:30 AM (reads: 1830, responses: 0)
|
|
All modern RDBMSes provide for the execution of standard procedural languages within the database server... pioneered by Oracle with PL/SQL and then Java
Is anybody here working as DBA in a large enterprise? If so, they can comment.
If not, and people ask, I can try to dig up some references showing why this is a very bad idea.
When I was a DBA, my team quickly realized how problematic this approach (stored procedures as well as triggers) can be. Eventually, of course, Oracle won the day, so people started using these, and all the problems we feared started appearing. But I had left by then, so I have little first-hand experience.
|
|
Dominic Fox - Re: Small Time Languages
3/20/2003; 1:13:32 PM (reads: 1777, responses: 1)
|
|
Ehud, I would actually be quite interested to read about why stored procedures are a bad idea. I've used them exclusively, in preference to the available alternatives, for a while now, basically because they save having to think about cursors and locking issues (if you want to do a DB update, you call a stored procedure, passing in some parameters), and because I gather that stored procedures are pre-compiled and execute faster (not that I've ever written code in an environment where this would become an issue). What are the bad things about them? Is it that programmers tend to revert to a more imperative style when one is available to them (as in T-SQL), and end up tying up RDBMS resources, which should be used to select and serve data, executing routines instead?
|
|
Ehud Lamm - Re: Small Time Languages
3/20/2003; 2:02:44 PM (reads: 1798, responses: 0)
|
|
It's a complicated issue, and mostly irrelevant to LtU, so I'll be brief.
Two main concerns: first, the programming style is problematic from a software engineering point of view (code is scattered across multiple databases, ensuring consistency of the system is more complicated, application code and system code mingle, etc.). The second aspect concerns the database itself (and the DBA's job): more overhead, which usually means worse performance (this depends on architectural details, of course, but in general the database runs on one machine/address space/processor while clients are distributed), and more difficult system management (e.g., updating application code may require shutting down the database, etc.).
All these issues are, of course, only part of a more complicated picture. You really need to think about your overall system architecture. Enterprise systems are usually complicated beasts, with complicated organisations behind them, meaning there are many issues to consider.
As someone who comes from the system/DBA camp, I tend to push as much functionality as possible towards the application programs, and away from the parts of the system I control (since application programmers will always hurt the reliability and scalability of my system)
|
|
Brent Fulgham - Re: Small Time Languages
3/20/2003; 3:03:49 PM (reads: 1770, responses: 0)
|
|
Could you elaborate on your problems? I have experience working on two releases of an "enterprise web application" coded in C#. One phase made heavy use of stored procedures and triggers, while the second makes use of dynamic SQL generated in C# classes, which is then passed to the RDBMS to retrieve the data.
In my opinion, the stored procedure version provides much better partitioning between code and data, and is consequently easier to modify and MUCH easier to debug in the field. The dynamic SQL version really resulted in the replication of a lot of RDBMS functionality within C#, which seems like a bad idea to me.
I think that languages like Dylan, CLOS, or Goo may provide a much better model for building these kinds of applications because they allow code reuse to be orthogonal to class implementation. What I like about that is we can define classes to represent sets of data, then define methods that work on combinations of those data sets. This cleans up a lot of the crufty middleware code needed to marshall/demarshall stuff in and out of the database if your object representation is not a 1:1 match to your DB schema.
|
|
Anton van Straaten - Re: Small Time Languages
3/21/2003; 7:20:22 AM (reads: 1763, responses: 0)
|
|
I'm with Ehud, on being strongly against major use of stored procedures, if it can be helped.
I've listed some of the areas in which I've seen problems, below. I'm talking in particular about stored procedures written in the database SQL language, but most of it also applies to procedures written in e.g. Java - calling an external language from within the database is no panacea.
- Very limited abstraction capabilities.
- ...which results in things like repeated query subpatterns that are difficult to share efficiently/effectively/manageably across queries and procedures.
- Use of non-relational data patterns, such as trees (even very shallow ones) and temporal data, is not well handled at the stored procedure level, due to lack of abstraction capabilities. It doesn't help much to call into an external language - too many things work against you, including level of granularity - in general, the usual impedance mismatch/semantic gap.
- Limited encapsulation - creates fragility.
- Reuse is very poor, for the above reasons and more. If you want to see examples of the cut-and-paste antipattern, look no further than your nearest stored procedure repository. There are ways to achieve reuse, but they all suffer from the issues mentioned above.
- Management and structure of codebase - much more inflexible than program code in other languages. Poor organization abilities.
- Ability to refactor is nonexistent - codebases tend to be rigid, difficult to change.
- Reasoning about applications which make heavy use of triggers can be a problem. Disciplined use of triggers for the things they're best suited for can be OK, but it's easy to go too far, using them to try to compensate for other deficiencies such as the above, and then find that it's difficult to figure out where something is happening.
- For you debugger fans out there, debugging a problem involving multiple stored procedures and triggers tends to be a lot less easy than debugging the equivalent program code in a decent language.
I think you can get away with stored procedures in very vanilla systems, where the procedures are relatively small and not doing anything particularly complicated. If there's a close mapping between your table structure and the entities that the user interface is dealing with, i.e. not a lot of transformation has to happen between the database and the front end, then stored procedures will work (though they're probably hardly necessary, either). An example of this might be a typical web store, with Customers, Products, Orders, OrderItems - all things that map directly onto the final screens. However, these sorts of applications are essentially trivial, and almost any reasonably well thought-out approach will work.
If, however, your applications contain abstractions which, when normalized in a relational structure, result in a lot of tables which the user wouldn't necessarily recognize, and thus data needs to be assembled in various ways to turn it into higher-level entities for further processing and display to the user, then stored procedures are double-plus ungood, for the kinds of reasons listed above. That's just one characterization of the kind of scenario where this is true; there are probably plenty of others.
Code size is an issue. I'm currently working on a financial services back-office system that has 11,000 lines of SQL in stored procedures, which, as far as I'm concerned, has exceeded the level at which it can be managed effectively. The biggest problem, and the slowest part of making changes to this system, is when the stored procedures need to be changed in a non-trivial way.
Plenty of big companies make heavy use of stored procedures for some complex applications. But when they succeed, they do so by throwing money and people at these systems, and the end results can leave a lot to be desired. If your goal is to work smarter, not harder, using stored procedures for complex applications is not an effective approach.
|
|
Isaac Gouy - Re: Small Time Languages
3/21/2003; 7:52:48 AM (reads: 1729, responses: 0)
|
|
elaborate on your problems -- strongly against
When questions about the value of a particular approach (or language feature, given that this is LtU) are raised, what would be really valuable to me, is a discussion about trade-offs - rather than a list of problems or reasons not to do something.
stored procedures for complex applications is not an effective approach
(I understand that this was a response to the particular phrasing of a question.) So what would be more effective approaches, and what are the trade-offs?
|
|
Brent Fulgham - Re: Small Time Languages
3/21/2003; 9:44:35 AM (reads: 1738, responses: 1)
|
|
Ehud:
The programming style is problematic from a software engineering point of view (code is scattered around multiple databases, ensuring consistency of the system is more complicated, application code and system code mingle etc.)
How is this problem at all helped by moving logic from stored procedures into the application level? You are more likely in this model to have multiple "application" servers running the source code, rather than having stored procedures in a single database (or mirrored system). I can see your point vis-a-vis performance, but I believe invoking the idea of "scattered code" to be unconvincing.
One of the recurrent themes in Date's writing is the idea that the database should contain mechanisms to maintain data integrity. From that standpoint, triggers and stored procedures seem to belong in the database, since this helps to ensure data integrity. It becomes far less likely that a poorly written application program can munge up the data if it is calling carefully crafted stored procedures to get and set data.
Anton:
I think we agree on most points. I don't believe that the business logic of an application should reside in the database. However, far too often I see the application layer filled with sorting/searching functionality that I believe belongs in the database. A product like DB2 has millions of dollars of research and development behind making queries and updates fast, reliable, and recoverable. A software house working on an application program is foolish to spend its resources replicating this functionality in the application level, and will more often than not end up with subtle bugs that will corrupt the database.
I think the real problem we are all wrestling with is the fact that modern RDBMS's do not yet support the abstractions we need to write good software.
|
|
Anton van Straaten - Re: Small Time Languages
3/21/2003; 10:03:55 AM (reads: 1753, responses: 0)
|
|
Isaac, I understand where you're coming from, but I feel a lot stronger about stored procedures than I let on. The pure procedural approach to programming, which is what stored procedures offer a rather weak version of, was something we were already looking for better alternatives to in the '80s. Now, decades later, talking about the "tradeoffs" of stored procedures, in my opinion, is like talking about the "tradeoffs" of using a horse instead of a car to commute to your job 50 miles away, in a driving winter rain. I say that based on bitter experience (the coding equivalent of frostbite...)
An approach that I've found workable given current mainstream technology, as I've already alluded to, is O/R mapping. In Java lately, I've been using the Hibernate library for this purpose. It uses a concise XML DSL to specify the mapping. It's been pretty good at mapping onto legacy relational schema. Marshalling/demarshalling is not something you need to worry about much with it. It provides an object query language which is aware of your object relationships, so is much more concise than the equivalent SQL. It also provides an ODMG-standard query language.
However, I've also worked with multiple homegrown mapping layers, in various languages, and almost all of them have provided an environment that's better than stored procedures for development of any significant applications.
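To make the idea concrete, here is a minimal Java sketch of the core of such a homegrown mapping layer. All names (RowMapper, Customer, mapAll) are hypothetical, and a plain Map stands in for a JDBC ResultSet row; a real layer would also handle object identity, caching, and updates.

```java
import java.util.*;

// Toy sketch of a homegrown O/R mapping layer's core: one mapping rule per
// entity turns a raw row into a typed object, and a generic routine applies
// that rule across a resultset. A Map stands in for a java.sql.ResultSet row.
public class MappingSketch {
    public interface RowMapper<T> {
        T map(Map<String, Object> row);
    }

    public static class Customer {
        public final int id;
        public final String name;
        public Customer(int id, String name) { this.id = id; this.name = name; }
    }

    // The per-entity mapping rule, kept in exactly one place.
    public static final RowMapper<Customer> CUSTOMER_MAPPER =
        row -> new Customer((Integer) row.get("customer_id"),
                            (String) row.get("name"));

    // The generic "query" half of the layer: apply the mapper to every row.
    public static <T> List<T> mapAll(List<Map<String, Object>> rows, RowMapper<T> m) {
        List<T> out = new ArrayList<>();
        for (Map<String, Object> row : rows) out.add(m.map(row));
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = List.of(
            Map.of("customer_id", 1, "name", "Acme"),
            Map.of("customer_id", 2, "name", "Globex"));
        List<Customer> customers = mapAll(rows, CUSTOMER_MAPPER);
        System.out.println(customers.get(0).name); // prints Acme
    }
}
```

The point of even this toy version is that SQL-to-object knowledge lives in one mapper per entity, rather than being repeated at every call site.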
In any reasonably OO-like language, it's possible to address reuse in various ways. I'm not talking about the kind of inter-application reuse that OO critics like to claim doesn't exist - I'm just talking about being able to properly factor a reasonably large code base. Again, despite all the criticisms of OO which I understand and, to a large extent, agree with, it's a lot better than a plain procedural approach - it gives you tools that you just don't have in a mainstream procedural language.
Actually, I've found Java to be one of the weaker OO languages in this respect, due to the nature of its static type system. However, code generation can help with that, and that's become pretty common in the Java world for that exact reason.
As for tradeoffs: certainly, O/R mapping doesn't solve all problems. It does introduce complexity, and the question is how well that complexity is hidden, and what it "costs", whether in terms of performance or whatever. Using a mapping layer that's not homegrown helps here, because you then don't have to spend time working on that layer. In general, I have not found the costs of using O/R mapping to be at all significant, and I use it even in small systems, because it results in simpler code.
To address some of the points Brent made:
I have experience working on two releases of an "enterprise web application" coded in C#. One phase made heavy use of stored procedures and triggers, while the second makes use of dynamic SQL generated in C# classes, which is then passed to the RDBMS to retrieve the data.
In my opinion, the stored procedure version provides much better partitioning between code and data, and is consequently easier to modify and MUCH easier to debug in the field.
I'm not sure what kind of partitioning is meant here. If the C# code was "manually" generating the SQL, rather than doing so automatically via a mapping layer, then I can see the problem. Moving hardcoded SQL of any kind into the application language (e.g. Java or C#) isn't the solution. But with a reasonably automatic mapping layer, application-level code is easy to modify, and there's good separation between code and data.
For debugging in the field, part of the benefit of having an application in a language like Java is the ability to have pretty good exception handling. We log exceptions to a database and give users an error ID which refers to a database entry, so the call stack and exception info can be called up at any future time. We log all sorts of other stuff too, including SQL submitted to the database (by the mapping layer). In "enterprise" style apps, this can be important, because it allows you to see what a user really did, after the fact, rather than what they claim they did.
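A minimal sketch of that error-ID scheme in Java might look like the following; the in-memory map stands in for the database table a real system would log to, and all names are illustrative.

```java
import java.io.*;
import java.util.*;

// Sketch of the error-ID logging scheme: each exception is recorded under a
// generated ID that the user can report, so the full stack trace can be
// retrieved later. An in-memory map stands in for the real database table.
public class ErrorLog {
    private final Map<Long, String> entries = new HashMap<>();
    private long nextId = 1000;

    // Record the exception's stack trace; hand back the ID shown to the user.
    public synchronized long log(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        long id = nextId++;
        entries.put(id, sw.toString());
        return id;
    }

    // Called later, when support looks up what actually happened.
    public synchronized String lookup(long id) {
        return entries.get(id);
    }

    public static void main(String[] args) {
        ErrorLog log = new ErrorLog();
        long id;
        try {
            throw new IllegalStateException("order 42 has no line items");
        } catch (IllegalStateException e) {
            id = log.log(e);
        }
        System.out.println("Please report error ID " + id);
        System.out.println(log.lookup(id).contains("order 42")); // prints true
    }
}
```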
Also, the semantic level of an object-based application is higher, so errors tend to be relatively meaningful at the level of the application's higher-level objects, rather than manifesting as something like a missing row in a join. In my experience, one of the more difficult kinds of problems to debug is when something goes wrong in a multi-hundred-line stored procedure. Breaking stored procedures up into the size that OO methods tend to be is not usually viable.
The dynamic SQL version really resulted in the replication of a lot of RDBMS functionality within C#, which seems like a bad idea to me.
The easy solution to this is simply to change that opinion. :) Yes, you do replicate some DB functionality in the application language, but as I mentioned above, the issue is really what that costs you. In practice, assuming you don't have to write it all yourself, the costs have little impact at the application level. Performance is dominated by the database itself, so the performance cost of O/R mapping is typically marginal. And there are some real benefits to having total control over your data in your application language. For a start, complete independence from any particular database product & version can be achieved (easily), which can be a big win in some situations.
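As a sketch of how that database independence can fall out of generating SQL in one place, here is a toy "dialect" layer in Java. The Dialect interface and both dialects are hypothetical, not any real product's API; the LIMIT and TOP row-limiting syntaxes merely illustrate the kind of product-specific variation a real layer would absorb.

```java
import java.util.*;

// Toy sketch of dialect-aware SQL generation: product-specific syntax is
// confined to a Dialect, so switching databases means swapping one object.
public class SqlGen {
    public interface Dialect {
        String limitClause(String baseSql, int maxRows);
    }

    // Hypothetical dialects: one LIMIT-style, one TOP-style.
    public static final Dialect LIMIT_STYLE =
        (sql, n) -> sql + " LIMIT " + n;
    public static final Dialect TOP_STYLE =
        (sql, n) -> sql.replaceFirst("(?i)^SELECT ", "SELECT TOP " + n + " ");

    public static String select(Dialect d, String table, List<String> cols, int maxRows) {
        String sql = "SELECT " + String.join(", ", cols) + " FROM " + table;
        return d.limitClause(sql, maxRows);
    }

    public static void main(String[] args) {
        List<String> cols = List.of("id", "name");
        System.out.println(select(LIMIT_STYLE, "customers", cols, 10));
        // prints: SELECT id, name FROM customers LIMIT 10
        System.out.println(select(TOP_STYLE, "customers", cols, 10));
        // prints: SELECT TOP 10 id, name FROM customers
    }
}
```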
Of course, you're still dealing with both object & relational data, so there's a division there that can still rear its head. For example, if the object resultset returned from a query needs significant further query-style processing, you usually have to do that in the application language. In those cases, you have the choice of actually using SQL directly - if it's any easier, which it often isn't anyway; or simply biting the bullet and writing the code in the application language.
If Java weren't so restrictive - if it were more like Scheme, say - this might not be so bad. As it stands, I tend to want to start using inner classes as closures along with map/filter/fold-style functions to create high-level functional-style solutions to my query needs. However, other developers look at me funny when I do that, and complain that "no-one" can understand my code.
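The inner-classes-as-closures style described above can be sketched roughly like this, in the pre-lambda anonymous-class idiom; the Predicate and Mapper interfaces here are illustrative stand-ins, written against modern Java collections.

```java
import java.util.*;

// Sketch of "inner classes as closures": tiny filter/map combinators driven
// by anonymous inner classes, the closest Java of that era came to
// functional-style query processing over in-memory objects.
public class InnerClassQueries {
    public interface Predicate<T> { boolean test(T t); }
    public interface Mapper<T, R> { R apply(T t); }

    public static <T> List<T> filter(List<T> xs, Predicate<T> p) {
        List<T> out = new ArrayList<>();
        for (T x : xs) if (p.test(x)) out.add(x);
        return out;
    }

    public static <T, R> List<R> map(List<T> xs, Mapper<T, R> f) {
        List<R> out = new ArrayList<>();
        for (T x : xs) out.add(f.apply(x));
        return out;
    }

    public static void main(String[] args) {
        List<Integer> amounts = Arrays.asList(120, 30, 75, 200);
        final int threshold = 100; // captured by the anonymous class below

        List<Integer> big = filter(amounts, new Predicate<Integer>() {
            public boolean test(Integer a) { return a >= threshold; }
        });
        List<String> labels = map(big, new Mapper<Integer, String>() {
            public String apply(Integer a) { return "$" + a; }
        });
        System.out.println(labels); // prints [$120, $200]
    }
}
```

One can see why colleagues accustomed to nested 'for' loops might look at this funny; the verbosity of the anonymous-class syntax obscures what is really a two-line query.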
Coding big nested 'for' loops to get queries done is not my idea of productivity. However, in anything other than one-off cases, this can be made more productive with a bit of OO structure. Again, this is the point of doing this sort of work in a reasonably powerful language: you can compensate for deficiencies in other areas.
One solution to the query issue would be a query language that worked on application objects. I know there've been some attempts along these lines, but I haven't explored actually using anything like this.
Another solution which I've used occasionally, so far only outside of actual applications, is to use a JVM-based Scheme like JScheme or SISC to get more power and conciseness. We also use BeanShell which provides an interactive scripting layer that conforms to Java semantics, but doesn't require type declarations. It also provides the ability to do runtime evaluation of code, which we use in our data-driven user-interface layer.
Isaac - your turn. I'm curious about using MOM to connect a VB user interface to a back end. Can you give some details? What kind of back end? Which MOM? What are the "tradeoffs" :) of this on the VB side? I found that using COM to do this was not really a good solution, especially in distributed applications, but perhaps I should have been trying MOM?
Should we start a separate thread in the discussion group if we're going to continue in this vein?
|
|
Isaac Gouy - Re: Small Time Languages
3/21/2003; 12:36:59 PM (reads: 1714, responses: 0)
|
|
What kind of back end? Which MOM?
Java servers; ActiveWorks, aka webMethods Enterprise. VB is just the display surface (no attempt to get into the complex modelling for which it is so unsuitable).
O/R mapping seems old-hat after all those C++ and Smalltalk implementations years-ago. Same with OO databases.
Should we start a separate thread in the discussion group
We might take a moment to wonder if anyone cares ;-)
I'm going to take pity.
|
|
Anton van Straaten - Re: Small Time Languages
3/21/2003; 1:54:08 PM (reads: 1699, responses: 0)
|
|
Isaac - I don't know if anyone else cares, but you asked the question about effective approaches. But I get the impression that I'm not telling you what you want to hear...
O/R mapping hardly seems old hat compared to stored procedures. There were some pretty obvious reasons that C++ failed for these sorts of applications; and Smalltalk's failure was more due to requirements and limitations of the target audience than the language. None of this means that useful ideas should be ignored.
I'm not looking only for technological solutions that seem new and exciting, I'm interested in things that work, save me and my clients time and money, and allow me to focus on more important things.
OO databases are mostly addressing a different sort of problem. What O/R mapping does for you is let you deal with relational data at a higher level - creating an abstraction layer that from what I've seen, is difficult to credibly achieve otherwise. Most enterprise Java applications are 3-4 times bigger than they need to be, and far too inflexible, precisely because they're essentially using manual labor to do a kind of sub-par O/R mapping.
A product like DB2 has millions of dollars of research and development behind making queries and updates fast, reliable, and recoverable. A software house working on an application program is foolish to spend its resources replicating this functionality in the application level, and will more often than not end up with subtle bugs that will corrupt the database.
First, corrupting the database is impossible - you're still using the same database, including its referential integrity enforcement, transaction support, etc. You don't lose reliability or recoverability. Certainly, it would be possible to make a mistake in the mapping layer which allows two different sessions to work with copies of the same object, and step on each other's updates. But this problem exists in any solution which invokes the database from another language, O/R or not. Using a mapping layer actually provides you more control over this issue. I agree that not having to develop the mapping layer yourself is a win.
As for performance, there can actually be advantages to moving the business logic into an external application layer. First, you can scale up with multiple application servers talking to a single database. Second, when the database is primarily serving simpler requests, and delegating some of the hard processing work to other boxes, it's capable of serving a higher load. One of the other things that's offloaded from the database is caching, especially of infrequently updated info.
I think the real problem we are all wrestling with is the fact that modern RDBMS's do not yet support the abstractions we need to write good software.
Yes, I agree. I also agree with much of what Date says, although I think there's a bigger picture in the functional languages direction which he's not exploring (perhaps just because he's realistic).
But given RDBMS's that are bad at abstraction, I'm not willing to take the punishment of using things like stored procedures as a primary development vehicle. I'm willing to pay a price in various ways, whether it be complexity, cost of extra hardware, and even some calculated risk, in order to gain productivity.
|
|
Ehud Lamm - Re: Small Time Languages
3/22/2003; 12:24:39 PM (reads: 1730, responses: 0)
|
|
I am not sure the problem is with the abstraction facilities in RDBMSes, but if it is and this related to programming language abstraction mechanisms, this is on topic for LtU. Alas, I am not sure I have much to contribute about this issue.
I'll be glad to continue the SP discussion by email, but I think it is not directly related to the issues that fall under the LtU mandate.
By the way, when I talk about enterprise systems I mean mainframe-based systems running things like DB2, CICS etc.
You will find several relevant tutorials here.
|
|