## What will programming look like in 2020?

Things are too quiet lately, so maybe it's time to start a fun thread. I used this question as a lead-in to a presentation on live programming I recently gave at a working group, but I thought it would be fun to gather a bunch of predictions, if anyone is willing.

What will programming look like in 2020? Keep in mind that programming in 2012 mostly resembles programming in 2004, so could we even expect any significant changes 8 years from now in the programmer experience? Consider the entire programming stack of language, environment, process, libraries, search technology, and so on.

### Note that your examples can

Note that your examples can still be bent to use units (in systems that tolerate user-defined units of measure), and that doing so can provide an interesting safety check on the calculation being performed. For example, if the way you compute your percentage is to sum some "completion units" and then divide by the total size (in completion units) of the problem, you can check that the result is a scalar (which ensures you haven't forgotten to do the ratio), and statically prevent stupid mistakes such as multiplying two completion measures together instead of summing them. The same goes for letter ratios: by declaring two different units for the two letters you're measuring, you can make sure you don't mix up the counts at some point, by defining a read_char interface returning a sum type with either a count of the first unit, a count of the second unit, or nothing, and writing your code abstractly and safely on top of that.

So it looks like it's more "for those simple examples it's no use paying additional price for the additional safety of units" than "these morally are without dimensions". (Of course you can formulate any computation in a way to manipulate essentially only scalars, which is a manner to disable any unit sanity check.)
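The completion-units check above can be sketched in a few lines of Python. This is an illustrative toy (the `Quantity` class and the "completion" dimension name are made up for this example, not taken from any real units library): the ratio of two like-dimensioned counts comes out as a scalar, and multiplying two completion measures is rejected.

```python
class Quantity:
    """A value tagged with a single dimension name (None means scalar)."""
    def __init__(self, value, dim):
        self.value = value
        self.dim = dim

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError("cannot add %s to %s" % (self.dim, other.dim))
        return Quantity(self.value + other.value, self.dim)

    def __mul__(self, other):
        # Multiplying two dimensioned counts is the "stupid mistake"
        # this simplified model is designed to catch.
        if self.dim is not None and other.dim is not None:
            raise TypeError("multiplying two dimensioned counts")
        return Quantity(self.value * other.value, self.dim or other.dim)

    def __truediv__(self, other):
        if self.dim != other.dim:
            raise TypeError("mixed dimensions in division")
        return Quantity(self.value / other.value, None)  # the ratio is a scalar

done = Quantity(30, "completion")
total = Quantity(40, "completion")
percent = (done / total).value * 100    # a plain scalar: 75.0
```

Forgetting the ratio (e.g. writing `done * total`) raises an error instead of silently producing a meaningless "completion squared" number.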

### Pain points of units

I'm curious what the pain points are for people who have experience programming with pervasive units. Some examples of problems I've seen with them are:

- Lack of polymorphic zero (easily fixed by adding one)

- Going further, matrices are somewhat hard to type correctly.

- Confusion between affine / linear. For example, do you support Kelvin and Celsius? What's the conversion between them?

### Pain points of units

As you may know, my programming language Frink was designed from the outset to track units of measure through all calculations.

The "pain points" are actually surprisingly few for simple, well-defined imperative languages, but many come to mind as I read over this posting and keep in mind that I'm trying to create a language that does symbolic math, with units in mind at all times.

The bookkeeping for doing math with units is trivial. A very simple way is to just keep track of the exponents for each base dimension you care about and add, subtract, multiply, divide, or compare them as necessary. See the above slides and following for details.
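That bookkeeping can be sketched in Python; the `Unit` class and the choice of three base dimensions are illustrative assumptions, not Frink's actual implementation. Multiplication adds exponent vectors, division subtracts them, and addition demands they match. Rational exponents (via `fractions.Fraction`) are used from the start, anticipating the next point.

```python
from fractions import Fraction  # rational exponents, per the point below

DIMS = ("length", "time", "mass")  # base dimensions we care about

class Unit:
    """A numeric value with one exponent per base dimension."""
    def __init__(self, value, exps):
        self.value = value
        self.exps = tuple(Fraction(e) for e in exps)

    def __mul__(self, other):
        return Unit(self.value * other.value,
                    [a + b for a, b in zip(self.exps, other.exps)])

    def __truediv__(self, other):
        return Unit(self.value / other.value,
                    [a - b for a, b in zip(self.exps, other.exps)])

    def __add__(self, other):
        if self.exps != other.exps:
            raise ValueError("non-conformal addition")  # e.g. length + time
        return Unit(self.value + other.value, self.exps)

meter  = Unit(1, (1, 0, 0))
second = Unit(1, (0, 1, 0))
speed = Unit(10, (1, 0, 0)) / second   # 10 m/s: exponents (1, -1, 0)
```

Attempting `meter + second` raises the non-conformal error described in the next point.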

• A minor pain point is that addition, subtraction, or comparison of units requires that they have the same dimensions. This leads to the need to catch exceptions or have other error recovery methods wherever "non-conformal" operations occur (e.g. adding length + time). All of a sudden, your code that contained simple additions and subtractions may be throwing exceptions everywhere. You get used to this. (Multiplication and division always work.) Some languages don't care. I could write 1 Meter + 1 Day in Mathematica for the last decade and it wouldn't complain. Dunno if it still works that way.

• You soon find out that your exponents for your base dimensions will probably need to include rational numbers, not just integers, if you want to perform common tasks in quantum mechanics, or even river flows. Again, that's relatively easy to achieve, but you'll kick yourself if you didn't plan for it initially. Hint: The "cgs" (centimeters-grams-seconds) system of measure doesn't know anything about electromagnetics, and its extensions to the Gaussian and Heaviside-Lorentz systems almost require fractional exponents to work with electromagnetic fields. Beware people who quote "cgs", because they're being totally unspecific when talking about electromagnetic units. Capacitance is measured in centimeters in the Gaussian system. Augh. Non-orthogonal systems are not a lot of fun to make a machine reason about.

When you try to work with real-world units, you may realize that you're building on a world of shifting sand that was organized by committee, and sometimes instead of sand, it's dung. I recently refereed a great paper on historical and current confusions in systems of measure, which I agreed with heartily. (I'll link to it soon if it's been published.) You'll find that definitions shift, and even the most authoritative standards bureaus often have errors and are self-inconsistent. I probably read and compare standards documents as closely as anyone does, and have submitted huge laundry lists of errors in "authoritative" publications (which were all acknowledged as errors, and corrected with an average delay of 3 years.)

• How many square meters in an acre?
• How many Newtons in a pound?
• 1 furlong/fortnight equals how many m/s?
• What's the value of the gravitational constant G? And again in feet^3 day^-2 pound^-1?
• My surveyor measured a distance very accurately as being exactly 15287.323 feet. To the nearest millimeter, how many meters is that?
• What's the radius of the earth?
• How many candela is a monochromatic light source emitting exactly 1 watt at a wavelength of 400 nanometers?
• 1 Hz is how many radians/s? (Just give me 3 significant figures.)
• Convert the following Celsius temperatures to Fahrenheit: "Media reports said the temperature hit 52.1 degrees Celsius in the northern town of Kottagudam on Monday. The temperatures are about 5 to 7 degrees Celsius above normal."

Hint: Almost all of these are trick questions, but actually aren't if you have done your research. Some may require questions as answers in my best Talmudic style.

Another source of error: humans write units inconsistently. While the International System of Units (SI) publishes rules and style conventions for writing units of measure, they are rarely followed in my experience. If people followed them, their mathematical expressions with units would usually be unambiguous. However, they're not followed, with the following being the most common offenses:

• Not understanding the precedence of multiplication and division. Your 5th-grade teacher taught you this: multiplication and division have the same precedence and are performed left-to-right. Many authors and even journals get this wrong, forcing you to perform dimensional analysis to guess what equations might mean. The SI rules get this right, but even journals like Nature allow sloppy, inconsistent articles. Put in your parentheses. Division is not at a lower precedence than multiplication.

• Don't try to parse English sentences. English is ambiguous, yet you see things like the Google Calculator and Wolfram Alpha try to parse English sentences and end up with bizarre ad-hoc rules that parse perfectly correct mathematical notation totally incorrectly, utterly fail on the simplest calculations, and actually punish those who understand normal mathematical precedence with incorrect answers. The Fortress language proposed different precedence rules in units of measure for operators surrounded/not surrounded by whitespace. I have to say, that sounded dangerous to me.

• One of Frink's design goals is, "When in doubt, be pedantic." Frink's web interface will warn you about a huge slew of potentially-confused calculations and help you write them correctly, rather than trying to make a "do what I mean" interface and guessing horribly incorrectly much of the time, yielding wrong answers.

Your point about "polymorphic zero" is interesting. In other words, some unit systems try to make it so that 0 feet == 0 days. This is hardly ever good if you want to reason about the correctness of your calculation with units. You usually want your program to signal that units are not conformal and you're doing something really, really wrong, and flag an error. Frink does not have a "polymorphic zero". I've never really missed it, except when I realized I was writing code in a units-wrong way.

I'll point out where you might want to have a polymorphic zero. Let's say you're summing a list of numbers. You might start by saying sum=0, but if you try to add sum = sum + 1 meter to this, you have a conformance error. Zero does not have the same dimensions as 1 meter. The solution is relatively simple: the first argument to your summation function must become the first element of the list, not zero. You have to realize that there is no identity element for addition when units of measure are to be tracked. This is a necessary understanding of the different field that is units of measure. (In Frink, the summation of an empty list is undef.)
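A sketch of that summation pattern in Python, with quantities modeled as hypothetical (value, dimension-name) pairs rather than Frink values: the accumulator is seeded from the first element instead of a unitless zero, and an empty list yields `None` (standing in for Frink's undef).

```python
def add(a, b):
    """Add two (value, dim) pairs; dimensions must be conformal."""
    value_a, dim_a = a
    value_b, dim_b = b
    if dim_a != dim_b:
        raise ValueError("non-conformal: %s + %s" % (dim_a, dim_b))
    return (value_a + value_b, dim_a)

def unit_sum(quantities):
    if not quantities:
        return None               # no identity element exists, so: undef
    total = quantities[0]         # seed with the first element, not zero
    for q in quantities[1:]:
        total = add(total, q)
    return total

unit_sum([(1, "m"), (2, "m")])    # -> (3, "m")
unit_sum([])                      # -> None
```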

Matrices are interesting, and they are the only time I've ever missed a polymorphic zero. But that was just briefly--a minute of coding eliminated that. I was multiplying matrices for transforming graphics in Frink, and realized that some elements, sometimes, might have units of length, which wouldn't add correctly with the zero elements in the matrix. But this was solved quickly enough, by using null elements in the array and not adding them to anything when they were null.

A polymorphic zero is a way for your program to unintentionally leak and lose information about units of measure. It should most likely be avoided.

By the way, you mentioned Kelvin and Celsius. Here's Frink's short answer about temperature scales.

All of Frink's conversion factors are multiplicative. It requires a function call to disambiguate scales that don't share a zero, like Fahrenheit, Celsius, and Kelvin. (In short, to create a Fahrenheit temperature, you need to do something like F[98.6].) I think this was a fairly good decision. Again, when in doubt, Frink's design goal is to be pedantic: force the user to think, understand the differences, and intentionally disambiguate. While I later decided it was possible to have some units carry both a multiplicative and an additive factor when converting between units, the number of units that needed an additive factor was incredibly small, and it would just slow down performance.

Case in point. Try to guess in advance what the Google Calculator will return for each of the following:

• 1 F + 1 C
• 1 K + 1 F
• 1 K + 1 F + 1 F
• 1 K + 2 F
• 1 F + 1 F
• 1 F / 1 C
• 1 K - 1 C
• 1 K / 1 C

Similarly, logarithmic scales like (the ratio scales) bels and decibels, the Richter scale, seismic moments, and stellar magnitudes all require their own function calls. It's an easy enough solution. The number of logarithmic and non-zero-sharing scales is very small.

I'll conclude by saying that you might miss a polymorphic zero (and other algebraic issues) when working with symbolic math. Some of the issues appear when considering what you thought were identities like x/x or x-x in Interval Arithmetic, or performing symbolic arithmetic.

You may not know that Frink is trying to be a symbolic math package, and performs lots of symbolic mathematical transformations. It can transform arbitrary mathematical equations into other equations. Frink tries to be able to validate all calculations involving units of measure, but that means that its solving transformations might include transformations that aren't valid under the rules of units, like 0 _x <-> 0 which loses all information about the units of measure of the pattern _x. I'm still trying to solve this rigorously. It's new territory. I don't know an environment that has both units of measure treated rigorously and algebraic transformations.

This also affects lots of other numerical types, as anyone who has designed a computer algebra system will know. Operations on quaternions are not commutative. Operations on intervals are not distributive (they're called sub-distributive.) Operations on units of measure have no absorptive unit for multiplication and have no identity element for addition. You really have to know in advance what values might get put into a variable to do the right thing.

Yeah, the last two paragraphs are the pain point. The huge pain point. I don't think that rigorous units + symbolic math have been solved in any working programming language, so it's new territory.

### I think Matt meant

I think Matt meant polymorphic zero in the sense that Haskell has them with type classes. If you track units in types then if you are summing a list you can deduce the unit from the type, even if the list is empty. So if you sum an empty list with element type meter, you get zero meters.

Temperatures are a good example where conventional dimensional analysis doesn't work. 1 Celsius + 1 Celsius = 2 Celsius is not a valid operation. Adding a point (x cm, y cm) to another point (x' cm, y' cm) to get a point (x+x' cm, y+y' cm) is not a valid operation. For this you need "additive units" (conventional dimensional analysis being "multiplicative units").

### Invalid subexpression in a valid operation.

1 Celsius + 1 Celsius = 2 Celsius is not a valid operation.

Heh. That's interesting, actually. 'Celsius' has different meanings depending on context. As a base value it denotes an absolute temperature, and as an addend it denotes a temperature interval (i.e., the temperature difference between two specific temperatures 1 degree apart).

This is an artifact of the zero being a chosen point on the scale rather than a physical absolute. Any base-value Celsius temperature is a measure of the addend temperature interval between the base-value temperature and the chosen zero point, which in this case is the freezing point of water.

So Celsius temperatures can be averaged in the normal way as (t1 + t2)/2, but the subexpression (t1 + t2) within the averaging operation isn't valid. The averaging operation works because (where Tf denotes the freezing point of water and T1, T2, Tf are all in a temperature scale whose zero is at absolute zero...)

((T1 - Tf) + (T2 - Tf))/2 = ((T1 + T2) - 2Tf)/2 = ((T1 + T2)/2) - Tf

But you have to do some algebraic analysis to discover that the expression is valid even though its subexpression is not.

Ray

### It's really the same thing

It's really the same thing as tracking normal units, except for addition instead of multiplication. Let me complete the analogy with meters:

1 m * 1 m = 1 m is not a valid operation.

but

(1 m * 1 m)^(1/2) = 1 m is a valid operation

This is exactly analogous to (t1+t2) not being a valid temperature while (t1+t2)/2 is a valid temperature. If you take the log() of both equations you can see why.

The Celsius scale really consists of two separate things: the scaling (how much is a temperature difference of 1C) and the zero point (how much is 0C). So instead of writing down 10C we should make the zero point explicit: 10C + z. The precedence here is (10*C)+z. In this system a quantity does not just have a multiplicative unit, but it can also have an additive unit. In general a quantity x has a unit that is a pair (A,B), and the rules for manipulating them go as if the quantity was (x*A)+B. So for example if we add two quantities x+y, with x having unit (A,B) and y having unit (C,D), then that is only valid if A=C, and the resulting unit is (A,B+D).

So for example if we have 1C + z then that has unit (C,z). If we add it to itself it becomes (1C + z) + (1C + z) = 2C + 2z, so the unit is (C,2z). Note that this unit is incompatible with (C,z). If we take a temperature difference (10C+z) - (6C+z) then we get 4C, so the unit is (C,0). If we add that difference back to a temperature: (6C+z) + 4C = 10C+z we again have a unit of type (C,z). Averaging two temperatures ((10C+z) + (6C+z)) / 2 = (16C + 2z)/2 = 8C + z is again a temperature of type (C,z).

In exactly the same way you can track the difference between points and vectors: points have unit (m,p) where m is meter and p is the location of the origin. So a vector (the difference between two points) has unit (m,0).
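Here is a rough Python sketch of these (A, B) pairs. The `Affine` class name is made up for this example, and only the operations used above are implemented; the point is that the additive part of the unit is tracked mechanically, so a sum of two temperatures comes out with unit (C, 2z), incompatible with a plain temperature (C, z), while a difference comes out as (C, 0).

```python
class Affine:
    """A quantity behaving as (value * mul_unit) + add_unit."""
    def __init__(self, value, mul_unit, add_unit):
        self.value = value      # x
        self.mul = mul_unit     # A, e.g. "C"
        self.add = add_unit     # B, as a multiple of the zero point z

    def __add__(self, other):
        if self.mul != other.mul:
            raise ValueError("incompatible multiplicative units")
        return Affine(self.value + other.value, self.mul, self.add + other.add)

    def __sub__(self, other):
        if self.mul != other.mul:
            raise ValueError("incompatible multiplicative units")
        return Affine(self.value - other.value, self.mul, self.add - other.add)

    def __truediv__(self, n):   # division by a plain scalar
        return Affine(self.value / n, self.mul, self.add / n)

t1 = Affine(10, "C", 1)   # 10C + z, unit (C, z)
t2 = Affine(6, "C", 1)    # 6C + z, unit (C, z)
diff = t1 - t2            # 4C, unit (C, 0): a temperature difference
avg = (t1 + t2) / 2       # 8C + z, unit (C, z): a temperature again
```

Note how the intermediate `t1 + t2` carries the "wrong" unit (C, 2z), but dividing by 2 brings it back to a well-formed temperature, matching the averaging argument above.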

Hope this makes sense...

### Thanks for the thoughtful comment

Your work on solving transformations looks particularly interesting to me, so I'll check it out some time. (BTW, Jules is right about what I intended for polymorphic zero)

### Surely "unitless" is a unit?

Surely "unitless" is a unit? It's what you get when you divide a distance by a length, for instance, so it's part of the closure of the set of units over the mul & div operators, and should be considered the unitary element (no pun intended). Unitless has the property that all operations between unitless quantities are themselves unitless, so it induces the "don't care about units" behaviour you desire. Possibly, unitless should be the default.

OTOH, arguably, test scores are really measured in "marks", letter occurrences in "letters" and percentages in "percent". Units really are everywhere, and unitless quantities are fairly special.

### Yes, all, with a big "but"...

Yes, all calculations. That is, Frink has a pure numeric tower, and built on top of that is what I call a "unit", which is a numeric scale and its dimensions. (These libraries are portable and can be used outside of Frink.) From the Frink language level (internally and externally), there are no raw numerical values. All numeric values are "units", with dimensions. So there's no incentive for the programmer (me) to "cheat" and be lazy when implementing some internal functions and implement them as non-unit-aware. All of Frink's functions are units-aware, and try to help you do the right thing.

From the Frink user standpoint, you just write your programs with units, and Everything Just Works. In other languages where units are an afterthought, or an add-on library, most library code won't be able to use units, so it's easier for the programmer to just "cast away" units (and usually never get them back,) so the user actually has an active disincentive to use units--it makes their code less interoperable with other code.

(The same is true for using "good" numerics as I've touched on in this thread. In Frink, all calculations use its Do The Right Thing With Numbers numeric tower, so you automagically get benefits like being able to run interval arithmetic analysis with literally zero modification to many existing programs--just start feeding them intervals.)

The big "but" is, of course, that it often doesn't make sense to pass arbitrary units to many functions. It makes no sense to calculate the cosine of 6 meters. So functions like cosine ensure that the numbers passed to them are dimensionless, or flag an error. Or, to quote Dave Chappelle, "what's the square root of this apartment?" will reveal a flaw in your question itself, and hopefully help you get it right.

Technical note: Trigonometric functions actually check to see if the unit passed in to sin, cos, etc. has the same dimensions as the "radian". This allows you to define a standard units file that is radians-aware, if you want. In the current standard units file, the radian is (correctly) defined as the dimensionless number 1, as radians are dimensionless. (Regardless of how it changed back and forth in the International System of Units' (SI) stupid inconsistent changes in 1960 and 1974. Read Frink's standard data file for my diatribe about this and other stupidities in the SI. Hint: Look for "Alan's Editorializing" sections.)
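A toy Python version of that check, with dimensions modeled as a tuple of exponents (an assumption for this sketch; Frink's internal representation may differ). The radian is the dimensionless number 1, so a conformal argument is simply one whose exponents are all zero.

```python
import math

def unit_cos(value, exps):
    """Cosine of a quantity whose dimension exponents are given as a tuple.

    The radian is dimensionless (exponents all zero), so any argument
    conformal with the radian has an empty/zero exponent tuple.
    """
    if any(e != 0 for e in exps):
        raise ValueError("cos of a dimensioned quantity, e.g. cos(6 meters)")
    return math.cos(value)

unit_cos(0.0, ())          # fine: dimensionless -> 1.0
```

Calling `unit_cos(6.0, (1,))` (i.e., the cosine of 6 meters) flags an error, as described above.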

As others noted, some of the numbers you calculate with are just dimensionless numbers. Like counts of things. (Although you can extend Frink's list of dimensions at runtime to make sure you don't add counts of apples to counts of oranges!)

There's no pain for the programmer to write programs with or without units. If you write 2+2 everything works fine. Yet if you write 2 feet + 2 meters everything works fine. Finer! It does the right thing for you automatically. The effortless parsing of units everywhere makes it easy to write programs that are physically right, with a high degree of confidence. Using units in Frink is trivial, so you might as well use them.

In conclusion, dimensionless numbers in Frink have many optimizations to make their use more efficient, but not so astoundingly much that it's worthwhile making everything dimensionless to make your program a bit faster. Yes, it's definitely more expensive to Do The Right Thing, but it helps you be sure that your answers are right, and not just fast. Making everything units-aware means confidence in all y'alls calculations.

### More cross-development

We're seeing more development that takes place on machines different from the target. For a long time, one could develop for desktop machines on desktop machines, and that was viewed as the normal case. Tools for game console development and embedded development were available but not predominant. With handheld devices, cross-development is the normal case.

By 2020, cross-development will be the normal case for most tool chains. If source and target happen to be on the same machine, that will be a special case.

### Programming AR and Ubiquitous Systems

While programming a smartphone from a smartphone doesn't seem very plausible, I'm entertaining an idea for programming AR glasses - leveraging an external editing surface with pen and paper. I think the free hands, wider visual projection, and larger editing surface are pretty essential to the task.

### why not?

I've never owned a smartphone specifically because of the lack of dev tools that run on smartphones. If I can't write programs on it, I don't see a good reason to own it.

Why isn't it possible to have a smartphone development tool that actually runs on a smartphone?

Certainly these phones are more powerful computers than developers had on their desktops a few years ago. Why are there no compilers? Why is no one using various emulators to run old compilers?

Is it really just a problem of being locked out of the parts of the machine that matter by the manufacturers and service providers? And if so why are people putting up with that?

Ray

### Phones are physically too small for IDEs

Why isn't it possible to have a smartphone development tool that actually runs on a smartphone?

Mostly, there are severe HCI hurdles: the screen is too small to provide both an editing surface and navigation or feedback; the hand covers half or more of the screen while editing; pointing becomes too imprecise if the text is small enough to provide context; one of the hands is stuck holding the phone (and there are some cramping issues for extended sessions); etc. I briefly described some of these issues in the article I linked above, and in slightly more detail at the augmented-programming google group (re: why visual programming is a failure - half of my answer: not big enough). Sean McDirmid has also discussed it briefly, e.g. why he favors a tablet and considers a phone an unlikely target for coding with touch.

Even if someone develops an IDE that runs on the smartphone, it will simply - due to physical and human issues - be too damn painful to effectively use. (It isn't a problem of computing power or being 'locked out'.)

AR glasses are a more promising target because we:

• free up a hand and avoid hand cramps
• get a lot more surface and pixels - at least logically - based on projecting onto real-world surfaces (desk, walls, different pages in a Moleskine notebook, etc.). Much of this space can be used for editing or feedback.
• much of the navigation becomes implicit, physical, and readily accessible - i.e. look, grab, maybe even change the page.

### I would qualify this with

I would qualify this with speech-based dialogue systems. If we can ever figure out how to input/view/change/debug programs using conversational dialogue, then the phone will work just fine of course.

### Speech-based

Perhaps, but I think the AR glasses (which will universally have microphones close to the mouth without hindering visual feedback) will still be better at it. :)

### They do exist

It is not a question of computing power; it is a question of interface, as people mentioned below. People don't write anything substantial in general; smartphones are secondary devices, and they don't serve the same functions as computers even if they superficially could be made to do the same things.

On the other hand, there are scripting languages for smartphones that script smartphone behaviors: scriptable launchers, scriptable calculators, scriptable calendars, javascript embedded in browser bookmarks... Those all exist, and those small DSLs are growing and thriving. The code that exists on smartphones, though, is stuff that people can write quickly and that performs ad-hoc tasks.

### Coding on a phone? I do that!

Unfortunately, this conversation has fallen onto the second 200-comment page, so I don't know how many people will find it.

I've never owned a smartphone specifically because of the lack of dev tools that run on smartphones. If I can't write programs on it, I don't see a good reason to own it.

I'm the same way, but a text editor and a hardware keyboard are almost enough for me. I'm not very picky about the editor I use, and most of the time I don't mind if I can't actually execute the code I'm writing. And when I do need to test my code, usually it's browser-friendly JavaScript code, so I don't even need another app.

I chose my smartphone specifically so that I could program with it, and I did so on and off for about a year. Then I started bringing my laptop on my commute, and that's stolen the spotlight.

Web browsers may be what I use to run code, but they're far from the only option. Alan Eliasen mentions Frink and SL4A below, and some other interesting options are Ruboto IRB, AmbientTalk, on{X}, and Anjedi. Anjedi is precisely "a smartphone development tool that actually runs on a smartphone."

### I used to code on a phone

I coded on a phone for a while too. I used a Nokia 9210, which has a hardware keyboard. I installed a Forth on it, wrote apps in the text editor on the phone, copied the text to the clipboard, and used a word in the Forth system that evaluated the clipboard. From there I built a minimal development system. All this to avoid using Symbian's C++ API. Fun at the time.

### Punctuation

A Forth dialect sounds like a good idea to me. Some punctuation characters are annoying to type on my phone's keyboard (altogether: [ ] { } ^ ~ _ \ |), and Forth-style syntax could be usable in spite of that. One of the projects I pursued on my phone was written in Arc, whose lisp-style syntax was nice to work with for the same reason. (That project never got to the point of being runnable, however.)

### Frink again

Interestingly, the answer to your question may be Frink again. Frink runs great on Android and has run for years on some older phones (like Symbian.)

There are technical reasons that make it hard to compile your favorite programming language on smartphones, too. Android really wants you to write in Java, and not native code. I got lucky with Frink. From day 1, I wrote code that would execute on any Java 1.1 or later environment. When Android came around, it was a fairly simple procedure to port Frink. So my stupidity and laziness paid off!

The SL4A project is another attempt to bring several programming languages to Android, but you won't find it in the app store, so people may not know about it.

Apple's licensing agreements prevent developers from releasing any programs which contain an internal scripting language, or that can download and execute code from the internet. So that's why you don't see programming languages on Apple devices.

I've been making Frink into a platform for developing robotics using the IOIO board for Android, and it allows programming robotics directly on the phone. Previously, to use IOIO, you had to install literally hundreds of megabytes of various software tools (a Java compiler, the Dalvik compiler, Android libraries, IOIO libraries), build the IOIO libraries, learn to build an Android application, write XML, write Android GUI code, and so on. Then you had to compile your code first through Java, then through Dalvik, and transfer it to your phone. Just to blink an LED.

With Frink for IOIO, you install one 600 kB application, and then write code directly on your phone in minutes, even in the field.

As to the question "why are people putting up with that," I dunno. Most people tend to be consumers of programs, not creators of programs. It made my iPhone-vs-Android decision trivial, though.

Obligatory related xkcd cartoon.

### Apple's licensing agreements

Point of order: Apple amended their terms a while ago, and now they do allow programs to contain internal scripting languages. The restriction on those languages running code sourced from the internet remains, however, and is the reason there is no Scratch for iPad (even though it would be useful without that feature - a shame).

There are now many programming languages available for the iPhone and iPad, although none seem particularly usable. I've got CBM BASIC and a Scheme on mine.

MAME is also available and is in the same boat wrt capabilities. In that case it's interesting to note Apple don't appear to mind if the user manually copies executable code from the internet to the device via iTunes (otherwise MAME would be useless).

### Consider Virtual Worlds

- A programming language allows the "programmer" to present statements to another actor. The idea of "programming" means that we want to specify behavior. That means when that actor is given a certain stimulus, it will produce a reasonably predictable response (even when we have randomized it over some range).
- Being specialists, we tend to confuse syntax with language. But the real language is the set of APIs that the framework (the actor proxy) provides.
- To bring this discussion up to a more future-oriented level, we need to imagine the future context. Suppose, instead of writing document-oriented web pages, we are directing avatar responses to user gestures. The user gestures could be mediated by sensors. User gestures are a kind of sign language: "click" means "I choose ___". Because sensors will continue to proliferate, the language needs to separate the sensor and gesture from the interpretation.
- A great deal of pattern knowledge can be used to automate many programming tasks. For example, persistence can be almost completely removed from the programmer's workload, in much the same way that we no longer bother to write our own collection libraries. Think about behavior-driven development: "Given-When-Then" test specifications often contain enough information that the solution might be produced automatically.
- I will go out on a limb and say that I believe that programmers will be giving advice in the form of goals and constraints (which may include soft constraints like esthetic values) and the "programs" will be assembled from well-known patterns. Some specialists (similar to those who write system-level code today) will contribute to the evolution of those patterns.

### Please make it so, or punched cards

The fastest supercomputers may already have more computational power than a human (Computers vs. Brains, Scientific American, Nov 2011). So I am guessing that by 2030 at the latest, and possibly by 2020, if humans are around we will either be programming by making requests of the computer (intelligent computers will have independent will), or we will have smashed all advanced technology and be back to punched cards.

### Software change will accelerate exponentially

I have many predictions but these 3 are the more important ones:
1. Long term, AI will write most code and between now and then, we need an environment that is as accessible from the inside by programs as it is from the outside by programmers.
• AI will not happen overnight but will gradually become more and more pervasive.
• This implies that the programming environment must be "alive" at all times and have all source code, data and even documentation available and changeable from inside the system.
2. Having a PL look after the coding while the data is looked after by an RDBMS "over there" will be seen as primitive.
• Programming code and data were meant to be together rather than in separate places.
• This implies that OOP will prevail over FP because computer problems are both data and code and encapsulation will help to isolate solutions. C++, C# and Java are not very good examples of OOP IMHO.
• Any language that doesn't intimately contain extensive tools for medium and large scale data processing will not be mainstream.
3. Most software development will be in creating web appliances, accessible through a browser or from other web appliances.
• Look for many new language environments to be adopted in unheard-of short times, as the internet will make communicating between all systems more democratic.
• Without the need to update applications on individual machines, code updates will automatically be made on a daily, if not hourly basis rather than in months or years.
• Applications will use web appliances seamlessly to create their functionality even if those appliances are hosted/written in totally different languages.
• Apps on mobile devices will use functionality that is a mix of local and "over the net" appliances with most of the "app" on the server.
• Many desktops will run their own server/database/language that will be able to intelligently communicate with web appliances and local or other users through a browser.

I have to confess that I am currently creating a system that can implement what I envision above. The winning languages by 2020 will most likely be created by developers with lots of real-world experience rather than academics. For widespread acceptance, language features will require real-world problems that need to be solved, rather than just curiosity. I predict that most general-purpose languages without database, security, multi-core/multi-user and web functionality will decline slowly into oblivion. I understand that almost everything can be programmed in a language like C, but there is a reason that a web framework like "Ruby on Rails" is preferred for web development.

### Long term, AI will write

Long term, AI will write most code and between now and then

Disagree. The hardest problem to get right is requirements analysis, and any AI advanced enough to be able to communicate effectively with all humans and their ambiguous language and fuzzy reasoning will also suffer the same human failings IMO.

we need an environment that is as accessible from the inside by programs as it is from the outside by programmers.

Agreed.

Programming code and data were meant to be together rather than in separate places.
This implies that OOP will prevail over FP because computer problems are both data and code and encapsulation will help to isolate solutions.

Disagree. I'm increasingly seeing data representation and behaviour as separate concerns, each with their own scaling and complexity problems. Handling them separately is the only way to scale. Viewing subsystems as encapsulated components is necessary, but the modular abstractions needed to do this are available in FP.

### The definition of AI is a very personal issue.

AI can be defined in many ways. What I call AI will start out by automating parts of code generation (think report generators, input screens, etc). Programs could also be created that create increasingly more complex code based on some structured form of requirements. You might not call this AI but it could evolve into that over time. It doesn't have to parse natural language to be AI!

Our CPUs execute code that manipulates data. When we solve problems, we use both code and data. Why would I want my code in one place and the data it uses someplace very far away? When I get errors, I always look at the code or data I last changed. If errors occur far away from those changes, it takes much more effort to find the problem, especially if the error just crashes the executing program. Programming behind interfaces also means that the code and data can be completely refactored without breaking the calling messages.

I see no evidence that OOP has a "scaling" problem. FP was designed for mathematicians who think they are the only people who know how to program. Ignoring the "data" part of the "code and data" that makes up computing just isn't smart IMHO.

I think encapsulation is the most important feature of OOP but having multiple views of the same data isn't against the OOP rules. On the other hand, somebody has to own the data! I think OOP has gotten a bad rap from lousy implementations like C++, C#, and Java.

I programmed extensively in APL back in 1975 and it looks like the early model for FP. It was powerful BUT was impossible to read or maintain. The idea was if you needed to change a function, just write a new one, because functions were small and fast to create. No thanks!

In 35 years of programming, including over 1,000 projects and a language/database system, I have never used more than a few math formulas. Why would programmers adopt a language that forces everything to be a math formula when CS has simple coding techniques that already work? Why would anyone think that using recursion instead of simple looping is a step forward? I dislike all the jargon that comes with CS and OOP, and along comes FP and makes up a bunch more. Please show me where Haskell contains security, multi-user access to functions or data, scalability over local clusters, etc. FP wants to take input, pass it along to a string of functions, get some output and then stop. It provides for first-class functions BUT does it allow you to create arbitrary functions at execution time and then run them? Does it incorporate seamless integration with other systems over the internet? My system uses any number of cores without any change to the compiled code whatsoever. With Haskell, you have to create threads yourself.

### I see no evidence that OOP

I see no evidence that OOP has a "scaling" problem.

I'm referring to development scaling, not program performance. I believe Mark Miller coined the phrase "delegation is the cornerstone of civilization". The more you can separate responsibilities and delegate development and management tasks, and then aggregate that individual work into a cohesive whole, the more you can scale development, i.e., larger programs, faster development.

I believe OOP at all levels too easily couples data and behaviour which makes persistence and schema upgrade into harder problems than they already are. We should be separating and delegating such concerns to make these problems easier.

I programmed extensively in APL back in 1975 and it looks like the early model for FP.

APL is sort of like point-free FP IIRC, which is a subset of FP. If you're recommending others not judge OOP by C++ and Java, I certainly advise you not to judge FP by APL or point-free programming specifically.

### Development Scaling

I don't see how "development scaling" has anything to do with OOP or FP. I don't have any very large scale development using my language yet, however I believe my high level message passing and the ability to forward those messages onto multiple cores and multiple local computers will scale. This aspect of my system has nothing to do with OOP per se, but message passing was defined as Smalltalk's most valuable contribution by its author.

Persistence in my system is totally automatic and updating schema is trivial even while the same Object is being used by other users. Neither attribute has anything to do with OOP IMO. Just calling some data a "concern" is no reason to separate it from the functions that act upon it. Please give concrete reasons why half of a problem should be implemented in one location and the other half implemented elsewhere?

I agree that implementation problems with APL aren't proof of anything in FP programming of today.

### I don't see how "development

I don't see how "development scaling" has anything to do with OOP or FP.

Depending what you mean specifically by OOP and FP, it could have many implications for development scaling. There are definitions of OOP and FP that are pretty much duals of each other though, so if you take that view, then they are effectively equivalent on this matter.

Neither attribute has anything to do with OOP IMO. Just calling some data a "concern" is no reason to separate it from the functions that act upon it. Please give concrete reasons why half of a problem should be implemented in one location and the other half implemented elsewhere?

Because data could evolve independently, data may or may not be memoized/cached in various ways, it may be a view derived from some other data, it may or may not be persistent, it may or may not be mutable, it may or may not be atomically or transactionally updateable, it may or may not live in more than one location, etc.

Functions too may evolve independently by uploading fixes for bugs, by evolving interfaces to restrict or relax permissions, etc.

Co-mingling data with function evolution and vice versa just complicates already complicated problems. Consider the complexities of a clustered, fault-tolerant RDBMS and consider that we'd ideally like to specify all the properties provided by such a data store at the fine-grained data field level, within the language. (Edit: you could of course provide all of these properties via code, but the problem is knowing that your code actually achieves said properties -- lifting some of these properties to the language level reduces burden of this verification to common implementations, and the compiler can properly optimize them in ways that library implementations can't)

Depending what you mean specifically by OOP and FP, it could have many implications for development scaling. There are definitions of OOP and FP that are pretty much duals of each other though, so if you take that view, then they are effectively equivalent on this matter.

The point was language features that promote "development scalability". What features of either paradigm address the problems of multiple programmers on a single project, or of test systems? My system actually deals with both issues.

Because data could evolve independently, data may or may not be memoized/cached in various ways, it may be a view derived from some other data, it may or may not be persistent, it may or may not be mutable, it may or may not be atomically or transactionally updateable, it may or may not live in more than one location, etc.

In OOP, you can group data and code into objects but you can also just have code or just have data. Some systems (like mine) allow objects that contain collections of any of the 3 different kinds of objects in the first line. FP seems to ignore the data part of the code/data partnership that computer systems are made of. That doesn't sound equal to me!

My system can accommodate all the different types of data you list as well as transactions, from a system based on OOP principles.

Functions too may evolve independently by uploading fixes for bugs, by evolving interfaces to restrict or relax permissions, etc.

My system handles that case natively (no "work arounds" required).

Co-mingling data with function evolution and vice versa just complicates already complicated problems. Consider the complexities of a clustered, fault-tolerant RDBMS and consider that we'd ideally like to specify all the properties provided by such a data store at the fine-grained data field level, within the language. (Edit: you could of course provide all of these properties via code, but the problem is knowing that your code actually achieves said properties -- lifting some of these properties to the language level reduces burden of this verification to common implementations, and the compiler can properly optimize them in ways that library implementations can't)

You never get problems that require changed or new data as well as code? How is it efficient to have them in two far-off places?

Actually "a clustered, fault-tolerant RDBMS" is not a bad description of what I am creating "within the language". My compiler is always available and invisible. I have no libraries. My system lifts more programmer "burdens" than any language I have used or read about.

### In OOP, you can group data

In OOP, you can group data and code into objects but you can also just have code or just have data.

This is not really pure OOP which suffers from problems of binary methods. What you probably have is a mixed object/abstract data type, in which case you're using ideas from OOP and FP languages.

In any case, there is no example of abstraction you can achieve in OO that you can't also achieve in some FP language.

### Code and Data

Interweaving data with code is problematic:

1. the data is inaccessible to alternative views, thus hindering extension of the system with plugins/features/components that would need access to that data
2. it is difficult to maintain the data (consistent views, etc.); the system is more vulnerable to disruption and data races
3. it is difficult to maintain or incrementally update the code without losing data; often the whole app must be restarted
4. it is difficult to persist the data because it is entangled with code

Separating code and data doesn't mean we can't leverage interfaces, first-class functions, and the like. Views, potentially mutable views (lenses, etc.), are a useful class of interfaces that respect separation of code and data. We can model systems that gather and scatter data, funneling it to the code from various sources and potentially publishing it to observers. I describe this further in a few blog articles: local state is poison and data model independence.

Ignoring the "data" part of the "code and data" that makes up computing, just isn't smart IMHO.

Separating concerns doesn't imply ignoring them.

### Encapsulating data and code just makes sense.

Not closely associating code and the data that changes it is worse than problematic!

My system will easily allow multiple views without duplicating the data. Any number of views can be created that include any number of other objects or any parts of them. What OOP rule says that multi-views can't be created?

Why is it difficult to maintain the data? My system guarantees that only one user can access any Server Object at a time, so no locking or blocking. Why more vulnerable? I am not familiar with "data races"; please explain.

In my system all Server Objects have a message queue and continue to run regardless of previous errors. Infinite looping code times out. Schema and code can be changed in a single message interval, and other facilities are provided for incrementally testing Servers without separate test systems and then publishing the new code, again within a single message interval. Depending on the type of change, changing and testing code can occur in seconds without stopping other users.

Persisting of data is automatic and cannot be controlled by the programmer. No problem "persisting" the data at all.

Keeping the data and code together also doesn't mean you can't leverage first class functions and interfaces. I don't need "lenses", as the message passing and high level interfaces can easily be used to provide any view of data that I wish. You are correct that I "don't respect the separation of code and data": I am arguing that it is a very poor programming practice. Triggers and stored procedures in RDBMS were invented because data needs its code and vice versa!

If you like separation of the data then just use an RDBMS. I would like you to prove that "local state is poison". My programming experience says you are simply wrong, but I am open to good arguments.

I have implemented many business systems and I used triggers and stored procedures. The triggers eliminated all "batch processing" and allowed me to create online, many dimensional data views at any time. Instead of having to process huge amounts of data before looking at it, I could guarantee that the tables implemented all the business rules that related the tables together, in real time. Stored procedures were also very helpful as they allowed the processing of data on the same machine as it was stored.

Do you argue that triggers and stored procedures are useless? If not then doesn't it make sense to have code and data together?

If I create an Object with no data but just functions, then how is that different from FP, if those functions can't access any other outside variables (no globals)? Why would ignoring the role of data in problem solving be a good thing?

I agree "Separating concerns doesn't imply ignoring them", however, why do you deny the fact that problems are solved with both data and code? In my system, I can put multiple lists, tables and their indexes in a single Server Object. I can "borrow" that data (without copying it) from any other Server Object and treat it as if it was local without any explicit locks. In my system, you can have data, just functions or both as needed. Can you say the same about FP?

### OOP is not well defined, but

OOP is not well defined, but OOP traditionally includes information hiding through encapsulation of private state, and information sharing through aliasing of common intermediate objects. This directly hinders ad-hoc alternative views because you can't easily view or capture the information that's hidden (i.e. where 'easily' means 'without deep invasive changes to code or big design up front'). While you could create OOP systems where object state is open to ad-hoc views and extensions, you'll often be working around the OOP system rather than with it, especially if you must still achieve useful security and modularity properties. It is not fun to fight the paradigm. Externalizing state from the start (no local state, and thus no hidden private information) is much better for achieving ad-hoc views without sacrificing security or modularity.

Why is it difficult to maintain the data?

Lots of reasons. Data access and update is not uniform, and is instead regulated through ad-hoc domain-specific interfaces with hidden effects. Changes to data can break the code that's tightly coupled to it. There are often non-uniform and non-compositional concurrency properties.

My system guarantees that only 1 user can access any Server Object at a time so no locking or blocking.

That's useful, but is severely undermined by not being compositional. I.e. it becomes difficult to treat two smaller server objects as one larger server object, or treat two sequential messages as one larger message. Developers will be under pressure to create monolithic structures (objects, procedure-calls) just to achieve useful concurrency properties. This seems to be a common problem for event/messaging systems. (Cf. Why Not Events.) You may experience race conditions, deadlocks, and other problems the moment you need to involve more than one object in an observation or update.
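The non-compositionality can be made concrete: each object serializes its own messages, yet a reader issuing two separate messages can observe a combined state that never existed at any single instant. A minimal sketch in Python, with illustrative names rather than any actual system's API:

```python
# Each "server object" handles one message at a time, but a reader that
# queries two objects with two separate messages can observe a "torn"
# combination of states.  (Hypothetical minimal model.)

class ServerObject:
    """One-message-at-a-time state holder (single-user guarantee)."""
    def __init__(self, value):
        self.value = value
    def read(self):
        return self.value
    def write(self, value):
        self.value = value

a = ServerObject(100)
b = ServerObject(0)

# Invariant maintained by the writer: a.value + b.value == 100.
def transfer(amount):
    a.write(a.read() - amount)   # message 1: atomic on its own
    b.write(b.read() + amount)   # message 2: atomic on its own

snapshot_a = a.read()            # reader's first message
transfer(30)                     # writer runs between the reader's messages
snapshot_b = b.read()            # reader's second message

print(snapshot_a + snapshot_b)   # 130 -- violates the invariant of 100
```

Per-object serialization alone cannot express "these two messages form one observation"; that requires some larger composition mechanism (a transaction, a combined message, or a monolithic object).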

Keeping the data and code together also doesn't mean you can't leverage first class functions and interfaces. I don't need "lenses"

I never suggested otherwise. But you earlier emphasized the value of programming behind interfaces. With lenses and similar, we don't need to 'keep the code and data together' to do that. Clearly you don't value the separation, but some of your arguments against the separation can be countered by use of lenses.
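For readers unfamiliar with the term: a lens, in minimal form, is just a getter paired with a non-destructive setter, and lenses compose to focus on nested data. The sketch below is a toy Python rendition with illustrative names, not any particular lens library's API:

```python
from dataclasses import dataclass, replace

# A lens lets code read and update one part of a structure without
# knowing (or owning) the rest of it -- an interface over separated data.

@dataclass(frozen=True)
class Lens:
    get: callable                  # get(whole) -> part
    put: callable                  # put(whole, new_part) -> new whole

    def compose(self, inner):
        """Focus deeper: self views the outer part, inner views within it."""
        return Lens(
            get=lambda s: inner.get(self.get(s)),
            put=lambda s, v: self.put(s, inner.put(self.get(s), v)),
        )

@dataclass(frozen=True)
class Address:
    city: str
    zip: str

@dataclass(frozen=True)
class Customer:
    name: str
    address: Address

address_lens = Lens(get=lambda c: c.address,
                    put=lambda c, a: replace(c, address=a))
city_lens = Lens(get=lambda a: a.city,
                 put=lambda a, city: replace(a, city=city))
customer_city = address_lens.compose(city_lens)

c = Customer("Ada", Address("London", "N1"))
print(customer_city.get(c))        # London
c2 = customer_city.put(c, "Cambridge")
print(customer_city.get(c2))       # Cambridge
print(c.address.city)              # London -- original value untouched
```

The point is that "programming behind an interface" does not require the data to live inside an object with its code; the lens is the interface, and the data stays plain.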

Triggers and stored procedures in RDBMS were invented because data needs its code and vice versa!

Stored procedures are a useful form of abstraction but aren't deeply integrated with the data. Triggers essentially model external code/agents observing and operating on the database, which is a useful pattern (cf. blackboard systems). An RDBMS is actually a fine example of externalizing state. Unfortunately for your arguments, triggers and stored procedures are not a good example of or analogy for objects or other programming models that deeply interweave code and data.
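The "trigger as external observer" reading can be sketched directly: the table remains plain data, and the reacting code is registered beside it rather than woven into it. This is an illustrative Python model, not any particular RDBMS's trigger mechanism:

```python
# Triggers modeled as observers registered against a data store: the
# store stays plain data, and the code reacting to changes lives outside
# it.  (Hypothetical names for illustration.)

class Table:
    def __init__(self):
        self.rows = []
        self._triggers = []          # code kept beside the data, not in it

    def on_insert(self, fn):
        self._triggers.append(fn)

    def insert(self, row):
        self.rows.append(row)
        for fn in self._triggers:    # fire observers after the change
            fn(row)

audit_log = []
orders = Table()
orders.on_insert(lambda row: audit_log.append(("inserted", row["id"])))

orders.insert({"id": 1, "total": 25})
orders.insert({"id": 2, "total": 40})
print(audit_log)   # [('inserted', 1), ('inserted', 2)]
```

Nothing in the audit code needs to know the table's internals, and nothing in the table needs to know about auditing; the two evolve independently.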

Why would ignoring the role of data in problem solving be a good thing? I agree "Separating concerns doesn't imply ignoring them", however, why do you deny the fact that problems are solved with both data and code?

I agree that problems are solved with both data and code. What I am rejecting is your position that code and data should therefore be tightly coupled. I think it is better programming practice to separate them.

### All very good points BUT

I don't use the idea of "aliasing of common intermediate objects". As I said, all my Server Objects have very flexible interfaces (not function calls) and these interfaces can easily provide multiple and efficient views of data in one or multiple other Objects. To get an "ad-hoc alternative view" is as simple as using the following message.

Map <= :cust.select fields=fname,name,add_1,phone where="custn='123456'"

This would return a Map of variables named fname … that contain the data from the customer collection where the "custn" is '123456'.
Map <= ::comp2:cust.select fields=fname,name,add_1,phone where="custn='123456'"

This message would do the same as above, except it would get the data from the computer named "comp2".

The above messages are actual lines of code. Messages can also be sent by defining a "message" variable and using its functions to send the message. The "Map" of variables can be "dumped" into your program with a single command, or you can deal with them individually or as a group.

I have a full definition of my project at www.rccconsulting.com if you would like more information. I am very interested in all comments (good or bad) about my systems design.

Erlang has a message passing model, green threads and concurrency for up to hundreds of thousands of "things". Its language is functional and it runs continuously and can be changed while it runs. Message passing isn't just for OOP languages, obviously. In my system, you wouldn't typically encapsulate individual objects at the highest level (I define 2 quite different kinds of classes and objects) but encapsulate lists and tables more like an RDBMS does. One big difference is that my Server Objects (SO) can encapsulate multiple lists, just one or none. Only SOs have interfaces and a SO can be used to abstract any number of other "slave" servers as needed. Getting selected groups of data from any server is not "working around the OOP system" as you say. Unlike most other languages, I include a very flexible "user/module" security system in every interface of the servers so that I can "achieve useful security and modularity properties". In my system, somebody is always responsible for each chunk of data.

Not forcing data access and update into a controlled environment is a problem that encapsulation helps to solve. I laugh when I read about the "hidden effects and side effects" of FP. I enjoy the ability to directly reach beyond my immediate function and access data within the local SO. With FP you just end up passing more parameters and then trying to remember which one comes first and their types. Changes to data can break the code that is tightly coupled to it, AND that is exactly why they should be defined side by side. Changing the data in an RDBMS can also break your code way over there. All my messages are queued and all have timeouts, so no "concurrency properties". Like data persistence and memory management, concurrency is automatic and no programming features are provided for their implementation. You can change the number of operating system threads used without changing the code or re-compiling. Unlike most GC systems, no pausing occurs because most memory is restored to bigger usable chunks all the time.

You say my messages are not compositional. One of the fundamental abstraction tools in my language is using a server as a gatekeeper for a group of other servers who can't be called directly. Why would I want to combine 2 sequential messages? I could easily just add more options to an existing interface to make a more flexible message or create another interface with any functionality I wanted. Creating an interface can be as little as 2 or 3 lines of code. My system treats each server concurrently so "monolithic structures" would decrease rather than increase concurrency.

If processing is done in a single server then it all belongs to you while your message is being serviced. Your code can be written as if it was only single user. By assigning levels to servers and only allowing messages to servers on lower levels, deadlock can't occur.
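The level-ordering argument is a standard deadlock-avoidance technique: if every participant acquires resources in one agreed global order, the circular wait a deadlock requires cannot form. A minimal Python sketch with hypothetical names, not the system under discussion:

```python
import threading

# Assigning each server a level and only allowing messages/locks to flow
# "downward" imposes a total order on acquisition, which rules out the
# circular wait that deadlock requires.  (Illustrative sketch only.)

class LeveledServer:
    def __init__(self, name, level):
        self.name, self.level = name, level
        self.lock = threading.Lock()

def acquire_all(servers):
    """Lock a set of servers strictly in descending level order."""
    ordered = sorted(servers, key=lambda s: s.level, reverse=True)
    for s in ordered:
        s.lock.acquire()
    return ordered

def release_all(ordered):
    for s in reversed(ordered):
        s.lock.release()

app = LeveledServer("app", level=3)
cache = LeveledServer("cache", level=2)
store = LeveledServer("store", level=1)

# Every transaction, whatever subset it touches, locks in the same
# order, so no two transactions can each hold a lock the other needs.
held = acquire_all([store, app, cache])
print([s.name for s in held])   # ['app', 'cache', 'store']
release_all(held)
```

The cost of this discipline is that "upward" calls are forbidden, which constrains how the server hierarchy can be arranged.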

In the case of transactions, you can "borrow" all the data collections you want from multiple servers and timeout if you can't get confirmation of the "exclusive use". A timeout or error would automatically release the "hidden locks" and you would have to try again. If all is successful, then do the transaction as if all these collections are local and commit or rollback depending on success or failure. It is possible that some transactions could try to cross lock the same data but this problem occurs with all transactions. I would design the transactions so that cross lock problems were minimized. Another method would be to use an "arbitration object" to allow each kind of transaction to occur in order, using the message queue of all SO.

Stored procedures are stored in the RDBMS so how is that not "deeply integrated with the data"? Triggers for me encode the relationships between the tables, not any external agent. I have used RDBMS servers since the early 1990's and although they have many advantages, they make me always need to change data over there with the code over here. There are other problems, like trying to shoehorn your data into flat tables, inflexibility in adding your own "power tools" to the data collections, and the fact that the tool comes from a third party with all the version problems that entails.

At its highest level, I believe that every organization should always have somebody in charge of everything. Whenever you have anything that doesn't have a clear authority, you have problems. Whenever you have more than one authority in charge of something, they must cooperate or there are problems. In computer systems that also means security. In my system, security is provided for both data and code by using the interfaces of the SO. I could even allow foreign SOs to be created without any security problems, as all the data and code are fully protected in their SO. In my system, it is always obvious what the authority hierarchy is, but I also provide the ability to borrow collections of objects where other temporary organizations are required.

You said: Unfortunately for your arguments, triggers and stored procedures are not a good example of or analogy for objects or other programming models that deeply interweave code and data.

In my system, most SOs are collections of objects like a table in an RDBMS, as opposed to single Objects. Triggers can be used on all data changes if you add the trigger code, just like an RDBMS. All functions in a SO act like stored procedures because they are right there and can access the encapsulated data directly. Data from other objects can be obtained using messages or, in rare cases, by using my data "borrowing" commands.

As you can see, my system looks more like the inside of an OOP database with more flexibility in the language, full OOP, concurrency through message passing, first class functions and SQL at the same time.

I see no value in a language without large scale and useful data handling built right in.

### Integrated Code and Data

Stored procedures are stored in the RDBMS so how is that not "deeply integrated with the data"?

Deep integration suggests that data is stored within code, and code within the data. Access to data (whether to observe or manipulate it) would be regulated through ad-hoc, problem-specific, hand-written code. Stored procedures typically sit beside the data but aren't usually entangled with it. (Similarly, application programs often sit within a file-system, but aren't part of it. Co-hosting is not integration.)

I don't use the idea of "aliasing of common intermediate objects".

Then how is it you can have views? Having two or more views (including the canonical view) of the same object seems to involve aliasing, since the view itself must alias the object.
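The claim that a view necessarily aliases its subject is easy to illustrate: the view holds a reference to the underlying object, so updates through any other reference are visible through the view. A minimal Python illustration with hypothetical names:

```python
# A "view" of an object necessarily holds a reference to (aliases) it:
# any update through one alias is visible through the other.

class Account:
    def __init__(self, balance):
        self.balance = balance

class BalanceView:
    """Read-only view: an alias onto the underlying object."""
    def __init__(self, account):
        self._account = account      # the alias
    @property
    def balance(self):
        return self._account.balance

acct = Account(100)
view = BalanceView(acct)
acct.balance = 250                   # update via the canonical alias
print(view.balance)                  # 250 -- the view observed the change
```

Whether the aliasing happens through an in-memory reference or through messages to a server holding the data, the view and the canonical owner still name the same state.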

Why would I want to combine 2 sequential messages? I could easily just add more options to an existing interface

Adding options to an existing interface is typically invasive and difficult to generalize. Developers may even need special privileges to add the methods, e.g. if they're for a database on a different computer. Complexity and maintenance efforts tend to aggregate at the server rather than being distributed across clients. If developers are abstracting objects via their interfaces, they may also need to update many different objects and add the same new interface to each (cf. expression problem).

If developers are able to use multiple, simpler methods or messages to achieve a goal, then they can have simpler server objects and more options for managing complexity. There are some nice middle paths that involve code distribution, e.g. batch processing or enabling a client to extend a server interface. But even code distribution ultimately relies on the ability to achieve complex behavior through composition of multiple simpler actions.

A timeout or error would automatically release the "hidden locks" and you would have to try again.

Timeouts have a lot of their own problems, resulting in some unpredictable failure modes (various forms of thrashing). I'm not all that fond of transactions, either (cf. transaction tribulation)

I see no value in a language without large scale and useful data handling built right in.

I think there is much value in having useful data abstractions be a standard part of the language. I've enjoyed languages where arrays, matrices, and collections are the common elements (collection oriented programming). I've contemplated development of languages where databases (i.e. sets of tables or Datalog definitions, or even modeled as probabilistic grammars) are first-class values. But even for these cases, I currently favor designs where state abstractions are logically separate from the code, existing independently of the code.

### Deep integration suggests

Deep integration suggests that data is stored within code, and code within the data. Access to data (whether to observe or manipulate it) would be regulated through ad-hoc, problem-specific, hand-written code. Stored procedures typically sit beside the data but aren't usually entangled with it. (Similarly, application programs often sit within a file-system, but aren't part of it. Co-hosting is not integration.)

I would never consider any data other than temporary data to be "entangled" within code according to your definition. OOP certainly doesn't suggest this and my system won't either.

Then how is it you can have views? Having two or more views (including the canonical view) of the same object seems to involve aliasing, since the view itself must alias the object.

A Server Object (SO) can implement any view by sending messages to other Servers, requesting whatever data it wants, optionally processing that data, and presenting it as original. The same can be done for changing the data. The Servers that actually physically contain the data can be defined as slaves or can allow themselves to provide content directly to other Servers.

Adding options to an existing interface is typically invasive and difficult to generalize. Developers may even need special privileges to add the methods, e.g. if they're for a database on a different computer. Complexity and maintenance efforts tend to aggregate at the server rather than being distributed across clients. If developers are abstracting objects via their interfaces, they may also need to update many different objects and add the same new interface to each (cf. expression problem).

All the points you have made here might apply to other systems but not to mine. My interfaces can change without any change to any existing messages or functions. Adding and changing interfaces is not only fast and easy but the disruption to the existing interface is minimal. Interfaces can change without any re-compiling of anything.

If developers are able to use multiple, simpler methods or messages to achieve a goal, then they can have simpler server objects and more options for managing complexity. There are some nice middle paths that involve code distribution, e.g. batch processing or enabling a client to extend a server interface. But even code distribution ultimately relies on the ability to achieve complex behavior through composition of multiple simpler actions.

My SOs can have very complex interfaces or very simple ones. SOs can be "slaves" to other SOs or reachable directly by any other SO. An SO can look at messages and just pass them on as needed, which means the SO that actually satisfies a message doesn't have to be known by the SO that initiates it. All parts of my system, from the SOs, Objects, and Functions to the data contained in the automatically persistent Objects, can be added to or changed by a person or any authorized SO by editing a "document" or submitting change commands. To change the definition of an SO, a programmer would request that creation "document" from the SO, edit it, and send it back to the SO; all necessary changes would then be made in one message cycle. Another SO could do this as well.

Timeouts have a lot of their own problems, resulting in some unpredictable failure modes (various forms of thrashing). I'm not all that fond of transactions, either (cf. transaction tribulation).

I think the biggest problem with timeouts is when they trigger while nothing is actually wrong. A timeout in my system would reply to the message, so I see no way that "thrashing" would occur. Whenever a function is called or a message is sent, the result should always be tested to determine whether the operation succeeded. Ignoring erroneous results in any language can cause problems. Timeouts and transactions are optional, so for somebody who doesn't want to use them, the fact that they exist is not a negative.

I think there is much value in having useful data abstractions be a standard part of the language. I've enjoyed languages where arrays, matrices, and collections are the common elements (collection oriented programming). I've contemplated development of languages where databases (i.e. sets of tables or Datalog definitions, or even modeled as probabilistic grammars) are first-class values. But even for these cases, I currently favor designs where state abstractions are logically separate from the code, existing independently of the code.

In my system, Lists and Tables, even with millions of records and unlimited indexes, are nothing more than normal data items, no different from a "long integer". I have rarely made a program that didn't use one or many tables of data. I find the examples of "sorting" quite funny when people talk about FP and other languages. All my data structures, where it makes sense, have a function called "sort" that just works.

My system was designed to eliminate most low-level programmer concerns like "memory management, disk management, concurrency, multi-user access, security, distributed programming, locking, sockets, ports, email, parsing, conversions, etc." In most cases I don't provide any facility for the programmer to affect these things at all. Most changes don't break compiled code or existing messages. Security is flexible and surrounds every SO. Source code can be stored in a table, imported from external files, or communicated even over the internet, then compiled and run in just 2 lines of code while the system continues to service other users. These ad hoc functions would have the same privileges and speed as any other function the user that initiated the message would be entitled to.

The biggest difference between my system and others is that my language was designed for programs to create and manipulate other programs at the source code level, automatically compile them, and then execute them in the system as if they had always been there. A preprocessor that adds new features to my own source code could be made in 10 lines or less with no loss of speed at execution time. Somebody else on the Lambda site talked about "security problems just waiting to happen from embedding SQL etc.", which is why all my SOs are enclosed in a very flexible security shell.

A system that doesn't encapsulate data must have a security system around the whole system rather than selectively at any level of abstraction the programmer needs! This is just one more advantage of programming to OOP principles.

### Temporary Data

I would never consider any data other than temporary data to be "entangled" within code according to your definition. OOP certainly doesn't suggest this and my system won't either.

Well it is difficult to persist 'entangled' data, so in some senses it tends to be temporary. Nonetheless, I think OOP does suggest this entanglement - certainly any form of private state, but also co-data, existential types, various scene-graphs, stacks and queues, strategy patterns. Information hiding, encapsulation of information, and access to that information through ad-hoc hand-written problem-specific code are common practice in OOP languages.

A Server Object (SO) can implement any view by sending a message to other Servers [...] SOs can be "slave" to other SOs or reachable by any other SO directly.

Okay. So you have aliasing between SOs and servers, or between SOs and other SOs. That's what I expect from an OOP system.

All the points you have made here might apply to other systems but not to mine. My interfaces can change without any change to any existing messages or functions. Adding and changing interfaces is not only fast and easy but the disruption to the existing interface is minimal. Interfaces can change without any re-compiling of anything.

What is your solution to the expression problem? How are your changes to interfaces non-invasive? Do you not need any special privileges to do so? You say my points don't apply to your system, but it seems you didn't provide relevant counterpoints.

An SO can look at messages and then just pass them on as needed. This means that the SO that actually satisfies the message doesn't have to be known by the SO that initiates the message.

That's a useful feature for generic programming and data plumbing. It's also not uncommon in many OOP systems (e.g. Smalltalk, E, duck-typed or actors systems). I've not been ignorant of this possibility while making my earlier points.

A timeout in my system would reply to the message so I see no way that "thrashing" would occur.

The thrashing is an emergent property; it occurs not for the individual message, but rather for whole subsystems. This is usually a consequence of attempts to recover from an error/timeout response by retrying. In many cases, timeouts are caused due to other concurrent processes, which may also be retrying.

Timeouts and transactions are optional so if somebody doesn't want to use them, that they exist, is not a negative.

We often pay for features that we don't use: we might need to integrate with someone else who uses the feature, or there might be a lack of effective support for alternative solutions (e.g. time warp, temporal data). I imagine that, if timeouts and transactions are the answers you provide, then in practice those are the answers people must use despite any problems.

The design of my system was to eliminate most low level programmer concerns like "memory management, disk management, concurrency, multi-user access, security, distributed programming, locking, sockets, ports, email, parsing, conversions, etc".

I am also interested in addressing these concerns or making them easy for programmers to address, though I'm not fond of entirely 'eliminating' concerns about disruption or security or resource consumption. I prefer security concerns to be coupled with the language, and for disruption to be modeled at specific gateway abstractions. I'm interested in mechanisms by which memory management becomes part of the type system, i.e. supporting static reasoning about memory consumption.

my language was designed for programs to create and manipulate other programs at the source code level, automatically compile them and then execute them in the system as if they had always been there

That sounds interesting. I have some similar plans for a system I'm developing, leveraging the Haskell plugins system to achieve native code performance.

A system that doesn't encapsulate data must have a security system around the whole system rather than selectively at any level of abstraction the programmer needed!

The 'must' isn't true. Encapsulation is useful for security, but there are securable architectures that don't involve objects or full encapsulation of data. For example, a system may allow developers to logically partition data in an ad-hoc manner, and developers can secure those partitions between subprograms (i.e. such that subprogram Fred does not have general access to data recorded in subprogram Alice's partition). Stateful abstractions (including processes, applets, virtual machines, etc.) can then be aligned and associated with different partitions.
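The partitioned alternative described above can be sketched in a few lines of Python. All names here (`PartitionStore`, `grant_read`, the "Fred"/"Alice" subprograms) are hypothetical illustrations of the idea, not any particular system's API: data is grouped into named partitions, and access is checked per partition rather than per encapsulated object.

```python
# Minimal sketch of partition-based security (hypothetical names):
# access control is attached to logical partitions of data, not to
# objects that encapsulate the data.

class PartitionStore:
    def __init__(self):
        self._partitions = {}   # partition name -> dict of key/value data
        self._readers = {}      # partition name -> subprograms allowed access

    def create(self, name, owner):
        self._partitions[name] = {}
        self._readers[name] = {owner}

    def grant_read(self, name, subprogram):
        # a parent process consents to sharing the partition
        self._readers[name].add(subprogram)

    def put(self, name, caller, key, value):
        if caller not in self._readers[name]:
            raise PermissionError(f"{caller} cannot access partition {name}")
        self._partitions[name][key] = value

    def get(self, name, caller, key):
        if caller not in self._readers[name]:
            raise PermissionError(f"{caller} cannot access partition {name}")
        return self._partitions[name][key]

store = PartitionStore()
store.create("alice-data", owner="Alice")
store.put("alice-data", "Alice", "score", 42)
try:
    store.get("alice-data", "Fred", "score")   # Fred has no general access
except PermissionError:
    print("Fred denied")
store.grant_read("alice-data", "Fred")         # extension by consent
print(store.get("alice-data", "Fred", "score"))
```

Note that nothing here requires objects or full encapsulation: the same data could be reached through many code paths, yet the partition boundary is still enforced.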

### Well it is difficult to

Well it is difficult to persist 'entangled' data, so in some senses it tends to be temporary. Nonetheless, I think OOP does suggest this entanglement - certainly any form of private state, but also co-data, existential types, various scene-graphs, stacks and queues, strategy patterns. Information hiding, encapsulation of information, and access to that information through ad-hoc hand-written problem-specific code are common practice in OOP languages.

In my system, persistence is entirely up to the system, so regardless of the problems, the programmer doesn't have to worry about it. I see nothing in the OOP principles that says you should intersperse data and code. A normal programming practice, even in dynamic languages like PHP, is to define your variables at the start of a function. I see no connection between local state, or the other data collections you mention, and "data entanglement". In my system, all data is "hidden" inside an SO (no choice, and so guaranteed and free). Responsibility for data is guaranteed. My interfaces handle access, and they are very flexible and inheritable from a prototype. Again, no problem in my system.

Okay. So you have aliasing between SOs and servers, or between SOs and other SOs. That's what I expect from an OOP system.

Aliasing to me means that you have 2 names for the same thing (this implies a pointer of some kind). My SOs have no pointers (so no aliasing) but they are very flexible. A single SO cannot have 2 names!

What is your solution to the expression problem? How are your changes to interfaces non-invasive? Do you not need any special privileges to do so? You say my points don't apply to your system, but it seems you didn't provide relevant counterpoints.

Defining an interface involves no expressions. Expressions can be put inside an interface function, and any code can be put there to implement the interface. I am not sure I know what "expression problem" means. All code, including interface code, can be changed in one message interval, during which you have exclusive access to all data and functions of the SO. "Changes to interfaces non-invasive?" See the example below. All SOs and individual interfaces can have security defined for reading and/or writing. Users' security tokens are queried for read or write before the interface function takes over. The interface can then query by user, or by what module last sent the message, to determine what rights should be allowed. Once inside an SO, anything goes (there is no security on any normal object). All security is handled by the interface functions, with a default from the SO itself. I show below how security can be implemented with "security=all|prog". This means read-only for everybody and read/change for programmers only.

My interfaces can only be defined on an SO rather than a normal object. In my system, I have 2 quite different groups of objects (SOs and regular objects). My interfaces are actually functions that can contain any code acceptable in a function, but they always get a Map (a container of arbitrary typed variables) as input, and they always return a Map as a result. If the code of an interface doesn't recognize an incoming variable, it will just be ignored without any error (valid variables, optional variables, and valid values can also be specified). You could detect any unrecognized variables with a single command and create an error if you like. I will give you an example of how an interface could be totally rewritten without breaking a previous message.

Messages:
map <= :server.report name=xyz type=regular             // old message uses name and type
map <= :server.report version=1.1 prog=run_code         // new message uses version and prog

Interface definition:
server server {
#interface
report name type=regular|fancy|bad  security=all|prog   // old interface spec : name and type are required
report [version] [prog] ...                             // new interface spec : version and prog are optional
}


The name of the interface is "report". In the old message, there were 2 variables that produced a certain result. In the new interface, the variables have changed from "name and type" to "version and prog". The interface code just detects the presence of the variable "version"; if it doesn't exist, it gets "name and type" like before and executes the functionality that already works. If "version" does exist, it checks that it is 1.1, expects a variable called "prog" or produces an error, and then implements the new functionality by calling the "program" called "run_code" to implement the report.

The above syntax is how you define an "interface" in the "server object" document. The "..." means that the SO interface won't automatically reject any extra variables (name and type in this case). A better approach would be the following.

report [version] [prog] [name] [type]                   // this would reject messages with non implemented variables


This means that any of "version, prog, name, type" are optional variables, and any others produce an error. The interface code itself can sort out a correct combination of variables or produce an error message.
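The Map-in/Map-out dispatch described above can be approximated in Python. This is a rough analogue (the `report` function, its keys, and the returned messages are illustrative, not the actual syntax of the system under discussion): unknown variables are rejected, and the presence of "version" selects the new behavior while old messages keep working.

```python
# Sketch of a map-in/map-out interface that accepts both the old
# (name/type) and new (version/prog) message shapes. Hypothetical
# names; not the author's actual language.

ALLOWED = {"version", "prog", "name", "type"}   # declared optional variables

def report(msg):
    unknown = set(msg) - ALLOWED
    if unknown:                        # reject non-implemented variables
        return {"error": f"unknown variables: {sorted(unknown)}"}
    if "version" in msg:               # new-style message
        if msg["version"] != "1.1":
            return {"error": "unsupported version"}
        if "prog" not in msg:
            return {"error": "prog required with version"}
        return {"result": f"ran {msg['prog']}"}
    # old-style message: name and type, handled as before
    return {"result": f"report for {msg.get('name')} ({msg.get('type', 'regular')})"}

print(report({"name": "xyz", "type": "regular"}))       # old message still works
print(report({"version": "1.1", "prog": "run_code"}))   # new message
print(report({"bogus": 1}))                             # rejected
```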

That's a useful feature for generic programming and data plumbing. It's also not uncommon in many OOP systems (e.g. Smalltalk, E, duck-typed or actors systems). I've not been ignorant of this possibility while making my earlier points.

I haven't and wouldn't suggest you are "ignorant" at all :) but I do think you mix the OOP principles with particular implementations. That might be a natural mistake, as we only really "know" about features through the implementations we have used. No language is totally original, and I confess, up front, to borrowing good ideas wherever I can find them.

The thrashing is an emergent property; it occurs not for the individual message, but rather for whole subsystems. This is usually a consequence of attempts to recover from an error/timeout response by retrying. In many cases, timeouts are caused due to other concurrent processes, which may also be retrying.

A timeout is an error to me, and I would stop the message flow. I also have a very easy-to-use error-catching mechanism, so a person could retry on any error if they wished. If you retry from errors, any professional programmer would implement some stopping condition, wouldn't they? A bigger timeout or a count of the retries! I am not saying that somebody couldn't create thrashing, but I don't think it could happen by default.

I could always use an SO to sequence groups of messages that might cause this kind of problem. To use an SO in this manner is no more than 1 or 2 lines of code.

We often pay for features that we don't use: we might need to integrate with someone else who uses the feature, or there might be a lack of effective support for alternative solutions (e.g. time warp, temporal data). I imagine that, if timeouts and transactions are the answers you provide, then in practice those are the answers people must use despite any problems.

I suggest that timeouts be thrown for erroneous code that is hard looping. If you have a better solution to this problem than timeouts, I am all ears. In general, well behaved programs would never timeout.

I am also interested in addressing these concerns or making them easy for programmers to address, though I'm not fond of entirely 'eliminating' concerns about disruption or security or resource consumption. I prefer security concerns to be coupled with the language, and for disruption to be modeled at specific gateway abstractions. I'm interested in mechanisms by which memory management becomes part of the type system, i.e. supporting static reasoning about memory consumption.

My system was designed so that almost all data structures can grow dynamically (all my collections can). I wanted a granular concurrent system that could execute hundreds of green threads, spread over as many real threads as you allow in the configuration. Allowing many programs to run in the same memory space with no interference between them was no small feat. Getting them to run quickly means keeping them isolated as much as possible and minimizing the number of locks. This requires a memory management system that operating systems weren't designed for. I have a full description of how and why I designed my memory manager on my web site. I have spent a lot of time designing and writing my memory management system, and I can't imagine it just being "part of the type system". I agree that security should be part of the language, but I also believe that concurrency etc. should be as well. I want programmers to forget about the language and concentrate on the project they are coding (my syntax is so simple it looks like pseudo code). I am sure if you read my full system description you would dislike a lot more of the design decisions I have made!

I am not sure what "disruption to be modeled at specific gateway abstractions" means. I allow no mechanism to allocate, release, or measure memory usage except at the global level. My Lists and Tables have exactly the same functionality, except that one is fully in memory and the other works largely from the disk. If a system runs out of memory, the programmer would have to convert some of their Lists to Tables. It sounds like your target market would want a less "automatic" language than what I am designing.

The 'must' isn't true. Encapsulation is useful for security, but there are securable architectures that don't involve objects or full encapsulation of data. For example, a system may allow developers to logically partition data in an ad-hoc manner, and developers can secure those partitions between subprograms (i.e. such that subprogram Fred does not have general access to data recorded in subprogram Alice's partition). Stateful abstractions (including processes, applets, virtual machines, etc.) can then be aligned and associated with different partitions.

"Must" is probably too strong. I have spent many hours looking at Haskell documentation and I have no clue how data is organized. To me, data definitions and the power tools that act on them are most of what I have ever programmed. Does Haskell have a way of defining data collections that can organize those data structures and secure their access? Seriously, I don't know!

Whatever the ad-hoc manner of organizing and securing your data, doesn't that imply that access to the data would be restricted for some functions, or that responsibility for that data would have to be spread over more than one authority? How is this better than encapsulation?

### I see nothing in the OOP

I see nothing in the OOP principles that say you should intersperse data and code.

Which OOP principles are you looking at?

Aliasing to me means that you have 2 names for the same thing (this implies a pointer of some kind).

Affirmative. But there are many ways to have two names for the same thing. And they don't even need to be the exact same thing, so long as there is some overlap. For example:

1. reference to an SO, called Fred
2. reference to an SO, called Sally, who transparently forwards some of her messages to Fred

With such a setup, you have two names that can reach Fred. Relevantly, an update through one reference/name can be observed through the other. This is a clear example of aliasing.
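The Fred/Sally scenario above is easy to make concrete. A minimal Python sketch (the class and method names are just the example's, not any real API) shows that no pointer sharing in the source-level sense is needed: forwarding alone is enough for an update through one name to be observable through the other.

```python
# Aliasing through forwarding: Sally holds no "pointer syntax", yet an
# update made via Sally is visible via Fred.

class Fred:
    def __init__(self):
        self.counter = 0
    def increment(self):
        self.counter += 1
    def read(self):
        return self.counter

class Sally:
    def __init__(self, target):
        self._target = target
    def increment(self):            # transparently forwards to Fred
        self._target.increment()

fred = Fred()
sally = Sally(fred)
sally.increment()          # update through one name...
print(fred.read())         # ...observed through the other: prints 1
```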

A single SO cannot have 2 names!

That isn't relevant. Even when you consider pointers, aliasing is often a consequence of having two different variables (which have different names) containing the same pointer. The fact that you store references to a single object in two different variables or cells is a sufficient condition for aliasing.

To avoid aliasing really requires that only one user can hold a reference to the name. There are type-systems to help control aliasing, e.g. 'uniqueness typing' or 'linear types'.

SOs and individual interfaces can have security defined for reading and/or writing. Users security tokens are queried for read or write before the interface function takes over. The interface can then query by user or what module last sent the message to determine what rights should be allowed.

Identity-based security is maybe better than nothing, but isn't especially simple, efficient, expressive, or composable. (And it can cause weird partial failures, e.g. when a user only has the authority needed to get started on a data change but not to finish it.)

Have you ever read about object capability model security? If not, consider perusing erights.org.

I am not sure I know what "expression problem" means.

You can easily look it up. There is a lot of literature on the expression problem, and how it is traditionally addressed in OOP vs. FP. I linked to a Wikipedia entry earlier. The expression problem is relevant when you're abstracting on interfaces, i.e. when you program against a set of methods and documented contracts without providing the implementation. In these conditions, a change in interface may require updating a large set of concrete classes or objects, in addition to client code.
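The OOP half of the trade-off can be shown in a few lines. In this sketch (illustrative names only), adding a new kind of object is free, but adding a new operation to the shared interface forces edits to every existing concrete class, which is the invasive change described above.

```python
# The expression problem, OOP side: new variants are cheap, new
# operations are invasive.

import math

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return math.pi * self.r ** 2

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

# Client code abstracts over the interface, not the implementations:
def total_area(shapes):
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1.0), Square(2.0)]))   # new shapes slot in freely
# But a client that now needs shape.perimeter() must update Circle,
# Square, and every other concrete class, plus any client code that
# documents the contract.
```

In FP the situation is mirrored: new operations are cheap (one new function with a case per variant), but new variants require touching every existing function.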

The expression problem is largely orthogonal to live update of code, which is what you have been describing. That said, live update of code is an interesting problem in its own right, especially in combination with schema or architecture changes.

I do think you mix the OOP principles with particular implementations.

I don't believe in "the OOP principles". OOP is not well defined, and the best I can do in defining it is to point at existing systems and especially software design patterns that are widely acknowledged as or strongly associated with OOP. Were I today to take a philosophical stance, it would be largely shaped by Kristen Nygaard's classification: Object Oriented Programming: A program execution is regarded as a physical model, simulating the behavior of either a real or imaginary part of the world. That is, OOP is about modeling the program execution (emphatically NOT the problem domain) as a mechanical interaction of objects. But today I favor a descriptive definition of OOP, based on how it is used in practice - i.e. the design patterns, vernacular, and best practices of popular OOP languages.

If you retry from errors, any professional programmer would implement some stopping condition wouldn't they? A bigger timeout or a count of the re-tries!

Yes, those would be the typical responses from professional programmers. And those responses typically lead to the thrashing behavior I was describing earlier.

Except in some special cases where we can usefully observe intermediate values (e.g. hill-climbing or search-based optimization problems, or certain animation/simulation problems) there often isn't much useful we can do on a timeout, other than try again. And it isn't always clear how much more time it would take to get the job done.
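A back-of-the-envelope calculation makes the retry-amplification point concrete. The function below is a worst-case model, not a measurement: if each of N clients retries k times on timeout, a briefly overloaded server can face N*(1+k) requests instead of N, which is exactly what keeps it overloaded.

```python
# Worst-case offered load under naive retry-on-timeout: every request
# times out and each client retries 'retries' additional times.

def offered_load(clients, retries):
    return clients * (1 + retries)

print(offered_load(100, 0))   # 100 requests
print(offered_load(100, 3))   # 400 requests: the "fix" quadruples the load
```

This is why the standard mitigations go beyond a retry cap: randomized exponential backoff spreads the retries out in time so the burst that caused the timeouts can drain.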

When I began developing new paradigms, one of my goals was to eliminate the need for timeouts from software design patterns, at least for the error handling use case. I do find some utility in more formal, deterministic timeout models: 'expiring' tuples from a tuple space and other models of animated state.

I have spent a lot of time determining and writing my memory management system and I can't imagine it just being "part of the type system".

I wouldn't use the word 'just' there. It takes rather careful design to effectively support static reasoning about memory resources (without overly hindering expressiveness).

I am sure if you read my full system description you would dislike a lot more of the design decisions I have made!

Lol. Perhaps so. But I'm not inclined to read further at this time, and when I looked earlier I found the portal a little sparse (e.g. no direct links to your system).

I have spent many hours looking at Haskell documentation and I have no clue how data is organized.

Haskell has no consistent design with regards to data management, except that it leaves the problem to its users. But some frameworks in Haskell do favor particular options, e.g. Happstack with acid-state. Some packages, such as acid-state and perdure, enable developers to work with persistent state as though it were a permanent part of the application.
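acid-state is a Haskell package, but the "persistent state as though it were ordinary state" idea it illustrates can be sketched with Python's standard-library shelve module, used here only as a loose analogue:

```python
# shelve gives a program a persistent dictionary that reads almost like
# ordinary in-memory state; the data survives across runs.

import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "state")

with shelve.open(path) as state:        # "first run": initialize/update
    state["visits"] = state.get("visits", 0) + 1

with shelve.open(path) as state:        # "later run": state survived
    state["visits"] = state.get("visits", 0) + 1
    print(state["visits"])              # prints 2
```

The contrast with the systems discussed above is that here persistence is an opt-in library choice rather than a pervasive property of the language.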

I have not suggested use of Haskell's model (or lack thereof). I did mention Haskell, but I'm developing a framework that has its own consistent model for state resources.

Whatever ad-hoc manner of organizing and securing your data, doesn't that imply that access to the data would be restricted for some functions or that responsibility for that data would have to be spread over more than one authority? How is this better than encapsulation?

I wouldn't say that "whatever ad-hoc manner" is better than encapsulation. But the structured, partition-based approach I described earlier has some advantages for open systems. It enables developers to model mutually distrustful systems as operating in different home partitions, yet does not hinder observation or manipulation that has consent of a parent process, thus extension is always feasible with consent of the parent, without invasive changes to existing code.

There are also some advantages for live programming, since logical partitions easily remain stable across code updates (compared to dynamic creation of objects). You've described some active approaches to update the interface of an object, but live programming really needs the more passive approach where you change the source code and the change is automatically applied to the system's behavior and any entangled state.

### Just responding to your responses!

Affirmative. But there are many ways to have two names for the same thing. And they don't even need to be the exact same thing, so long as there is some overlap. For example:
1. reference to an SO, called Fred
2. reference to an SO, called Sally, who transparently forwards some of her messages to Fred

With such a setup, you have two names that can reach Fred. Relevantly, an update through one reference/name can be observed through the other. This is a clear example of aliasing.

I disagree. If I were just forwarding an unchanged message from A to B, then I would just eliminate A altogether. I said that an SO could combine the functionality of part or all of many other SOs, on interfaces that could be totally different from those other SOs, and it could add more functionality of its own as well. That is not an alias.

That isn't relevant. Even when you consider pointers, aliasing is often a consequence of having two different variables (which have different names) containing the same pointer. The fact that you store references to a single object in two different variables or cells is a sufficient condition for aliasing.

To avoid aliasing really requires that only one user can hold a reference to the name. There are type-systems to help control aliasing, e.g. 'uniqueness typing' or 'linear types'.

I have no need to add such concepts to solve problems I don't have.

Identity-based security is maybe better than nothing, but isn't especially simple, efficient, expressive, or composable. (And it can cause weird partial failures, e.g. when a user only has the authority needed to get started on a data change but not to finish it.)

Actually my security system is based on User security and Module security. A person can have a very flexible system of rights based on Dept, Level, Exceptions, etc., or it can be super simple. The end result is a set of tokens, available to all SOs, that can be used to resolve permissions. You could also add time-of-day or local-versus-remote IP address security. In the case of slave SOs that can't be accessed at all by most SOs, you can verify that only trusted SOs will get access. I even have SO grouping tokens, so that you can add SOs to a group of modules to access some SOs without changing their code. Because the interfaces on my SOs implement security, a programmer can implement any security model of any complexity they want. My security system is:

1. Simple by default
2. Efficient because security testing is never in a big loop. (No security within functions or object collections)
3. I think it is expressive because it is totally obvious in most cases without even looking at the interface code.
4. You can combine any of the techniques I already provide with any other custom system you design. Sounds "composable" to me.

Have you ever read about object capability model security? If not, consider perusing erights.org.

Do you think hiding the name of an Object is a useful security model? In my system, I use variables starting with an "_" to denote a "system variable". These variables are guaranteed to be read only by all programs. Security information is always available but impossible to forge or change.

You can easily look it up. There is a lot of literature on the expression problem, and how it is traditionally addressed in OOP vs. FP. I linked to a Wikipedia entry earlier. The expression problem is relevant when you're abstracting on interfaces, i.e. when you program against a set of methods and documented contracts without providing the implementation. In these conditions, a change in interface may require updating a large set of concrete classes or objects, in addition to client code.

I got this quote from Wikipedia:

The Expression Problem is a new name for an old problem. The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts).

1. I presume "cases" means "Classes", and my system can change a Class or add a function at execution time without recompiling. If it couldn't, then recompiling would be done automatically and unseen by the programmer.
2. My system has dynamic typing but uses explicit variable typing (optionally) to provide some error checking at compile time. Arbitrary variables can be "dumped" into a function at execution time, so if you use that feature, you would have to type-check your own code (I have made this very easy to do). I can't "retain static type safety" as I never had it to begin with (a feature, IMO)!

The expression problem is largely orthogonal to live update of code, which is what you have been describing. That said, live update of code is an interesting problem in its own right, especially in combination with schema or architecture changes.

Absolutely everything a programmer can change in my system can be done by a function at execution time, so long as they have the permission to do so. In my system, code can be "re-compiled" while others continue to execute their functions. Compiling doesn't mean stopping in my system, which I believe the quote's writer assumed.

I don't believe in "the OOP principles". OOP is not well defined, and the best I can do in defining it is to point at existing systems and especially software design patterns that are widely acknowledged as or strongly associated with OOP. Were I today to take a philosophical stance, it would be largely shaped by Kristen Nygaard's classification: Object Oriented Programming: A program execution is regarded as a physical model, simulating the behavior of either a real or imaginary part of the world. That is, OOP is about modeling the program execution (emphatically NOT the problem domain) as a mechanical interaction of objects. But today I favor a descriptive definition of OOP, based on how it is used in practice - i.e. the design patterns, vernacular, and best practices of popular OOP languages.

Alan Kay said that message passing was the most important thing in Smalltalk (OOP). I would say that it is the second most important feature and that encapsulation is the most important. I don't agree that OOP has anything to do with "real world objects". I have Classes that are the templates for Objects, but I don't have "constructors", "destructors", "public properties" and many other constructs that I believe make OOP harder than it should be.

The emphasis in my SOs is on "collections of objects" (or even collections of collections) rather than objects and I even give significantly different properties to my 2 levels of Objects. Pure my language is not, nor is it a flavour of some existing language. I believe that the value of a language doesn't come from absolute originality but from the ease with which it can solve groups of problems.

I wouldn't use the word 'just' there. It takes rather careful design to effectively support static reasoning about memory resources (without overly hindering expressiveness).

I agree there are other strategies, however, as I don't allow a programmer to actively have anything to do with the memory management, my design decisions are obviously for a different audience.

Lol. Perhaps so. But I'm not inclined to read further at this time, and when I looked earlier I found the portal a little sparse (e.g. no direct links to your system).

Here is a more direct link (It was the big button on the bottom of the page you went to).

Check out the "Current Design of MAX (big picture)" or, for the why, try "The Why of the design of MAX?".

We are all busy, so I understand if you don't read a description of my full design. You have spent considerable time on my comments, and it has been appreciated.

Haskell has no consistent design with regards to data management, except that it leaves the problem to its users. But some frameworks in Haskell do favor particular options, e.g. Happstack with acid-state. Some packages, such as acid-state and perdure, enable developers to work with persistent state as though it were a permanent part of the application.

For applications I envision my system being used for, your above quote would disqualify it regardless of any other desirable features.

I wouldn't say that "whatever ad-hoc manner" is better than encapsulation. But the structured, partition-based approach I described earlier has some advantages for open systems. It enables developers to model mutually distrustful systems as operating in different home partitions, yet does not hinder observation or manipulation that has the consent of a parent process; thus extension is always feasible, with the parent's consent, without invasive changes to existing code. There are also some advantages for live programming, since logical partitions easily remain stable across code updates (compared to dynamic creation of objects). You've described some active approaches to update the interface of an object, but live programming really needs the more passive approach, where you change the source code and the change is automatically applied to the system's behavior and any entangled state.

I'll put the following in point form.

• My SOs have security built around them all.
• Changing security doesn't break messages or function calls.
• Most SOs would have long term persistence in my system with dynamic objects created mostly in functions for temporary use.
• SOs could be created dynamically in my system but most of the time they wouldn't be.

I don't see what I have said that would lead to your last point. Changes to the persistent data, interfaces, functions or classes are automatically applied to the "system's behavior" and as I have stated before, my data and code aren't tangled.

### Quibbling definitions of

Quibbling definitions of 'aliasing' isn't as productive for this discussion as trying to understand what I meant by the word when I used it earlier. And aliasing isn't a problem; indeed, it can be a useful feature. But it can interact with other features (such as concurrency and memory management) in some awkward ways.

Actually my security system is based on User security and Module security.

Even if your identities come from users and modules, it's still identity-based security: you are administrating ACLs, performing lookups, and making security decisions based on who performs the request. Identity-based security inherently encounters the confused deputy problem, and is not simple, efficient, composable, or expressive compared to various alternatives (cf. object capability model, F* language).

From your response, I think you don't know the meanings (in any formal sense) of expressive or composable. But that is not a discussion that would help you or me at this time.

Do you think hiding the name of an Object is a useful security model?

Object capability model is very useful, and it relies on unforgeable names.
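The point about unforgeable names can be made concrete with a minimal Python sketch (the class and function names are invented for illustration, not taken from any system discussed here): authority travels as an object reference, and there is no global registry where code could look a capability up by name.

```python
class Capability:
    """Possession of this object IS the authority. It cannot be forged
    from a string, because there is no lookup table mapping names to
    capabilities; you hold the reference or you hold nothing."""
    def __init__(self, secret):
        self._secret = secret   # encapsulated; no public registry

    def read(self):
        return self._secret

def run_untrusted(task, cap=None):
    # The task can only exercise authority it was explicitly handed.
    return task(cap)

cap = Capability("launch-code")

# A task given the capability can use it:
assert run_untrusted(lambda c: c.read(), cap) == "launch-code"

# A task given no capability cannot mint one from a name -- there is
# simply no API that turns the string "launch-code" into authority.
```

The security property is not that the name is hidden, but that the reference itself is the permission: whoever can reach the object may act, and reachability is controlled by who passes what to whom.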

I presume "cases" means "Classes" and my system can change a Class or add a function at execution time without recompiling

For the expression problem in the context of OOP, cases typically refer to subclasses, or the concrete classes that implement some abstract interface. The relevant issue is that an update to the interface may require updating many classes that implement the interface - i.e. a shotgun update. Whether or not the update occurs at runtime is not relevant to the expression problem.
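The shotgun update can be seen in a small hypothetical Python sketch: adding a new case (a new class) is a local change, while adding a new operation to the abstract interface would require touching every existing implementor.

```python
# Hypothetical sketch of the OOP side of the expression problem.
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self): ...
    # Adding `perimeter` here would break Circle, Square, and every
    # other implementor until each is updated -- a shotgun update.

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Square(Shape):
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

# A new case costs nothing: no existing class is touched.
class Triangle(Shape):
    def __init__(self, b, h): self.b, self.h = b, h
    def area(self): return self.b * self.h / 2
```

Whether the classes are recompiled or patched at runtime, someone still has to write the new `perimeter` method for every concrete class, which is why runtime code update does not by itself dissolve the problem.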

I believe that the value of a language doesn't come from absolute originality but from the ease with which it can solve groups of problems.

I agree that originality is not what makes a language valuable. Though focusing on ease of expressing solutions seems narrow. If a language makes it easy to do the right thing but also makes it easy to subtly do the wrong thing, then the latter seems to undermine the former. Ability to explain, maintain, extend, reuse, adapt, and do other useful things with the code is at least as important as the ability to write it in the first place.

For applications I envision my system being used for, your above quote would disqualify it regardless of any other desirable features.

Even without knowing what you envision, I expect other people would find Haskell (with the appropriate packages) well qualified for those same applications.

### Ideas are what's important, not technical definitions.

Quibbling definitions of 'aliasing' isn't as productive for this discussion as trying to understand what I meant by the word when I used it earlier. And aliasing isn't a problem; indeed, it can be a useful feature. But it can interact with other features (such as concurrency and memory management) in some awkward ways.

I totally agree that "quibbling" isn't useful. Your comments about "aliasing" wouldn't have been made if you had spent 2 minutes actually looking at the overview of my system. Your comments on aliasing might be correct for other systems but they obviously didn't apply to mine.

Even if your identities come from users and modules, it's still identity-based security: you are administrating ACLs, performing lookups, and making security decisions based on who performs the request. Identity-based security inherently encounters the confused deputy problem, and is not simple, efficient, composable, or expressive compared to various alternatives (cf. object capability model, F* language).

From your response, I think you don't know the meanings (in any formal sense) of expressive or composable. But that is not a discussion that would help you or me at this time.

I looked up the "deputy problem" and it doesn't apply. A few points on my security:

1. Inside a secure system SO, users, passwords and their list of security tokens reside.
2. When a user makes a request, this token "Set" follows the message in a secure "read-only" variable type.
3. These tokens are used to flexibly determine if a user has access to an interface.
4. The programmer decides what changes to state or functionality an interface can have. The interface itself has no security.
5. In fact, all interfaces have 100% access to the functions and data that reside inside an SO.
6. Changes to the definition of the SO or its interfaces require the right security tokens.
7. Messages that haven't passed the security checks cannot change an interface or anything else inside an SO.
8. The only security that exists is the set of tokens that follow a user's message and these can't be changed outside of the secure system SO. There is no higher security to usurp.
9. In the case of Module security, the identity of the sending SO cannot be spoofed or changed.
10. As I have said, my security system isn't closed. A programmer can implement any additional security they wish and use the built in security to make sure that the custom security isn't compromised.
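As a rough illustration only (the names and token strings are invented, not taken from the poster's system), the token-set check in points 2-3 might be sketched like this in Python: a request carries a read-only token set, and an interface is reachable only when its required tokens are a subset of the caller's.

```python
from types import MappingProxyType

def make_request(user, tokens):
    # MappingProxyType and frozenset model the "read-only" token set
    # that follows a user's message and cannot be changed in transit.
    return MappingProxyType({"user": user, "tokens": frozenset(tokens)})

# Hypothetical interface -> required-token table.
REQUIRED = {
    "reports.read":  frozenset({"analyst"}),
    "reports.write": frozenset({"analyst", "editor"}),
}

def allowed(request, interface):
    # Access is granted when the required tokens are a subset of the
    # tokens that accompanied the message.
    return REQUIRED[interface] <= request["tokens"]

req = make_request("alice", ["analyst"])
assert allowed(req, "reports.read")        # analyst token suffices
assert not allowed(req, "reports.write")   # missing the editor token
```

Note that this kind of check still decides based on who sent the request, which is why the earlier comment classifies it as identity-based security.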

You are absolutely correct that I don't know all the technical terms as well as you do. Unlike you, I don't make rash generalizations; I take each idea on its own merits. I am open to all ideas regardless of their source. I have no interest in the "standard or orthodox methods", and sometimes that means I will commit the same mistakes that others have already made. It also means that I will come up with innovative solutions or combinations that others have not, and I think this is more important.

Big words don't impress me. Math notation is no more convincing to me than spreadsheet models of future profits. I think the purpose of communicating is to convey ideas. Cloaking simple ideas in jargon doesn't make the idea correct, better, or more useful. Regardless of how much you think you know, are you really that sure you can learn nothing more?

Object capability model is very useful, and it relies on unforgeable names.

Yes, I did read about the "capability model" a long time ago, and I quickly forgot the name because I found the algorithm to be useless. Hiding a name is not security. It seems to me that it is also used in Erlang, but I could be wrong.

For the expression problem in the context of OOP, cases typically refer to subclasses, or the concrete classes that implement some abstract interface. The relevant issue is that an update to the interface may require updating many classes that implement the interface - i.e. a shotgun update. Whether or not the update occurs at runtime is not relevant to the expression problem.

I looked up the "expression problem" and showed how it doesn't apply in my case, so you ignore my comments and change the question. The form of debate that I use is obviously different from yours.

I agree that originality is not what makes a language valuable. Though focusing on ease of expressing solutions seems narrow. If a language makes it easy to do the right thing but also makes it easy to subtly do the wrong thing, then the latter seems to undermine the former. Ability to explain, maintain, extend, reuse, adapt, and do other useful things with the code is at least as important as the ability to write it in the first place.

So you get to decide what is the "right thing"? I don't think I am "focusing" on ease of expressing solutions at the expense of the other issues you mention. Anybody can create complex systems that implement complex functionality, but it takes great skill to create simpler systems that can implement complex functionality. That is no famous person's quote, just something I have learned over a 35-year career. I agree that if a "simple" system doesn't have the tools to solve your problem, then how "simple" it is doesn't matter. I totally agree that "maintenance" of code is at least as important as speed of "creation"; however, these two properties are quite connected in most languages. Code "reuse" and "adaptability" are also very important. No argument there.

Even without knowing what you envision, I expect other people would find Haskell (with the appropriate packages) well qualified for those same applications.

I would agree that programmers like you would prefer Haskell to what I am creating. My audience will not be compiler writers, Math purists, or others whose purpose is to explore the minute details of computer code. My target audience will be one that cares about the problem they want to solve and wants the language to disappear from their concerns. I think of my design approach for this project as providing a powerful, balanced sports car that has an automatic transmission rather than a stick shift. There is nothing wrong with sports cars with stick shifts; I have had more than a few of them in my time.

### I wish you luck with your

I wish you luck with your project. I am not convinced by your claims and arguments, but I don't feel further discussion is useful.

### I believe that you are right.

This topic is quite long enough by now. There's no point in going on.

### Layering

Topic is a little bit old now, but I had a thought. Explicit layering of different languages designed around specific paradigms - whether domain-specific or for generalized computation - is an accelerating trend, with ample examples in both industry and academia. As well, the "real world" of computing offers an increasing number of affordances to the code-generation process: not an either-or where you're locked into a single workflow driven by your technology, but a whole palette of tools that can be mixed toward application needs, with room to "maneuver" and trial different technologies at a stage of partial completion, linked by common protocols, data formats, etc.

So just extrapolate that for 8 years and you can get a snapshot of ideas and events that may develop from the trend:

• An explosion of new tiny languages
• Language-inside-language architectures (not necessarily "domain-specific")
• A war over which language should be "top-level" and subsequently, the actual meaning of "top-level"
• New language designs that attempt to formalize layering
• New protocols or type system designs that attempt to improve the state of cross-language negotiation
• An increasingly blurred line between "user" and "programmer" as applications expose more and simpler languages to the user, rather than hardcoding their functionality

This trend is not a new thing, it's easy to see throughout the history of computing. The question is really one of whether it's going to hockey-stick, and if it does it may also subsume most of the other trends because it applies a "divide and conquer" algorithm to programming as a whole - as the average language becomes smaller and simpler, you will see a positive feedback loop.

### runtime conflicts

I often use the word layer, and I guess my ideas roughly match what you said, but seven years doesn't seem long enough for big change, let alone an explosion. I'll validate your basic premise anyway, though folks should worry a lot about chaos, which can bleed off value. The phrase "lots of moving parts" is almost never a compliment in tech: complexity is vice while simplicity is virtue. Some layering experiments will implode in the aftermath of noise and unstable results, which may undercut any positive feedback loop.

Languages typically don't get along when they disagree about some basic aspect of the runtime contract, where memory management looms large as a source of conflict. The phrase "impedance mismatch" is sometimes used to note software conflict based on a mismatch of premise, API, affordance, or planning - something causing cooperation to fail. Threading is another problem, as is synchronous versus asynchronous control-flow organization. Something as simple as exceptions can turn into an interface headache. Basically, a one-size-fits-all concurrency plan for multi-core is not going to emerge in just seven years - not one melding all languages - partly because serving sync legacy code will involve hacks and experiments causing technical debt of their own.

I thought I was going to say something positive when I started. Oh yeah, I was going to say async systems are good at gluing things together, and can be agnostic about many different flavors of async API mixed together. Async code calling another async API is easy: you can wire together a bunch of different async interfaces, and they get along fine. (When you get a reply, you have no idea how many legs were involved, and don't care.) But async code calling sync code doesn't work, unless it's smallish and non-blocking, or unless it's transformed into async messages to sync code running in a worker thread pool - which is how you farm out blocking system calls in an async system. Layering async is easy, but layering with sync code is hard, and the mismatch is at least as bad as mixing gc (garbage collected) with non-gc languages.
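The worker-thread-pool pattern described above can be sketched with Python's asyncio (a minimal illustration, not tied to any particular system mentioned here): the event loop dispatches a blocking call to a thread pool and awaits the result, so other coroutines keep running meanwhile.

```python
import asyncio
import time

def blocking_io(n):
    # Stands in for a blocking system call that must not run on the
    # event loop thread.
    time.sleep(0.01)
    return n * 2

async def main():
    loop = asyncio.get_running_loop()
    # Farm the sync work out to the default thread pool; the loop
    # stays free to schedule other coroutines while the workers block.
    results = await asyncio.gather(
        *(loop.run_in_executor(None, blocking_io, i) for i in range(3)))
    return results

print(asyncio.run(main()))   # -> [0, 2, 4]
```

The reverse direction (sync code waiting on an async result) is where the layering pain shows up: someone has to own the event loop, and blocking on it from inside itself deadlocks.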

The top level must be something loosely organized, with message passing or coroutines, or something with a model addressing concurrency. Otherwise it's just old school entry points in other languages that look like your own via foreign function interfaces.