I've run out of programming languages to study

It feels like there is nothing new under the sun.

Over the course of the last two decades I've learned to use, implemented a few of, and studied the theory of HTML, SQL, C, Java, Objective-C, Prolog, Common Lisp, CLOS, J, Scheme, OCaml, Haskell, Agda, Oz, reversible programming, and quantum algorithms, to name a few.

Now I know that I am wrong, and there are lots of original things (not equivalent to what I have already seen) out there but I haven't been able to discover them. That's why I'm posting.

So what new discoveries have you come across recently? And what gems from history might I have overlooked?


A few I've found interesting

A few I've found interesting outside those you mentioned:

  • Temporal logic programming - Bloom, Dedalus
  • Concurrent constraint programming and many variants - temporal, stochastic, soft
  • Kaleidoscope - interesting composition of imperative + temporal constraint
  • Inform 7 - interactive fiction, English-like programming at its best, mix of OO and rules
  • Agent-oriented and multi-agent programming models
  • Grammar-based programming models - generative grammars, multi-agent grammar systems, recursive adaptive grammars (RAGs, John Shutt's earlier thesis), attribute grammars
  • Kell calculus
  • Orc, YAWL
  • Multimedia and real-time programming models (ChucK, Max/MSP, Fran, Frob, CSound)
  • Wolfram Alpha (& interactive language based on it)
  • Croquet Project, Second Life, Lambda Moo
  • E language, object capability model patterns, robust composition, Waterken
  • K language (more KDB than the language)
  • Conal Elliott's Tangible Values
  • Spreadsheets, steady-state reactive programming, Wave
  • IBM's hyperspaces
  • Maude, BOBJ
  • Clojure
  • Curl

I'm also working on something new - reactive demand programming.

What did you find interesting about Curl?

Just wondering! (edit: I wrote a non-trivial demo app for our CIO once when he was curious about Curl's potential, back when he was undecided on Silverlight vs. Flash vs. DHTML vs. desktop smart client vs. "Mars Rover Project" ;-)).

I like Curl's consistency

I like Curl's consistency and relatively gentle slope between simple formatted text and rich applications. I'm not a fan of polyglot programming.

I can recommend my language

I can recommend my language Babel-17 :-) It has a spec and even a Netbeans plugin. In case you take a look at it, tell me if you find anything in it that is new to you / that surprised you (positively or negatively).

Qi II, now Shen

Random favourites of mine that I don't have time to learn and keep up with: http://www.lambdassociates.org/, ATS, ... will add more...

You forgot UML ;-0

...the last language you learn before you die...muahahahah.

Seriously, though, it seems like you are missing out on visual languages and dataflow languages, and the intersection thereof. Also, I don't see a term rewriting language like Maude in your list, nor do I see any meta-environment-specific things like ASF+SDF. Check out things like Lustre, Esterel, etc. It also amazes me how many people think they "know" actors when their only experience with the concept is Erlang, which wasn't based on Hewitt's work; actors were only used to describe Armstrong's dissertation after the fact.

And what about Smalltalk, Strongtalk, Newspeak, etc.? Objective-C is boring in comparison.

What I usually do is supplement my language theory reading with practical stuff so that I know problem domains I could apply it to, such as artificial intelligence, especially the use of sensor networks...

Newspeak

Bracha's Newspeak is constrained in interesting ways; the decisions behind its module system (banning imports, cutting out static) are worth learning.

I have no plans to learn UML before I die. I'm sure they'll be teaching it in hell.

Synchronous programming languages

Speaking of Lustre and Esterel, you may also give a try to Lucid Synchrone, a higher-order Lustre-like language extended with various constructs inspired from Esterel.

Just wondering...

What was inspired from Esterel?

Lucid-like languages are a good example of abstracting away the language and looking at a more broad category of something. The OP could also search for "intensional transformation" or "intensional programming" in an academic search engine like the ACM Digital Library.

Esterel introduced (?)

Esterel introduced (?) synchronous semantics, sort of under the abstract state machine model: time step begins, effects happen (and are unified in case of conflict), next time step begins with previous effects + new inputs. Lucid, FRP, etc. took the same semantics, introduced higher-order values, but eliminated explicit effects. Interestingly, we're full circle now: Coherence uses transactions/search for rich unification while Revisions and Cilk do a simpler, more Esterel-like approach of explicit merge operators (hyperobjects/reducers in Cilk parlance).
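The step model described above can be sketched in a few lines. This is an illustrative toy, not Esterel's actual semantics: the dictionary-of-signals encoding and the use of `max` as the unifying merge operator are assumptions made for the example.

```python
def step(state, components, merge=max):
    """Run one logical instant: every component reads the same
    pre-instant state; effects are unified before the next instant."""
    effects = {}
    for component in components:
        for signal, value in component(state).items():
            if signal in effects:
                # Conflict: two writers in one instant; unify explicitly.
                effects[signal] = merge(effects[signal], value)
            else:
                effects[signal] = value
    # The next instant begins with the previous state + unified effects.
    return {**state, **effects}

# Two components that both write signal 'x' within the same instant.
counter = lambda s: {'x': s.get('x', 0) + 1}
booster = lambda s: {'x': s.get('x', 0) + 10}

print(step({}, [counter, booster]))  # {'x': 10} -- both saw x = 0
```

The key property is that neither component observes the other's write within an instant; conflicts surface as explicit merges, much like the hyperobjects/reducers mentioned above.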

Inspiration from Esterel

What was inspired from Esterel?

Mainly valued signals IIRC.

Io is interesting. Invent

Io is interesting.

I find the implementation of optimizing compilers interesting, perhaps you could check that out.

Invent your own!

Pure

http://pure-lang.googlecode.com

It's an impure functional programming language based on generalized term rewriting.
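As a toy illustration of evaluation by term rewriting (not Pure's actual engine; the tuple encoding of terms and these particular rules are assumptions for the example), Peano addition can be expressed as two rules applied until a normal form is reached:

```python
# Rules as (match, action) pairs over terms encoded as nested tuples:
# ('add', 0, y) -> y
# ('add', ('succ', x), y) -> ('succ', ('add', x, y))
RULES = [
    (lambda t: isinstance(t, tuple) and t[0] == 'add' and t[1] == 0,
     lambda t: t[2]),
    (lambda t: isinstance(t, tuple) and t[0] == 'add'
               and isinstance(t[1], tuple) and t[1][0] == 'succ',
     lambda t: ('succ', ('add', t[1][1], t[2]))),
]

def rewrite(term):
    """Apply the first matching rule, innermost-first, until none fires."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t) for t in term)  # normalize subterms first
    for matches, action in RULES:
        if matches(term):
            return rewrite(action(term))
    return term

two = ('succ', ('succ', 0))
print(rewrite(('add', two, two)))
# ('succ', ('succ', ('succ', ('succ', 0))))
```

Pure generalizes this considerably (pattern matching on arbitrary expressions, conditional rules, unevaluated normal forms), but the core evaluation loop is the same idea.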

Suggestions

You've covered most of my experience in your list so I have to resort to languages that I've heard interesting things about instead:

  • Smalltalk - worth looking at for the clean and elegant use of messages and objects.

  • APL - interesting as a low-level look at parallelism

Also, you didn't mention any assembly language variants in your list. It's always good to learn one or two to gain some insight into what is happening under the floorboards.

I'm in a slightly different boat (I'm also in flooded Australia)

It really depends on what you want of your languages as to what you may find as new and interesting to study, or even old and novel to study. Sometimes I find it good to look for languages and concepts outside of programming for inspiration.

Personally I've come to realize that programming languages, as powerful and practical as they are today, need to step out of themselves and take a good hard look at their fundamentals, both in the physical and metaphysical sense. For me, learning the majority of existing languages has been a case of how high in the register machine Tower of Babel I wished to climb, and eventually I climbed down.

The only interesting languages for me right now are mostly impractical systems or esoteric languages tackling the problems of centralized and circular foundations that register machines have laid out before us. Languages I currently know about that seek to be decentralized and metacircular at their most fundamental level (the instruction) are, however, few and far between: Nock and its use of CCN is the only one I know of that even comes close at a software level. CCN, however, doesn't offer the granularity needed here, nor do I believe Nock does combinatorially. I'd be interested to hear of anything like Nock, or anything close to a metacircular zero-operand combinatorial instruction set computer.

Other than inspirations like Henry Baker's zero-operand call to arms in "The Forth Shall Be First" ("view the return stack itself as the instruction buffer cache!"), which I extend to: view the instruction buffer cache as the network that bootstraps the network.
There's also Enchilada's use of confluently persistent data structures, Ripple and namespace/database, the old FIRST & THIRD almost-FORTH approach to building a language, Frank Sergeant's 3-instruction FORTH as a minimalist take on the same, OISCs, and, rather than the applicative SKI combinator calculus, Brent Kirby's Theory of Concatenative Combinators, Joy, otuto, and YASBL (I note the spatial geometry in these last two, with the extra stack dimension), plus number theory, category theory, and geometric algebra. All of these, along with the physical realm of the underlying hardware and physics itself, are the only things that keep me interested in programming. Hardware still, for the most part, cannot reconfigure itself, is nowhere near being able to self-assemble, and is therefore limited in how much of the system can become decentralized and made redundant, autonomous, or, dare I say, more intelligent.
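The concatenative style Kirby describes can be sketched as a tiny stack machine in which quotations (sub-programs) are ordinary stack values. The word names follow Kirby's paper; the interpreter itself is an illustrative toy, not any real concatenative implementation:

```python
def run(program, stack):
    """Interpret a list of words against a mutable stack."""
    for word in program:
        if word == 'dup':
            stack.append(stack[-1])
        elif word == 'zap':
            stack.pop()
        elif word == 'swap':
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif word == 'cat':            # concatenate two quotations
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == 'i':              # unquote: execute the top quotation
            run(stack.pop(), stack)
        elif word == 'dip':            # run a quotation under the top element
            q, x = stack.pop(), stack.pop()
            run(q, stack)
            stack.append(x)
        elif word == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:                          # literals: numbers and quotations
            stack.append(word)
    return stack

# [dup +] doubles whatever is on top of the stack.
print(run([3, ['dup', '+'], 'i'], []))  # [6]
```

Because programs and quotations are the same kind of value, combinators like `cat` and `i` give you composition and application without any variable binding at all, which is what makes the zero-operand style so spare.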

In the same vein, I've been

In the same vein, I've been mostly looking at languages that, first, are trying to help us with the constraints of current systems: Blelloch's NESL and scan model languages (now popular for multicore and GPU systems), the Swarm language for moving the computation to data, LINQ's extensions PLINQ, DryadLINQ, and Rx, and many others. At the same time, our computing substrate is changing: one of the most fascinating substrates is biological. There are some really fun language ideas and opportunities here.

My point is that if you pick a constraint or topic, you should find a lot. What about security? Multi-principal programming and privacy are becoming a big deal, leading to: capabilities, information flow (quantitative, differential, ...), PCC, etc. What about programming-by-demonstration? Synthesis is now a big deal, with sketching as the most prominent, but the HCI community has us beat: the PL and SE community is nowhere near where they want us to be.

There's definitely a lot of

There's definitely a lot of exciting stuff out there. Well, I suppose topics such as security, moving computations to a GPU, programming with touch, et cetera might bore most people to sleep, but I find it exciting.

You can always dress it up:

You can always dress it up: security => programming with multiple cellphones at the same time, GPUs => massive physical simulations and AI, biological and neural computing => freaky science, PBD => Minority Report, ... If there's something cool, we need an interface to do it.

Isn't security solved in the common case?

Regarding security, I have the strange feeling that we have always known "the solution", which is common sense: apply the principle of least privilege. I'm under the impression that it's simple and would work, only that application and system developers just don't do it.

Is there a reason I'm missing why, in the 21st century, my web browser is still able to open any file I own, other than the files it uses for internal work, some private temp files, and the download locations I explicitly specify?
It might fall under dmbarbour's theory of "path of least resistance" -- people are not yet forced to think about what authority they need -- but I find it a bit hard to accept.

(I didn't name it "capabilities", because I think capabilities are a bit more than that, and other models may also be adequate. Actually I don't care what the developers of the applications I use think of security or use as a security paradigm, as long as they respect POLA.)

POLA isn't a solution.

POLA isn't a solution. It's a principle by which you might judge a proposed solution. I actually favor a different set of principles:

Ka-Ping Yee's Principles for Secure Interaction Design:

  • Path of Least Resistance: Match the most comfortable way to do tasks with the least granting of authority.
  • Active Authorization: Grant authority to others in accordance with user actions indicating consent.
  • Revocability: Offer the user ways to reduce others' authority to access the user's resources.
  • Visibility: Maintain accurate awareness of others' authority as relevant to user decisions.
  • Self-Awareness: Maintain accurate awareness of the user's own authority to access resources.
  • Trusted Path: Protect the user's channels to agents that manipulate authority on the user's behalf.
  • Expressiveness: Enable the user to express safe security policies in terms that fit the user's task.
  • Relevant Boundaries: Draw distinctions among objects and actions along boundaries relevant to the task.
  • Identifiability: Present objects and actions using distinguishable, truthful appearances.
  • Foresight: Indicate clearly the consequences of decisions that the user is expected to make.

It turns out that object capabilities are also really good for achieving the SID principles.

POLA differs from Yee's Path of Least Resistance in a significant way: POLA is an assertion about how many authorities a component holds, whereas Path of Least Resistance is an assertion about how many authorities a (lazy) agent grants.

This is reflected in how I treat code distribution... e.g. a service 'replicated' to my machine might contain its own authorities for a few low-level services - perhaps an http cache, random number generation, file-like persistence, and time. Since I didn't grant those authorities, I'm not the one responsible for them, nor am I at any additional risk than if I was using a remote service. Under the hood, some carefully chosen services might be implemented via unum pattern in order to optimize them - e.g. no need to go all the way across the network for random numbers. In comparison, a typical POLA approach is to distribute pure code (no embedded capabilities) then use a sandbox or powerbox to control which authorities that program ever holds. This way, the code has no authorities you did not grant, and you can ensure it holds a minimal set. The problem with this: you become responsible for all the broad, low-level stuff in the program's minimal set, and it is very difficult to audit or grok potential abuses of low-level services. Much better if you can focus your attention on granting high-level authorities, such as access to a webcam or GPS.

Ah, well. A lot of smart people get fixated on POLA and confinement to the point of losing sight of the real requirements for security, which inspires such rants as the above. Confinement patterns should be available when necessary, but are far less often necessary than adherence to POLA would suggest.

I don't care what the developpers of the application I use think of security or use as security paradigm, as long as they respect the POLA

Here are some reasons you should care: robust composition and extension of applications/services (e.g. transitive service invocation, support for modular plugins, mashups, scripting, remote debugging), performance, scalability, points-of-failure, simplicity, and administrative overheads.

Security models or mechanisms can differ greatly in other relevant properties while still achieving desired security properties. Object capabilities are a huge win when judged in terms of these other properties, though they can be further augmented with static analysis and control over information-flow.

We've seen this fail, e.g.,

We've seen this fail, e.g., alert boxes asking you if something is ok just trained users to hit "ok".

Security is hard both in terms of the math and the people, and, unfortunately, we need to combine them. An interesting paper I read recently touching on the more general problem (though often questionable for the CS and SE side of things) is "The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility".

Take passwords as another

Take passwords as another simple example of the tension between security and usability. The more complicated the password, the more secure you are, supposedly. However, while our PCs might be protected by an 8-character alphanumeric password with a special character, our bank accounts at an ATM are protected by a 4-6 digit numeric PIN! It turns out this is often good enough, especially since you have to have the card (easily cloned and not even chipped in the States) to use the PIN. As a result, we are seeing devices move to simple PIN-based passwords rather than complex PC passwords.

It turns out the hot new security solution for devices is just a walled garden: don't allow apps to do much of anything except store some of their own state, access the network, and twiddle some sensors. This is augmented with some vetting and reputation building via an app store. We have seen some abuses (Android apps spying on users), but in general people seem to be happy with it. Nobody that I know of is looking at complex formal models to enhance security in the consumer domain, which is very telling.

I don't see the walled

I don't see the walled garden as very different from the browser world (process separated, limited storage, etc.), and that is just the story the OS community has been saying for decades.

The problem is the app model sucks. Open up cnn.com or whatever your favorite site is in Firebug and see how people like to write apps (in short, linking in lots of third-party stuff and user content). As the app model matures, we'll see the same things as the web model: as an example, people are already linking in advertisement software. It's essentially the web model but worse (perhaps the silver lining of having such an archaic developer environment is that people are slow to do things they could have quickly done on the web, saving us from bad software being created).

Stepping back, traditional desktop security issues are largely solved. Buffer overflows etc. are increasingly not a concern. In contrast, application-level attacks, like XSS, are. I'm serious: there are monthly tallies of such attacks. Attackers may or may not hijack your computer (e.g., by heap spraying the JSVM), but they will hack your website account (XSS). Some of the problems seem to be self-inflicted (JS is great for privilege escalation and has no intended support for multiprincipal programming), but others are just hard problems we don't know how to handle (how *do* you do multiprincipal programming? In practice, this is new ground for the computing industry.)

Whatever you think about the

Whatever you think about the walled garden app model, it is where all the innovation seems to be taking place. For example, Apple, Google, Microsoft, etc. could eventually pull advertising duties into the OS, meaning they would disallow (by fiat of course) potentially insecure third-party advertising infrastructure. That alone would reduce the needs of many apps for additional privileges. The more they pull into the OS, the less privileges apps need to serve users (innovation and legal concerns of course remain).

My gmail account was hijacked last week, no idea how they did it (used a unique password, always used https) but I know this is a real problem. However, I'm not sure how any of the more sophisticated security schemes could have prevented that.

I'm trying to respond to

I'm trying to respond to this in a non-flamey way. If it comes out badly, it's not intended to, and is actually my second rewrite of this response.

it is where all the innovation seems to be taking place

There is great business innovation in the app model; I'm not really qualified to talk about that, but it's interesting to compare something like download.com and Steam/OnLive: the Apple/Android app stores seem more like download.com to me. There are all sorts of interesting conclusions one can draw from that, e.g., the role of advertising. However... that's not about the technology.

In terms of the technology, as app models support dynamic languages / embedded browsers and browsers go more native, the line really blurs. The only really interesting distinctions to me are that current apps are native and single principal, but I view that almost as an anachronism from starting a new ecosystem using old technology. It also ties into the next point:

could eventually pull advertising duties into the OS

Why do you trust Apple more than Google than Flashmob? I agree that I do trust the OS and, as a dual, for managed languages, the language. As a third seeming tangent, why is advertising so special? Similarly, why not use multiple ads in a system, e.g., iAds for the visual ad and then several invisible services for user tracking?

Modern software comes from multiple sources, involves many developers of varying ability, and increasingly, dynamic user content. The web has gained the most critical mass here -- I'm often surprised when I look at just what javascript is running on my favorite site -- but if something is to replace the web, not supporting this in it seems like a step backwards (and thus it would ultimately reappear).

I know this is a real problem. However, I'm not sure how any of the more sophisticated security schemes could have prevented that.

If there were a good solution, it wouldn't be a real problem ;-) (I'm a little facetious here -- e.g., SQL injection attacks are more of a social problem than a technical one.) I think it's just a matter of time for the most part -- we are already making great headway. Key players like advertising frameworks and platforms like Facebook are pretty solid by utilizing a lot of good techniques and effort. The research community has been doing a lot too -- we have a lot to thank browser-based and search-engine malware detectors for here, and approaches like symbolic execution are only recently being carried over. I'm fairly optimistic. OTOH, I do think this is also one of the biggest language challenges of the decade.

In terms of the technology,

In terms of the technology, as app models support dynamic languages / embedded browsers and browsers go more native, the line really blurs. The only really interesting distinctions to me are that current apps are native and single principal, but I view that almost as an anachronism from starting a new ecosystem using old technology

The point is that we didn't need managed languages or better security models to make device apps safe and reliable, we just needed an app store. Whatever anachronisms remain are pretty irrelevant.

Why do you trust Apple more than Google than Flashmob? I agree that I do trust the OS and, as a dual, for managed languages, the language. As a third seeming tangent, why is advertising so special? Similarly, why not use multiple ads in a system, e.g., iAds for the visual ad and then several invisible services for user tracking?

I trust Apple or Google a lot more than I do someone I've never heard of before. It all comes down to who to blame. Blaming a small time app developer isn't very useful, while blaming Apple is. I'm much more likely to trust a small time app developer if Apple takes care of advertising.

Browser cookies exist primarily for marketing and advertising reasons. Take that reason away by including infrastructure in the OS (and forcing apps to use that infrastructure), and we can get rid of cookies, life becomes better for everyone. Cookies won't come back from that, they are just too horrible.

Clean slates are great...

Clean slates are great... but a temporary fix: 1) you throw away the baby with the bathwater and think you've solved it... but 2) you're just starting on a new ball of mud. There may be solutions to the problems, but what about the app store model qualifies as one?

Modern software requires multiprincipal composition and a host of security-sensitive features. I mean this in a very real sense. Cookies are a great example: they're just a form of persistence for otherwise sandboxed apps; HTML5 has various *new* forms of this and even Flash, a different technology, has its own system. The security challenges are intrinsic (and, with the increasing role and sophistication of software, increasing!). Saying the app store solves it doesn't make it true. E.g., apps also have a persistence layer: how else do you save game state between phone use during disconnected operation?

As the app store matures, components (#include, script="blah.src", "import antigravity", whatever) will come back. Advertising has already demonstrated that; there are multiple advertising platforms. There are many other reasons to compose software from different parties -- I'm surprised that's a contentious point.

I'm not arguing that

I'm not arguing that composition is evil, far from it. Composition is great, but many of the components that you compose with should basically be trusted parties. That is, rather than including someone else's code in your app, it is often better just to link with a trusted service. Perhaps the service comes from the OS, or perhaps it's a third party that is vetted and installed on the device separately from your app. The state of composition today: the app composes services by including their code, and we have no idea about the integrity or trustworthiness of that code, or even whether it is really the code the app developer thought they were reusing. Instead, if I link to an advertising service, that service should have some level of reputation within the ecosystem, and there should be no way to circumvent that.

Apps should have privileges, and we know whom to blame if something goes wrong (the app), but those privileges shouldn't automatically extend to third-party code.

App stores are a clean slate where we can begin implementing more sane policies with regard to composition. It's still early; we have Apple saying "no third-party components", which is of course unrealistic and easily gamed. But there are better solutions than just going back to the crazy web model. If anything, we will see the web model move toward a better app model.

Right, I think we're in

Right, I think we're in agreement. Not sure if the web platform will be the one to substantially fix the composition problem, but the problem isn't going to go away for anyone in the general end-user application space. The OS vs. the browser providing the services, to me, is an implementation detail (though important) -- for example, program analysis, hardware, and hypervisors have all also been demonstrated to be effective for certain properties, and we are starting to see all of them being used.

The insight that security controls *need* to be richer is the tricky one to me. What are they? Who turns them? Adoption issues? I have one vision for the important knobs here -- see my W2SP 2010 paper -- and some experience in getting different people to turn them -- see my Oakland 2010 and, to a lesser extent, WWW 2010 one -- but there's a lot more to the issue (e.g., user and third-party control of their data rather than application providers/composers like facebook, automated agents like search engines and browser filters, the role of collaborative security features to bridge the CSCW gap, etc.).

Security and Usability

You and Leo (and many others) have observed tension between security, flexibility, and usability in our modern application architecture. Some would conclude this tension is necessary, that there will always be a trade-off between these properties. In the capability security community, however, we understand this tension to be a consequence of the modern application architecture and life-cycle (including development with shared libraries, the distribution model, and maintenance and upgrade). Security, flexibility, and usability can coexist, but we'll need to redesign much of our architecture to make it happen.

Take passwords as an example. Currently, we use identity-based security for dozens of sites. This isn't very secure: people tend to use the same passwords at every site. This isn't very flexible or composable: we run into all sorts of problems with delegation. (When Bob is acting on behalf of Alice, whose name and password does Bob use?) And it isn't very usable: name and password become a barrier to getting work done, over and over again.

Take those annoying 'permission' dialogs as another example. As Leo mentions, those just interfere with usability and train users to press 'Ok' without really understanding what rights they are granting or how they might be abused. Even worse, the rights are usually really low level and coarse grained... e.g. access to a network or a filesystem. Low-level services and APIs are pretty much impossible for users to grok and audit, just like low-level code is pretty much immune to static analysis. Even coarse-grained access to UI, seemingly benign, subjects us to click-jacking and denial of service.

Users can't reason about low-level authority. Any grant of authority MUST be at a level of abstraction and granularity that users can understand.

The reason to seek better security models is to improve usability, flexibility, and performance.

The object capability model allows us to isolate identity-based security to just an initial portal login for human users (since humans can't remember capabilities) that grants access to a user agent. It allows very fine-grained delegation, which makes services much more flexible and extensible and allows multi-principal composition.
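The fine-grained, revocable delegation mentioned above is often illustrated with the "caretaker" pattern from the ocap literature. The sketch below is illustrative (the names `make_caretaker` and `Mailbox` are invented for the example, not from any particular ocap system): instead of sharing her password, Alice hands Bob a forwarding proxy she can revoke later.

```python
def make_caretaker(target):
    """Return (proxy, revoke): the proxy forwards to `target`
    until revoke() is called, after which every call fails."""
    slot = {'target': target}

    class Proxy:
        def __getattr__(self, name):
            t = slot['target']
            if t is None:
                raise PermissionError('capability revoked')
            return getattr(t, name)

    def revoke():
        slot['target'] = None

    return Proxy(), revoke

class Mailbox:
    def send(self, msg):
        return f'sent: {msg}'

proxy, revoke = make_caretaker(Mailbox())
print(proxy.send('hi'))   # Bob acts on Alice's behalf -> 'sent: hi'
revoke()                  # Alice withdraws the grant
# proxy.send('hi') now raises PermissionError
```

The point is that delegation and revocation fall out of ordinary object references and composition, with no shared secrets and no central access-control list to update.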

And alternative application models based on ocaps and fine-grained object APIs (rather than sandboxing and VMs) have been developed that avoid issues for those dialogs. For example, we can treat applications as shards of remote services replicated to our machine, rather than something we download then run in a sandbox. In the few cases we need limited confinement, that can be expressed as part of the API on our side (e.g. pass a pure function to this method).

Even under these conditions, reputation and trust will still be an important part of the application architecture, especially the distribution model... e.g. similar to how users might trust services requested from Amazon or Google more than from an arbitrary site. But we don't need the walled gardens.

There may, however, be some tension between security and profitability. If our web-apps were to be rich, high-performance, extensible and secure, the app-stores would splinter and be much more competitive.

Capabilities are flexible,

Capabilities are flexible, but the point is it's unclear how to actually use them (or any other current technology) to solve CSCW problems in a full (i.e., contextual) system.

Ocaps won't help you solve a

Ocaps won't help you solve a problem before you even define it. (And CSCW consists of many poorly defined problems.) They also won't solve a problem that has no solution.

But there are many capability patterns for solving specific CSCW-related problems. You probably underestimate how much work has been done, if you haven't been reviewing history and keeping up with the capability security community.

I am on the ocaps list

I am on the ocaps list etc.

Rewriting your statement a bit:

But there are many assembly instructions for solving specific CSCW-related problems. You probably underestimate how much work has been done, if you haven't been reviewing history and keeping up with the compiler community.

-- from a CSCW viewpoint, we're in the assembly level of security controls. Adding DAC, ocaps, provenance tracking, etc. is still very low level. Technical solutions for CSCW problems start more like the paper @ WWW last year on machine learning of security policies for Facebook. I would agree that the infrastructure for deploying such solutions sucks (and, for example, might benefit from an ocap layer), but that's just a stepping stone.

Furthermore, an interesting question posed by the paper I cited originally is whether it is even possible to close the socio-technical gap -- for one interpretation, is end-user security AI-complete? (I don't really want to argue that one, but it's pretty darn deep!)

Powerful assembly instructions

CSCW is an uber-broad ill-defined subject covering such things as e-mail, IRC/IM, MUDs/MOOs, Croquet Project / Second Life / virtual worlds, collaborative editing, bookmarking and sharing views, teleconferencing, remote learning, blogs, wikis, distributed version control systems, bulletin boards, project management software, bug tracking, calendars, workflow, electronic trade and contracts, et cetera.

I'm saying that there are many ocap patterns to secure and federate many specific applications of these classes.

If this seems low level it's only because CSCW isn't so much a domain as a cloud thereof, and everything looks low-level when your head is in the clouds.

The WWW paper, while I have

The WWW paper, while I have my own critiques, suggests that we're not very far off (in decades, anyway) from usefully automated social models for security. It isn't even alone in the field.

Indeed, I think that's going to be one of the defining technology arenas for the next long while: duplicating what Google achieved for search for other domains. Social data is here, now we're starting to build it into our systems. Netflix, voting, finance, Amazon, and Google have a head start, but it's growing. Sociology/psychology aren't the same soft sciences we might have found hokey before. You're right in that I'm picking out a subset of CSCW research, but that's largely because I have an engineer's viewpoint and get to stand on the shoulders of others.

(Bringing this back to LtU land: I'm still not sure about the implications for PL. Sort of like Norvig's question about how you test/verify a search engine.)

PINs vs passwords

While it is true that a token + a secret provides a higher degree of security than a secret alone there are other factors to consider. In the case of a password the threat model allows for an unlimited number of offline attempts. For that reason the search space has to be very large. For most devices that use a PIN the threat model is online guesses where revocation is possible after a small number of failed attempts, hence the much smaller search space.
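The difference between the two threat models can be made concrete with some back-of-envelope arithmetic. The figures below (a 4-digit PIN, an 8-character alphanumeric password, a 3-attempt lockout, and an offline cracking rate of 10^9 guesses per second) are illustrative assumptions, not measurements:

```python
# Illustrative comparison: online attacker against a PIN with lockout,
# vs. offline attacker against a password hash.

pin_space = 10 ** 4                 # 4-digit PIN: 10,000 possibilities
password_space = 62 ** 8            # 8 chars from [a-zA-Z0-9]: ~2.2e14

online_attempts = 3                 # assume the card locks after 3 failed tries
p_pin_guessed = online_attempts / pin_space   # chance of success per stolen card

guesses_per_second = 1e9            # assumed offline cracking rate
days_to_exhaust = password_space / guesses_per_second / 86400

print(f"P(PIN guessed before lockout): {p_pin_guessed:.4%}")
print(f"Days to exhaust password space offline: {days_to_exhaust:.1f}")
```

The small PIN space is tolerable because the attacker gets only three tries; the password space, despite being ten orders of magnitude larger, can still be exhausted offline in days at the assumed rate, which is why offline-attackable secrets need to be larger still.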

It's interesting that you say nobody is looking at formal models to enhance security in the consumer domain. The biggest problem at the moment is something that was predicted by formal methods. The current generation of attacks against chip and pin work by faking the terminal that the card authenticates against. That this lack of authentication would create problems was evident in the specification from the start, but it's only being looked at many years after chip and pin was introduced.

Another boat, high and dry in the desert.

For me, computer languages are composed of decisions, loops, calculations, and data...a rather narrow perspective learned while coding in assembler. The reasons for using anything but assembler is to make programs easier to code and easier to read, i.e., less expensive.

A number of interesting cost-reducing paradigms have been discovered and implemented in languages over the years, including structured programming, meta-programming, and pure functional programming. Additional paradigms have been proposed but have proven difficult to implement, for example "reusable code that will improve productivity 10-fold" and "a programming language with features like a natural language."

I read about new (to me) languages and find most have interesting features, but I always look for an advantage that can significantly decrease programming costs. It has been a long time since I have noticed any significant productivity improvement from a language. The move away from assembler coding is one, and (arguably) the spreadsheet is another.

Like you, I have run out of programming languages to study. Although there are many more I might examine, any that offered a significant advantage would quickly become public knowledge, which keeps my study list short. In the meantime, I work on my own system that has as its core a calculator-like meta-language, but it is only a gleam in my eye.

One feature it will possess is the ability to make objects that interact not only with users but with programmers. A programmer interaction is a type of dialogue between the object and a programmer. For example, assume a novice programmer wants to sort a list. The programmer starts a client (part of his IDE) to communicate with the library of interactive objects. An AI assisted dialogue might progress as follows:

Programmer: "I need a sort routine for MyTable."

Library: "Will the data in it usually be unsorted or partly sorted?"

Programmer: "partly"

Library: "Will searches always retrieve a single element, or will some searches return a sorted partial list of elements?"

Etc. Of course, an AI front end is not necessary; without one the dialogue would simply take a more tedious, computer-driven style.
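Without an AI front end, a dialogue like the one above reduces to walking a decision tree. A minimal, hypothetical sketch in Python (the questions, answers, and routine names are all invented for illustration):

```python
# Hypothetical decision tree for the library-side dialogue; every
# question, answer, and routine name here is invented for illustration.
TREE = {
    "start": {
        "question": "Will the data usually be unsorted or partly sorted?",
        "answers": {"unsorted": "rec_quicksort", "partly": "q_search"},
    },
    "q_search": {
        "question": ("Will searches always retrieve a single element, "
                     "or sometimes a sorted partial list?"),
        "answers": {"single": "rec_insertion", "partial": "rec_timsort"},
    },
    "rec_quicksort": {"recommend": "quicksort"},
    "rec_insertion": {"recommend": "insertion sort + binary search"},
    "rec_timsort": {"recommend": "adaptive merge sort (Timsort)"},
}

def consult(answers):
    """Walk the tree with a scripted sequence of programmer answers."""
    node, replies = TREE["start"], iter(answers)
    while "recommend" not in node:
        node = TREE[node["answers"][next(replies)]]
    return node["recommend"]

print(consult(["partly", "partial"]))  # -> adaptive merge sort (Timsort)
```

An AI front end would replace the fixed question strings and answer keys with natural-language understanding, but the underlying selection logic could stay this simple.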

This kind of help application could be developed independently of the objects, but then object maintenance and library maintenance would fall out of sync, making the interactive library much less useful. Such an effort would be more difficult than maintaining documents in the existing style.

Help for programmers integrated with an object definition can be examined by a meta-program to assure help exists for every attribute and method, and, IMO, more. Is it possible that formatted data in the help system might assist either test creation or theorem proving?
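The meta-program check suggested here is easy to sketch. Assuming, purely for illustration, that programmer help lives in ordinary docstrings, a checker might look like:

```python
import inspect

def missing_help(cls):
    """Return the names of public methods that lack programmer help
    (modeled here, for illustration, as an ordinary docstring)."""
    return [name for name, fn in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith("_") and not (fn.__doc__ or "").strip()]

class Table:                      # a toy object definition
    def sort(self):
        """Sort the table in place."""
    def search(self):             # no help attached, so it should be flagged
        pass

print(missing_help(Table))        # -> ['search']
```

A richer help format than docstrings (structured fields, examples, preconditions) would let the same kind of meta-program feed test generators or provers, as the comment speculates.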

Whether an interactive object library is economically feasible cannot be forecast, but it is an interesting possibility of my language.

I need to experience a language to really understand it, which must be a common need. Otherwise, why would people spend so much time developing their pet languages? :)

I hope you find many interesting languages, and I hope language developers invent economic advantages.

Do you have a pet idea that might make an economic advantage when integrated with a language?

Ahem

I need to experience a language to really understand it, which must be a common need.

I think you also need a good description of why the language was designed the way it is. For example, if Newspeak were just a binary with incomprehensible documentation, then it would be a lot like looking at a "doctored Algol" back in the days of the first release of Simula. In that case, your only hope of really understanding it would be to see 20-50 years into the future, à la Alan Kay.

Speaking of Newspeak, your response above suggests to me you should read up heavily on Newspeak, since there is no doubt in my mind that the basic ideas in it are best suited to your conversational programming. In addition, there have been many papers over the history of computer science about conversational programming, including Gerald Jay Sussman's Ph.D. thesis on expert debugging.

Also, a side question: part of what you are discussing here is better tooling and interactivity. You mention that no language since the move up from assembler has mattered much to you. In what regard did your productivity improve? What sort of text editor/IDE/tools did you use as a professional software engineer?

Experience a language

Yes, I need documentation for a language, like everyone else, but to really understand it I need to use it.

Thanks, Newspeak is really a neat language, and I was especially interested in reflection via mirrors.

To me, making reflection work requires disallowing changes to existing instances, with class changes affecting only new instances. After the changes, it may be necessary to make instances of either the unchanged class or the changed class. In addition, it may be necessary to keep more than the original and one change, or to stop an instance, discard the original class, and restart the instance using a changed class.

My approach to reflection is probably better described as keeping several versions of a class, rather than mirroring. However, I do not have working code; thus, it may be necessary for me to implement reflection via mirrors if my current plan does not work well.
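The "several versions of a class" idea can be sketched in Python; all names below are invented for illustration. Existing instances keep the version they were created from, while new instances default to the latest:

```python
# Hypothetical sketch: a registry that keeps every version of a class.
# Existing instances keep the version they were created from; new
# instances default to the latest version.

class VersionedClass:
    def __init__(self, name):
        self.name, self.versions = name, []

    def define(self, cls):
        """Register a new version of the class (older versions are kept)."""
        self.versions.append(cls)
        return cls

    def new(self, *args, version=-1):
        """Instantiate from a chosen version (default: the latest)."""
        return self.versions[version](*args)

Point = VersionedClass("Point")

@Point.define
class PointV1:
    def __init__(self, x, y):
        self.x, self.y = x, y

old = Point.new(1, 2)             # created from version 1

@Point.define
class PointV2(PointV1):
    def norm(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

new = Point.new(3, 4)             # created from version 2
print(type(old).__name__, type(new).__name__)  # -> PointV1 PointV2
```

The `version=` parameter also covers the stop/discard/restart case mentioned above: an instance can be rebuilt from any retained version.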

I added some info to my LtU profile concerning languages, IDE, and tools.

What have you read/seen about Newspeak so far?

You should watch Gilad Bracha's Lang.NET Symposium 2009 talk about the Hopscotch IDE.

Also, Gilad has some good blog posts I recommend reading:
Through the Looking Glass Darkly

Patterns of Dynamic Typechecking; I found his description of abstracting over expressions to be very similar to how we do some stuff here at work, even if we are not collectively bright enough to articulate it to each other in that fancy way.

Foreign functions, VM primitives and Mirrors; Gilad explains why the way HotSpot was designed reduces the programmer's power, denying them control over the land of milk and honey

What have I read?

Not as much as I would like; I am slowed by dyslexia.

Thanks for the additional references to Gilad Bracha's work. I skimmed a couple of documents about Hopscotch and will study it further.

I couldn't find Gerald Jay Sussman's paper on expert debugging. Rather, I found it, but I do not have access to ACM papers. :(

All old MIT stuff under old

All old MIT stuff under old DARPA grants is free material and available on DSpace: http://dspace.mit.edu/

As for dyslexia, have you considered e-readers that can do text-to-speech?

Thanks again. I got along

Thanks again.

I got along for 65 years without text-to-speech and had not tried such a program. I just tried the "Text-to-Voice" plug-in for Firefox, but have not decided how much help it can provide. One must read along with it to catch nuances like X2, which it pronounces "X 2". It reads just a wee bit faster than I do, so it is hard to keep up. Need to check some others.

Newspeak & Hopscotch are

Newspeak & Hopscotch are very nice and for some things will improve productivity, imo. Thanks again Z-Bo

Newspeak & Hopscotch are

oops

perhaps it is time to look deeper

Perhaps it is time to look deeper.

Languages may be thought of as devices to describe a domain. They are useful in exploring the domain you are interested in and finding surprising relationships or mappings. (It is especially interesting if the language stops short of being Turing complete, because you might be able to take advantage of it and prove stronger properties.) Perhaps you can start thinking about domains that you want to model, and how you would create languages to model them.

Some of my favorites

One interesting language that could be a new experience for you is Factor. Take a look here: www.factorcode.org
A "new" Lisp may not sound too exciting to you, but I really like PicoLisp for its small footprint and its many interesting features. Home page: picolisp.com
Component Pascal, part of the BlackBox Component Builder, may not be regarded as the hottest new technology today, but in my mind it's still one of the most beautiful pieces of software you can run on a PC. Unfortunately, it requires Windows, but if you can, play with it for a few hours. www.oberon.ch/blackbox.html

another pack

Here are some that I like and don't see mentioned already.

• Algol 60 – a classic, therefore a must ;)
• Algol 68 – from the 60s but surprisingly modern in many respects
• BCPL – the typeless precursor of C; differs in interesting respects
• Beta – a rather originally designed OO language from the authors of Simula
• Dylan – a Common Lisp in a non-Lisp syntax
• Icon – uniquely rich expression and control structure
• Lua – one of the highest ratios content/size, a marriage of elegance and practicality
• Nial – an array language (different style from any of APL, J, and K)
• PostScript – a stack language with a strong functional (concatenative) subset; its graphical model defined that of all modern vector graphics languages and the like (SVG, Cairo, HTML5 canvas, …)
• REBOL – a small, practical, mostly functional language, well equipped for Internet programming
• Refal – an early, very innovative and original functional language with declarative style, pattern matching, one-argument functions, sequences (rather than lists) and metaprogramming
• REXX – probably the first true command & scripting language
• Russell – all types are abstract (even numbers and Booleans); types are values
• SETL – the set language
• Snobol – a still very original and very effective model of text processing
• TXL, OmniMark – text transformation languages based on rules

Mildly interesting

  • Lollimon - concurrent linear logic programming, monadically reconciling forward and backward chaining. My candidate for paradigm least likely to go mainstream.
  • Starlog/JStar - if your language is so declarative, why are you still using data structures?
  • ISL, an Image Schema Language - food for thought in language design.

HDLs, etc.

Hardware description languages are a bit different:

...and something like OpenModelica and LabVIEW might be worth looking into.

Nimrod

Have you ever heard of Nimrod? It's quite a good language. Maybe you would like it.

Thanks

I didn't know about this language: I haven't finished reading about it, but its syntax is really good, so thanks for the link.

Sounds like a nice problem to have

as for what to do next: "Philosophers have hitherto only interpreted the world in various ways; the point, however, is to change it."

incorrect reply loc

incorrect reply loc

Scala anyone?

Nobody has mentioned Scala yet. Not only does it have a lot of really interesting perspectives on the fusion of functional and object-oriented programming, it's also a quickly growing job market. Unlike most of the languages on this list, it's both interesting and viable in a Java-dominated industry.

From a pure theoretical perspective, it doesn't have the elegance of some of the other languages on this list. But it does a good job advancing its basic premise, that OO and functional programming are two sides of the same coin.

I won't claim it's the most interesting or cleanest language out there, but I think it's maybe the most practical language in existence right now. In my opinion, any programmer who doesn't know Scala has a gap in their knowledge.

Along a similar vein, F# is an interesting look. It looks a lot like OCaml with a bit of Haskell, but there's a bit of unique stuff there, and if you like functional programming and have to deal with Windows, it's a great solution.

Finally, anybody who doesn't know JavaScript extremely well should get familiar. It is the most widely used programming language on the planet, and there is a pretty big movement toward its use as a general-purpose language in a lot of different environments.

I know, I know, recommending JavaScript is almost a heresy to language fans, but have you considered that JavaScript is a functional programming language with closures, heavily influenced by Scheme? Its use of Self-style prototypal inheritance is a fairly unique and interesting feature, which opens up a lot of really compelling methodologies.

If you are, like many people, only familiar with JavaScript in passing, I strongly recommend JavaScript: the Good Parts by Douglas Crockford. It will give you a better appreciation for the power and unique properties of JavaScript and how to use them well while avoiding the dumb parts.

There are lots of other great recommendations for more-academic languages here, and I love digging into the theory, the algebras and the calculi. All three of these languages are both interesting and practical enough to be worth a close look if you're unfamiliar.

Mainstream languages should

Mainstream languages should be a given, but to truly understand Scala, Javascript, or F#, you have to really live in them. Smaller academic languages are generally designed to emphasize a few points, and it is not necessary to live in these languages to get these points.

Is Scala really taking off in the job market? It seems that at the level needed to program Scala well, the jobs would be more language agnostic and just go after the best programmers (who can use any language well).

Perhaps the OP should start designing/implementing his own languages. It is the next step to better understanding after you've taken in lots of languages.

Scala is taking off in the

Scala is taking off in the job market, at least in a relative sense. See http://www.indeed.com/jobtrends?q=scala.

That's not a good chart

This one is better, and does you a service by providing a relative comparison: scala, java, haskell, F#, ocaml, c# Job Trends: Relative

As you can see, the world market for Java and C# jobs is saturated. On the other hand, the opportunity for new and emerging markets using newer technologies is growing rapidly.

To put this in context, Sean is right about the mainstream, if you look at this chart as well: scala, java, haskell, F#, ocaml, c# Job Trends: Absolute

Furthermore, 3000% of 3 is less impressive than 10% of 100,000. If you "drill down" into their data set, you see there are 500 Scala jobs vs. 112,000 Java jobs.

For even better metrics, this would need to be compared with labor data from organizations like the Bureau of Labor Statistics (the department of the US government that constitutes Ben Bernanke's "foot soldiers").

When you say taking off I

When I say taking off, I mean trends, not absolute numbers. So I repeated the absolute numbers, but without Java and C#, which hid everything else. (Btw, you have to discount the Haskell numbers, as they are mostly not PL-job related. For instance, the first page of job listings on indeed.com yields only two programmer jobs.) Looking at that, it is clear that (a) Scala is taking off, but it's not very far off the ground yet, and (b) Indeed's relative numbers are meaningless for small absolutes.

You are right, sorry

That is even better than what I tried.

Get an IDE and it will really take off.

Working on it as we speak

Working on it as we speak :-) (i.e., getting an IDE).

As I understand it,

You don't have a proven strategy for a convenient, responsive IDE?

Perhaps I am mistaken.

At SPLASH, one of the speakers about meta-environments commented that Scala would have been better designed if it had been co-evolved with an IDE. I of course got up to augment that perspective, and argued that neither meta-environment researchers nor the Scala designers could produce both a language of its ambition and a good IDE. One example I gave was understanding the complexity of the compiler pipeline and how features depend upon one another. It is not clear to me how building Scala using meta-environment tools like ASF+SDF or Stratego and so on would result in as rich a language.

It would be nice to see a pure "design paper" about these trade-offs; I have had to learn most of this stuff through reading very old papers (from the 1980s) and experimentation with various language ideas. You would be one of the people presumably qualified to write such a paper.

Scala IDE

A "good" IDE for Scala is still a problem. The Eclipse IDE is the official one and it's plagued by continuous problems (I'm on the mailing list). It's not just not having all the nice refactoring tools and syntax awareness that the Java IDEs have. It's also stability and speed issues.

I'm not blaming anyone here. I understand good IDE support for a language like Scala is hard. I think it's a good lesson for language designers, and has been for years now: if you want a good number of people to use your language, then you should think about implementing something more than an Emacs mode. I'm constantly surprised that even to this day, many developers don't get it. I blame it on a kind of geek machismo where Emacs and Vim are perceived as the tools of the hardcore hacker. This is coming from someone who has Vi(m) keybindings for all his IDEs.

I am painfully aware of the

I am painfully aware of the Eclipse issues. That's why my company, Scala Solutions, and I personally make it our highest priority issue to improve things. Judge us in two months, when the new IDE will be out (first beta should be earlier than that).

After 20 years of emacs, I have now switched to the Eclipse IDE that we have in development for working on the 100K-line Scala project, and I am not looking back. It's not yet as feature-rich as the Java IDE, but in terms of speed and stability we are getting where we want to be.

Meanwhile, the IntelliJ Scala IDE is already very usable and it's also steadily improving.

I blame it on a kind of geek

I blame it on a kind of geek machismo where Emacs and Vim are perceived as the tool for the hardcore hacker.

That is silly.

It took me 14 years as a programmer to understand most if not all the problems involved in IDE and tooling a language. A lot of it involves knowing the right academic resources to consult and the right keywords to type into ACM Portal/Digital Library.

I am pretty sure I still have holes in my knowledge base, and also lack the depth and breadth of experience Martin has. Ergo, why I suggested he write a paper about it. I would actually like to see the meta-environment camp write a similar paper about why a language designed the way Scala was is the wrong approach. But I do not want it to be flame fodder, which it probably would be.

Not sure how your statements refute "this is silly"

My point was that there is still a tendency by many developers and language designers to this day to discount the importance of IDEs. They don't understand that an IDE like Intellij narrows the productivity gap when programming in Java vs programming in Scala. And some would say that they would be more productive using Scala vs Java, but because of smart IDEs like Intellij they're not. I agree.

Limitations?

using meta-environment tools like ASF+SDF or Stratego and so on would result in as rich a language.

I'm curious: in what way do you think those tools impose limits or at least barriers to developing a sophisticated language?

Proof and pudding

I've only seen these tools used with fairly simple imperative or functional languages. If a meta-tool were used to develop a Scala-level language without too many compromises, we'd definitely take more notice.

Okay

From his wording, I thought Z-Bo might have seen some specific issues.

That's not what I intended to suggest

I am more interested in developing a dossier on language design, from the perspective of efficient prototyping.

This is kind of like the art of compiler design, like hand-rolling parsers, except that rather than using the tricks a master compiler writer would use to hand-roll solutions, you figure out automated ways to achieve the same efficiencies. For example, from knowing where the bottlenecks are in traditional parser generators, you can predict why you'd be better off hand-rolling a parser rather than generating one. I find that sort of engineering deeply interesting, even if others view it more as mush-work that they can't wait to be done with. I like empirical software engineering, especially when you marry it with automated engineering techniques like parser generators and meta-environments.

I have a broader pet project that is the basis for wanting to learn about all these details.

The only limitation with meta-environments is how good their help is, how well you understand how to use such tools, and what sort of generation abilities they provide. For example, if the meta-environment is really something very naive and doesn't allow for equational reasoning, then that's pretty primitive and you'll have to roll your own.

A few more

  • Occam: parallelism
  • Links: multi-tier
  • XSLT: aaarghhh
  • SimScript: simulation
  • N3: ontologies
  • Fortress: Fortran replacement
  • Mathematica: math
  • PGA: Program Algebra (recommended)
  • coming soon: my own extension to Scala with concepts from ACP (Algebra of Communicating Processes)

A few more

Everyone else's lists inspired me to think of a few more:

Access-oriented programming paradigm (sort of like a formal language for triggers, or modular effects, where accessing information such as for mutation has side-effects)

Ambient-oriented programming paradigm (see Tom van Cutsem's Ph.D. thesis under Theo D'Hondt).
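The access-oriented idea above, where reads and writes themselves fire trigger procedures, can be illustrated in plain Python. This is a toy sketch, not the original Loops mechanism:

```python
# Toy sketch of access-oriented programming: ad-hoc triggers fire on
# every read and write of an object's properties.

class Observed:
    def __init__(self):
        # Bypass our own __setattr__ while setting up internal state.
        object.__setattr__(self, "_data", {})
        object.__setattr__(self, "_triggers", [])

    def on_access(self, fn):
        """Attach a trigger, called as fn(kind, name, value)."""
        self._triggers.append(fn)

    def __setattr__(self, name, value):
        for fn in self._triggers:         # writes fire triggers
            fn("write", name, value)
        self._data[name] = value

    def __getattr__(self, name):          # called only for missing attrs
        try:
            value = self._data[name]
        except KeyError:
            raise AttributeError(name) from None
        for fn in self._triggers:         # reads fire triggers too
            fn("read", name, value)
        return value

log = []
obj = Observed()
obj.on_access(lambda kind, name, v: log.append((kind, name, v)))
obj.temperature = 21      # fires a "write" trigger
_ = obj.temperature       # fires a "read" trigger
print(log)  # -> [('write', 'temperature', 21), ('read', 'temperature', 21)]
```

Even this tiny version hints at the scalability complaint raised below: once many modules attach triggers to the same properties, the behavior of a simple read or write depends on the whole attachment history.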

Access oriented programming

Access oriented programming doesn't appear to be very scalable. Every read and write can trigger events. Reads and writes are not idempotent, which greatly restricts abstraction and refactoring. System behavior would seem to depend heavily upon the order in which events are injected. I can't even imagine how to remove or maintain elements at runtime. Here is a paper on this recipe for mudballs.

I'm curious. What do you find interesting about it?

I suppose it could be seen as a cousin to the various reactive and dataflow paradigms, which offer somewhat more discipline.

System behavior would seem

System behavior would seem to depend heavily upon the order in which events are injected.

This is correct. However, in a nondeterministic system, all you need to do is reliably produce a deterministic order for whatever it is that you need to be deterministic. Constraints on events determine the order in which they are injected; that is pretty much in line with current mainstream concurrency theory, such as the work done by Gul Agha and Montanari (separately). They might not necessarily be called 'events', and this problem is more commonly discussed by theoreticians under the names "Brock-Ackerman anomaly" and "Kahn's principle".
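One simple way to reliably produce such a deterministic order, sketched here hypothetically: tag events with a (logical-time, source-id) key and merge streams by that key, so ties are broken identically for every observer:

```python
import heapq

def deterministic_merge(*streams):
    """Merge event streams of (logical_time, source_id, payload) tuples,
    each already sorted by logical_time; ties are broken by source_id,
    so every observer computes the same total order."""
    return [payload for _, _, payload in heapq.merge(*streams)]

a = [(1, "a", "x1"), (3, "a", "x3")]   # events from source "a"
b = [(1, "b", "y1"), (2, "b", "y2")]   # events from source "b"
print(deterministic_merge(a, b))       # -> ['x1', 'y1', 'y2', 'x3']
```

The open question in the thread is whether such locally imposed orders compose once triggers span many modules, not whether an order can be imposed at all.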

This is like saying dynamic typing is bad compared to static types because less is decided at compile-time.

I suppose it could be seen as a cousin to the various reactive and dataflow paradigms, which offer somewhat more discipline.

I don't believe those paradigms offer more discipline; they just flip the view of things (Leo Meyerovich had a comment on here recently stating exactly that). Yeah, you cited Bobrow's paper, and, yeah, his argumentation there is a recipe for disaster and not good design advice. And, yeah, Loops is a horrible programming system.

I guess my point is to try to separate the pearls from the swine-waste.

all you need to do is

all you need to do is reliably produce a deterministic order for whatever it is that you need to be deterministic

If behavior is locally specified or otherwise constrained, that would be feasible. But the whole point of access-oriented programming is non-local specification of behavior - injecting triggers to properties visible in other modules/objects. It is not clear that, at scale, one could do as you suggest.

I understand what you are saying

But I think you are putting a small box around this idea.

By the way, functional I/O agents can still be non-modular; this is a good counter-example of why I don't buy into your argument that all we need to do is have something that looks feasible. I/O modularity is a tricky beast to tame, or it would have been solved already. I view access-oriented programming as changing the emphasis on what the programmer focuses on when programming; it's about what, not when. Brock-Ackerman just points out that Kahn Principle doesn't extend to a very degenerate case (nondeterministic agents). But even functional I/O can be non-compositional. "Ambiguity" and "meagerness" are alternative ways to understand the problem.

In concurrency theory, mathematicians use nets, because they can actually show things like how sharing a variable works or how communication works, and they tend to use global techniques to prove properties of a system such as firing rules (e.g. Petri nets) or a system of equations. I don't see the task of a programmer as any different from the mathematicians, and how we focus on describing the problem syntactically can probably be somewhat separated from how the solution works.

I think, though, what you mean by "at scale" is "Open system". And I would agree that an open system designed this way would basically be a mess.

At scale...

The fallacy of the beard is to argue that one whisker is not a beard, and two whiskers is not a beard, and three whiskers is not a beard, and to induce that no number of whiskers will make a beard - that a quantitative difference does not make a qualitative difference.

The scales programmers work at are quantitatively very different than the scales that mathematicians work at. What is in practice a relatively simple business application would be massive compared to the arguments and proofs used in mathematics. There are significant qualitative differences due to scale. I believe it fallacy to treat a programmer's task the same as a mathematician's.

Large scale closed systems look a great deal like open systems. You get to a hundred diverse software components, and deal with the reuse constraints of a few development iterations, and no developer will be able to hold the relationships in his or her puny human head. The system will be maintained by a team of developers, each with a narrow view. Even the threat and vulnerability models aren't so different - security can be compromised by buggy components or anomalies of composition (such as deadlock or glitches or infinite loops or cache coherency issues or update floods). Thinking about all large projects as 'open' seems wise, in general. But if the architecture is good, one can reason about properties by construction.

Access-oriented programming isn't interesting for in-the-small programming. We could just use getters and setters. The ability to attach ad-hoc triggers to properties only becomes unique to the paradigm at the larger scale of component configuration or composition; it's essentially an architectural constraint. Therefore, that is the scale at which access-oriented programming should prove itself.

functional I/O agents can still be non-modular

I agree. If we functionally model stateful event-stream processing elements, we're going to inherit various issues related to functions, state, and event-streams.

I don't buy into your argument that all we need to do is have something that looks feasible

I do not recall presenting such an argument.

I think you are putting a small box around this idea.

Like a coffin? ;)

I prefer to not dilute the meaning of phrases like 'access-oriented programming' by using a larger box than necessary.

I agree that some variation on access-oriented programming might not have the same problem. But it isn't clear to me what you're imagining.

I view access-oriented programming as changing the emphasis on what the programmer focuses on when programming; it's about what, not when

Could you explain your view? Similar has been claimed of functional programming, constraint programming, logic programming, et cetera in the past, and it always strikes me as grandiose. But, in this case, the view also seems entirely unjustified. The idea of triggering effectful actions upon each observation or update of a property seems to cover aspects of both 'what' and 'when'. The order of effects makes the 'when' rather significant.

I suppose if you control effects so they are monotonic and commutative, you could have a more declarative ('what' not 'when') model.
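As a tiny illustration of that last point: with a monotonic, commutative effect such as adding to a grow-only set, every ordering of the same updates converges to the same state, so only the 'what' matters:

```python
import itertools

def apply_all(updates):
    """Apply updates to a grow-only set; adding is monotonic and commutative."""
    state = set()
    for u in updates:
        state |= {u}          # state only ever grows
    return state

# Every permutation of the same updates yields the same final state.
results = {frozenset(apply_all(p))
           for p in itertools.permutations(["a", "b", "c"])}
print(len(results))  # -> 1
```

Arbitrary triggers on reads and writes lack this property, which is why order-of-effects stays significant in the general access-oriented model.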

Yes

AmbientTalk/2 is also a beautiful example of language design for actually existing programmers, IMHO, though the syntax is a bit more conservative than I'd like.

Give Yeti a Look

I came across Yeti recently. It's a tight little ML-ish language with H/M type inference, featuring a nifty compact syntax and an interesting take on algebraic data types. It's implemented on top of the JVM, so it should run most places, and seems to integrate very conveniently with Java classes. Might be worth a peek.

- Scott

Brainfuck

There's always Brainfuck. Or if you want to get really esoteric, Deadfish. :-)

Usable languages

Here are a few languages that focus on usability. Nice if you are going for breadth:

  • John Pane's Hands
  • Forms/3
  • Hancock's Flogo II
  • Microsoft's Kodu