Following up on this post suggesting that this would be a reasonable request to post ...
I am looking for candidates who would be interested in working on a JIT compiler for a custom Haskell-like programming language for a major investment bank in NYC. The project is still young, just under two years since inception, but is already actively used by several major trading applications. We use LLVM/MCJIT as a back-end and we produce code that can be integrated with C++ applications.
Our general mandate is to solve problems in the trading domain as safely, concisely, and efficiently as possible. There is significant support behind this effort; it's an unusual opportunity to bring recent ideas in programming languages to an immediate industry setting.
If you are interested in this project, and you have some experience writing compilers for functional programming languages with type systems similar to Haskell, please send an email to kthielen at gmail dot com.
How many different basic strategies in programming language persistence seem like good ideas to support? The sort of perspective I want is a summary overview, as opposed to a detailed survey of every known tactic touched on in the literature. A gloss in a hundred words is better than a paper listing everything ever done in persistence.
(In chess a desire to "control the center" is the sort of basic pattern I have in mind, not an encyclopedia of all chess opening variations. In the context of PL tech, the sort of basic patterns I have in mind include image, document, database, file system, etc. But I'm interested in an analytical assessment of the effects of choosing a pattern.)
Some products featuring a PL as the main offering come coupled with a normative scheme for managing persistent storage. For example, Smalltalk was traditionally associated with image-based storage, though the language per se does not require it. Similarly, HyperTalk came bundled with stack-document storage as files in HyperCard. Other languages might come coupled with a database, or assume cloud-based resources.
I'm mainly interested in "local" storage schemes, like in desktop software, guiding a developer using a language in managing persistent state in session or project form. Yes, this assumes desktop software isn't dead yet. If a user creates content in documents, how does a language organize support for this? That kind of thing. Preventing users from making meaningful documents filled with self-contained data sets would be weird.
This line of inquiry comes from thinking about a language with a builtin virtual file system that unifies the interface for local and distributed data streams. How does one avoid antagonizing a user's need for documents? A document might be mounted as a file system, so saving is modeled as transacting changes on that file system, and two-phase-commit can be used to arrange consistent collections of changes across disparate stores. But this strikes me as a random data point without much context. So I'm interested in what kind of context is provided by other PL approaches to persistent services. Maybe the normal thing to do is say a language defers to an operating system or storage product.
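A minimal sketch of that save-as-transaction idea, in Rust for concreteness. The names here (Store, prepare, commit, abort) are invented for illustration, not a real API: every mounted store votes in a prepare phase, and only if all votes are yes does the coordinator commit, which is exactly the two-phase-commit shape described above.

```rust
// Hypothetical sketch: a document "save" as a two-phase commit across
// several mounted stores. All names are illustrative.
trait Store {
    fn prepare(&mut self, changes: &[(String, String)]) -> bool; // vote yes/no
    fn commit(&mut self);
    fn abort(&mut self);
}

struct MemStore {
    data: std::collections::HashMap<String, String>,
    staged: Vec<(String, String)>,
    full: bool, // simulate a store that can refuse to prepare
}

impl Store for MemStore {
    fn prepare(&mut self, changes: &[(String, String)]) -> bool {
        if self.full {
            return false; // vote no: nothing is staged
        }
        self.staged = changes.to_vec();
        true
    }
    fn commit(&mut self) {
        for (k, v) in self.staged.drain(..) {
            self.data.insert(k, v);
        }
    }
    fn abort(&mut self) {
        self.staged.clear();
    }
}

// Coordinator: phase one collects votes; any "no" aborts everywhere,
// so the collection of changes stays consistent across stores.
fn save(stores: &mut [&mut dyn Store], changes: &[(String, String)]) -> bool {
    if stores.iter_mut().all(|s| s.prepare(changes)) {
        for s in stores.iter_mut() {
            s.commit();
        }
        true
    } else {
        for s in stores.iter_mut() {
            s.abort();
        }
        false
    }
}

fn main() {
    let mut a = MemStore { data: Default::default(), staged: vec![], full: false };
    let mut b = MemStore { data: Default::default(), staged: vec![], full: true };
    let changes = vec![("title".to_string(), "draft".to_string())];
    // One participant refuses, so nothing is committed anywhere.
    assert!(!save(&mut [&mut a, &mut b], &changes));
    assert!(a.data.is_empty());
    b.full = false;
    assert!(save(&mut [&mut a, &mut b], &changes));
}
```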
I suspect that it's against forum etiquette to post job ads here. Does anyone have any suggestions? We'd like to find somebody interested in type theory, compilers, and finance.
Pittsburgh, Pennsylvania, USA
Sponsored by ACM SIGPLAN
COMBINED CALL FOR CONTRIBUTIONS:
The ACM SIGPLAN conference on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH) embraces all aspects of software construction and delivery to make it the premier conference at the intersection of programming, languages, and software engineering. SPLASH is now accepting submissions. We invite high quality submissions describing original and unpublished work.
OOPSLA Research Papers
Submissions Due: 25 March, 2015
Onward! Research Papers
Submissions Due: 2 April, 2015
Early Phase Submissions Due: 25 March, 2015
Dynamic Languages Symposium (DLS)
Submissions Due: 7 June, 2015
8th International ACM SIGPLAN Conference on Software Language Engineering (SLE)
14th International Conference on Generative Programming: Concepts & Experiences (GPCE)
22nd International Conference on Pattern Languages of Programming (PLoP)
Nobody seems to be saying much about Rust, or if they are, the LtU search can't find it. So I'm starting a Rust topic.
After some initial use of Mozilla's "Rust" language, a few comments. I'm assuming here some general familiarity with the language.
The ownership system, which is the big innovation, seems usable. The options are single ownership with compile-time checking, or multiple ownership with reference counts. The latter is available in both plain and concurrency-locked form, and you're not allowed to share unlocked data across threads. This part of the language seems to work reasonably well. Some data structures, such as trees with backlinks, do not map well to this model.
There's a temptation to resort to the "unsafe" feature to bypass the single-use compile time checking system. I've seen this in some programs ported from other languages. In particular, allocating a new object and returning a reference to it from a function is common in C++ but difficult in Rust, because the function doing the allocation doesn't know the expected lifetime of what it returns. It takes some advance planning to live within the single-use paradigm.
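The usual alternative is to return ownership rather than a reference, for instance a Box, so the caller decides the lifetime; a minimal sketch:

```rust
// Returning a freshly allocated object: instead of returning a reference
// (whose lifetime the callee cannot know), return an owned Box and let
// the caller decide how long the value lives.
struct Widget {
    id: u32,
}

fn make_widget(id: u32) -> Box<Widget> {
    Box::new(Widget { id })
}

fn main() {
    let w = make_widget(7); // caller owns the Box; dropped at end of scope
    assert_eq!(w.id, 7);
}
```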
Rust's type system is more troublesome. It has most of the bells and whistles of C++, plus some new ones. It's clever, well thought out, sound, and bulky. Declarations are comparable in wordiness to C++. Rust has very powerful compile-time programming; there's a regular expression compiler that runs at compile time. I'm concerned that Rust is starting out at the cruft level it took C++ 20 years to achieve. I shudder to think of what things will be like once the Boost crowd discovers Rust.
The lack of exception handling in Rust forces program design into a form where many functions return "Result" or "Option", which are generic enumeration/variant record types. These must be instantiated with the actual return type. As a result, a rather high percentage of functions in Rust seem to involve generics. There are some rather tortured functional programming forms used to handle errors, such as ".and_then(lambda)". Doing N things in succession, each of which can generate an error, is either verbose (match statement) or obscure ("and_then()"). You get to pick. Or you can just use ".unwrap()", which extracts the value from a Some or Ok form and makes a failure fatal. It's tempting to over-use that. There's a macro called "try!(e)", which, if e evaluates to an Err value, returns it from the enclosing function via a return you can't see in the source code. Such hidden returns are troubling.
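For illustration, here are the three styles side by side on the same task, using the ? operator (which superseded the try! macro in later Rust) for the hidden-return version:

```rust
use std::num::ParseIntError;

fn parse(s: &str) -> Result<i32, ParseIntError> {
    s.parse::<i32>()
}

// Verbose style: an explicit match at each step.
fn add_match(a: &str, b: &str) -> Result<i32, ParseIntError> {
    let x = match parse(a) { Ok(v) => v, Err(e) => return Err(e) };
    let y = match parse(b) { Ok(v) => v, Err(e) => return Err(e) };
    Ok(x + y)
}

// Combinator style: and_then chains; the first error short-circuits.
fn add_and_then(a: &str, b: &str) -> Result<i32, ParseIntError> {
    parse(a).and_then(|x| parse(b).map(|y| x + y))
}

// Hidden-return style: ? returns the Err from the enclosing function
// with no visible return in the source.
fn add_question(a: &str, b: &str) -> Result<i32, ParseIntError> {
    Ok(parse(a)? + parse(b)?)
}

fn main() {
    assert_eq!(add_match("2", "3"), Ok(5));
    assert_eq!(add_and_then("2", "3"), Ok(5));
    assert_eq!(add_question("2", "3"), Ok(5));
    assert!(add_question("2", "x").is_err());
}
```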
All lambdas are closures (this may change), and closures are not plain functions. They can only be passed to functions which accept suitable generic parameters. This is because the closure lifetime has to be decided at compile time. Rust has to do a lot of things in somewhat painful ways because the underlying memory model is quite simple. This is one of those things which will confuse programmers coming from garbage-collected languages. Rust will catch their errors, and the compiler diagnostics are quite good. Rust may exceed the pain threshold of some programmers, though.
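A small example of the generic-parameter requirement: the closure below captures its environment, so it can only be passed through a generic Fn bound (or a boxed trait object), not as a plain function pointer:

```rust
// A function that accepts any closure via a generic Fn bound; the
// closure's concrete type (and its captured environment) is resolved
// at compile time, which is why a generic parameter is needed.
fn apply_twice<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(f(x))
}

fn main() {
    let offset = 10;          // captured by the closure below
    let add = |n| n + offset; // a closure, not a plain fn
    assert_eq!(apply_twice(add, 1), 21);
}
```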
Despite the claims in the Rust pre-alpha announcement of language definition stability, the language changes enough every week or so to break existing programs. This is a classic Mozilla problem; that organization has a track record of deprecating old features in Firefox before the replacement new feature works properly.
Despite all this, Rust is going to be a very important language, because it solves the three big problems of C/C++ that cause crashes and buffer overflows. The three big problems in C/C++ memory management are "How big is it?", "Who owns and deletes it?", and "Who locks it?". C/C++ deals with none of those problems effectively. Rust deals with all of them, without introducing garbage collection or extensive run-time processing. This is a significant advance.
I just hope the Rust crowd doesn't screw up. We need a language like this.
Say we have a program: F -> G. Under what situations can we generate a program which operates on *deltas* of F and G?
That is, say that dF is an efficiently stored delta value between an old and new value of F, and dG is a delta between values of G. Are there many situations where we can derive the program dF -> dG given F -> G ?
It seems like this is probably a topic with existing research, but I'm having trouble figuring out what to search for. This would be interesting in the context of a framework like React.js, where there is user code that modifies the user-interface state based on external events, then more user code which translates that state into a virtual DOM value, then finally the real DOM is modified using the virtual DOM. If we were able to derive a delta-based version of the user's code, then we could directly translate external events into real DOM manipulation, without generating an entire virtual DOM along the way.
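As a toy illustration of the idea (names invented here, not from any real framework): if G is the sum of a list F, then the derived delta program for "element i changed" updates the output in O(1) instead of recomputing in O(n):

```rust
// Sketch: G = sum(F). A delta dF ("element i changed to new") induces
// a delta dG = new - old, applied directly without recomputing the sum.
fn sum(xs: &[i64]) -> i64 {
    xs.iter().sum()
}

struct Delta {
    index: usize,
    new: i64,
}

// The derived delta program: updates the output from the input change alone.
fn apply_delta(xs: &mut Vec<i64>, total: &mut i64, d: Delta) {
    let old = xs[d.index];
    xs[d.index] = d.new;
    *total += d.new - old; // O(1) instead of O(n) recomputation
}

fn main() {
    let mut xs = vec![1, 2, 3];
    let mut total = sum(&xs);
    apply_delta(&mut xs, &mut total, Delta { index: 1, new: 10 });
    assert_eq!(total, sum(&xs)); // incremental result matches recomputation
    assert_eq!(total, 14);
}
```

The React.js case in the paragraph above is this pattern at scale: the virtual DOM is the full recomputation, and the hoped-for derived program is the O(1)-ish direct update.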
This is a follow-up to my earlier post (I will try and put a link here) about negation in Prolog. The question was whether negation is necessary, and I now have some further thoughts (none of this is new, but it is new to me). I can now see that my earlier approach to implementing 'member of' relies on evaluation order, which, now that I have moved to iterative deepening, no longer works. So I need a way of introducing negation that is logically sound. Negation as failure leads to unsoundness with variables. Negation as refutation seems to require three-valued logic, and negation as inconsistency requires an additional set of 'negative' goals, which complicates the use of negation. The most promising approach I have found is to convert negation into inequalities, and propagate 'not equal' as a constraint in the same way as CLP. This eliminates negation in goals in favour of equality and disequality. I am interested in whether anyone has experience of treating negation in this way, or any further thoughts about negation in logic programming or logical frameworks.
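A tiny sketch of the disequality idea, in Rust for concreteness (a toy, not a real CLP engine): a not-equal constraint on an unbound variable is suspended, and re-checked when the variable is bound, which is the propagation behaviour described above:

```rust
use std::collections::HashMap;

// Toy store: single-letter variables bound to integers, plus a list of
// suspended disequality constraints ("X must never equal v").
struct Solver {
    bindings: HashMap<char, i32>,
    diseq: Vec<(char, i32)>,
}

impl Solver {
    fn new() -> Self {
        Solver { bindings: HashMap::new(), diseq: vec![] }
    }

    // dif(X, v): check immediately if X is bound, else suspend.
    fn dif(&mut self, var: char, val: i32) -> bool {
        match self.bindings.get(&var) {
            Some(&v) => v != val,
            None => {
                self.diseq.push((var, val));
                true
            }
        }
    }

    // Binding propagates: it fails if it violates a suspended constraint.
    fn bind(&mut self, var: char, val: i32) -> bool {
        if self.diseq.iter().any(|&(v, x)| v == var && x == val) {
            return false;
        }
        self.bindings.insert(var, val);
        true
    }
}

fn main() {
    let mut s = Solver::new();
    assert!(s.dif('X', 3));  // constraint on an unbound variable suspends
    assert!(!s.bind('X', 3)); // binding X = 3 now fails
    assert!(s.bind('X', 4));  // X = 4 is consistent
}
```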
This is an experience report on the design and usage of the Nu Game Engine, a purely-functional 2D game engine written in F# -
The Nu Game Engine certainly qualifies as 'functional reactive' in that -
However, I cannot describe it as the typical first-order or higher-order FRP system since it uses neither continuous nor discrete functions explicitly parameterized with time. Instead it uses a classically-iterative approach to advancing game states that is akin to the imperative tick-based style, but implemented with pure functions.
Thus, I gave it the name 'Iterative FRP'. It's a purely-functional half-step between classic imperative game programming and first- / higher-order FRP systems. This document discusses the major pluses and minuses of using this 'iterative' FRP style.
An aside about the document -
I released an earlier version of this PDF along with an earlier version of the engine, but had to make significant revisions due to a fundamental flaw in the engine's previous design, and thus removed it.
Hopefully this newer version will inform us about the properties of this 'iterative' style of FRP in contrast to the known styles.
While monads can be encoded in a dynamically-typed language, the usual encodings are uglier than monads in a type-class-based language, because type inference is not there to help determine which monad instance we're talking about. Some use-cases of type inference can be replaced by dynamic type dispatch, but not the so-called "return type polymorphism", where it is the type expected by the context that matters, not the type carried by values. Monads are just one instance where return-type polymorphism matters (in the form of "return : forall a, a -> m a"), but this is a general problem with other structures (typically structures with a default element, such as monoids/groups).
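For contrast, here is what return-type polymorphism looks like in a statically typed language (Rust's FromStr via str::parse, where the type expected at the call site selects the instance, with no value carrying that type at runtime):

```rust
// The same call, "42".parse(), dispatches to different FromStr instances
// depending only on the type the context expects.
fn main() {
    let n: i32 = "42".parse().unwrap(); // FromStr for i32
    let f: f64 = "42".parse().unwrap(); // FromStr for f64, same call site
    assert_eq!(n, 42);
    assert_eq!(f, 42.0);
}
```

In a dynamic language there is no expected type to consult, which is exactly the gap the "not known yet" marker and free monad operations below are meant to fill.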
Intuitively it seems that this problem would be solvable by keeping, in the dynamic type representation, a "not known yet" marker, and using the free monad operations in this case. But there is a gap between intuition and usable code, and Tony Garnock-Jones just wrote a very nice blog post exposing this technique in Racket:
    (run-io (do (mdisplay "Enter a number: ")
                n <- mread
                all-n <- (return (for/list [(i n)] i))
                evens <- (return (do i <- all-n
                                     #:guard (even? i)
                                     (return i)))
                (return evens)))
As a programmer with no training in type theory or abstract math (beyond what I've learned on LtU), I'm curious to know if the laws of basic, middle school, algebra can help design or understand better domain specific languages.
Recently I saw a presentation on how algebraic data types don't just contain the name 'algebra,' but actually allow many of the same operations as on simple numbers. This will be obvious to people on this forum but such ideas are very surprising to programmers at large. The link describes not just products and sums, but also x to the power of y style operations. The youtube presentation in the link above then talks about how one can even take derivatives of type expressions, which result in very useful objects, such as zippers.
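The sum/product/exponent reading can be made concrete by counting inhabitants of some small types; a quick Rust illustration:

```rust
// Counting inhabitants makes the algebra concrete: Option<bool> is a sum
// (1 + 2 = 3 values), a pair of bools is a product (2 * 2 = 4 values),
// and bool -> bool is an exponent (2^2 = 4 possible functions).
fn main() {
    // Sum: Option<bool> has 1 + 2 inhabitants.
    let sums: [Option<bool>; 3] = [None, Some(false), Some(true)];
    assert_eq!(sums.len(), 3);

    // Product: (bool, bool) has 2 * 2 inhabitants.
    let products = [(false, false), (false, true), (true, false), (true, true)];
    assert_eq!(products.len(), 4);

    // Exponent: the four functions bool -> bool are
    // const-false, const-true, identity, and negation.
    let exps: [fn(bool) -> bool; 4] = [|_| false, |_| true, |b| b, |b| !b];
    assert_eq!(exps.len(), 4);
}
```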
There have been interesting articles[pdf] on derivatives of regular expressions. Again, kleene algebra seems to define basic operations, +, *, etc. Although the derivative part is beyond me.
Modern database systems are based on relational algebra, and this algebra is beautifully simple. Although I haven't seen this algebra described specifically using basic arithmetic operators, it does contain cartesian product, sums, and even division.
So many of these very practical tools programmers use every day are based on what looks similar to school algebra (they even contain the word algebra in their name). Where can I learn more about how to design DSLs using algebraic operators? My assumption is that by starting with a domain, and being forced to come up with interpretations of +, -, *, /, etc. (interpretations which follow specific rules), I'll end up with a pretty minimal, yet expressive, language.
Finally, I actually started thinking about this after implementing Peyton Jones & Eber's "Composing Contracts." That language describes a couple of basic types: Contracts and Observables. It then describes operations on these types: And, Or, Give, Scale, which map pretty easily to +, *, negate, etc. Once again, a beautifully simple language can describe a complex set of financial instruments. What's more, the constructs of this language (like so many others) seem very closely related to school algebra! What might it mean to take the derivative of a contract? What might it mean to take its square or square root?
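A toy Rust rendering of a slice of that combinator algebra (the valuation here just sums notional amounts; it is a stand-in for the paper's real stochastic semantics, and the constructor set is abridged):

```rust
// Abridged contract algebra in the spirit of Peyton Jones & Eber:
// Zero is the unit, And adds, Give negates, Scale multiplies.
enum Contract {
    Zero,
    One,                               // receive one unit of currency
    Give(Box<Contract>),               // flip sides: negation
    And(Box<Contract>, Box<Contract>), // both contracts: addition
    Scale(f64, Box<Contract>),         // multiply by a constant observable
}

use Contract::*;

// Toy valuation exhibiting the algebraic reading of each combinator.
fn value(c: &Contract) -> f64 {
    match c {
        Zero => 0.0,
        One => 1.0,
        Give(c) => -value(c),
        And(a, b) => value(a) + value(b),
        Scale(k, c) => k * value(c),
    }
}

fn main() {
    // Receive 100, pay 40: 100 * 1 + (-(40 * 1)) = 60.
    let c = And(
        Box::new(Scale(100.0, Box::new(One))),
        Box::new(Give(Box::new(Scale(40.0, Box::new(One))))),
    );
    assert_eq!(value(&c), 60.0);
}
```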