LtU Forum

Logical and Functional Programming in Each Other

I can easily see how to model functional programs in a logical language. But is there a simple and straightforward way to do the reverse? And when I say "logical language", I'm not even looking for one with backtracking.
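One common encoding, sketched below in Haskell (the relation names and facts are invented for illustration), is to treat a relation as a function from its known arguments to the list of every satisfying answer. Conjunction of goals then becomes chaining through the list monad, and no backtracking machinery is needed because all answers are simply enumerated:

    -- Facts: parent(tom, bob). parent(tom, liz). parent(bob, ann). parent(bob, pat).
    -- Encoded as a function from the first argument to the list of all second arguments.
    parent :: String -> [String]
    parent "tom" = ["bob", "liz"]
    parent "bob" = ["ann", "pat"]
    parent _     = []

    -- Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    grandparent :: String -> [String]
    grandparent x = do
      y <- parent x   -- every Y with parent(X, Y)
      parent y        -- every Z with parent(Y, Z)

    main :: IO ()
    main = print (grandparent "tom")   -- ["ann","pat"]

This only covers the mode where the first argument is known; a fuller embedding needs logic variables and unification, which is the territory of miniKanren-style libraries.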

Interesting experiment in peer-review

There is an interesting experiment in public peer review occurring over on Reddit. The topic is adaptive compilation, which may be of interest to some LtU readers. Although most people read both sites, the link was posted in the relatively high-volume /r/programming, so it was quite easy to miss.

Rumors in Complexity Theory

I imagine everybody has already read this, and that there's really nothing to discuss. There's nothing to do but wait, but I think it deserves an LtU post anyway.

For days, rumors about the biggest advance in years in so-called complexity theory have been lighting up the Internet. That’s only fitting, as the breakthrough involves comparing networks just like researchers’ webs of online connections. László Babai, a mathematician and computer scientist at the University of Chicago in Illinois, has developed a mathematical recipe or "algorithm" that supposedly can take two networks—no matter how big and tangled—and tell whether they are, in fact, the same, in far fewer steps than the previous best algorithm.

From Science Magazine.
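For concreteness, here is a brute-force baseline in Haskell (with invented toy graphs; it has nothing to do with Babai's actual algorithm): try every bijection between the vertex sets and check that it carries edges to edges. That takes on the order of n! steps, whereas the claimed breakthrough is a quasipolynomial-time bound.

    import Data.List (permutations, sort)
    import qualified Data.Map as M

    -- An undirected graph as a vertex list and an edge list.
    type Graph = ([Int], [(Int, Int)])

    -- Brute force: some relabelling of the second graph's vertices must map
    -- the first graph's edge set exactly onto the second graph's edge set.
    isomorphic :: Graph -> Graph -> Bool
    isomorphic (vs1, es1) (vs2, es2) =
      length vs1 == length vs2 && length es1 == length es2 &&
      any matches (permutations vs2)
      where
        matches perm =
          let rename         = M.fromList (zip vs1 perm)
              mapEdge (a, b) = (rename M.! a, rename M.! b)
              norm (a, b)    = if a <= b then (a, b) else (b, a)
          in sort (map (norm . mapEdge) es1) == sort (map norm es2)

    main :: IO ()
    main = print (isomorphic ([1,2,3], [(1,2),(2,3)])
                             ([1,2,3], [(1,3),(3,2)]))   -- True: both are paths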

Andrei's answer to "Which language has the brightest future in replacement of C between D, Go and Rust? And Why?"

An excellent critique of potential C/C++ successors (from Andrei Alexandrescu, one of D's designers; also at the top of /r/programming at the moment):

https://www.quora.com/Which-language-has-the-brightest-future-in-replacement-of-C-between-D-Go-and-Rust-And-Why/answer/Andrei-Alexandrescu

Money quote:

A disharmonic personality. Reading any amount of Rust code evokes the joke "friends don't let friends skip leg day" and the comic imagery of men with hulky torsos resting on skinny legs. Rust puts safe, precise memory management front and center of everything. Unfortunately, that's seldom the problem domain, which means a large fraction of the thinking and coding are dedicated to essentially a clerical job (which GC languages actually automate out of sight). Safe, deterministic memory reclamation is a hard problem, but is not the only problem or even the most important problem in a program.

Google Open Sources Skynet

Well, not that exactly. But it's a product that's interesting in its own right.

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

The site is light on information. It shows off a visual programming tool with a backend in Python/C++, so that seems in line with current interest on LtU.

I am interested in data center applications these days, but I'm not sure how it does its magic inside Google's data centers.
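For readers unfamiliar with the dataflow model the blurb describes, here is a toy sketch (in Haskell, with made-up node types; it is not TensorFlow's API, which is exposed through Python and C++): the program is a graph whose nodes are operations, and evaluation pulls values along the edges toward the output.

    -- Toy dataflow graph: nodes are operations, edges carry the values
    -- flowing between them.  Purely illustrative; not TensorFlow.
    data Node
      = Const Double      -- a source node holding a constant
      | Add Node Node     -- binary operation nodes
      | Mul Node Node

    -- Evaluation walks the graph from the requested output node.
    eval :: Node -> Double
    eval (Const x) = x
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    main :: IO ()
    main = print (eval (Mul (Add (Const 2) (Const 3)) (Const 4)))   -- 20.0

Presumably it is this explicit graph description that lets the runtime place different nodes on CPUs or GPUs; the sketch only shows the separation between building the graph and running it.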

MCG: A Visual Functional Programming Language

Hi everyone,

It's been a while! Some of you might remember me posting here a few years ago about my adventures with type systems and stack-based languages. For the last couple of years I have been heads-down working on a visual functional programming language at Autodesk. The language is called MCG (Max Creation Graph) and is part of the commercial 3D animation, modeling, and rendering software package 3ds Max. I've written a blog post about MCG that introduces the language to a technically savvy audience.

As some of you may know, visual programming languages are quite commonplace in 3D software packages. A few examples include Houdini, Grasshopper, Softimage ICE, Fabric Engine Canvas, Dynamo, NUKE Compositing, and more. However, I can't find much mention of these languages in the literature.

Switching from designing programming languages as a hobby to doing it professionally has been very interesting! I found myself spending a lot less time consulting the literature and more of my time responding to customer needs and feedback. Now that MCG has shipped and is being used, I'm interested in reconnecting with the academic community. I am wondering if anyone has suggestions about which aspects of MCG might be of the most interest to researchers, and what type of publication I should pursue (e.g. technical report, experience report, research paper, or simply sticking to non-academic publications). Any tips or suggestions would be most welcome! I'm also interested in exploring potential collaborations with researchers, so please reach out to me if this is something that might interest you.

Thanks!

How Useful is Erlang Hot-Swapping of Code?

Please discuss. I am interested.

minor lexical tokenization idea via character synonyms

A few days ago I had a tokenization-stage lexical idea that, after going over it several times, seems not to suck. On the off chance at least one person has similar taste, I'll kill a few minutes and describe it. Generally I don't care much about syntax, so this sort of thing seems irrelevant to me most of the time, and I'll try to phrase this for folks of similar disposition (who don't care at all about nuances in trivial syntax differences).

The idea applies most to a language like Lisp that uses tokens like parens with very high frequency. I'm okay with lots of parens, but they drive a lot of people crazy, apparently hitting some threshold that reads as gibberish to some folks. Extreme uniformity of syntax loses an opportunity to imply useful semantics with varied appearance. Some of the latent utility in the human visual system is missed by making everything look the same. A few dialects of Lisp let you substitute different characters, which are still understood as meaning the same thing, but let you show a bit more organization. I was thinking about taking this further, where a lot of characters might act as substitutes to imply different things.

The sorts of things you might want to imply include:

  • immutable data
  • critical sections
  • delayed evaluation
  • symbol binding
  • semantic or module domain

A person familiar with both the language and the codebase might read detail into code that isn't obvious to others, but you might want to imply such extra detail through how things look. Actually checking that those things are true via code analysis would be an added bonus.

I'm happy using ascii alone for code, and I don't care about utf8, but it would not hurt to include a broader range of characters, especially if you planned on using a browser as a principal means of viewing code in some context. When seen in an ascii-only editor, it would be good enough to use character entities when you wanted to preserve all detail. It occurred to me that a lexical scan would have little trouble consuming both character entities and utf8 without getting confused or slowing down much unless they were used very heavily. You'd be able to say "when you see this, it's basically the same as a left paren", but with a bit of extra associated state to imply the class of this alternate appearance. (Then later you might render that class in different ways, depending on where code will be seen or stored.)
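As a sketch of how cheap this could be at the tokenization level, here is a toy Haskell lexer (the particular synonym characters and class names are an arbitrary invention): the synonyms all produce the same open/close tokens, just with an extra class tag attached.

    -- Each paren synonym tokenizes as a plain open/close, tagged with a class.
    data Class = Plain | Immutable | Critical
      deriving (Show, Eq)

    data Token = Open Class | Close Class | Atom String
      deriving (Show, Eq)

    openSyn, closeSyn :: [(Char, Class)]
    openSyn  = [('(', Plain), ('[', Immutable), ('{', Critical)]
    closeSyn = [(')', Plain), (']', Immutable), ('}', Critical)]

    tokenize :: String -> [Token]
    tokenize [] = []
    tokenize (c:cs)
      | Just k <- lookup c openSyn  = Open k  : tokenize cs
      | Just k <- lookup c closeSyn = Close k : tokenize cs
      | c `elem` " \t\n"            = tokenize cs
      | otherwise                   = Atom (c : atom) : tokenize rest
      where
        (atom, rest) = span isAtomChar cs
        isAtomChar x = x `notElem` (" \t\n" ++ map fst openSyn ++ map fst closeSyn)

    main :: IO ()
    main = mapM_ print (tokenize "{with-lock [frozen (f x)]}")

Consuming utf8 sequences or character entities at the same point is just a longer lookup, and a later pass could check that, say, a region opened with the "immutable" synonym really involves no mutation, which is the added bonus mentioned above.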

A diehard fan of old school syntax would be able to see all the variants as instances of the one-size-fits-all character assignments. But newbies would see more structure implied by use of varying lexical syntax. It seems easy to do without making code complex or slow, if you approach it at the level of tokenization, at the cost of slightly more lookahead in spots. As a side benefit, if you didn't want to support Unicode, you'd have a preferred way of forcing everything into char entity encoding when you wanted to look at plain text.

Note I think this is only slightly interesting. I apologize for not wanting to discuss character set nuances in any detail. Only the lossless conversion to and from alternatives with different benefits in varying contexts is interesting to me, not the specific details. The idea of having more things to pattern match visually was the appealing part.

POPL 2016 Research program...

is out here: http://conf.researchr.org/track/POPL-2016/POPL-2016-papers#event-overview.

Many of the papers sound cool (Newtonian Program Analysis via Tensor Product, Transforming Spreadsheet Data Types using Examples, Program Synthesis with Noise, Memoryful Geometry of Interaction II: Recursion and Adequacy, Is Sound Gradual Typing Dead?), even if I'm not really connecting with them (maybe they're just not in my field of work).

On type safety for core Scala: "From F to DOT: Type Soundness Proofs with Definitional Interpreters"

From F to DOT: Type Soundness Proofs with Definitional Interpreters by Tiark Rompf and Nada Amin:

Scala's type system unifies aspects of ML-style module systems, object-oriented, and functional programming paradigms. The DOT (Dependent Object Types) family of calculi has been proposed as a new theoretic foundation for Scala and similar expressive languages. Unfortunately, type soundness has only been established for a very restricted subset of DOT (muDOT), and it has been shown that adding important Scala features such as type refinement or extending subtyping to a lattice breaks at least one key metatheoretic property such as narrowing or subtyping transitivity, which are usually required for a type soundness proof.
The first main contribution of this paper is to demonstrate how, perhaps surprisingly, even though these properties are lost in their full generality, a richer DOT calculus that includes both type refinement and a subtyping lattice with intersection types can still be proved sound. The key insight is that narrowing and subtyping transitivity only need to hold for runtime objects, but not for code that is never executed. Alas, the dominant method of proving type soundness, Wright and Felleisen's syntactic approach, is based on term rewriting, which does not make an adequate distinction between runtime and type assignment time.
The second main contribution of this paper is to demonstrate how type soundness proofs for advanced, polymorphic, type systems can be carried out with an operational semantics based on high-level, definitional interpreters, implemented in Coq. We present the first mechanized soundness proof for System F<: based on a definitional interpreter. We discuss the challenges that arise in this setting, in particular due to abstract types, and we illustrate in detail how DOT-like calculi emerge from straightforward generalizations of the operational aspects of F<:.

Not only do they solve a problem that has been open for 12 years, but they also deploy interesting techniques to make the proof possible and simple. As they write themselves, that includes the first type-soundness proof for F<: using definitional interpreters (that is, at least according to some, denotational semantics).
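For readers who haven't run into the term, a definitional interpreter gives the semantics of the object language as an ordinary evaluator written in the meta language. The toy below (a fuel-bounded, environment-passing interpreter for the untyped call-by-value lambda calculus, written in Haskell) only illustrates the style; the paper's actual development is in Coq and covers F<: and DOT.

    -- Terms use de Bruijn indices; values are closures over an environment.
    data Exp = Var Int | Lam Exp | App Exp Exp
    data Val = Clos Exp [Val]

    -- A step budget ("fuel") keeps evaluation total: Nothing means we ran out.
    eval :: Int -> [Val] -> Exp -> Maybe Val
    eval 0 _   _         = Nothing
    eval _ env (Var i)   = Just (env !! i)        -- assumes closed, well-indexed terms
    eval _ env (Lam b)   = Just (Clos b env)
    eval n env (App f a) = do
      Clos b env' <- eval (n - 1) env f
      v           <- eval (n - 1) env a
      eval (n - 1) (v : env') b

    main :: IO ()
    main = case eval 10 [] (App (Lam (Var 0)) (Lam (Var 0))) of
             Just (Clos _ _) -> putStrLn "evaluated to a closure"
             Nothing         -> putStrLn "ran out of fuel"

The fuel parameter is one standard way to keep such an interpreter total inside a proof assistant, which is part of what makes this style workable for mechanized soundness proofs.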

Understated Twitter announcement here.
