LtU Forum

bodyf*ck: an esoteric programming environment using camera input

A product of critical programming language design. Abstract:

bodyf*ck is a computer programming environment that translates bodily movement into computer programs. Each physical motion committed by the programmer/performer becomes an operation in the esoteric programming language brainf*ck. Theoretically, because bodyf*ck is Turing-complete, all computable operations are possible. Despite this, physical difficulty in implementing more complicated programs may prove to concede the theoretical limitations of the potentially computed output.

Expletives starred out because I'm a coward.

Julia has no dependent types

https://github.com/JuliaLang/julia/issues/6113

sorry if this has already been posted; i don't read here much and ltu was mentioned in the link above (but i can't find anything).

i want to ask some questions about my limited, practical, non-theoretical understanding of dependent types:

it seems to me that in julia i can define a datatype (something like a struct in c), that has an associated parametric type that contains information about the data. so, for example, i could put the length of an array in there. and then, as long as any code that constructs or modifies an instance of that datatype follows the contract implicit in the associated parametric type, the code will work correctly and i can dispatch on array length.

now, it seems to me that this differs from how i expected dependent types to work, in that there is no inspection of the actual data value on dispatch. when i dispatch on the array length i am using information in the parametric type associated with my datatype that i trust is correct. but there is, at the moment of dispatch on that type, no inspection. length(array) is not called.

so the correctness of my code depends on the correctness of functions that construct and modify the datatype, rather than on functions that inspect it.
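
to make this concrete, here is a minimal sketch (modern julia syntax, made-up names) of the pattern i mean: the length lives in a type parameter, the constructor is the only place the actual value is checked, and dispatch reads the parameter off the type tag without ever calling length:

struct FixedVec{N}
    data::Vector{Float64}
    # the constructor is the only place the value itself is inspected;
    # afterwards, code trusts the contract that length(data) == N
    function FixedVec{N}(v::Vector{Float64}) where {N}
        length(v) == N || error("length mismatch")
        return new{N}(v)
    end
end

# dispatch on the type parameter: length(data) is never called here
describe(::FixedVec{3}) = "a 3-vector"
describe(::FixedVec{N}) where {N} = "a vector of length $N"

describe(FixedVec{3}([1.0, 2.0, 3.0]))   # "a 3-vector"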

so my questions are:

1 - do other dependent type systems inspect the data?

2 - is this just an implementation detail, or is it connected with what was argued about in the link?

3 - please can someone explain the link like i am five? if the above is not connected with the argument, then i don't get it... in particular, julia often (usually?) compiles code specific to particular types. it doesn't do type unification (HM), afaik, but it does propagate types from annotations, and all concrete types are final (leaf), so the compiled code is "static" in some sense.

thanks,
andrew

[edit: i don't know in detail how julia works, but my impression is that although it's called a "jit" it's basically a static compiler that is triggered only when dispatch on a new combination of function and types is encountered. wherever possible it extrapolates type information to compile static code with fixed types. this is why it is fast.]
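
a small illustration of that specialization behavior (just standard julia, nothing specific to my question):

f(x) = x + x

f(1)     # first call with an Int64 argument: compiles a specialization for Int64
f(1.0)   # first call with a Float64: triggers a second, separate compilation

# @code_llvm f(1) shows the fixed-type native code generated for the Int64 case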

Limitations of FRP?

I'm doing some research on programming paradigms that deal with time, like functional reactive programming, and I was wondering if anyone has explored how far one can go with FRP? From my own personal experience when doing Superglue (an OO reactive PL based on FRP-like signals), I had to abandon FRP when working on incremental compilation and language-aware editing: although FRP should be suitable for this problem (lots of consistency to maintain between code, AST, type system, and UI), it turned out that the problem required adding things to collections in distributed locations (e.g. in parse tree computations), whereas FRP seems to require everything for some state to be specified centrally.

The best source I can find for pushing FRP for more complex programming domains is Antony Courtney's thesis. This still seems to be where things are at.

The last time we discussed this topic was in 2007. Are there any new developments in FRP expressiveness?

The Evolution of CS Papers

A blog post by Crista Lopes discussing the evolution of CS papers away from positions into the realm of science; excerpt:

Note this (Dijkstra's GOTO considered harmful) was not a letter to the editor, it was a full-blown technical, peer-reviewed paper published in a Mathematical journal. Seen in the light of current-day peer reviewing practices in CS, even considering the fact that the paper was proposing a clear solution for a clear problem, this paper would likely be shot down. First, the problem for which the solution was proposed was clear but not well motivated: how important was it for programs [of the time] to call routines recursively? — maybe it wasn’t important at all! If it was important, what were other people doing about it? Any paper with only 1 reference in the bibliography goes immediately to the reject pile. Second, given that the concept had already been proposed (in writing) by H. Waldburger, the author of this paper would at least be requested to express his ideas in terms of how they were different from those described in H. Waldburger’s paper rather than spending most of the paper describing how to do procedure calls using stacks. Finally, the work is borderline an “engineering issue” rather than a scientific one.

These are complaints often seen in today’s CS peer reviews.

Haxe 3.1 is here

Haxe 3.1 is here. It is a language that is sorta rooted in the bog-standard mainstream (it came out of Action/Ecma scripts), but has gradually (especially in the move from 2.0 to 3.0+) been adding some of its own ideas. I've used 2.x for making cross-platform games. I sorta love/hate it, but I'd certainly be a lot more sad if it stopped existing, having known it (not in the biblical sense or anything). There's probably too many random things to go into any detail here, but I'll try to summarize it as: a cross-platform (compiles down to other languages) statically and dynamically typed (including structural typing) language and libraries, with some nifty typing ideas/constructs and syntax of its own. Oh, and: macros. (But it has seemingly weird lapses, like I dunno that it will ever really support The Curiously Recurring Template Pattern, tho, which I find personally sad.)

Query Languages for doing NLP

I was wondering what kind of query languages are out there for doing NLP. This NLP lecturer mentions using part-of-speech tagging to create regular expressions: https://www.youtube.com/watch?v=LivXkL2DO_w&list=PLuBJa2RktQX-N0flCReMywxy1E-tsF0ZC#t=547
I think that would form a good foundation for an NLP query language.

In addition to being able to query bodies of text by part of speech, I would like some special functions for searching for synonyms and stemming. For example, "synonyms('fun') Noun" would match "enjoyable game" and "amusing clown", and "sharesStem('swimming')" would match "swimmer", "swims" and "swim". I would be interested in hearing more ideas for NLP query languages.
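
As a toy illustration of how such queries might evaluate, here is a hand-rolled sketch in Julia with a one-entry synonym table and a fake suffix-stripping stemmer (stand-ins for real lexical resources, not a real NLP library):

# toy input: (word, part-of-speech) pairs, as a POS tagger might produce
tagged = [("enjoyable", "ADJ"), ("game", "NOUN"), ("amusing", "ADJ"), ("clown", "NOUN")]

# stand-ins for real lexical resources (e.g. WordNet, a Porter stemmer)
synonyms(w) = get(Dict("fun" => ["fun", "enjoyable", "amusing"]), w, [w])
stem(w) = replace(w, r"(ming|mer|s)$" => "")   # fake stemmer: swims/swimmer/swimming -> swim

# query "synonyms('fun') Noun": a synonym of "fun" immediately followed by a noun
matches(tagged, w) = [(a, b) for ((a, ta), (b, tb)) in zip(tagged, tagged[2:end])
                             if a in synonyms(w) && tb == "NOUN"]

matches(tagged, "fun")              # [("enjoyable", "game"), ("amusing", "clown")]

# query "sharesStem('swimming')" as a predicate on single words
shares_stem(a, b) = stem(a) == stem(b)
shares_stem("swimming", "swims")    # true with this fake stemmer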

Syntax for reification of types

Hi all,

I've got a little language with a syntax resembling C#. It is statically typed, and performs type inference. Functions are polymorphic, and don't require the type of the arguments to be explicit. I am trying to avoid dynamic typing as much as possible. Functions are reified during function invocation based on the inferred type of the arguments. In order to pass a function as an argument, or store it in a variable, I need a convenient way of performing the reification.

Here is the syntax that I am considering:


var fxn1 = (int, int) -> (x, y) => x * y;
var fxn2 = (int, int) -> Multiply;
var fxn3 = (int[3][], float[3][]) -> ComputeNormals;

As in C#, the "=>" operator represents the lambda operator, but here the symbol "->" now represents the function reification operator. Does this make sense? Any other suggestions or examples of how it is done in other languages?

Thanks in advance!

Resilient Languages

This week there were two items in the tech blogs that were both about programming languages and both related: the 'Radiation resistant Quine' and the Apple SSL bug. Both deal with a small number of characters and the resilience of the program and language to errors.

It raises a question - can we build resilience into our programming languages?

I think we can build more resilient programs; the Ruby example was designed to be resilient, but as far as I know resilience is not a feature of Ruby, any more than it is of C.

Any thoughts?

The year that was: 1965.

I have devoted a lot of time trying to understand why a book like "Algebraic Structure Theory of Sequential Machines" is nearly unknown. The book is written by two great Computer Scientists, Hartmanis and Stearns, one of whom, Hartmanis, started the first Computer Science program at an American university. Surely there is a story here.

The situation seems to develop out of the cultural chaos of the 1960s. The HandS book seems to be collateral damage. In 1965 students are rioting on campuses all across America. Professors are being fired if they show any sympathy for this behavior at all. Loosely allied with all this is the academic theory of information and Cybernetics. Now it just so happens that the HandS book is probably one of the best books on information or cybernetics ever written. It could be that the politically astute students and professors know enough to stay away from anything that smells like Cybernetics. It is just a theory, mind you. It has been nearly 50 years now. My how time flies. I think it is time to take a fresh look at this book.

Inconsistency Robustness in Foundations

Inconsistency Robustness requires repairing mathematical foundations in the face of contradictions that have arisen over and over again. In the Inconsistency Robustness paradigm, deriving contradictions is a progressive development and not a “game stopper.” Contradictions can be helpful instead of being something to be “swept under the rug” by denying their existence, which has been repeatedly attempted by the Establishment (beginning with some Pythagoreans). Such denial has delayed mathematical development. This article reports how Inconsistency Robustness has recently influenced the foundations of mathematics for Computer Science since the first International Symposium in 2011, continuing a tradition of developing the sociological basis for foundations.

That mathematics is thought to be consistent by an overwhelming consensus of working professional mathematicians justifies the use of Proof by Contradiction. In addition, Proof by Contradiction can be used to infer the consistency of mathematics by the following simple proof:

The self-proof is a proof by contradiction.
In order to obtain a contradiction,
assume that mathematics is inconsistent.
Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ.
Consequently, both Φ and ¬Φ are theorems that
can be used in the proof to produce an immediate contradiction.
Therefore mathematics is consistent.
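
The same argument written compactly (a restatement of the steps above, not the Direct Logic formalization itself):

% Compact restatement of the self-proof above (labels are editorial).
\begin{array}{ll}
\text{Assume, for contradiction, that mathematics is inconsistent:} & \exists \Phi.\ (\vdash \Phi) \wedge (\vdash \neg\Phi) \\
\text{Both are theorems, so their conjunction is provable:} & \vdash \Phi \wedge \neg\Phi \\
\text{This is an immediate contradiction; discharging the assumption:} & \text{mathematics is consistent}
\end{array}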

Please note the following points: The above argument mathematically proves the theorem that mathematics is consistent and that it is not a premise of the theorem that mathematics is consistent. However, the proof does not intuitively increase our confidence in the consistency of mathematics because it does not rule out that there might be other proofs in mathematics that result in contradictions.

The above theorem means that the assumption of consistency is deeply embedded in the architecture of classical mathematics. The above proof of consistency can be formalized in Direct Logic [Hewitt 2010] (a powerful inference system in which computer systems can reason about their own inferences). Having a powerful system like Direct Logic is important in computer science because computers must be able to carry out all inferences (including inferences about their own inference processes) without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent because of contradiction with the result of Gödel et al. that if mathematics is consistent, then it cannot infer its own consistency. The resolution is to provide a proper grammar of mathematical sentences so that a fixed-point operator cannot be used to construct the kind of “self-referential” proposition used by Gödel in his results by exploiting an untyped grammar of sentences. Self-referential propositions of the kind used by Gödel do not seem to have any important practical applications. (There is a very weak theory called Provability Logic that has been used for self-referential propositions coded as integers, but it is not strong enough for the purposes of computer science.) Direct Logic makes use of a typed grammar that prevents the construction of self-referential sentences using unwanted fixed points. The bottom line is that self-referential propositions are “monsters” (in the sense of Lakatos) that are ruled out by a proper grammar of sentences for the mathematical foundations of computer science. It is important to note that not having self-referential propositions does not place restrictions on recursion in computation, e.g., the Actor Model, untyped lambda calculus, etc.

A mathematical theory is an extension of mathematics whose proofs are computationally enumerable. For example, group theory is obtained by adding the axioms of groups to Direct Logic along with the requirement that proofs are computationally enumerable. If a mathematical theory T is consistent, then it is inferentially undecidable, i.e. there is a proposition Ψ such that ⊬TΨ and ⊬T¬Ψ (which is sometimes called “incompleteness” in classical terminology), because provability in T is computationally undecidable [Church 1936, Turing 1936].

Information Invariance is a fundamental technical goal of logic consisting of the following:
1. Soundness of inference: information is not increased by inference
2. Completeness of inference: all information that necessarily holds can be inferred
Direct Logic aims to achieve information invariance for the mathematical foundations of Computer Science. But that a mathematical theory is inferentially undecidable (“incomplete” in classical terminology) with respect to Ψ (above) does not mean incompleteness with respect to the information that can be inferred because (by construction) it is provable in mathematics that ⊬TΨ and ⊬T¬Ψ.

This article has become highly controversial and the Establishment has censored (without substantive justification) its publication in electronic archives that are intended to record priority dates for scientific results. Consequently, this article is itself another episode in the ongoing saga of inconsistency robustness in foundations.

The article for the above abstract can be found here: Inconsistency Robustness in Foundations
