General

CERT C Secure Coding Standard

From SC-L:

We would like to invite the community to review and comment on the current version of the CERT C Secure Coding Standard available online at www.securecoding.cert.org before Version 1.0 is published.

Hey, maybe this is also relevant for the curriculum thread.

When Are Two Algorithms the Same?

When Are Two Algorithms the Same? Andreas Blass, Nachum Dershowitz, Yuri Gurevich. February 2008

People usually regard algorithms as more abstract than the programs that implement them. The natural way to formalize this idea is that algorithms are equivalence classes of programs with respect to a suitable equivalence relation. We argue that no such equivalence relation exists.

A bit more philosophical than usual, but the issue is quite relevant to discussions in the field.

One can, of course, stipulate whatever equivalence relation is useful for a given purpose (e.g., equivalence up to local transformations), but the notion of a single, universally applicable relation is indeed problematic.
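
To see concretely why the choice of relation matters, here is a toy example in OCaml (my own, not one from the paper): two definitions of factorial that agree extensionally on every input, yet a relation that observes stack usage or the order of multiplications could reasonably count them as different algorithms.

(* Two programs that compute n!.  Whether they implement "the same
   algorithm" depends entirely on which equivalence relation you adopt. *)

(* Direct recursion: stack depth linear in n. *)
let rec fact n =
  if n <= 1 then 1 else n * fact (n - 1)

(* Tail recursion with an accumulator: constant stack depth. *)
let fact_acc n =
  let rec go acc n = if n <= 1 then acc else go (acc * n) (n - 1) in
  go 1 n

(* Extensionally the two agree, but an observer of evaluation steps
   or space usage can tell them apart. *)
let () = assert (List.for_all (fun n -> fact n = fact_acc n) [0; 1; 5; 10; 12])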

Programmers At Work

Via Scott Rosenberg (whose book Dreaming in Code I think we mentioned here before), I learn that Susan Lammers is making the interviews from her 1984 book Programmers At Work available on the Web.

Here is how she describes the goals of her new site:

Many people have urged me over the years to do a second Programmers At Work (PAW) with a new generation of programmers and I’ve sketched out the project, made lists of new folks to feature, done inquiries, thought about going back to talk to the guys in the original edition, and other variations. Now with this web site, I will make the original interviews available online, and perhaps it will become the seed for a more “thoroughly modern” approach to the PAW series. What I’m hoping we can kindle on this site is an ongoing exploration and dynamic conversation with the "connected" community of programmers on the web about the creative process in programming.

Who better than the LtU community to contribute to such a conversation?

The first PAW interview posted to the site is the interview with Charles Simonyi, a man whose views of the future of programming, and programming languages, are mentioned here often (though not always with great enthusiasm).

Arc is released

Make of it what you will, but Arc is now officially released.

This part of Graham's announcement is a gem:

I worry about releasing it, because I don't want there to be forces pushing the language to stop changing. Once you release something and people start to build stuff on top of it, you start to feel you shouldn't change things. So we're giving notice in advance that we're going to keep acting as if we were the only users. We'll change stuff without thinking about what it might break, and we won't even keep track of the changes.

I realize that sounds harsh, but there's a lot at stake. I went to a talk last summer by Guido van Rossum about Python, and he seemed to have spent most of the preceding year switching from one representation of characters to another. I never want to blow a year dealing with characters. Why did Guido have to? Because he had to think about compatibility. But though it seems benevolent to worry about breaking existing code, ultimately there's a cost: it means you spend a year dealing with character sets instead of making the language more powerful.

This sure made me smile...

Predictions for 2008

So, what are your predictions for 2008? Naturally, we are only interested in predictions related to programming languages...

Three types of predictions are thus in order: (1) predictions about PLT research (direction, fads, major results); (2) predictions about programming languages (whether about specific languages or about families, etc.); and (3) predictions about industrial use of languages/language-inspired techniques (adoption, popularity).

OCaml Light: A Formal Semantics For a Substantial Subset of the Objective Caml Language

OCaml Light: a formal semantics for a substantial subset of the Objective Caml language.

OCaml light is a formal semantics for a substantial subset of the Objective Caml language. It is written in Ott, and it comprises a small-step operational semantics and a syntactic, non-algorithmic type system. A type soundness theorem has been proved and mechanized using the HOL-4 proof assistant, thereby ensuring that the proof is free from errors. To ensure that the operational semantics accurately models Objective Caml, an executable version of the semantics has been created (and proved equivalent in HOL to the original, relational version) and tested on a number of small test cases.

Note that while we have tried to make the semantics accurate, we are not part of the OCaml development team - this is in no sense a normative specification of the implemented language.

From a team including Peter Sewell (Acute, HashCaml, Ott).

I continue to believe that things are heating up nicely in mechanized metatheory, which, in the multicore/multiprocessor world in which we now live, is extremely good news.
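
For readers who haven't seen one written down, a small-step operational semantics becomes quite concrete once it is rendered as an executable one-step reduction function. The following is only my own toy sketch of the style, for a minimal call-by-value lambda calculus in OCaml; it is not an excerpt from the OCaml Light development, which is written in Ott and mechanized in HOL-4.

(* A toy call-by-value lambda calculus with a one-step reducer, in the
   spirit of an executable small-step semantics.  Illustrative only. *)

type term =
  | Var of string
  | Lam of string * term
  | App of term * term

let is_value = function Lam _ -> true | _ -> false

(* Naive substitution; adequate for closed terms with distinct binders. *)
let rec subst x v = function
  | Var y -> if x = y then v else Var y
  | Lam (y, b) -> if x = y then Lam (y, b) else Lam (y, subst x v b)
  | App (t1, t2) -> App (subst x v t1, subst x v t2)

(* One small step, or None if the term is a value or stuck. *)
let rec step = function
  | App (Lam (x, body), v) when is_value v -> Some (subst x v body)  (* beta *)
  | App (t1, t2) when not (is_value t1) ->
      (match step t1 with Some t1' -> Some (App (t1', t2)) | None -> None)
  | App (v1, t2) ->
      (match step t2 with Some t2' -> Some (App (v1, t2')) | None -> None)
  | _ -> None

(* (fun x -> x) (fun y -> y) beta-reduces in one step. *)
let _ = step (App (Lam ("x", Var "x"), Lam ("y", Var "y")))

Type soundness for a semantics in this style then says that a well-typed term is never stuck: either it is a value, or step yields another well-typed term.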

Online Learning of Relaxed CCG Grammars for Parsing to Logical Form

Online Learning of Relaxed CCG Grammars for Parsing to Logical Form, Luke S. Zettlemoyer and Michael Collins. Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2007.

We consider the problem of learning to parse sentences to lambda-calculus representations of their underlying semantics and present an algorithm that learns a weighted combinatory categorial grammar (CCG). A key idea is to introduce non-standard CCG combinators that relax certain parts of the grammar—for example allowing flexible word order, or insertion of lexical items—with learned costs. We also present a new, online algorithm for inducing a weighted CCG. Results for the approach on ATIS data show 86% F-measure in recovering fully correct semantic analyses and 95.9% F-measure by a partial-match criterion, a more than 5% improvement over the 90.3% partial-match figure reported by He and Young (2006).

This paper isn't exactly PL, though Ehud has been okay with the odd foray into computational linguistics before. I thought it was interesting to see machine learning work make use of a typed formalism like categorial grammars to handle long-range dependencies, and it leaves me wondering if it's possible to set these kinds of techniques loose on program analysis problems.

One neat thing about the CCG formalism is that you have parsing, which is described more or less as a typechecking problem, and separately you have the semantic constraints, which are basically lambda-calculus terms that build up a term in first-order logic. So I can imagine doing something like writing down how you're supposed to use a library as semantic constraints that add up to a Prolog program, and then at compile time the compiler walks the typing derivation to construct the program, and then runs it to figure out if you're doing something dumb (according to the library designers). I don't know if that actually makes sense, but this work certainly prompted me to think.
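
To make that two-level picture a bit more concrete, here is a deliberately tiny sketch in OCaml (my own illustration, with invented names; it is not the weighted CCG machinery of the paper): a lexical entry pairs a syntactic category with a semantic lambda-term, and forward application both checks the categories, much like function application in a type system, and applies the meanings to build up a logical form.

(* A toy CCG fragment: categories act like types, and each lexical
   entry carries a semantic term.  Illustrative only. *)

type cat =
  | NP                   (* noun phrase *)
  | S                    (* sentence *)
  | Fwd of cat * cat     (* X/Y: wants a Y on its right, yields an X *)

(* First-order-logic-ish semantic terms, with HOAS-style lambdas. *)
type sem =
  | Const of string
  | Pred of string * sem list
  | Lam of (sem -> sem)

let apply f x = match f with Lam g -> g x | _ -> failwith "not a function"

type entry = { cat : cat; sem : sem }

(* Forward application:  X/Y  Y  =>  X , combining the meanings. *)
let fwd_app f a =
  match f.cat with
  | Fwd (res, arg) when arg = a.cat ->
      Some { cat = res; sem = apply f.sem a.sem }
  | _ -> None

(* "likes Dana", with the verb simplified to category S/NP. *)
let likes = { cat = Fwd (S, NP);
              sem = Lam (fun x -> Pred ("likes", [Const "speaker"; x])) }
let dana  = { cat = NP; sem = Const "dana" }
let _ = fwd_app likes dana   (* Some {cat = S; sem = likes(speaker, dana)} *)

Roughly speaking, the paper's relaxed combinators amount to adding further combination rules of this shape, each with a cost, and then learning which ones to prefer.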

Gödel, Nagel, minds and machines

Solomon Feferman. Gödel, Nagel, minds and machines. Ernest Nagel Lecture, Columbia University, Sept. 27, 2007.

Just fifty years ago, Ernest Nagel and Kurt Goedel became involved in a serious imbroglio about the possible inclusion of Goedel’s original work on incompleteness in the book, Goedel’s Proof, then being written by Nagel with James R. Newman. What led to the conflict were some unprecedented demands that Goedel made over the use of his material and his involvement in the contents of the book - demands that resulted in an explosive reaction on Nagel’s part. In the end the proposal came to naught. But the story is of interest because of what was basically at issue, namely their provocative related but contrasting views on the possible significance of Goedel’s theorems for minds vs. machines in the development of mathematics.

This is not directly PLT related, and more philosophical than what we usually discuss on LtU, but I think it will be of interest to some members of the community.

While the historical details are interesting, I am not sure I agree with the analysis. It would be interesting to hear what others make of this.

To make this item slightly more relevant to LtU, let me point out that both the LC and category theory are mentioned (although they are really discussed only in the references).

Privacy and Contextual Integrity: Framework and Applications

Privacy and Contextual Integrity: Framework and Applications, A. Barth, A. Datta, J.C. Mitchell, and H. Nissenbaum. Proceedings of the IEEE Symposium on Security and Privacy, May 2006.

Contextual integrity is a conceptual framework for understanding privacy expectations and their implications developed in the literature on law, public policy, and political philosophy. We formalize some aspects of contextual integrity in a logical framework for expressing and reasoning about norms of transmission of personal information. In comparison with access control and privacy policy frameworks such as RBAC, EPAL, and P3P, these norms focus on who personal information is about, how it is transmitted, and past and future actions by both the subject and the users of the information. Norms can be positive or negative depending on whether they refer to actions that are allowed or disallowed. Our model is expressive enough to capture naturally many notions of privacy found in legislation, including those found in HIPAA, COPPA, and GLBA.

A number of important problems regarding compliance with privacy norms, future requirements associated with specific actions, and relations between policies and legal standards reduce to standard decision procedures for temporal logic.

Contextual integrity is a part of a philosophical theory of privacy developed by the philosopher Helen Nissenbaum, and it's very neat to see it being applied to develop machine-checkable access-control formalisms.
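
To give a rough flavor of what a machine-checkable transmission norm could look like, here is a loose OCaml sketch (all type and field names below are invented for illustration; the paper actually works in a temporal logic and also covers obligations about past and future actions, which this sketch ignores): positive norms allow a flow of an attribute between roles, negative norms forbid one, and compliance is checked per transmission.

(* A loose sketch of contextual-integrity-style transmission norms.
   Names and structure are invented; this is not the paper's logic. *)

type role = Physician | Patient | Insurer
type attribute = Diagnosis | Address

type transmission = {
  sender    : role;
  receiver  : role;
  subject   : role;        (* who the information is about *)
  attribute : attribute;
}

(* Positive norms allow a flow; negative norms forbid one. *)
type norm =
  | Allow  of (transmission -> bool)
  | Forbid of (transmission -> bool)

(* A transmission complies if no negative norm forbids it and at
   least one positive norm allows it. *)
let complies norms t =
  let forbidden = List.exists (function Forbid p -> p t | _ -> false) norms in
  let allowed   = List.exists (function Allow  p -> p t | _ -> false) norms in
  not forbidden && allowed

(* e.g. a physician may send a patient's diagnosis to that patient,
   but never a diagnosis to an insurer. *)
let norms = [
  Allow  (fun t -> t.sender = Physician && t.receiver = t.subject
                   && t.attribute = Diagnosis);
  Forbid (fun t -> t.receiver = Insurer && t.attribute = Diagnosis);
]

let _ = complies norms
          { sender = Physician; receiver = Patient;
            subject = Patient; attribute = Diagnosis }   (* true *)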
