Type Theory

LNGen
There are really three stories here:
From the U. Penn folks who brought us the Coq tutorial at POPL '08.

By Paul Snively at 2009-05-07 20:28 | Implementation | Semantics | Software Engineering | Type Theory
Achieving Security Despite Compromise Using Zero-Knowledge
This is the follow-up to an earlier story: the prior work did not account for compromised participants, while this work does. I continue to be excited about the prospect of that previous work being applied to the type system described in this story, possibly resulting in an awesome new language for developing secure software.

By Paul Snively at 2009-05-07 20:20 | Implementation | Semantics | Type Theory
An Interview with the Diamondback Ruby Team

On Ruby has an interview with two members of the Diamondback Ruby team.
The interview covers DRuby's goals as well as a bit about its OCaml-based infrastructure. More technical information about DRuby's type system can be found in Static Type Inference for Ruby.

A Generic Type-and-Effect System
Things really seem to be heating up in both the type-and-effects world and the mechanized metatheory world (but then, long-time LtU readers will expect me to claim that, won't they?). Of course, I also have my usual question as to what aspects of Twelf these researchers found helpful to their constructions vs. those of Coq.

By Paul Snively at 2009-04-17 17:39 | Implementation | Type Theory
Practical Implementation of a Dependently Typed Functional Programming Language

This blog post about Idris led me to Edwin C. Brady's 2005 PhD thesis, Practical Implementation of a Dependently Typed Functional Programming Language.
What's the current state of the art in optimization of dependently typed languages?

Capabilities for External Uniqueness

Philipp Haller and Martin Odersky have submitted Capabilities for External Uniqueness to OOPSLA'09.
A prototype plugin for the Scala compiler can be found on the same page.

By James Iry at 2009-04-09 16:26 | Object-Functional | Parallel/Distributed | Type Theory
Detecting Data Race and Atomicity Violation via Typestate-Guided Static Analysis
Detecting Data Race and Atomicity Violation via Typestate-Guided Static Analysis. Yue Yang, Anna Gringauze, Dinghao Wu, Henning Rohde. Aug. 2008
The correctness of typestate properties in a multithreaded program often depends on the assumption of certain concurrency invariants. However, standard typestate analysis and concurrency analysis are disjoint in that the former is unable to understand threading effects and the latter does not take typestate properties into consideration. We combine these two previously separate approaches and develop a novel typestate-driven concurrency analysis for detecting race conditions and atomicity violations. Our analysis is based on a reformulation of typestate systems in which state transitions of a shared variable are controlled by the locking state of that variable. By combining typestate checking with lockset analysis, we can selectively transfer the typestate to a transient state to simulate the thread interference effect, thus uncovering a new class of typestate errors directly related to low-level or high-level data races. Such a concurrency bug is more likely to be harmful, compared with those found by existing concurrency checkers, because there exists concrete evidence that it may eventually lead to a typestate error as well. We have implemented a race and atomicity checker for C/C++ programs by extending a NULL pointer dereference analysis. To support large legacy code, our approach does not require a priori annotations; instead, it automatically infers the lock/data guardianship relation and variable correlations. We have applied the toolset to check a future version of the Windows operating system, finding many concurrency errors that cannot be discovered by previous tools.

Typestates extend ordinary types by recording the state of objects, making it possible to catch safety violations that stem from operations being invoked on objects in the wrong state.

By Ehud Lamm at 2009-04-07 05:56 | Parallel/Distributed | Software Engineering | Type Theory
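To make the bug class concrete, here is a toy check-then-act example, written in Haskell purely for illustration (the paper's checker analyzes C/C++, and everything below is my own sketch rather than anything from the paper): the shared value's "typestate" (Just versus Nothing, standing in for non-NULL versus NULL) can be changed by another thread between the check and the use, because no lock spans the two steps.

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (when)
import Data.IORef (newIORef, readIORef, writeIORef)
import Data.Maybe (isJust, fromJust)

main :: IO ()
main = do
  shared <- newIORef (Just (42 :: Int))

  -- Writer thread: concurrently moves the shared value into its "NULL" state.
  _ <- forkIO (writeIORef shared Nothing)

  -- Reader: check, then act, with no lock spanning the two steps.
  first <- readIORef shared
  when (isJust first) $ do
    threadDelay 1000            -- widen the race window for the demo
    second <- readIORef shared  -- re-read: the writer may have run by now
    print (fromJust second)     -- may crash: a typestate error exposed by the race

Roughly, an analysis in the style described above would move the value into a transient state at the unprotected read and report that the subsequent use may then fail.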
Ensuring Correct-by-Construction Resource Usage by using Full-Spectrum Dependent Types
More ammunition for the importance of embedded domain-specific languages, dependent types, and correctness-by-construction.

By Paul Snively at 2009-03-04 17:17 | DSL | Functional | Implementation | Type Theory
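For a feel of the correct-by-construction style, here is a rough approximation in Haskell with GADTs; it is only a sketch I put together for illustration (the paper works in a full-spectrum dependently typed setting, and the FileProg names below are mine, not the paper's). The resource protocol is part of each command's type, so a program that reads a closed handle, or forgets to close one, simply fails to type check.

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
module ResourceSketch where

data HState = Open | Closed

-- A tiny embedded DSL for file-like programs, indexed by the handle state
-- before and after each command.
data FileProg (pre :: HState) (post :: HState) a where
  OpenF  :: FilePath -> FileProg 'Closed 'Open ()
  ReadF  :: FileProg 'Open 'Open String
  CloseF :: FileProg 'Open 'Closed ()
  Pure   :: a -> FileProg s s a
  Bind   :: FileProg s t a -> (a -> FileProg t u b) -> FileProg s u b

-- Well-typed: open, read, close.
ok :: FileProg 'Closed 'Closed String
ok = OpenF "data.txt" `Bind` \_ ->
     ReadF            `Bind` \line ->
     CloseF           `Bind` \_ ->
     Pure line

-- Rejected by the type checker (uncomment to see the error):
-- bad :: FileProg 'Closed 'Closed String
-- bad = OpenF "data.txt" `Bind` \_ -> Pure "oops"   -- handle left open

The Bind constructor threads the handle state through, so the type of a whole program records the state it expects at the start and guarantees at the end.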
Dana

Luke Palmer and Nick Szabo can shoot me for this if they want, but I think this is warranted, and I want to connect a couple of dots as well. Luke is one of a number of computer scientists, with Conal Elliott probably being the best known, who have devoted quite a bit of attention to Functional Reactive Programming, or FRP. FRP has been discussed on LtU off and on over the years, but, unusually for LtU IMHO, does not seem to have gotten the traction that some other similarly abstruse subjects have.

In parallel, LtU has had a couple of interesting threads about Second Life's economy, smart contracts, usage control, denial of service, technical vs. legal remedies, and the like. I would particularly like to call attention to this post by Nick Szabo, in which he discusses a contract language that he designed:
In recent private correspondence, Nick commented that he'd determined that he was reinventing synchronous programming à la Esterel, and mentioned "Reactive" programming. Ding! To make a potentially long entry somewhat shorter, Luke is working on a new language, Dana, which appears to have grown out of some frustration with existing FRP systems, including Conal Elliott's Reactive, currently perhaps the lynchpin of FRP research. Luke's motivating kickoff post for the Dana project can be found here, and there are several follow-up posts, including links to experimental source code repositories. Of particularly motivating interest, IMHO, is this post, where Luke discusses FRP's interaction with garbage collection succinctly but nevertheless in some depth. Luke's most recent post makes explicit the connection from Dana, which Luke has determined needs to have a dependently-typed core, to Illative Combinatory Logic, and offers a ~100-line type checker for the core.

I find this very exciting: the project of being able to express computation centered on time, in the sense of Nick's contract language, in easy and safe ways strikes me as extremely compelling. I've intuited for some time now that FRP represents a real breakthrough in moving us past the Von Neumann runtime paradigm in fundamental ways, and between Conal Elliott's and Luke's work (and no doubt that of others), it seems to me that my sense of this may be borne out, with Nick's contract language, or something like it, as an initial application realm. So I wanted to call attention to Luke's work, and by extension recapitulate Conal's and Nick's, both for the PLT aspects that Luke's work clearly represents, but also as a challenge to the community to assist in the realization of Nick's design efforts, if at all possible.

By Paul Snively at 2009-02-18 21:55 | Functional | General | Implementation | Lambda Calculus | Semantics | Theory | Type Theory
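For readers who haven't followed the FRP threads, here is a minimal denotational sketch of the two notions the post leans on, written by me for illustration only (it is not Conal Elliott's Reactive library and not Dana, and the contract-style example with its 5% growth rate is made up): a Behavior is a value that varies over continuous time, and an Event is a time-ordered stream of occurrences.

newtype Time       = Time Double deriving (Eq, Ord, Show)
newtype Behavior a = Behavior (Time -> a)     -- a value at every point in time
newtype Event a    = Event [(Time, a)]        -- discrete, time-stamped occurrences

-- Sample a behavior at a point in time.
at :: Behavior a -> Time -> a
at (Behavior f) = f

-- A contract-like example in the spirit of Nick's language:
-- the amount owed as a pure function of time (5% growth per time unit).
owed :: Behavior Double
owed = Behavior (\(Time t) -> 100 * (1.05 ** t))

main :: IO ()
main = print (owed `at` Time 2)   -- 110.25...

The appeal, as the post suggests, is that time-varying values become ordinary pure values that a type system can talk about directly.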
Parameterized Notions of Computation

Parameterized Notions of Computation, Robert Atkey, JFP 2008.
Once you've programmed with monads for a while, it's pretty common to start defining parameterized families of monads -- e.g., we might define a family of type constructors for IO, in which the program type additionally tracks which files the computation reads from and writes to. This is a very convenient programming pattern, but the theory of it is honestly a little sketchy: on what basis do we conclude that the indices we define actually track what we intend them to? And furthermore, why can we believe that (say) the monadic equational laws still apply?

That's the question Atkey lays out a nice solution to. He gives a nice categorical semantics for indexed, effectful computations, and then cooks up lambda calculi whose equational theory corresponds to the equations his semantics justifies. The application to delimited continuations is quite nice, and the type theories can also give a little insight into the basics of how stuff like Hoare Type Theory works (which uses parameterized monads, with a very sophisticated language of parameters).

On a slightly tangential note, this also raises in my mind a methodological point. Over the last n years, we've seen many people identify certain type constructors whose usage is pervasive and greatly simplified with some syntactic extensions -- monads, comonads, applicative functors, arrows, and so on. It's implausible that we have exhausted the list of interesting types, and so together they constitute a good argument for some kind of language extension mechanism, such as macros. However, all these examples also raise the bar for when a macro is a good idea, because what makes them compelling is precisely that the right syntax yields an interesting and pretty equational theory in the extended language.

By neelk at 2009-02-11 21:40 | Category Theory | Lambda Calculus | Semantics | Type Theory
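For concreteness, here is a small Haskell sketch of the parameterized-monad interface the paper studies, with the usual index-changing state monad as an instance. The class and the demo are my own illustration (Atkey's development is categorical, and the names preturn and pbind are mine), so take it as a sketch of the pattern rather than the paper's definitions.

{-# LANGUAGE KindSignatures #-}

-- A parameterized monad: the two extra indices record the "state of the
-- world" before and after a computation; preturn stays at one index, and
-- pbind composes indices end to end.
class PMonad (m :: * -> * -> * -> *) where
  preturn :: a -> m i i a
  pbind   :: m i j a -> (a -> m j k b) -> m i k b

-- The standard example: a state monad whose state may change *type*,
-- so the indices track the type of the state before and after.
newtype IState i j a = IState { runIState :: i -> (a, j) }

instance PMonad IState where
  preturn x = IState (\s -> (x, s))
  pbind m f = IState (\s -> let (a, s') = runIState m s in runIState (f a) s')

-- A computation that starts with an Int state and ends with a String state.
demo :: IState Int String Int
demo =
  IState (\n -> (n, ()))        `pbind` \n ->   -- remember n, drop the state to ()
  IState (\() -> ((), show n))  `pbind` \_ ->   -- install a String state
  preturn (n + 1)

main :: IO ()
main = print (runIState demo 41)   -- (42,"41")

The payoff Atkey is after is that the equational laws for preturn and pbind can be justified once, semantically, rather than re-argued for each indexed family someone defines.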