On the (Alleged) Value of Proof for Assurance

Ehud is about to be annoyed with me. :-) I am lately suffering from doubts about whether the value of proof is justified by its cost. Some questions, in no particular order:
There is no question in my mind that proof processes generate more robust code. Yet the general consensus seems to be that this robustness is much more an emergent consequence of rigorously understanding the problem than a result of the proof discharge. In this view, the primary benefit of proof (in the context of assurance) is largely to keep the specification "honest". If that is in fact the goal, is proof the best way to accomplish it? How much of it can be undertaken by type systems? In a private conversation, Gernot Heiser (NICTA, OK Labs) was recently asked how the use of proof impacted their QA costs. As I recall, his answer was that they run at between 1.5x and 2x the end-to-end cost of conventional development and testing, but achieve much higher confidence. My questions:
In a second private conversation, Eric Rudder observed that one of the costs of a proof-based methodology is a loss of the ability to rapidly adapt software to new requirements and new demands. It follows that proof is not always appropriate, and that a more continuous cost/time/benefit option space would be desirable. My own observation is that, in the end, systems need to be robust, and they include components that lie well outside our ability to prove them. In many cases, types can be usefully exploited where proof cannot, and there is a training cost to educating people in both domains. So, finally, my questions: once we step away from formal semantics and PL insights (which are certainly good things), what is the proper role of proof in real-world production? And what is the proper role of advanced type systems?

determining subsumption of regular languages

I recently came across the concept of regular expression types, such as in XDuce. The idea seems promising, but the current implementations all seem excessively restrictive (disallowing nontrivial patterns), so I was wondering about the feasibility of further development in this area. The bottleneck for regular types would definitely be determining subsumption of these types in an automated manner, but I can't seem to find much literature on the subject. Computationally, how hard is it to determine equivalence or subsumption of regular languages in general? For at least some languages (a+ <: a*) this sort of comparison seems plausible. If it isn't plausible in general, is there any way to foresee which languages are easy to compare and which are hard?
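For what it's worth, subsumption of regular languages is decidable: L1 ⊆ L2 iff L1 ∩ complement(L2) is empty, which can be checked by walking the product of two DFAs and looking for a reachable state pair that accepts in the first automaton but not the second (the hard part in practice is that determinizing an NFA or regex can blow up exponentially; the problem is PSPACE-complete when you start from regexes). A minimal sketch in Python, with hand-coded complete DFAs for a+ and a* (all names here are illustrative, not from any particular library):

```python
def dfa_subset(dfa1, dfa2, alphabet):
    """Decide L(dfa1) <= L(dfa2) by product construction.

    Each DFA is a triple (start, delta, accepting), where delta is a
    dict mapping (state, symbol) -> state and must be total over the
    alphabet. We search reachable state pairs for a counterexample:
    a pair accepting in dfa1 but rejecting in dfa2.
    """
    s1, d1, f1 = dfa1
    s2, d2, f2 = dfa2
    seen = {(s1, s2)}
    stack = [(s1, s2)]
    while stack:
        p, q = stack.pop()
        if p in f1 and q not in f2:
            return False  # some string is in L(dfa1) but not L(dfa2)
        for a in alphabet:
            pair = (d1[(p, a)], d2[(q, a)])
            if pair not in seen:
                seen.add(pair)
                stack.append(pair)
    return True  # no counterexample reachable

# a*: one state, accepting, loops on 'a'
a_star = (0, {(0, 'a'): 0}, {0})
# a+: start state 0 rejects (empty string), state 1 accepts
a_plus = (0, {(0, 'a'): 1, (1, 'a'): 1}, {1})

print(dfa_subset(a_plus, a_star, {'a'}))  # True:  a+ <: a*
print(dfa_subset(a_star, a_plus, {'a'}))  # False: epsilon is in a* but not a+
```

The product DFA has at most |Q1|·|Q2| states, so once both sides are deterministic the check is cheap; the asymmetry the poster asks about (easy vs. hard comparisons) largely comes down to how badly determinization blows up for the particular expressions involved.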