Splitting witnesses up


I have an idea for an academic paper, and am seeking help for prior art and research.

My idea concerns proof witnesses: what to do when a group of witnesses contradict one another (a software bug or logical inconsistency), and how to repair the proof. Without going into the details of the algorithm, the metaphor is to split witnesses up, using a mathematical plane to divide a group of witnesses into two camps: definitely lying (witnesses whose accounts have been proven contradictory or false) and somewhat certain (witnesses with similar testimonies who have not yet been proven contradictory or false, but might be once asked further questions, or may be proven "good enough" given the new questions and evidence we have put to them). The idea comes from witness interrogation techniques in the real world.
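Since the post deliberately withholds the algorithm, here is only a minimal sketch of the metaphor itself, under assumptions of my own: each witness is scored on hypothetical features (contradictions found so far, agreement with the others), and a fixed, made-up plane in that feature space separates "definitely lying" from "somewhat certain":

```python
# Illustrative sketch only: the features, names, and the fixed plane
# below are hypothetical, not the poster's actual algorithm.
from dataclasses import dataclass


@dataclass
class Witness:
    name: str
    contradictions: int  # statements by this witness proven false so far
    agreement: float     # 0..1 agreement with the other witnesses


def split(witnesses, normal=(1.0, -2.0), offset=-0.5):
    """Split witnesses with a plane n.x + b = 0 in feature space.

    A witness on the positive side (many contradictions, low agreement)
    is classed as definitely lying; the rest remain somewhat certain,
    pending further questions.
    """
    lying, uncertain = [], []
    for w in witnesses:
        x = (float(w.contradictions), w.agreement)
        side = normal[0] * x[0] + normal[1] * x[1] + offset
        (lying if side > 0 else uncertain).append(w)
    return lying, uncertain


lying, uncertain = split([
    Witness("a", contradictions=3, agreement=0.2),
    Witness("b", contradictions=0, agreement=0.9),
])
```

Asking a witness further questions would then amount to updating its feature vector and re-splitting; the interesting part, which the post leaves open, is how the plane itself is chosen.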

Since I haven't been active in PLT for several years, my knowledge of what to search for and keywords to use on ACM and IEEE search engines is pretty weak, and I could use some tips.

Thank you very much!


Fuzzy logic, probabilistic reasoning?

Working with a quantitative instead of binary assessment of truthiness is studied in fuzzy logics. I don't know much about them and I suppose it hasn't radically changed in the last "several years".

What you describe also seems related to the work on Bayesian reasoning for error messages (an example of handling an inconsistency in a system of witnesses/constraints) in the SHErrLoc project -- of the two papers, the one that probably presents the technique best is Toward General Diagnosis of Static Errors by Danfeng Zhang and Andrew C. Myers, 2014. I know about this work because of its PL angle, but there is probably a lot of other relevant work in the "program analysis" literature, where this kind of "inferring MAY/MUST constraints that may or may not be true" is fairly common. A canonical reference could be Bugs as Deviant Behavior: A General Approach to Inferring Errors in Systems Code by Dawson Engler, David Yu Chen, Seth Hallem, Andy Chou, and Benjamin Chelf, 2001.


I did not consider error message research. That is an interesting angle, and why I am so thankful for a community like LtU!