what's a useful definition of equivalent functions?

Equivalence of functions is, in general, undecidable. But specific instances of it are usually decidable. Equivalence can be proven via structural analysis, or disproven via structural analysis or by counterexample. Equality can be proven on functions whose input is limited to a finite set simply by exhaustive testing of all possible inputs. So, while it's true that there will always be an infinite set of pairs of functions for which we can prove neither equivalence nor nonequivalence, it isn't really the case that we can't ever prove anything.

But what notion of 'equivalence' is useful? What if we have two functions that are provably equivalent where their results are defined, but result in errors in different cases, or result in *different* errors in some cases? If I can prove, in the context of a particular program, that no possible call to a function will be a member of either error-causing set, then the functions are equivalent ... for purposes of that program. I.e., they are observationally equivalent for the particular subset of valid inputs that we're interested in.

What if we have two functions that do equivalent things to members of slightly different types? For example, what if I have two string-concatenation functions, but one of them concatenates a string class on which fifteen other operations are defined, and one of them concatenates a string class on which thirteen of the _same_ other operations and three _different_ other operations are defined? These concatenation functions are not exactly mathematically equivalent, but in some very important way they are exactly the same, and may even be expressed by exactly the same source code. If a mathematician said these functions weren't equivalent, as an engineer I would roll my eyes and ignore him. If my job required me to satisfy his useless notion of equivalence, I would *make* the functions equivalent by just defining the 'missing' operations in both classes.
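The point about finite input domains can be made concrete. Here is a minimal sketch (function names and the choice of 8-bit inputs are my own illustration, not from the post) of proving extensional equality by exhaustive testing:

```python
# Sketch: proving equality of two functions by exhaustive testing,
# which works only because the input domain (8-bit unsigned ints,
# i.e. 0..255) is finite. Both implementations are hypothetical.

def popcount_loop(x: int) -> int:
    """Count set bits by shifting through each of the 8 bit positions."""
    count = 0
    for _ in range(8):
        count += x & 1
        x >>= 1
    return count

def popcount_kernighan(x: int) -> int:
    """Count set bits by repeatedly clearing the lowest set bit."""
    count = 0
    while x:
        x &= x - 1  # clear the lowest set bit
        count += 1
    return count

def exhaustively_equal(f, g, domain) -> bool:
    """Decide extensional equality over a finite domain by brute force."""
    return all(f(x) == g(x) for x in domain)

# Every one of the 256 possible inputs is checked, so this is a genuine
# proof of equality on that domain, not merely an indication of it.
assert exhaustively_equal(popcount_loop, popcount_kernighan, range(256))
```

Once the domain is infinite (or merely too large, like 64-bit ints), this brute-force proof stops being available, which is exactly where the choices discussed below begin.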
What about having 'equivalent' multiplication functions on, say, ints and rationals? If I posit that there is a multiplication function on a 'number' type that both of the other functions are subsets of (that is, they give equal results on equal inputs, for all inputs they can be given), is that a useful notion of equivalence?

What if we have two functions that return exactly the same results in all cases, and we can prove it ... but one of them takes N-squared memory and N-log-N runtime, and the other takes N-squared runtime and N-log-N memory? As an engineer I'd find this semantic equivalence and resource-requirement nonequivalence useful, and swap one for the other depending on whether speed or memory use were more important. As a mathematician, I don't even know whether I'd be working with a notion of equivalence that would note the difference.

In the end, 'equivalence' is any relation that's transitive, symmetric, and reflexive. There's an infinite number of choices of such relations, and people who talk about function equivalence often do so without saying which equivalence they mean. Or, importantly, which equivalence is most useful in some relevant context.

And while you're chewing on the question of which 'equivalence' is most useful in a given context, what notion of proof is most useful to establish it? What if we have a pair of functions which has been observed to return the same answer for the same inputs on a randomly-selected billion possible inputs ... and monitoring both has revealed that we have achieved 100% code coverage in both functions while doing so ... and other monitoring, plus a few million not-so-randomly-selected inputs, has assured that all possible combinations of taken and not-taken conditional branches in both functions have been exercised, on inputs that differ from each other by the theoretical minimum possible for that input type? As a mathematician, I couldn't claim that I know these functions are equivalent.
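The random-testing regime described above can be sketched in a few lines. This is a hedged illustration, not anyone's actual tool: the helper name, the trial count, and the absolute-value example are all my own assumptions. The asymmetry it shows is the crux of the post: a disagreement is a proof of nonequivalence, while any number of agreements is only an indication of equivalence.

```python
# Sketch: an engineering-grade "indication" of function equality via
# random testing. No proof results -- only a counterexample (certainty
# of nonequivalence) or growing confidence. All names are hypothetical.
import random

def indication_of_equality(f, g, gen_input, trials):
    """Return the first counterexample found, or None if f and g
    agreed on every randomly generated input."""
    for _ in range(trials):
        x = gen_input()
        if f(x) != g(x):
            return x  # nonequivalence is *proven* by this witness
    return None  # equivalence is merely *indicated*, never proven

# Example: compare two absolute-value implementations on random ints.
counterexample = indication_of_equality(
    abs,
    lambda x: x if x >= 0 else -x,
    lambda: random.randint(-10**9, 10**9),
    trials=10_000,
)
assert counterexample is None
```

A serious version of this would also track the code- and branch-coverage monitoring the post describes (e.g. with a coverage tool), since the strength of the indication depends on how much of each function the random inputs actually exercised.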
As an engineer, I'd consider the indications of function equality to be strong enough that I wouldn't mind the absence of a proof. I'd feel comfortable swapping one for the other, especially if they are both relatively short. Why short? Shortness limits the possible semantic complexity to some wobbly notion of a boundary, making it less likely that one or both contains subtleties not revealed by the extended fuzz-test. If one or both of the functions were twice as long, I'd want more fuzz-testing before I'd feel as comfortable swapping them. This corresponds to some intuition about reaching a particular certainty level, but that certainty level, no matter how you slice it, is less than an absolute proof.

So, here's a question. Is it useful for programmers to go with some 'heuristic' of function equality, or an engineering-grade 'indication' of function equality, if no mathematically certain proof is available and by doing so they can achieve measurably better results? Is it worthwhile for PL theorists and researchers to seek and refine such 'heuristics', and ways of estimating the strength of such 'indications' (rather than 'proofs') of function equality?

In mechanical engineering, strength of materials is always an estimate. In programming, sometimes the strength of our knowledge is also an estimate. This is a problem, but not necessarily a disaster, in terms of producing useful results. Mechanical engineers compensate by including a 'safety margin' well beyond the requirements of the particular job. How should programmers compensate?

By Ray Dillinger at 2013-05-16 18:31 | LtU Forum
