Are scalars "just" degenerate matrices?

I'm working on a Haskell library implementing the ideas in George Hart's book Multidimensional Analysis: Algebras and Systems for Science and Engineering. In essence, the book explains the way that physical dimensions propagate through matrix equations when different locations in a single matrix may not share the same physical dimension, which happens all the time in a bunch of useful engineering/optimization/signal-processing applications. This is all well and good, and the dimensional library is a great jumping-off point that handles all of the type-level arithmetic.

My question/puzzle is in several parts, so I'll start with a motivating example. If you look at the dimensional library, or most any approach to static typing of physical dimensions (or units; whether to track dimensions or units is a separate battle which I'm not exploring today), it works by representing dimensioned quantities using a phantom type that encodes their dimensions. When you do this in a library you end up with two parallel worlds of numbers: bare types like Double and dimensioned quantities that don't mix without explicit conversion (the first sketch after this post makes the problem concrete).

There are two solutions to this untidiness. The first is introducing a system for user-defined coercions between types. The drawback is that inferring where to put such coercions is difficult, complicates/clutters the principal type of pretty much every polymorphic function, and doesn't mix well with a lot of other type system features that are desirable. In summary, it's good if your language already has it, but adding it to your language to solve this problem opens a large can of worms. The second is to simply declare that all numbers have a dimension; it's just coincidence that that dimension is often 1. In Haskell-speak, you move all the bare numeric types like Double over to their dimensionless-quantity counterparts.

The second option seems justified for a green-field language design, even though it requires sweeping changes, because I can't think of a logical difference between a dimensionless 37 and a dimensionally naive 37, and because you can almost always erase the phantom type so there needn't be a performance problem. Question one: are there valid objections?

Question two (bonus): literals that have the value 0 can be interpreted as polymorphic in their dimension. Are there good objections to teaching the compiler about this rule? CSS is the only language I know of that does this, but it seems like a good idea. (The same sketch below shows what a dimension-polymorphic zero looks like.)

When you start to track matrix sizes, you run into a similar problem where scalars and vectors start to look like degenerate matrices, in the same way that dimensionally naive 37s are degenerate dimensioned 37s at dimension one. Adopting this convention recovers a single multiplication operator, saving you from scalar * scalar, scalar * vector, scalar * matrix, vector * matrix, matrix * matrix, and all the flipped versions. Scalar addition, inversion, and exponentiation keep the same meanings if you look at them as degenerate matrix addition, inversion, and exponentiation (at least for integer exponents?), and so do all the common trigonometric functions. When you start to track matrix types for the non-uniformly dimensioned matrices I mentioned in the first paragraph, you realize that matrix size checking falls out naturally from checking matrix dimensions (in the sense of physical dimensions), because you end up with an index type that carries a physical dimension for each row and each column, and the lengths of those indices are the sizes (the second sketch below shows the size-indexed half of this).

This motivates the third question: what are the objections to treating all scalar numbers everywhere not only as degenerate dimensionless quantities, but as degenerate dimensionless matrices? I have three objections so far:
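To make the untidiness and the zero question concrete, here is a minimal, hypothetical sketch. The names (Quantity, Dim, fromBare, zero) are toy names invented for illustration, not the dimensional library's actual API, which tracks a full vector of base-dimension exponents rather than a single tag:

    {-# LANGUAGE DataKinds, KindSignatures #-}
    module DimSketch where

    -- Toy phantom-type encoding: the dimension is a type-level tag
    -- that exists only at compile time.
    data Dim = DLength | DTime | DOne   -- DOne = dimensionless

    newtype Quantity (d :: Dim) a = Quantity a

    metres :: Double -> Quantity 'DLength Double
    metres = Quantity

    -- The untidiness: a bare 37 :: Double and a dimensionless
    -- Quantity 'DOne Double mean the same thing, but the types keep
    -- them apart, so an explicit injection is needed wherever the
    -- two worlds meet.
    fromBare :: a -> Quantity 'DOne a
    fromBare = Quantity

    -- Question two in this encoding: a zero that is polymorphic in
    -- its dimension is just a polymorphic constant (the dimensional
    -- library exposes essentially this idea as its _0).
    zero :: Num a => Quantity d a
    zero = Quantity 0

    -- ...and it is accepted at any dimension:
    noLength :: Quantity 'DLength Double
    noLength = zero

The second solution above amounts to making every bare numeric type an abbreviation for its Quantity 'DOne counterpart, at which point fromBare becomes the identity.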
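Here is an equally hypothetical sketch of the "degenerate matrices" half, again with invented names; a real library would hide the Mat constructor so the value-level lists always match the type-level sizes:

    {-# LANGUAGE DataKinds, KindSignatures #-}
    module MatSketch where

    import Data.List (transpose)
    import GHC.TypeLits (Nat)

    -- Toy size-indexed matrix: an r-by-c matrix stored as a list of
    -- rows, with the sizes living only in the types.
    newtype Mat (r :: Nat) (c :: Nat) = Mat [[Double]]
      deriving Show

    -- A scalar is the degenerate 1x1 case; a column vector is n-by-1.
    type Scalar   = Mat 1 1
    type Vector n = Mat n 1

    scalar :: Double -> Scalar
    scalar x = Mat [[x]]

    -- One multiplication covers scalar*scalar (1x1 by 1x1), row
    -- vector by column vector, matrix by vector, and matrix by
    -- matrix; the shared inner index n is the size check, and the
    -- compiler performs it.
    mul :: Mat r n -> Mat n c -> Mat r c
    mul (Mat a) (Mat b) =
      Mat [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

Swapping the Nat indices for per-row and per-column lists of physical dimensions would then make the size check a by-product of the dimension check, which is the "falls out naturally" claim above.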
How much power should programmers have?

True or False? Programmers should not be required, or allowed, to specify any choice of algorithm or representation when a clearly better choice having equivalent results can be found by analysis, profiling, and consultation of a large knowledge base.

Compilers should, for example, treat any code describing a bubblesort, once detected, as a call to a better sorting algorithm. If this requires extensive analysis to figure out exactly how to map the bubblesort onto the better sort, then damnit, that analysis ought to be done! The minimal "compiling" checks used by most compilers are only enough to find the most simple and obvious optimizations and bugs. Working through the whole knowledge base may take weeks of time on all of your company's thousand servers, so it should be a background task that runs continuously. (A tiny, opt-in existing instance of this kind of pattern-for-pattern rewriting, GHC's rewrite rules, is sketched after this post.)

Automated agents distributed with the IDE should participate in a distributed network to gather and share knowledge about code equivalences and optimizations as they are discovered. If any pattern of code existing in the local codebase is proven semantically equivalent to some class of more efficient implementations, by any agent anywhere in the world, then the news ought to come to the local agent via the network, so that the next time, and every subsequent time, that local code gets compiled, the compiler will know that it should be compiled as though it were one of those more efficient implementations.

Likewise, any analysis technique which can, statically or dynamically, detect unambiguous bugs (such as malloc being called while a call frame for a destructor is on the stack in C++, or detecting race conditions in threaded code, or detecting a side-effecting function used in an assert, etc.) should, once discovered, be distributed over the network and, on receipt and periodically thereafter, run against the local codebase. These tests ought to be running continuously, round-robin style, in the background whenever any workstation has the IDE or the headless agent running, and the work ought to be shared out to all IDEs and agents accessing the same codebase so as to minimize the total time to complete testing of the whole codebase in its current version. These tests and optimizations should remain in the knowledge base until a later test subsumes their functionality, or until expired by local policy or size limits on the local knowledge base. If expiry becomes common, then they should be redistributed at intervals.

The Daily WTF should be triaged as a source of outrageous examples of wrong or wasteful programming practices, and "proper" implementations for each correct-but-horribly-inefficient or wasteful example found there ought to be part of the knowledge base. Likewise, anything there which is clearly a bug no matter what the circumstances ought to be part of the knowledge base, and if static tests can be devised to detect similar bugs, they ought to be.

Is this an extreme position? I think it is, but I also believe it has merit. Why not do this?

Ray
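For a sense of how far existing compilers already go down this road, GHC's rewrite rules are a small, opt-in instance of pattern-for-pattern replacement: the library author asserts an equivalence, and the compiler (with optimisation enabled) rewrites every occurrence of the left-hand side. The classic example from the GHC user's guide:

    module Fusion where

    -- An author-asserted equivalence: mapping g and then f over a
    -- list is the same as a single map of (f . g).
    {-# RULES
    "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
      #-}

The gap between this and the position above is of course huge: rules match syntactic patterns over named functions, are trusted rather than proven, and nothing detects "a bubblesort" by analysis. But the compile-time end of the pipeline, replacing recognised code with a known-better equivalent, already exists.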