How much power should programmers have?

True or False? Programmers should not be required, or even allowed, to specify any choice of algorithm or representation when a clearly better choice with equivalent results can be found by analysis, profiling, and consultation of a large knowledge base.

Compilers should, for example, treat any code describing a bubblesort, once detected, as a call to a better sorting algorithm (a sketch of what that substitution might look like appears at the end of this post). If it takes extensive analysis to figure out exactly how to map the bubblesort onto the better sort, then, dammit, that analysis ought to be done! The minimal checks used by most compilers are only enough to find the simplest and most obvious optimizations and bugs. Working through the whole knowledge base might take weeks on all of your company's thousand servers, so it should be a background task that runs continuously.

Automated agents distributed with the IDE should participate in a distributed network that gathers and shares knowledge about code equivalences and optimizations as they are discovered (a possible shape for such a knowledge-base record is also sketched below). If any pattern of code in the local codebase is proven semantically equivalent to some class of more efficient implementations, by any agent anywhere in the world, then the news ought to reach the local agent via the network, so that the next time, and every subsequent time, that local code gets compiled, the compiler will know to compile it as though it were one of those more efficient implementations.

Likewise, any analysis technique that can, statically or dynamically, detect unambiguous bugs (such as 'malloc' being called while a destructor's call frame is on the stack in C++, race conditions in threaded code, or a side-effecting function used in an 'assert'; a second sketch below makes this last case concrete) should, once discovered, be distributed over the network and, on receipt and periodically thereafter, run against the local codebase. These tests ought to run continuously, round-robin, in the background whenever any workstation has the IDE or the headless agent running, and the work ought to be shared out among all IDEs and agents accessing the same codebase, so as to minimize the total time needed to test the whole codebase in its current version.

These tests and optimizations should remain in the knowledge base until a later test subsumes their functionality, or until they are expired by local policy or by size limits on the local knowledge base. If expiry becomes common, they should be redistributed at intervals.

The Daily WTF should be mined as a source of outrageous examples of wrong or wasteful programming practices, and 'proper' implementations of each correct but horribly inefficient or wasteful example found there ought to be part of the knowledge base. Likewise, anything there that is clearly a bug no matter what the circumstances ought to be part of the knowledge base, and if static tests can be devised to detect similar bugs, they ought to be.

Is this an extreme position? I think it is, but I also believe it has merit. Why not do this?
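To make the bubblesort case concrete, here is a minimal sketch (C++ is assumed, std::sort stands in for the "better" algorithm, and the function names are invented for illustration):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// The pattern an analyzer would have to recognize: a textbook O(n^2) bubblesort.
void bubblesort(std::vector<int>& v) {
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        for (std::size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1])
                std::swap(v[j], v[j + 1]);
}

// The call the compiler would substitute once the equivalence is proven.
// One subtlety the proof must cover: bubblesort is stable and std::sort is
// not, so the analysis must either show that stability is unobservable here
// (trivially true for plain ints) or substitute std::stable_sort instead.
void better_sort(std::vector<int>& v) {
    std::sort(v.begin(), v.end());
}
```

The stability point is exactly why "equivalent results" requires real analysis rather than naive pattern matching: the substitution is only legal when the difference cannot be observed.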
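And a minimal example of the side-effecting-assert bug class (again C++; the helper function is hypothetical):

```cpp
#include <cassert>
#include <vector>

// A deliberately side-effecting helper: it mutates the vector it is given.
static int take_last(std::vector<int>& v) {
    int x = v.back();
    v.pop_back();           // the side effect
    return x;
}

void drain_one(std::vector<int>& v) {
    // The bug class named above: under -DNDEBUG the assert expands to
    // nothing, the call to take_last() vanishes, and debug and release
    // builds compute different results. A static test could flag any
    // assert whose condition calls a function not known to be pure.
    assert(take_last(v) >= 0);
}
```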
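Finally, one possible shape for a knowledge-base record, assuming each entry carries a pattern fingerprint, a replacement, a checkable proof, and an expiry per local policy; every field name here is invented for illustration:

```cpp
#include <cstdint>
#include <string>

// Hypothetical sketch of a distributed knowledge-base entry: a proven
// equivalence between a code pattern and a better implementation.
struct EquivalenceEntry {
    std::uint64_t pattern_fingerprint;  // hash of the normalized code pattern
    std::string   replacement;          // the proven-equivalent, faster form
    std::string   proof;                // machine-checkable equivalence proof,
                                        // so receiving agents can verify it
                                        // before trusting it
    std::string   discovered_by;        // which agent, anywhere in the world
    std::int64_t  expires_at;           // expiry per local policy, as above
};
```

A receiving agent would check the proof, install the rewrite, and schedule the pattern for the next background pass over the local codebase; a bug-detection test could travel in the same kind of envelope.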
Ray

By Ray Dillinger at 2012-09-12 18:44 | LtU Forum