
Open wiki-like code repository

Here is a crazy thought experiment:

Take your favorite programming language and create a project in your favorite IDE (say C# and Visual Studio, though Scala and Eclipse or Haskell and Emacs would also work). Now imagine that this project is shared by you and a thousand of your closest programming friends. Everyone can edit the project, and contributions are pushed to you in real time without any vetting. The idea is to collaborate on building a library organically, without a centralized maintainer/gatekeeper/project manager. Instead, you and everyone else would review each other's code changes, revert vandalism when it occurs, and hold contributions to a standard even when they come from people you don't know well, or who are just casual contributors. Yes, like a wiki.

Now, here are some of the questions that come to mind:

  • Could a useful/usable library ever result from large-scale, decentralized collaboration?
  • If someone added functionality to the library that would be useful to me, how could I be made aware of it? Likewise, how could I add functionality so that others could find it? (The answer might be language-specific.)
  • Could community review be enough to ensure quality and, say, security, e.g. that someone doesn't insert a virus into the codebase? And that contributed code isn't copied from somewhere else under an incompatible license?
  • Would certain programming languages work better than others in a code wiki? For example, would strong static typing hinder massive collaboration because it requires too much pre-planning, or help it because it enforces some consistency between contributions?
  • If the library is continuously changing (no static releases), how would it be feasible to take a dependency on the library?
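On the last question, one way to take a dependency on a continuously changing library is to pin a specific snapshot and verify it by content hash, much as git commits and lock files do. Here is a minimal sketch in Python; the `tree_hash`/`check_dependency` names and the pinning scheme are my own illustration, not an existing tool:

```python
import hashlib
import os

def tree_hash(root):
    """Content-address a library snapshot: hash every file's relative
    path and bytes in sorted order, so any edit anywhere in the tree
    changes the digest."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make traversal order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

def check_dependency(root, pinned_digest):
    """A consumer records the digest of the snapshot they tested
    against, and refuses to build if the wiki has drifted from it."""
    return tree_hash(root) == pinned_digest
```

The point is that "the library" stops being a moving target for any one consumer: you depend on an immutable digest and upgrade deliberately by re-pinning, even though the shared project itself never has static releases.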

Not really expecting answers, but thoughts and other questions.

Most water-tight real-world language implementation?

As a long-time developer, I've created more bugs than is even remotely funny, so this isn't a complaint or a witch hunt. Rather, I would really like to know: how realistic is it to avoid bugs in the 'fundamentals' (a relative term) of a programming language, and still have a language that is usable in the real world? Testing & modeling are the answer, I assume, but who has really applied them rigorously in the development of a programming language? It scares me that the tools we're supposed to build on might not be hallowed ground from a quality standpoint. (Yes, I know there are good reasons to trade off quality against speed.) I feel like there's something here about "obviously no bugs" vs. "no obvious bugs", mixed in with the question of how the language affects the programs written in it: the simpler the programming language, the more we're just shifting the bugs out into all of the programs?
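On the testing side, one technique that has been applied rigorously to language implementations is randomized differential testing: generate random programs and compare two implementations' answers. A toy sketch in Python, where a deliberately tiny arithmetic interpreter (names like `evaluate` and `random_expr` are my own, purely for illustration) is checked against CPython's `eval` as the oracle:

```python
import ast
import operator
import random

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def evaluate(node):
    """A deliberately tiny 'language implementation': integer
    arithmetic over +, -, and *."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError("unsupported node")

def random_expr(depth=3):
    """Generate a random fully parenthesized arithmetic expression."""
    if depth == 0 or random.random() < 0.3:
        return str(random.randint(0, 9))
    op = random.choice("+-*")
    return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

def differential_test(trials=1000):
    """Compare our interpreter against CPython's eval as the oracle;
    any disagreement is a bug in one of the two."""
    for _ in range(trials):
        src = random_expr()
        assert evaluate(ast.parse(src, mode="eval")) == eval(src), src

differential_test()
```

Real compiler fuzzers work on the same principle at vastly larger scale, and they do keep finding bugs in mature, widely used implementations, which is itself a partial answer to the "hallowed ground" worry.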

Metrics or formalizations of "local reasoning"?

Often in modularity discussions the concept of "local reasoning" comes up, and it's always assumed we want to make it easier, as a way of helping programmers keep programs in their heads. 'Local' implies that we have some notion of distance, and I'm curious what work, if any, has been done on measuring that distance. I've seen lots of claims that language feature X or Y reduces the ability to engage in local reasoning, but no attempts at quantification, without which I don't see how languages can be properly compared. Papers, links to prior discussions, etc. appreciated.
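For concreteness, here is one naive way such a distance could be quantified: measure the def-use distance of each variable, i.e. how many lines separate an assignment from each later use of it. This is purely a toy illustration of the idea (`def_use_distances` is my own name, not an established metric), sketched in Python over its own AST:

```python
import ast
from collections import defaultdict

def def_use_distances(source):
    """Toy 'locality' metric: for each variable, the line distance
    between its most recent assignment and each later use. Larger
    distances suggest more non-local reasoning is required."""
    tree = ast.parse(source)
    # Visit Name nodes in source order (lineno, then column).
    names = sorted(
        (n for n in ast.walk(tree) if isinstance(n, ast.Name)),
        key=lambda n: (n.lineno, n.col_offset),
    )
    last_def = {}
    distances = defaultdict(list)
    for node in names:
        if isinstance(node.ctx, ast.Store):
            last_def[node.id] = node.lineno
        elif isinstance(node.ctx, ast.Load) and node.id in last_def:
            distances[node.id].append(node.lineno - last_def[node.id])
    return dict(distances)
```

A real metric would have to weigh scopes, aliasing, mutation, and cross-module references rather than raw line counts, but even something this crude would let one compare two versions of the same program, which is the kind of quantification the question is asking about.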