'Information and Computation' Open Access

I found this in my mail.

August 12, 2005

The Publisher and Editorial Board of Information and Computation are pleased to announce that for one year, effective immediately, online access to all journal issues back to 1995 will be available without charge. This includes unrestricted downloading of articles in pdf format. Journal articles may be obtained through the journal's web site http://theory.csail.mit.edu/~iandc or Elsevier's Sciencedirect at http://www.sciencedirect.com/science/journal/08905401

At the end of the year, the retrieval traffic during the open access period will be evaluated as future subscription policies are considered.

Albert R. Meyer, Editor-in-Chief, MIT Computer Science & AI Lab
Chris Leonard, Publishing Editor, Elsevier
Moshe Y. Vardi, Associate Editor, Rice University

Those of a more theoretical bent will find lots of interesting articles there.

More online journals

The "Information and Computation" site is open for a limited time, though presumably if the traffic stats show lots of downloads, they will consider opening it up for perpetuity.

You can find many more open access journals at the Directory of Open Access Journals. The most interesting sections for readers of LtU will be Computer Science and, perhaps, Mathematics.

Academic Economics

Interesting... this is a refreshing change from the money-mills that many journals try to be. Almost all journal subscriptions are expensive, most editors are volunteers, and to top it off, some journals even charge authors for getting published, or at least charge their institutions.

So, if one is nearly ready to submit a paper, does one opt for one of the aforementioned journals that has an established reputation, or a free electronic journal with an excellent editorial board?

While I applaud the Open Access...

While I applaud the open access, I actually looked at that journal for possibly the first time in my life.

Sigh! Does it have to be all so useless?

I read the titles and abstracts for 15 issues. And I didn't spot one paper that will change the way I (or anybody I know) will do computing now, or in twenty years' time.

Perhaps there was one. But it was so obscured by jargon as to be utterly useless.

I would love to be proved wrong.

Well then.

This situation holds for 99% of science.

Sadly, "science" has long become an industry that turns jargon into money.

Inflection Point

I guess my reaction to this is that you never know where an inflection point is going to arise. Academic papers on binary space partitioning trees coming out of the SIGGRAPH conference basically had no impact on desktop personal computing until John Carmack got hold of them and wrote DOOM, and LtU's own Tim Sweeney did something similar with the notion of a "portal engine," "procedural textures," and other technologies that were novel, as far as the mainstream is concerned, when Unreal first shipped. The world of computer gaming was changed irrevocably by both events. It appears to be impossible to predict how and when the next such transition will occur. In the meantime, it seems foolhardy to dismiss technical writing as "useless" or "jargon-laden," as the track record of such efforts would seem to counterindicate such a reaction.

The necessity of jargon

Come on, what do you expect? This is a scientific journal, presenting articles directed at scientists. You cannot expect it to be on the level of a better "XYZ for dummies" publication. How can you judge it "useless" just because it's opaque to you?

Regarding jargon: to discuss something effectively and accurately you need effective and accurate terminology. Any science that develops over decades inevitably becomes divided into more and more specialised subdisciplines, with more and more specialised terminology. It's not the terminology that makes something opaque for outsiders, it's the highly specialised nature of a field. Terminology is effect, not cause.

When did you last use terminology like "compiler", "declaration", "protected virtual destructor", "remote method invocation", "XML", "UML", "TCP/IP" or "template template argument"? Don't you think that is completely opaque to somebody who is not into IT? And do you really believe you could communicate technical content without it? What would you answer to somebody claiming that all IT is useless because it relies on "obscure jargon"?

The problem is deeper.

The problem is that "obscure jargon" is used precisely because it is needed to "obscure" the fact that the original paper holds absolutely no scientific or practical value.

Stop trolling

If you really claim that this holds for 99% of science (like you say above), or even the majority of papers in the above journal, then I can only take this as a blunt act of trolling, and an insult to all people in science.

I'm sorry.

I made that number up.

But still, the point holds: significant (or even useful) discoveries are rare, while a career scientist needs to publish something several times a year if he or she doesn't wish to starve to death.

Multiply the number of journals by the number of career scientists (and there are lots) and you get what you see as a result.

Off-topic

Regardless of your views on the quality of academic research, what does this have to do with programming languages?

Hmm. Was I trolling? No, a constructive suggestion then

I was seeking a reaction. (I want people to change what they are doing or point out what I'm missing.)

I was making an obviously controversial statement (I said the papers were useless, even though the authors bothered to publish them)

My statement insulted the substantial efforts of a large group of people (but did not attack any of them personally.)

My statement was unfortunately also true. So not a troll, but certainly challenging.

So let's be a little more constructive in this post.

What does it take for a paper to be useful?

* It needs to be accessible. (i.e. if your paper is in a pay-for-subscription journal read by a limited number of people in your speciality, its impact is going to be small.)

* It needs to be readable. If it is only understandable with great effort by someone already a specialist in that narrow field, its impact is going to be a lot smaller.

* It should address and resolve one of the numerous problems actively pissing off the hordes of poor sods doing computing. (It's called prioritization.) i.e. get off your butt, go see why software is so incredibly expensive and buggy. Fix it.

* This is a BIG one. It should have a freely available, productized (documented, usable, packaged, easily deployable, robust) reference implementation and/or example. i.e. papers don't change things, productized code does.

Have you ever seen a useful macromedia "flash" animation?

I have seen some (tutorials), but very very very very few useful "flash" animations.

Why?

Because by design, they are explicitly not allowed to do anything, they cannot use anything real, they cannot change anything real.

If you don't follow the guidelines I mentioned above, your paper is like an irritating flash animation whirring away saying look at me, look at me and DOING NOTHING!

The engineering department is in the next building

Research papers are written for experts because experts build on each other's work. There can be a long, long distance between important research ideas and "productized code". Not every important idea will directly resolve current practical problems. That's particularly true in programming language research, because the ideas likely to have most impact involve changes to programming languages which can't necessarily just be bolted on to the next version of C# or Java.

Restricting research only to ideas which have immediate practical relevance would cripple progress in just about any field. In general, these suggestions seem to be confusing research with engineering, theory with practice, invention with innovation, etc.

Observation...

I was making an obviously controversial statement (I said the papers were useless, even though the authors bothered to publish them)

One man's trash is another man's gold. Perhaps they are useless to you because you are unable to see their full value. In the same way, a paper on relativity would be fairly useless to a first century Pythagorean.

My statement insulted the substantial efforts of a large group of people (but did not attack any of them personally.)

So is it ok to make racial slurs as long as I don't name anybody?

My statement was unfortunately also true. So not a troll, but certainly challenging.

Was it true by fiat? Because you certainly didn't provide a compelling argument for it.

* It needs to be readable. If it is only understandable with great effort by someone already a specialist in that narrow field, its impact is going to be a lot smaller.

Does that mean that Physical Review Letters should be published at a level that high school physics students can understand? If microprocessor researchers were forced to define what a 4-way set-associative cache is every time they write about a processor improvement, they would either never get anything useful done, or they would choose to quit writing about their work. If you can't decompress the file, either don't try to read it, or educate yourself until you can.

* It should address and resolve one of the numerous problems actively pissing off the hordes of poor sods doing computing.

That's what engineers are for. Scientists pursue theories to advance the state of the art. Engineers exist to solve real-world problems. Some people need to be a little of both, but certainly not everybody. Charles Babbage wasn't solving a pressing issue that was on everyone's mind when he conceived the Difference Engine and the Analytical Engine. Would you have him put aside something that was literally impossible to build in his day to pursue something more practical? That may well have set back computing by decades. If people don't think ahead, progress will always be limited by the latest roadblock. That's no way to do science.

* This is a BIG one. It should have a freely available, productized (documented, usable, packaged, easily deployable, robust) reference implementation and/or example. i.e. papers don't change things, productized code does.

That's funny. I don't believe Alan Turing built a single one of his infamous machines, and yet they are used to measure just about anything of interest within computer science. Are you going to tell me that because Claude Shannon didn't build any information system prototypes, that his papers on information theory didn't change anything?

The problem is that you sound very much like an engineer who is annoyed by the very existence of scientists. That is somewhat understandable, but a very narrow-minded point of view. Yes, practicality has its time and place, but that is not every time and every place. Sometimes it is useful to engage in theoretical flights of fancy--gedanken experiments--in order to advance the state of knowledge. One could argue that the tools built by engineers without regard to theory are fairly inferior to those built by theorists. Lisp vs. C, TeX vs. Word (?), etc. Expedience may solve problems quickly, but not always most effectively. That's why "hacks" are both admired and shunned. Cleverness is a quick fix, but rarely a long-term solution.

If auto mechanics were forced to eliminate jargon in their work, do you think it would be cheaper for them to fix your car? "Uhh...Fred, could you pass me that tool that you use to uhh, loosen the axle-thing on the front of the car that's connected to that engine-thingy?" I don't think you want 500-page shop manuals written in a "readable" and "accessible" way. DSLs exist not because humans are spies trying to hide information from each other, but because they allow for more efficient communication. "Shop talk", whether it's an auto shop or an IT shop or a computer science shop is useful within its context and population. If you want to be a member of that population, work to learn the language. But telling biologists to stop using Latin isn't going to help anybody.

That's what LtU is for

One of the primary purposes of LtU is to highlight work that LtU's audience might find interesting. It provides this filtering function precisely because not every published work is interesting to every person on the planet.

When it comes to the judgement of any given work, anyone who thinks they're qualified to assess the uselessness or otherwise of a work that they haven't actually taken the time to understand is kidding themselves. Even if you do make the effort to understand a work, there's no guarantee you'll be successfully able to judge its significance.

Who thinks that if they had picked up a copy of Church's "Calculi of Lambda Conversion" when it was published in 1941, they would have immediately concluded that it described a formalism which would one day be used to formalize much of computing, providing a powerful means for reasoning about computing, a useful way of formalizing any programming language, and a direct basis for an important family of programming languages, compilation techniques, etc.?

If anyone could tell us how to make progress in science and mathematics without having to explore all the many dead ends and low-yield areas, it would revolutionize the human acquisition of knowledge. The same goes for avoiding highly specialized jargon.

Rather than complaining about something whose functioning isn't clearly understood, I recommend learning more about it instead. A nice introduction to how mathematics is done can be found in The Mathematical Experience. Although it focuses specifically on mathematics, its lessons are more broadly applicable. Here's a quote from the introduction by Gian-Carlo Rota:

"Making mathematics accessible to the educated layman while keeping high scientific standards has always been considered a treacherous navigation between the Scylla of professional contempt and the Charybdis of public misunderstanding. Davis and Hersh have sailed across the strait under full sail. Watching from the stern of their ship, we breathe a sigh of relief as the vortex of oversimplification recedes into the distance."

"Useless" ideas

When it comes to the judgement of any given work, anyone who thinks they're qualified to assess the uselessness or otherwise of a work that they haven't actually taken the time to understand is kidding themselves. Even if you do make the effort to understand a work, there's no guarantee you'll be successfully able to judge its significance.

Indeed. Number theory was widely considered to be the most useless branch of mathematics for centuries. Then it became a subject of high prestige practically overnight, thanks to the widespread availability and growing demand for cryptography. Number theory is likely to be high among the interests of any young aspiring mathematician, which simply wasn't true 30 years ago.

By contrast, people discovered the utility of Church's work immediately.

How do you find existing ideas?

I agree, LTU is a great way to become aware of literature that may be of interest. Finding your way around the large body of literature available is a daunting task. But I also often wonder if something is looming on the horizon that might supplement the traditional methods of expertise and community.

Google is great for finding a particular person or business. It works well because you already have the noun for what you are looking for, and Google is good at finding nouns and exact phrases. The same is true for Citeseer or MathSciNet.

However, what if you don't have the noun for what you are looking for? All you have is the idea. You don't know what this idea may already be called. This idea might be out there, with a useful body of work behind it, but how do you find it?

Finding a body of work relevant to an idea that is original to you seems to be a very hit-or-miss proposition. An expert is pretty good at guessing what these ideas would be called, but if these educated guesses are incorrect, then a helpful body of work can easily be missed.

I like to imagine that a few decades from now, there will be a large body of research in computer science and mathematics that's in a form that a theorem prover can digest. Ideally, much of this work would be cross-referenced with traditional literature, to aid both conceptual and historical understanding.

Perhaps then an effective Google for ideas, without the nouns, could exist. Simply put in a few definitions, statements, and/or theorems, and get out a listing of applicable ideas from the database.

Through interaction...

I think the solution to the problem you state is exactly communities like LtU. I learned more about physics from a chat room than I ever did from a textbook. The textbook gives you one author's perspective and analogies and topics, but a community gives you a spectrum of ideas. The odds of one person doing an exhaustive bibliographic search are probably not great. But the odds of an entire community missing out on a real connection are much smaller.

I believe it's called the "Small World Effect" in some circles. It's been discovered that networks which occur in the real world tend to be very highly connected in a special way. That is, networks tend to be a bit fractal-like in their nature. They feature clustering at different scales, but high-level connections to distant nodes tend to make the shortest path between any given pair of nodes fairly short. Hence, games like Six Degrees of Kevin Bacon, or mathematicians' Erdos Number. My intuition is that knowledge networks like online communities such as LtU have similar properties. There are clusters of experience connected by high-level links which tend to connect any two ideas over a surprisingly short distance.
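For what it's worth, the clustering-plus-short-paths combination is easy to reproduce in a toy model. The sketch below is my own illustration (assuming Python with the networkx library, not anything from this discussion): a Watts-Strogatz ring lattice is highly clustered but has long paths, and randomly rewiring a small fraction of edges to distant nodes collapses the average path length while the clustering stays comparatively high.

    # Small-world effect in the Watts-Strogatz model: compare a pure ring
    # lattice (no rewiring) with one where 10% of edges are rewired to
    # random distant nodes.
    import networkx as nx

    ring  = nx.watts_strogatz_graph(n=1000, k=10, p=0.0)
    small = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1)

    for name, g in [("ring lattice", ring), ("small world", small)]:
        print(f"{name}: clustering = {nx.average_clustering(g):.3f}, "
              f"average path length = {nx.average_shortest_path_length(g):.2f}")

Typically the rewired graph's average path length drops from dozens of hops to around half a dozen, while the clustering coefficient remains comparatively high, which is exactly the "short distance between any two ideas" intuition above.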

The knowledge of LtU as a collective is far, far greater than the knowledge of any of its participants, and it's fairly accessible compared to other knowledge systems. It's robust, distributed, gracefully degrades...it's a high-level brain. ;> And the diversity of its community is exactly what makes it so powerful. I dare say LtU is smarter than your average CS dept. exactly because academia tends to be a little inbred. I think it goes without saying that it's much smarter than your average IT dept. because those typically exist to solve a specific set of problems.

Simply by posting an idea on LtU, one is sending an input into a highly associative network that will tend to activate some nodes and perhaps inhibit others. The activated nodes will then respond, inciting further cascades of activation until the network settles into a stable state or a set of oscillating states. I can't imagine Google coming close to the level of power in the LtU "brain". Interestingly enough, the network never flies off on a positive feedback trajectory that causes the state to diverge, mainly because it has monitor processes that exist to check that type of pathological behavior. Anyway, my point is that I would trust LtU long before I'd trust the USPTO to determine whether an idea has already been explored. ;>

It's finding other Communities

Yes, but... often my interests stray from LTU. Sometimes they stray into an area where I have no guides, and little knowledge of the existing terrain. It is these cases where the Google of Ideas would be considerably more helpful than the Google of Nouns.

The Google of Ideas would also help communities interact more effectively, which would reduce the academic inbreeding to which you refer.

And as an aside... I'll grant that LTU is way smarter than your average IT department. As for CS departments, that's really pushing it. While I like to read a number of articles that are posted, I have noted that highly technical articles don't generate that much discussion, mostly because they aren't as accessible to the general LTU population. Or when Phil Wadler posts an article, bless his heart, he tends to scare off discussion.

It's the less erudite articles that generate the most discussion, simply because they are the easiest articles to relate to and comment on. It's sorta like Slashdot, except the average level of computer knowledge on LTU is way higher.

Community IQ

As for CS departments, that's really pushing it. While I like to read a number of articles that are posted, I have noted that highly technical articles don't generate that much discussion, mostly because they aren't as accessible to the general LTU population.

And I'd be willing to wager that your average CS dept. does not have a lot of intra-departmental discussion, either. Why not? For the same reasons that LtU has limited discussions sometimes...because the members are specialists and can only interact on lowest-common-denominator topics. Really, the only difference between LtU and a really big CS/IT dept. is the fact that it includes both theorists and practitioners.

While LtU probably does not publish as much as the average CS dept. or produce as much software as the average IT dept., the fact that it does both gives it a richer set of interactions. Some theorists are simply out of touch with what goes on "in the trenches", just as some programmers could not analyze the time/space complexity of bubble sort.

As far as the Google of Ideas goes, I don't see how you can really improve upon what already exists without some major leap in technology. Usually, the best way to explore a new idea is to seek out the community most likely to have knowledge of the problem domain. Google can certainly help do that. Only by finding that community and learning its jargon and patterns can you hope to find prior art or discover that you have a novel idea.

Wishful Thinking

Your average CS department holds general-interest colloquia to increase awareness and interest in topics that other people are working on. A good presentation in this setting is broad, not deep, and so is the resulting discussion. Moreover, certain departments or sub-departments go deeper on certain topics.

You seem to be stereotyping CS departments quite a bit, stereotypes I don't buy into much. LTU is filled with smart people, and so are CS departments.

The Google of Ideas is wishful thinking, for sure. I believe that it is a genuine possibility, but I'm under no illusion that it is something we can accomplish today, or even tomorrow. Given the technological and sociological issues that need to be solved first, even before starting work on the search engine itself, the Google of Ideas is likely 20 years away, at least.

Interesting.

I like to imagine that a few decades from now, there will be a large body of research in computer science and mathematics that's in a form that a theorem prover can digest. Ideally, much of this work would be cross-referenced with traditional literature, to aid both conceptual and historical understanding.

IIRC, wasn't this the original goal of Algol?
It didn't work out the first time this was tried; I'm thinking the original premise is flawed.

Nope. Not even close.

Algol is a programming language, not a theorem prover, and quite to the contrary, Algol was a smashing success.

You are confusing it with UNCOL, a universal intermediate language for compilers. That specific project was a disaster... but intermediate languages are at the core of almost every modern compiler.

Theorem provers are an entirely different beast, though the term is a bit of a misnomer. They consist of a "proof checker," a small program that verifies that a formal proof is correct, and a "proof assistant", which aids humans in creating formal proofs.
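As a concrete (and purely illustrative) example, here is roughly what such a formal proof looks like in Lean 4, a modern proof assistant; the theorem name below is made up for this sketch. The source states a proposition and supplies a proof term; the kernel plays the role of the "proof checker" that mechanically verifies the term, while the interactive front end is the "proof assistant" that helps a human construct it.

    -- Commutativity of addition on the natural numbers, proved by
    -- appealing to the library lemma Nat.add_comm. The Lean kernel
    -- (the "proof checker") verifies that this term proves the statement.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b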

Many good ideas didn't work the first time. Take the light bulb, for instance.

I misstated.

Algol was supposed to be the ideal end-all method for specifying algorithms; not quite a theorem prover, as people didn't know of "theorem provers" as such back in the day. Still, the goals are very similar, and likely to fail for the same reasons. (Goedel, &c.)

I like to imagine...

I like to imagine that a few decades from now, there will be a large body of research in computer science and mathematics that's in a form that a theorem prover can digest. Ideally, much of this work would be cross-referenced with traditional literature, to aid both conceptual and historical understanding.

One of my friends is working on something like this, as part of the Logosphere project. The idea is that many people prove theorems in different theorem provers, but you can't port a proof from one system to another, so formalized mathematics remains fragmented. If you have a common intermediate format, then a proof in any system becomes a proof usable in all systems.

Logosphere

Interesting... this is a difficult project, yet if it works and people adopt it, it will be a very important one. The fragmentation of formal mathematics is only one of several hurdles to overcome to really revolutionize how math is done.

The server appears to be down, and I've tried a couple times over the last few days. I may have heard of the project elsewhere, but I had no idea what it was about.

weakly useful

Some articles even claim to be only weakly useful. :-)

That's not the point.

The point is that a very significant portion of published papers are useless by design. Moreover, these papers usually just restate old truisms with new jargon.

Sadly, this is what you get when you try to turn science into a profession.

Really?

Can you back this up? For example, can you point to a specific reputable journal and pick out for us the "significant portion" of papers that try to fool us by repeating old results in new notation?

This is a troll

You've just been trolled.

Re: troll

Yeah, I guess I knew that. Sometimes you just can't resist. (Your link is a stunning example!)

You lose points...

...for not understanding science properly.

Sigh.

Re: You lose points...

I certainly don't understand it the way you do.

There is only one way...

...of understanding science.

But then again, for most people it is a business or a metaphysics, whereas what it really should be is a tool.

Sigh. My rant falls on deaf ears anyways.

Re

My rant falls on deaf ears anyways.

No, rather on ears that are capable of filtering signal from noise.

You are very cute.

Also, you have the whole name-calling thing down pat.

Congratulations, science should be proud.

Uhm.

Namecalling is fun, but the fact is that my point still holds.

I'm not railing against science in general, but there is an inherent design flaw in trying to merge science with capitalism.

Back on topic -- give me any journal that suits your fancy, and I'll show you the papers that are simply jargon filler.

Nature

I'm calling your bluff.

Huh?


The latest issue, and not a single article of scientific value.

Though perhaps you meant something else, so I'll give you the benefit of the doubt.

(Or do you seriously think that "Warming debate highlights poor data" and "Kansas backs lessons critical of evolution" are scientific papers?)

You win

I am obviously unequipped to counter an argument of such forceful persuasion. I mean, after looking at the link, I realized that everything I know is wrong, and that you are right. Thank you for correcting the grievous error of my ways.

Perhaps you remember saying something like this:

...and I'll show you the papers that are simply jargon filler...

Now maybe you're just too lazy to demonstrate how the articles you mentioned qualify as "jargon filler", or maybe I'm too stupid to realize that you implicitly defined the attributes that objectively lead any intelligent person to decide that an article is "jargon filler". But apparently I'm missing something. Since Nature is not one of the journals that generously provides its content free to the public, you have quite the challenge to prove your points without posting the text of the articles (assuming you even have a subscription). But you didn't mention any such caveats when you made your claim, did you?

Also, you seem not to be able to make a distinction between an article that reports news and an article that discusses research.

I'll restate.

Every single article in the issue, as far as I can tell, consists of "jargon filler". (Or worse, pop-science "news".)

If you disagree, I'd very much like to see the articles which you consider to be a significant scientific contribution.

Ehud is on Holidays

... but I have the feeling that this is somewhere around the point where Ehud would say, "Uhm, take it somewhere else..."

Agreed?

Yes, off-topic

Agreed, we're not going to get a productive discussion out of this, and as Neil Madden already pointed out, it's off-topic anyway.

Nature is an example infested with lice

Nature, like Scientific American, is a popular magazine. You won't find "new" or "ground-breaking" scientific articles in it, just stuff which appeals to the average academic.

And, yes, of course there are a lot of articles written which aren't "new" or "ground-breaking" in science. I personally don't find that to be dramatic news; what I do often find annoying is the way they are oversold. But yeah, that's Nature, right?

I find it immensely interesting

that some people will read a bunch of stuff that they don't understand and assume that the people who wrote it are idiots trying to pull a fast one.

I admit I didn't understand anything that was being discussed in volumes 201, 200, and 199 (or barely anything; 'Efficient instance retrieval with standard and relational path indexing' was not a total loss). However, interestingly enough, if I went all the way back to the articles in 1995 I could at least follow some of the articles and arguments presented. Maybe in another ten years what seems hopelessly jargon-filled now will still be a hard slog, but not impossible.

What about Moggi's paper then?

It's not part of the free bunch, but it is part of the journal's index: Notions of computation and monads. This should appeal to many of LtU's readers. By the way, in the old days, when it was called Information and Control, it did publish quite a bit of ground-breaking stuff---not that it doesn't anymore, just that the range of subjects has slightly changed and stopped including what I'm most interested in.

Access no longer open?

Does anyone know why free access to the 'Information and Computation' journal is no longer available? It isn't August 12, 2006 yet so I was expecting it to still be open, but apparently not.