Computing is a Natural Science

In another thread the idea of computing as a "natural science" came up. I decided to see if I could find anything and came up with this from "Communications of the ACM". The idea has more currency than I realized.
Edit: also referenced in the paper: Great Principles of Computing.


More books on computing as natural science

Two books on the subject:

The Computational Beauty of Nature, by Gary Flake, MIT Press, 1998.

An Introduction to General Systems Thinking, by Gerald Weinberg, originally published in 1975.

Computing as a natural science is also true in another sense: succeeding in building large programs (millions of lines) that work means that we have uncovered some true principles of how to build complex systems. In that sense, programming languages are not just arbitrary constructions of the human mind but touch on reality. Designing programming languages is also studying nature.

I wonder whether a

I wonder whether a naturalistic or systems-theoretic metaphor is already sufficient to qualify computing as a "natural science"? If anything indicates a proximity between computing science and the natural sciences, I'd rather expect a shift in the methodological foundations of research towards the hypothetico-deductive method. But I don't expect that to become very widespread, because CS research can rely on the assumption of total knowledge or a complete specification: the model of a system and the system are identical. There is no logical gap like the one between a mathematical hypothesis and the experimental data from nature that confirms or rejects it. In idealist mainstream research there is only a gap between the spec and the implementation, and the hope that it will be closed some day.

Metaphors

Is natural science the metaphor or is software the metaphor for natural science? A better understanding of natural science might lead to better software. For example what if we really understood commonsense or natural language? Would that help us write better software?

Programming languages of artificial brains

I have no idea. Would transhuman artificial brains that handle natural language and are instantly conscious improve our software? I guess so, but I don't believe this would be our main concern.

Maybe I'll start recovering interest in AI and ALife when I notice they move beyond algorithms for searching, inference, clustering, and optimization. There might also be interesting new tendencies in distributed artificial agents, but I don't care as long as they don't enter the hype cycle and there is at least some indication that this research goes somewhere.

Finally there are the self-healing and self-repair ideas of Richard Gabriel et al. This might be closer to what you have in mind when you talk about the resurrection of cybernetics. I consider the Conscientious Software paper more a brainstorming session juggling with concepts than a consistent theory, but it is inspiring.

The System

Well, basically the reason for posting this article is that I heartily agree with the thesis: "Computing is a Natural Science". In other words, it is not about a particular medium such as software, hardware, or biology, to mention only a few. It is more fundamental, and the article does a good job of explaining that. It is also closely related to the older notions of systems theory, formerly known as cybernetics. The cybernetic era was a multimedia age. There were mechanical computers sitting alongside electro-mechanical computers next to electrical computers. Software had not been invented yet. Computing actually was seen as something separate from the medium, often referred to as "the system". I worked on a project once that started off as analog but was converted to digital by the end. We had no problem with these medium shifts.

Today almost everything is software. I know this is vague, but I get the feeling that we are losing sight of "the system". Does any of this make sense?

A better understanding of

A better understanding of natural science might lead to better software.

An old attempt at this sort of link is Kornfeld and Hewitt's Scientific Community Metaphor.

Computing is the vanguard of systems science

Does any of this make sense?

Yes, I think it does. See slide 7 in the talk Reflections on Self Management in Software Design. It shows a figure that I think is a good summary of the point you are making.

Cybernetic Software Engineering

O.K. I see. You are working on the next-big-thing.

Now it's up to me to make sense of it. Are you working on new chapters of CTM or a separate introduction to Cybernetic Software Engineering?

Programming with feedback loops

Actually, this work might lead to a new book in a few years, when we understand it better. I think that any software that is fragile (written at a large scale or on a failure-prone distributed system, or even any software written without decent testing) has to live inside feedback loops. We have been getting away with writing software on the edge of a knife because hardware is reliable and because there's always a human in the loop to fix things. But we can't rely on these crutches forever.

I think that this ties in neatly to computing as a natural science, where fragility is a normal state of affairs that has to be taken care of, and where feedback loops are very frequent. Our "artificial" software development will become more "natural".
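As a toy illustration of the feedback-loop idea (my own sketch, not taken from any real system): even the humble retry loop is a feedback loop, observing an outcome and taking corrective action until the goal is reached. The flaky operation here is hypothetical:

```python
import random

random.seed(1)  # deterministic for the example

def flaky_fetch():
    # Hypothetical fragile operation: fails roughly 70% of the time.
    if random.random() < 0.7:
        raise ConnectionError("transient failure")
    return "payload"

def supervised(operation, max_attempts=10):
    """A trivial feedback loop: observe each outcome and, on failure,
    take corrective action (retry) until the goal is reached."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            continue  # corrective action: go around the loop again
    raise RuntimeError("goal not reached within the attempt budget")

print(supervised(flaky_fetch))  # succeeds despite individual failures
```

The software "on the edge of a knife" is `flaky_fetch` alone; wrapping it in the loop is what makes the composite robust.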

Science is reusable

This discussion actually began over here with a long discussion of code reuse. The conclusion was that none of these reuse schemes were really successful and the answer could be found in the "natural science of computing" with indirect allusions to cybernetics. "Code" reuse will always fail because code is about a particular medium (or machine type). Only "systems" (ie science) in the abstract sense are reusable because they are independent of implementation. A solution that is sure to make no one happy, but at least we won't need to change any code.
edit: The link was wrong there at first. Fixed it.

Towards definitions

"Code" reuse will always fail because code is about a particular medium (or machine type). Only "systems" (ie science) in the abstract sense are reusable because they are independent of implementation.

The terminology here is too vague — an example of a system that is independent of implementation would help.

The sense of the word "system" that seems relevant to me here is of a set of definitions and rules (speaking generally) which give meaning to descriptions given in terms of that system. This definition is intended to be general enough to cover formal/axiomatic systems as well as scientific theories, legal systems, natural languages, and many even less well-defined systems.

Such systems can't escape the fact that their artifacts are "about a particular medium". (By artifacts, I mean anything which can be defined within a system, which in some cases may be called "code".) Whether the system is an Intel CPU, the Java VM, the lambda calculus, quantum mechanics, the Italian language, EU Law, or literary criticism, the system itself is a medium that cannot be escaped unscathed by the artifacts of that system. Any artifact that works within one system is not going to be reusable in some other system, without conversion.

By the same token, "independent of implementation" is a fuzzy phrase. In practice, it usually means that you have a system whose artifacts can be more or less easily translated into other systems that are considered more about "implementation", usually because they're closer to physical machines. This can be a useful phrase when discussing, say, a high-level language or a virtual machine, but more ambitious aims require a more precise definition.

In any case, computing science uses mathematical models to describe the systems it studies, taking the same basic approach as used in many other sciences. These appeals to natural science as holding some sort of unspecified solutions need more substance if they are to be at all convincing.

Definitions

The paper "No Names, Just Software Reuse" seems to end in an unanswered question, basically: what is the relation between software reuse and computing as a natural science? I was just supplying the "obvious" answer that they didn't provide.

In any case, computing science uses mathematical models to describe the systems it studies, taking the same basic approach as used in many other sciences.

Agreed. And the best, most modern version of systems theory available is published as theoretical computer science. See for example J. Rutten, Bart Jacobs, or Alexander Kurz, to mention only a few.

These appeals to natural science as holding some sort of unspecified solutions need more substance if they are to be at all convincing.

What they are trying to do is broaden out "computer science" so that it is "about" more than code or register machines.

But this is only my interpretation, and I don't speak for this movement!

Already there...

What they are trying to do is broaden out "computer science" so that it is "about" more than code or register machines.

Does anyone who actually knows anything about CS think it is nothing more than code and register machines?

That's like asking us to believe that some people think chemistry is only about beakers and Bunsen burners.

For something completely the same...

I was just reading an interview with Joel Spolsky in the July/August ACM Queue where he says [regarding recruiting],

...I don't have that capability that used to exist that would weed people out of computer science if they couldn't make it through learning how to program in Scheme and doing data structures with pointers in Pascal. Now when I ask somebody to demonstrate knowledge of recursion in an interview, they may fail to be able to do that, not because they're dumb or because they can't do recursion, but because they've never been taught recursion.

I'm not going to say much about pointers, but recursion is so fundamental to computer science as a field that I'm left to wonder what the foggy heck is going on up there in N'York.

The terminology here is too

The terminology here is too vague — an example of a system that is independent of implementation would help.

The answer to this is obvious from systems theory but I recognize that there is a lot going on here so I will answer for the sake of discussion in case anyone wants to.

The answer to this is any properly defined "system". A system in systems theory is a pure mathematical abstraction. Sets of differential equations are often used in cybernetics as well as automata. Such a thing in itself is simply math and has only its mathematical properties. We give the system meaning by mapping or morphing it onto some implementing device. There are many implementations of a given system. A system is independent of implementation by design or construction. This is closely related to the concept of a state space.
Edit: A system is independent of implementation because it represents "all" the possible implementations. A definition does involve rules or laws of a domain in addition to the math. Equations of constraint may form the transformation into a particular implementation. Please don't try this at home.
Another Edit: "The system" is really a definition of a domain. To say it is "pure" math is an exaggeration. Implementations are specific things in the domain.
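To make the system-versus-implementation distinction concrete, here is a toy sketch (purely illustrative, my own invention): one abstract system, a parity automaton given as a transition function, realized by two implementations in different "media":

```python
# The "system": a two-state parity automaton given purely as math,
# i.e. a state set and a transition function.
STATES = {"even", "odd"}
DELTA = {("even", 1): "odd", ("odd", 1): "even",
         ("even", 0): "even", ("odd", 0): "odd"}

# Implementation 1: drive the abstract transition table directly.
def parity_table(bits):
    state = "even"
    for b in bits:
        state = DELTA[(state, b)]
    return state

# Implementation 2: the same system realized in a different "medium"
# (boolean arithmetic), with no transition table in sight.
def parity_xor(bits):
    acc = 0
    for b in bits:
        acc ^= b
    return "odd" if acc else "even"

# Both devices implement the one abstract system.
assert parity_table([1, 0, 1, 1]) == parity_xor([1, 0, 1, 1]) == "odd"
```

The abstract automaton is what gets reused; each function is one "mapping onto an implementing device".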

Reuse

Thanks for the response. These definitions don't resolve the inconsistency that was bugging me, but they might help to focus on the problem. To take a prominent example, I assume the lambda calculus would qualify as a system by these definitions. One can write "code" in the lambda calculus. But it's not clear how such code fits into the statement that I responded to:

"Code" reuse will always fail because code is about a particular medium (or machine type). Only "systems" (ie science) in the abstract sense are reusable because they are independent of implementation.

Afaict, the only medium which lambda calculus code is "about", in the sense used in the quote, is the lambda calculus itself. Both the lambda calculus and its code fit the definition that it "is simply math and has only its mathematical properties. We [can] give the system meaning by mapping or morphing it onto some implementing device." (I added "can" because we can also give the system meaning by mapping it onto another well-defined mathematical abstraction, as done in PL semantics.)

By this definition, lambda calculus code is independent of implementation (unless the point is that it's dependent on the lambda calculus, which is true of the artifacts of any system). Being independent of implementation, is it therefore exempt from the mentioned reuse limitation ("code reuse will always fail")?

Further, even machine code is an abstraction which can be defined formally and have many possible implementations. It's possible to build various implementations of a lambda calculus CPU just as it's possible to build various implementations of a SPARC CPU. In this context, there's no significant difference between SPARC machine code and lambda calculus, and that's what Turing equivalence is about: an equivalence between systems.

The problem of reuse in this sense was solved in the 1950s, with the invention of compilers and interpreters for high-level languages, which automatically translate code between systems. The reuse problem that's still significant today is at least as significant within a single system as it is between systems. That's not to say that something like the "natural science of composed software" proposed in the reuse paper might not eventually help with that, but that seems unrelated to whether code is about a particular medium, or whether only systems are reusable.
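To illustrate that reuse-by-translation, here is a minimal sketch (illustrative only; the term encoding is an ad hoc assumption) of carrying lambda-calculus "code" onto another system via an interpreter:

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def evaluate(term, env=None):
    """Interpret a lambda-calculus term as a Python value."""
    env = env or {}
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, name, body = term
        return lambda arg: evaluate(body, {**env, name: arg})
    _, fun, arg = term  # "app"
    return evaluate(fun, env)(evaluate(arg, env))

# Church numeral two: \f. \x. f (f x), an artifact of the lambda
# calculus "system", now usable against ordinary Python functions.
TWO = ("lam", "f", ("lam", "x",
       ("app", ("var", "f"), ("app", ("var", "f"), ("var", "x")))))

print(evaluate(TWO)(lambda n: n + 3)(0))  # f applied twice: prints 6
```

The lambda term itself never mentions Python; the interpreter is the automatic translation between systems.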

Domains

By this definition, lambda calculus code is independent of implementation (unless the point is that it's dependent on the lambda calculus, which is true of the artifacts of any system). Being independent of implementation, is it therefore exempt from the mentioned reuse limitation ("code reuse will always fail")?

Agreed. Lambda calculus is a domain similar to automata theory. In fact there seems to be an easy conversion. A system in automata theory based on input/output machines is already in a functional form, but the functions might be closures containing state. Most of the closures can be reduced by SP partition to single-state partitions, or simple functions. I don't know enough lambda calculus to know what happens to the leftover closures.

The reason the word "code" is used in lambda calculus and not "algebra" probably has to do with the fact that it is a "calculus" and not an algebra. The reuse problem in the paper is about implementation. Call it what you will: code, whatever.

Does any of this help?
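For what it's worth, here is a toy sketch of the "input/output machine in functional form" idea: a Mealy-style machine written as a closure that contains its state. The machine itself is invented for illustration:

```python
def make_toggler():
    """A tiny input/output machine in functional form: the returned
    function is a closure whose captured dict holds the state."""
    state = {"on": False}
    def step(inp):
        if inp == "press":
            state["on"] = not state["on"]
        return "light on" if state["on"] else "light off"
    return step

machine = make_toggler()
print(machine("press"))  # light on
print(machine("noop"))   # light on  (state persists inside the closure)
print(machine("press"))  # light off
```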

Languages are programs too

Agreed. Lambda calculus is a domain similar to automata theory. In fact there seems to be an easy conversion.

That's true. But this can be taken further: the same is true of almost any high-level programming language. Theoretically, it's even true for low-level languages, as I mentioned in the grandparent comment. So where does this leave us in terms of the dependency on implementation and the relationship to reuse?

...what happens to the left over closures?

See this paper for a nice treatment of the conversion from a functional form to an abstract machine, including the necessary closure conversion and defunctionalization. The paper deals specifically with converting evaluators (pointing out that traditional functional evaluators map directly to well-known abstract machines), but the same approach can be applied to any program.
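A stripped-down sketch of the defunctionalization step that such a conversion involves (my example, far simpler than the paper's evaluators): the closure's captured environment becomes explicit data, and a single first-order apply function dispatches on it:

```python
# Higher-order original: the result is a closure capturing n.
def adder(n):
    return lambda x: x + n

# Defunctionalized version: the captured environment becomes a plain
# data record, and one first-order function interprets such records.
def make_adder(n):
    return ("ADDER", n)

def apply_fn(fn, arg):
    tag, n = fn
    if tag == "ADDER":
        return arg + n
    raise ValueError("unknown function tag: " + tag)

# Both roads lead to the same answer, but no function values remain
# in the defunctionalized version.
assert adder(5)(2) == apply_fn(make_adder(5), 2) == 7
```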

The reason the word "code" is used in lambda calculus

The word "code" is only usually used in the context of applied and extended lambda calculi, such as Haskell, ML or Scheme. However, an expression in lambda calculus (or a term, etc.) amounts to the same thing.

The reuse problem in the paper is about implementation. Call it what you will, code, whatever.

Does any of this help?

Part of the problem I'm having is that not only are programming languages "systems" by just about any definition, so are programs. And a programming language can either be implemented as a program or, as in the case of denotational semantics, mathematically defined in terms of a kind of functional program. So to draw a distinction somewhere (the exact nature of which I'm not yet seeing) and to say that reuse fails in some of these cases, but not others, doesn't make sense to me, without much more specific criteria and reasons for the differences.

Of course, one can say that programming languages are more reusable than most programs, almost by definition based on their respective purposes. However, unless that's an argument in favor of DSLs (in which case I won't argue ;) it doesn't seem to lead to any particularly promising solutions to the problems of reuse.

Theory or practice

Part of the problem I'm having is that not only are programming languages "systems" by just about any definition, so are programs.

This might be another context problem. In my experience a "system" is some sort of mathematical model. If I were standing in front of some machine and used the word "system", I would be referring to the model of the machine, not the machine itself. There is always a difference; mathematical theories never match reality exactly. I don't equate a running program in its environment with its theory. That would apply to Scheme or Haskell, which have good theories, or to C, which needs a theory to give it structure.

So to draw a distinction somewhere (the exact nature of which I'm not yet seeing)

The distinction I want to draw is between theory and practice.

So where does this leave us in terms of the dependency on implementation and the relationship to reuse?

The reuse of implementations has little to do with theory or correctness. The reuse paper was mostly about practical, environmental issues, as much as the "code of the implementation" itself. But also the relation of the code to the environment and its ultimate use.

Pax

In my experience a "system" is some sort of mathematical model. If I were standing in front of some machine and used the word "system" I would be referring to the model of the machine not the machine itself.

I don't see how what I wrote is at odds with this. Programs are certainly mathematical models (some more explicitly so than others), and are also systems, even in a systems theory sense — as we've discussed, you can map a program directly to an automaton.

Programming languages are therefore a general mechanism for defining systems, but they themselves are defined in the same way that programs are defined, i.e. are themselves programs and systems. These relationships hold true whether you're interested in the models or in their expression on physical machines, i.e. theory or practice.

In any case, we don't seem to be converging much, so I'll leave it at that.

Soft-ware

Well, it is interesting that this "argument" seems to come down to the distinction between theory and practice. Science as defined by Galileo introduces a distinction between theory and practice. This was a radical idea at the time, and Galileo spent a lot of time trying to explain it. See Morris Kline, "Mathematical Thought from Ancient to Modern Times", Chapter 16, "The Mathematization of Science".

In hard-ware it is easy to separate the theory from the practice, but the idea of soft-ware fuzzes out the boundary between theory and practice. A Haskell program is almost as much theory as the category theory that inspires it. A C program is more distant from any theory, but still soft. But this comes back to the issue of the reuse paper and the notion of a natural science. Even software is a "real" thing in the sense that reality is introduced when it is used, reused, or composed into an existing system (implemented). This is the point where extraneous, outside effects not included in the theory are introduced, and perhaps the point where software becomes a natural science.

Science as defined by

Science as defined by Galileo introduces a distinction between theory and practice.

??????????????????????????????????????????????????

Galileo was the first experimental scientist ever. He tore down the distinction between theory and practice. After Galileo you had to become a technician (manipulate nature) in order to do sound theory.

?????????

This means I don't need to edit my original post.

Galileo was the first experimental scientist ever.

Yes, he was both a mathematician and an experimenter. Newton's laws never work exactly in practice. When Galileo dropped objects off the tower of Pisa they didn't always fall at the same speed. A feather fell a lot slower. This was a big problem for Galileo. In order to get to the theory he had to be able to weed out extraneous effects like air resistance. A big step indeed. A good theory unifies a domain. The theory is always right about its domain. A useful theory is close enough to reality that the deviations in practice are small, or can be seen as exceptions requiring a more precise theory.

In terms of computer science, "the system" or theory should be broad enough to "unify" and to always be "right" as far as it goes. It seems to be a matter of good abstraction; Newton's laws can be combined with other laws to handle the case of the falling feather.
????????????????
Right.
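The feather example can even be played out numerically. The sketch below Euler-integrates a fall with and without a significant linear drag term; all parameters are made-up illustrative values, not measurements:

```python
def fall_time(mass, k, height=50.0, g=9.81, dt=0.001):
    """Euler-integrate a fall under gravity with linear drag k*v.
    With k near 0 this is the pure theory; a large k/mass ratio is
    the extraneous effect Galileo had to weed out."""
    v = y = t = 0.0
    while y < height:
        a = g - (k / mass) * v  # Newton's second law plus a drag law
        v += a * dt
        y += v * dt
        t += dt
    return t

cannonball = fall_time(mass=5.0, k=0.001)  # drag is negligible
feather = fall_time(mass=0.005, k=0.01)    # drag dominates
print(round(cannonball, 1), "s vs", round(feather, 1), "s")
```

Same law of gravity in both runs; combining it with a drag law is what accounts for the feather.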

A bit more "clarification".

A bit more "clarification". I am always surprised at what I can find on Wikipedia. Here is a list of "systems theories". The wiki on "systems thinking" is consistent with the Kline reference I mentioned. My expression "independence of implementation" is equivalent to the notion of the separation of theory and practice, a basic notion in systems theory. Edit: The separation of theory and practice is not some willful ignorance of one or the other, but a proactive process of abstraction.

of course, the real reason

of course, the real reason why computing is a natural science is because we're nothing but a simulation running inside a computing device... :)

Are we a simulation?

You have to be very careful when jumping to this conclusion. It's just too easy to step over infinities.

are we not?

What is reality?

A simulation is a limited, rough, domain-specific modelling of real processes. We know it's a simulation because we've built it and know it's rough compared to the real thing. But what makes us sure our "reality" is not a bunch of loose artificial constraints not actually matching a larger "reality"?

well, that's turtles all the way down, so we'd better stop by now because I agree with you it doesn't lead anywhere anyway...

Old News...

Logical positivism was debunked quite some time ago.

Please explain

Please explain what you mean and what the connection is.

you can not know very much a

you can not know very much

a simulation can be known

ergo, a simulation is not very much

Is mathematics a natural science?

Don't natural sciences have to have some nature in them? What is the distinction between natural sciences and the other sciences?

Maybe to a Platonist ...

If you think of mathematical concepts and proofs existing independently, waiting to be discovered, then maybe mathematics is a natural science. I don't think of it this way myself. If by "natural science" one means "empirical science" in the usual sense, then I'm not sure I buy CS as a natural science either.

I think of my colleagues in chemistry and biology and how different their methodology is. A lot of their time is taken up with gathering data: relatively straightforward, time-consuming but rote work that (to some extent) can be off-loaded to undergraduate "research" workers.

This is rather different from a lot of CS research, especially e.g., theoretical work or theory-informed design work in programming languages. I suppose that one could view compiler implementation as "rote" once a language design is done, but frankly it's often not that cut and dried, it can feed back into the design, and it isn't exactly empirical (just more applied). Perhaps writing lots of programs in some language until one is inspired to create a new language feature could count as empirical, but to me (at least) the act of designing a new feature feels rather different than "discovering" a law of nature. (Plus, I don't think people go about it this way in actual practice, sending bunches of undergrads off into the lab to program, then doing statistical analysis on the results until a new language feature suggests itself.)

Of course, depending on your particular brand of philosophy of science, "discovering" a law of nature might be seen more as a creative act (I am not a philosopher of science). Nevertheless, philosophy aside, there seem to be significant differences in actual methodology.

Some parts of CS do more resemble the natural sciences

And some resemble things like psychology. :)

PLT is highly mathematical in its nature--much of what we do *is* math, especially the parts where we exclude syntax. (Which is why syntax is so fun to argue about). On the other hand, there are many parts of CS which do involve experimentation and measurement, followed by analysis of the results--much performance analysis of complex hardware/software systems (where algorithmic analysis of performance is likely intractable) is in this category. Of course, the grunt work in this sort of CS research is mostly done by computers themselves, rather than by lab assistants, but still... computational systems can exhibit highly complex emergent behavior. Perhaps not as complex as biological systems (the behavior of which we don't know how to fully derive from the underlying chemistry; which we in turn don't know how to fully derive from the underlying quantum mechanics), but it is there.

Computing is not the same as mathematics

There is a fundamental difference between computing and mathematics: computer programs run on real machines. That is the connection with reality. That is why computing is a natural science and mathematics is not. Mathematics is (arguably!) something different: it is the science of clear thinking (as a famous mathematician once said). Mathematics does not depend on an embodiment in the real world.

The discipline of computing has theoretical branches and empirical branches, just like any other natural science. Computing has its theorizing and its data gathering. After designing and proving an algorithm correct, we implement it and measure it. Sometimes the data gathering is hard, especially in software engineering, where some branches are more akin to social science than to physics.

the act of designing a new feature feels rather different than "discovering" a law of nature

Both are creative acts whose results must pass the test of reality to survive.

Irrelevant

computer programs run on real machines.

The difference is that in a natural science, the goal is to find and use a formal system that matches nature, but in computing the goal is to build a machine in nature that matches a formal system. These are entirely different goals. In practical terms, if your formal system fails to match nature, there's something wrong with your formal system. In computing if your formal system fails to match nature, there's something wrong with the machine. There is such a big difference in emphasis between these two things that I'm at a loss to know how anyone can confuse them.

(Of course there are phenomena like "the web" which mix both formal and natural phenomena and span both computer science and natural science.)

Science and engineering

The two directions you describe are science (discovering the laws) and engineering (applying the laws to build artifacts). Computing clearly has both of these directions. Just because we are developing technology, though, is no reason to denigrate the other side of computing, which is scientifically studying information processing. Computing is not the only science that has a big technology-driven part. Astronomy, for example, also depends heavily on technology. Computing, like physics, has a synergy with its technology side: the science enables new technology, which in turn allows the science to advance. That does not make them lesser sciences.

CS encompasses both of your

CS encompasses both of your categories. Consider region inference: it is known that "optimal" region inference is not possible given current techniques, but there are various region optimizations that approach optimality in some cases, but fail in others. In this case, the formal notion of optimality is the "nature" being explored, but our techniques are not yet sophisticated enough to achieve it in the general case; all we have are pieces of the puzzle which sometimes don't fit together.

To me, this corresponds closely to the development of physics theories; some region inference approaches (the UHL and TT systems) fit together about as well as general relativity and quantum theory do.

I think ultimately that science is a process, not a category. CS uses that process in many branches [1], while other branches more closely resemble mathematics [2]. I think CS has fingers spread into science, mathematics and engineering. Perhaps it should be broken up along those branches to provide more focus to students, and we are seeing that happen to a certain extent, ie. the emergence of software engineering courses and standards, vs. more theory-oriented course loads.

[1] garbage collection, and other memory management techniques for instance
[2] cryptography, compression, randomness, etc.

Measurement

Do we really measure the algorithm, or the compiler + the OS + the hardware? After all, the algorithm is designed and proven for a particular abstract machine, and there's nothing to measure. If you have an emulator of the abstract machine, then you can run the algorithm for various inputs. But, that is just double-checking the proof.

In the real world, with concrete machines, compilers, and OSes, we again double-check the proof (sort of; we also submit bug reports against the compiler and/or hardware) and also measure how much the compiler + OS + hardware preserve the algorithm's properties. So, what we're really after is whether the concrete system is an acceptable instance of the abstract machine.

I don't think CS deserves the S.
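The point about measuring the whole stack can be made concrete with a toy experiment (my sketch): the abstract machine predicts a step count, a property of the algorithm alone, while wall-clock time measures algorithm plus interpreter plus OS plus hardware together:

```python
import time

def binary_search(xs, target):
    """Binary search instrumented with an abstract step count."""
    steps, lo, hi = 0, 0, len(xs)
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo, steps

xs = list(range(1_000_000))
t0 = time.perf_counter()
pos, steps = binary_search(xs, 999_999)
elapsed = time.perf_counter() - t0

print(steps)    # 20: predicted by the model, machine-independent
print(elapsed)  # varies run to run; it measures the whole stack
```

Re-running changes `elapsed` but never `steps`, which is the sense in which only the first number belongs to the algorithm.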

What we're really after...

in most cases, is not verifying that the concrete system is an acceptable instance of the abstract machine.

What practicing programmers (and their bosses) really want to know is whether or not the system (concrete or abstract) is an acceptable instance of the user's requirements.

Given that user requirements are often under-specified (or ill-specified, or not at all), this is a Very Hard Problem. Much of PLT research is designed to assist here--to make it easier to a) translate user requirements to a system, b) verify that the resulting system, abstract or otherwise, is consistent, and c) make sure that it agrees with the user requirements. Oh, and d) allow someone else who may come along to inspect and mutate the system as needed (and verify that their changes are also consistent, and congruent to possibly-revised user requirements).

To put it another way--when my programs don't work as intended, it's usually my fault--not the compiler's. :)

There's a fair bit of science involved in all of this--along with some math (if one excludes non-empirical things like math and logic from the realm of science), and a fair bit of human factors and economics--the sort of things that drive scientists batty due to their resistance to experimental analysis. And practical PL development--just like practical OS development or practical word processor development--involves a fair bit of engineering as well, given that the result is frequently a creative artifact, rather than simply expanded knowledge.

So what is CS properly called? I really don't know, but it's interesting nonetheless.

Who is the user in CS?

Or, by analogy, who is the user in biology or physics? There is no such entity. Once you're in the business of selling software, drugs or cameras to people, then the user enters the picture. Then, the human process you described that involves underspecified or ill-specified requirements, pointy haired bosses, marketing, etc. enter the picture.

However, CS grads aren't formally trained in those aspects, nor are biologists or physicists, or even engineers. We don't have scientific theories about clueless users or pointy haired bosses. A lot of work may have gone into improving requirements gathering and validation, but what are our theories about them?

What is the most famous theory (i.e. known by every practitioner) of CS that involved a hypothesis about the nature of CS, collection of data and a conclusion that stood the test of time? Ok, the field is very young, but if it already became a "field" without theories, then it's not a scientific field imho.

What is the most famous

What is the most famous theory (i.e. known by every practitioner) of CS that involved a hypothesis about the nature of CS, collection of data and a conclusion that stood the test of time?

Turing machines and Turing completeness.

Church-Turing Thesis

We seem to disagree on terminology.

Church-Turing Thesis

Church-Turing Thesis

I conjecture that Turing's thesis is more well-known than Church's thesis, or their equivalence. I understood the Turing machine before I understood the lambda calculus, and I hear everyday techie-people talking about "Turing complete" more than I hear them talk about recursive computable functions (even being an LTU member which biases me). Thus, the Turing thesis is computer science's E=mc^2.
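For concreteness, here is a deliberately minimal sketch of the "device" being discussed: a single-tape Turing machine simulator. The transition-table encoding is one common textbook convention, not anything canonical, and the bit-flipping machine is just an illustrative example.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    with move in {"L", "R"}. The machine halts when it reaches a
    (state, symbol) pair with no transition; the tape contents are
    returned, with surrounding blanks stripped.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: halt
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    else:
        raise RuntimeError("step budget exhausted (maybe non-halting)")
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A machine that flips every bit of its input, then halts on blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}
print(run_tm(flipper, "10110"))  # -> 01001
```

The `max_steps` budget is the practical face of the issues discussed below: no simulator can decide in general whether an arbitrary machine will ever halt, so a concrete implementation has to give up after some finite number of steps.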

We seem to disagree on terminology.

I don't follow you. If you mean the names, I used Turing machines and Turing completeness because most techies don't use the formal name, just the names of the "device" and the "result", respectively.

The name is thesis for a reason

Besides differences in usage of the word theory in science and mathematics, Turing's machines are not a statement about computation in all of nature (without also becoming a statement about physics at least). Primarily, Turing's machines are about human abstract thought, depending on the mathematical axioms of set theory. It's about logic and philosophy.

Besides differences in usage

Besides differences in usage of the word theory in science and mathematics, Turing's machines are not a statement about computation in all of nature (without also becoming a statement about physics at least). Primarily, Turing's machines are about human abstract thought, depending on the mathematical axioms of set theory. It's about logic and philosophy.

It is also a statement about nature since it defines the limits of classical computing machines. It has been falsified to a certain extent (or perhaps has yet to be?) by quantum computing, which, in the current majority opinion, transcends classical limits.

QC

My understanding is that quantum computing can do some things faster but can't do "more" than classical computers. You can't solve the Halting Problem with quantum computing.

My understanding is that

My understanding is that quantum computing can do some things faster but can't do "more" than classical computers.

Given current understanding this is correct (although there is speculation on hypercomputation). But would you consider a computation that would take more than the lifetime of the universe on a classical machine as "doable"? The set of realizable computations on classical machines is a strict subset of the realizable computations on quantum machines, assuming the latter can be scaled of course, which is the real stickler. The following interesting article on the physicality of the Church-Turing thesis is relevant, and is a good indication of how CS research can influence other sciences.

I don't see how CS cannot be considered a science, even by strict definitions, even though some activities within CS do not necessarily need to gather empirical data (though in reality, they eventually do require empirical data to ensure physical realizability).

Consider the wise words of Knuth:

"I have only proven this program correct; I have not tested it."

Or however it went. :)

Yes, CS is not just mathematics, but ...

Can we agree to restrict our attention to language design, as informed by theory, i.e. roughly the realm of PLT? (This seems fair given the venue: to some extent disciplinary boundaries at any level are artificial, e.g., between chemistry and physics, so this restriction is probably as fair as any).

Consider now something like the concept of a monad (in PLT) versus, say, the inverse square law for gravity in physics. Of course the invention (or discovery) of each of these is a creative act, but I don't think of the monad as being a refutable fact about how computation naturally "is". It may or may not be useful, expedient, revealing, etc., but it doesn't seem like it could be "false" or even inaccurate (and subject to refinement a la relativity, etc.) in the same way the gravitational law could be.

When somebody then refines or extends monads to get monad transformers, or to involve a co-product style for more flexibility, I still don't think of those refinements as constituting a better fit with the underlying reality. Rather they are (or may be) better tools for building systems, systems which are not necessarily models of reality. (Is a good sorting algorithm a model of reality? A nifty graph traversal? A dynamic programming technique?).
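For readers who haven't met the construct being discussed, here is a deliberately minimal sketch of a monad (in Python rather than Haskell; the names `Maybe`, `unit`, and `bind` follow the usual convention and are illustrative, not any particular library's API). It makes the point above concrete: this is a useful tool for composing partial computations, but it is hard to see what it would even mean for it to be "false" in the way a physical law could be.

```python
class Maybe:
    """A minimal Maybe monad: a computation that may produce no value."""
    def __init__(self, value, present=True):
        self.value, self.present = value, present

    @staticmethod
    def unit(value):   # 'return' in Haskell terms: wrap a plain value
        return Maybe(value)

    @staticmethod
    def nothing():     # the failure case, carrying no value
        return Maybe(None, present=False)

    def bind(self, f): # >>= : thread the value through f, or skip f entirely
        return f(self.value) if self.present else self

def safe_div(x, y):
    return Maybe.nothing() if y == 0 else Maybe.unit(x / y)

# Chaining: a failure anywhere short-circuits the rest of the pipeline.
ok = Maybe.unit(12).bind(lambda v: safe_div(v, 3)).bind(lambda v: safe_div(v, 2))
bad = Maybe.unit(12).bind(lambda v: safe_div(v, 0)).bind(lambda v: safe_div(v, 2))
print(ok.present, ok.value)  # True 2.0
print(bad.present)           # False
```

One can judge this design as more or less convenient for structuring error handling, but there is no experiment that could refute it, which is exactly the contrast with the inverse square law drawn above.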

And we do make empirical measurements to see if a given design, feature or algorithm is efficient, but that's rather different from measuring whether a physical theory models reality accurately. We might even make more qualitative measurements, e.g., in terms of code clarity, expressive conciseness or breadth of applicability. But I wouldn't be inclined to reject a language construct as "untrue" if it didn't rise to the occasion of such tests.

Mind you, I don't intend any criticism of PLT (or CS/Informatics/whatever) by virtue of these arguments. It just doesn't seem to have the same kind of goals, criteria or methodology as empirical science. Of course, it's also different from poetry, literary and film criticism, and lots of other things. Engineering might be a closer fit (though I'm sure my engineering friends would blanch), but even engineering operates under different kinds of constraints, physical rather than logical, and I think that makes a big difference.

Science

The distinction is that real sciences don't have the word "science" in their title.

For example, "physics", "biology" and "chemistry" are real natural sciences. In contrast, "computer science", "social science" and "creation science" are knock-offs.

Informatics

The claim that computer science is not a science because it has 'science' in its name is just a sound bite. Better terms for computer science are informatics and computing. In French, the term informatique has been the name of the discipline since its early years (at least since 1962).

Computer science is a science by any measure

and any argument otherwise strikes me as specious.

"Social science" doesn't refer to a single discipline, but rather to a collection of them. Many practitioners and researchers in the social sciences *do* practice the scientific method in their work; however social sciences by their nature are studying humans (or complex systems of humans); and humans can be difficult research subjects--who are subject to phenomena like the placebo effect, who alter their behavior when someone is watching, who refuse to participate altogether, or who may actually lie to investigators.

Creation science, I'll agree--but that's widely acknowledged to be repackaged theology, designed to maneuver around the church/state separation mandated in US constitutional law.

Besides--suffixes like "-ic(s)" (physics, economics), "-try" (chemistry, psychiatry), "-sophy" (philosophy), and "-ology" (biology, psychology) all refer to a body of knowledge; there are questionable or pseudoscientific disciplines ending in these suffixes as well; e.g., chiropractic, astrology, anthroposophy.

But mathematics?

But is mathematics a science at all? Computer science and mathematics share the 'imaginariness' of their objects.

if it's not a science, God

if it's not a science, God helps us all! :)

Both math and computing are auxiliary tools in other sciences, and if they cannot pass the tests of the scientific method, those sciences are screwed in their results and predictions.

deep thoughts

May I take a stab?

The article linked in the post says something very simple and obviously true but that people only recognized within the past few decades: *things with behavior* that are composed of many small parts, all parts behaving according to some simple rules-for-parts, will, when the rules reach some threshold of complexity and initial conditions are favorable, compute. Things that we can say about computing in general are also things that we can say about the behavior of *stuff* in general. The linked article makes the weak claim that the computational view often produces insights. Wolfram's New Kind of Science (as I understand it) makes much stronger claims that the computational view is essential in various ways.

Mathematics is a natural science in that it is essentially conversational -- it is very much human centric. We "do math" by making progressions of formal statements to one another. Our statements are judged in part by their mechanical conformance to axioms of inference but there is more to the conversation there. From preceding statements in a mathematical conversation, by the rules of inference alone, there tend to be infinities of possible next statements. When we "do math" I must pick one -- or you must guess the one I would next pick -- and we agree that the next statement "follows what we are talking about." What is that "extra" structure beyond just the inference rules? That question has a good answer:

The "sense" of, say, a mathematical proof is gained not from inference rules alone, but by "imagining", internally, what each step means and what the trajectory of the proof is pointing at. How do we imagine? We build complex structures out of essentially high-level sensory perceptions: some basic geometric intuitions, some basic intuitions about limited forms of counting, some basic intuitions about continuous motion, limits, and other spatial transforms, etc. That is, we exploit our "built-in" cognitive capacity to process sensory input and apply it to "imaginings" of a complex landscape described by the mathematical proof. The landscape that we imagine "puns" -- it describes two things at once. On the one hand, it is a cognitive structure on which we can organize memory and planning -- for example, the features of the landscape half-way through a proof can be associated in our memory with rules of thumb (like, "try using induction for the next step"). On the other hand, the imagined landscape has to be objective enough that when reified as a proof, it provokes a similar response in others -- so the landscape has to actually point to objective properties of the game-tree of "possible next inference rule to apply". [This point of view on the nature of mathematics is *adapted* (so blame me but praise them) from the book "Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being" by George Lakoff and Rafael E. Núñez.]

A cybernetic understanding of computation is alive and well as part of the discipline of network design (the discipline responsible for planning, provisioning, building, maintaining, and extending the components of a computer network). Pushes such as "Software as a Service" show that.

Another place that a cybernetic understanding is alive and well is, for example, in the work on sensor nets going on at the University of California, Berkeley. (Berkeley also has the RAD project which could be described as a cybernetic approach to a certain class of hard network design problems.)

Computer science is the empirical study of purpose built computing devices, yielding general ideas about how they may be configured (and empirical results about what to expect) as well as ideas about how to improve such devices.

Software engineering is the professional art of configuring computing devices.

Programming language design is an example of the craft of advancing the practice of software engineering.

Programming language theory is the computer science characterization of the design space for programming languages. (The empirical component of PLT is not often emphasized but I think it is there.)

To ask if there is a "Platonic Ideal Plane" to reality on which "mathematical objects" purely exist is to commit a category error. The illusion of an ideal plane is assured by the cognitive capacities we use to imagine abstract landscapes while "doing math". The ontological question of that plane is a linguistic error [c.f. Wittgenstein].

Finally, it is worth mentioning that there is no "forward march of progress" in any of these fields (maths, cs, sweng, etc.). Examined in retrospect, it is all fits and starts, and interruptions, and stalls -- a lurching, chaotic kind of progress. Are the timings of the abrupt breakthroughs or long lulls random? Recall from the discussion of math that in these conversations, we pick and choose our statements with reference to our internally imagined landscapes. Those landscapes include, sure, the abstract world of math but imagination is continuous -- the same landscape, over various horizons, leads to our imaginings of what our job is supposed to be, or of office politics, or of... Our whole social being is entangled in math, not in the formal constructs, but in our *choices* of formal constructs at each point in history. Examination can reveal that those choices can often be traced to broader social structure considerations of how we behave in general, what we collectively agree to treat as the truth, and how we materially manipulate and are manipulated by our social environment. (So, always remember to stop imagining math abstractions from time to time, and look around where you are.) [*adapted* from Foucault]

-t

Where Mathematics Comes From

I also mentioned this book about a year and a half ago here.

I don't have much else to say other than that and what I said in the linked to comment.

Being a contrarian

I want to say that computer science is actually a branch of Logic, rather than a natural science or part of mathematics. (That means that we fall under the philosophy department on most campuses).

Classical logic is your grandpa's formalism

Surely computer science is more like a branch of mathematical logic? That's usually either in the math department, or even associated with the computer science department.

Philosophy gets a bit of a raw deal because as soon as it results in anything really concrete, it tends to get moved out into another department. That's where many of the sciences came from ("natural philosophy").

Formal science, natural computation

It would complement this discussion to ask to what extent computer science is a formal science. My answer, which I have the idea I've given here on LtU before, is that computer science is in the first place a natural science, since it studies the natural phenomenon of computation, but is at the same time largely a formal science, because of the massive and fundamental, though incomplete in both directions, ties between computation and formalisation.

Some points to provoke thought:

  1. The impressive Grzegorz Rozenberg has been pushing the line that it makes sense to study natural and formal computation side by side, where the science of natural computation is trying to understand computational phenomena in biology. I am very keen on this avenue of thought; I think it has potential to help make computer science scientific again... His essay The Nature of Computation and Computation in Nature is a good place to start.
  2. Alan Turing was, as is widely known, very interested in these questions, particularly in trying to understand how cells organised themselves during mitosis.
  3. Biological systems are especially interesting to study from the point of view of the science of computing machinery because of the complex information and goal-directed behaviour there, but there is foundational interest in studying physics-as-computation. David Deutsch's work on the relation between computation and quantum mechanics is well known; Robin Gandy and John Tucker's work examining the nature of mechanical computation in classical mechanics deserves wider appreciation.