Sunday, January 20, 2013

The "Science" in Computer Science

When I was an undergraduate, it was common for us to view the "science" in Computer Science as an oxymoron. The evidence was that all the "real" sciences had names like "Physics", "Chemistry", "Astronomy", and "Biology", while we were lumped in with "Political Science", "Social Science", and of course, "Military Science". Many took the position that Computer Science should be considered a branch of mathematics, while those of us who were liberal arts majors (I count myself and Jonathan Blow as the major advocates of this) considered Software Engineering a branch of the literary arts: the goal was ultimately to produce code that was easy for humans to read, since machines didn't care what your code looked like.

If you read Thomas Kuhn's The Structure of Scientific Revolutions, however, there is a sense in which Computer Science is a science. Consider the construction of a program to be a sociological construction of a theory about how best to approach a problem. You start out with version 1, which solves some portion of a problem. Later on, as the problem is better understood through the lens of your theory (i.e., your users start using your program and start providing you with feedback), you tinker with your theory to make it better fit the evidence (user feedback or market feedback). As a result, your program becomes more complicated and your program's structure (theory) starts to show its datedness. When things come to a head, however, you either refactor or rewrite all the offending crufty code, throwing it away and replacing it with a new program (theory) that accommodates all the evidence to date. This is analogous to relativity supplanting Newtonian physics. Note that the analogy even holds here: old versions of your program continue to work, but the newer program (better theory) is more elegant, and fits better with the problem space. If your rewrite fails, the result is less useful than the previous version and society refuses to adopt your new program. For instance, Vista was not widely adopted and most users stayed on Windows XP instead.

There's even space in there for unit tests and systems tests: those tests are the empirical experiments by which you attempt to prove that your theory (program) works. In effect, when writing tests, you're trying to prove that your theory about how the problem should be solved is wrong. If you have the resources, you might even want such experiments to be written and run by a third party, so they have no cognitive biases with which to approach the problem.
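To make the analogy concrete, here's a minimal sketch in Python: the sorting routine stands in for a "theory" (the routine and the test cases are my hypothetical choices, not anything from a particular codebase), and each test case is an experiment attempting to refute it.

```python
def insertion_sort(items):
    """The 'theory': this routine claims to sort any list of comparables."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements right to make room for the key.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

# The 'experiments': each case tries to find evidence that the theory is
# wrong. Edge cases (empty input, duplicates, reverse order) are where a
# refutation is most likely to hide.
for case in [[], [1], [3, 1, 2], [2, 2, 1], [5, 4, 3, 2, 1]]:
    assert insertion_sort(case) == sorted(case), f"theory refuted by {case}"
```

If any assertion fires, the theory has been falsified and the program must be revised; if they all pass, the theory has merely survived another round of experiments.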

Obviously, this view of software engineering as actually "doing science" can only be carried so far, but I find it to be an interesting analogy, and would be interested in hearing your thoughts.