22C:096
Computation, Information, and Description

Department of Computer Science
The University of Iowa

Lecture Notes




Last modified: 20 January 1997


Introduction

Motivation: what's the point?

There are three pretty good reasons to study the theory of computation:

  1. so you can do theory yourself;
  2. so you can apply the theory to computing practice;
  3. so you can understand the structure of real computation better.

I am teaching this course because there is also one great reason to study the theory of computation: Many natural and mental phenomena that have nothing to do with modern high-speed automatic electronic digital computers (to focus attention on the highly contingent nature of these thingumajigs, I'll call them ``MHSAEDC''s for a while) are best understood as computations. Some of these phenomena seemed so mysterious a few decades ago that they were the subject of intense metaphysical investigations by philosophers, and mystic evasions by those whose less philosophical activities depended on computational phenomena. Many excellent mathematicians, for example, were intensely puzzled by what Eugene Wigner called the ``unreasonable effectiveness of mathematics'' in the physical sciences, and treated it as an unexplainable happy circumstance.

Given enough centuries, philosophers presumably would have figured out the nature of computation by pure cogitation. But the invention of MHSAEDCs attracted so much practical attention to computation that we learned its basic structure in the years from about 1935 to 1950, the period in which Alan Turing performed his research on computing. And practical experience with MHSAEDCs has allowed many people, even people with no explicit philosophical aspirations, to develop an excellent intuition for the nature of computation. The concrete existence of explicit computation by MHSAEDCs, encountered every day of our lives, makes it much easier to accept the abstract concepts that determine the structure of computation hidden in other parts of our experience.

In a purely logical development of our topic, the word ``computation'' would be much less prominent than it is in these notes. I use ``computation'' as my hook phrase, because it is the most concretely familiar among the many that refer to our topic. In the title of the course, I mention ``Computation, Information, and Description.'' These are not so much three different topics as three different approaches to studying the same basic stuff. I mention them in that particular order because it corresponds to the most likely order in which we will discuss the terms, and because the abbreviation ``CID'' is the code for the Cedar Rapids airport. The real stuff at the bottom of the course is systems of discrete symbols, intended to carry information, and manipulated according to unambiguous rules that refer only to the symbols and not to the things that they stand for. In mathematical logic, this stuff is called Formal Systems. Other words and phrases that refer to our stuff include ``derivation,'' ``formal proof,'' ``calculation,'' ``information processing,'' ``symbol manipulation,'' ``calculus'' (as a generic concept, not the differential and integral calculus in particular), ``formal grammar,'' ``automaton,'' and ``syntactic system.'' If you think of more such words and phrases, please post them to the class online discussion.
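
To make the notion of a formal system concrete, here is a small sketch (my own illustration, using Hofstadter's well-known MIU system rather than anything developed later in these notes). Notice that the rules mention only the shapes of the symbol strings, never anything the symbols might stand for:

    def steps(s):
        """All strings derivable from s by one application of an MIU rule."""
        out = set()
        if s.endswith("I"):
            out.add(s + "U")                      # rule 1: xI -> xIU
        if s.startswith("M"):
            out.add(s + s[1:])                    # rule 2: Mx -> Mxx
        for i in range(len(s) - 2):
            if s[i:i + 3] == "III":
                out.add(s[:i] + "U" + s[i + 3:])  # rule 3: xIIIy -> xUy
        for i in range(len(s) - 1):
            if s[i:i + 2] == "UU":
                out.add(s[:i] + s[i + 2:])        # rule 4: xUUy -> xy
        return out

    # Generate every string derivable from the axiom "MI" in at most four steps.
    theorems = {"MI"}
    frontier = {"MI"}
    for _ in range(4):
        frontier = set().union(*(steps(s) for s in frontier)) - theorems
        theorems |= frontier
    print(sorted(theorems, key=len)[:10])   # a few of the shortest derivable strings

The derivable strings are the ``theorems'' of the system, and a record of the rule applications that produced one is exactly a formal proof.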

Outline

Foundations and philosophy

After familiarizing ourselves with actual computations, we will investigate the philosophical foundations of the concept of computation. Although computations occur in nature, the concept of computation is the result of centuries of evolutionary design by people who were mostly not aware that they were members of a standards committee. Nonetheless, these centuries of mostly subconscious design effort led in the late 1930s to a very precise consensus on the meaning of computation. It is arguably the most precise consensus on the definition of an intuitive concept that has ever been achieved. We will understand the concept of computation as if it were the product of a briefer conscious design effort, whose goal was to represent complex and powerful behavior by systems whose operation is as objective, certain, and reliable as possible. Many qualities of computation, particularly in the formal systems of mathematics, seem a lot less mysterious when viewed as the fruit of this design effort. In particular, the ``effectiveness of mathematics in the physical sciences'' seems a lot less ``unreasonable,'' and a lot more the result of successful and well directed effort over many centuries.

The search for objective, certain, and reliable facts is central to the design of computation, and we will relate it to Descartes's search for certainty in all knowledge, and Hilbert's somewhat less presumptuous search for certainty regarding the basic correctness of all mathematics. We will find that there are absolutely inherent limits to what we may accomplish with computational certainty, and that although the limits are liberal enough for lots of practical computation, they prohibit the security that Hilbert sought. We will also discover that, while computing systems have many interesting inherent structural qualities, some of the ones that seem most natural in practical computing are not at all objective, and that philosophers and scientists have often erred in basing fundamental definitions on nonobjective computational distinctions.

Probability

With our understanding of the basic concepts of computation and its philosophical grounding, we will go on to look for phenomena that are natural, or at least not designed with computational issues in mind, that turn out to be essentially computational. The familiar concept of probability, which pervades applied science as well as gambling, is remarkably slippery to define. What does it really mean that a potential event has probability 0.163517? The best known mathematical explanations of probability explain how the probabilities of complex events depend on those of primitive events, but they merely postulate a probability measure for all primitive events, and give no insight into the genesis of that measure. Description theory suggests a new foundation for probability, in which the probability of an event is determined by the complexity of its description.
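
As a hedged preview of that suggestion (this is the standard Solomonoff-Levin formulation of algorithmic probability, one precise version of the idea rather than necessarily the one we will adopt), the probability assigned to a finite symbol string x is

    \[
        \mathbf{m}(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}
    \]

where U is a fixed universal prefix machine, p ranges over its programs, and |p| is the length of p in bits (the prefix condition guarantees, by the Kraft inequality, that the sum converges). Up to a constant factor, m(x) behaves like 2^{-K(x)}, where K(x) is the length of the shortest description of x, so simply describable events receive high probability and incompressible ones receive low probability.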

Genetics

The genetic code is a programming language, run on a computer that makes random errors called ``mutations.'' This insight leads immediately to new sorts of questions about genetics. None of these questions has been answered definitively so far, but we will explore the latest speculations from the Artificial Life community on the qualities of a programming system that allow interesting evolution. We will also see that Kolmogorov's and Chaitin's description theory provides a plausible way to discuss issues such as when a particular mass of material should be considered an organism.
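
As a toy illustration of this view (a sketch in the spirit of Dawkins's well-known ``weasel'' demonstration, offered as my own example rather than a model from the course), here is a loop in which a string plays the role of a genome, copying errors play the role of mutations, and survival of the fittest copy drives the evolution:

    import random

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    TARGET = "METHINKS IT IS LIKE A WEASEL"   # a stand-in fitness criterion

    def mutate(genome, rate=0.02):
        """Copy a genome, making a random error at each position with probability rate."""
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in genome)

    def fitness(genome):
        """Count positions agreeing with TARGET; a crude stand-in for reproductive success."""
        return sum(a == b for a, b in zip(genome, TARGET))

    # Evolve a random initial genome by repeated imperfect copying plus selection.
    genome = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while fitness(genome) < len(TARGET):
        offspring = [mutate(genome) for _ in range(100)]
        genome = max(offspring + [genome], key=fitness)  # selection keeps the fittest
        generation += 1
    print("converged after", generation, "generations:", genome)

The interesting questions arise, of course, when there is no fixed TARGET and fitness emerges from interactions among the programs themselves.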

Physics

I've left physics to last because I need to study some more before I can express some of the interesting points. We will at least mention the well-known applications of information theory to thermodynamics, and the possibilities of improving these applications through description theory. We will also at least sketch an understanding of the definition of chaos through description theory. Finally, we will investigate a proposal for quantum computing that links computational complexity theory to physics. Depending on the results of experiments, quantum computing proposals may lead to more efficient computing techniques, or they may lead to new information about quantum physics described in computational terms.


Maintained by Michael J. O'Donnell, email: odonnell@cs.uiowa.edu