Subject: Re: Question on Information Theory
Newsgroups: talk.origins
Date: February 1, 2001
Author: Mark C. Chu-Carroll
Message-ID: m3g0hy5ris.fsf@taliesin.watson.ibm.com
woodian@my-deja.com writes:
> I'm trying to get a handle on where creationists and "intelligent
> design-ists" are trying to go with this "information theory and design"
> crap.
>
> Are there any good online resources that would give me a handle on what
> information theory is about (not too technical but not too dumbed-down,
> either!)?
They're actually talking about two distinct notions of information theory: Shannon's information theory, and Kolmogorov-Chaitin information theory.
For a nice description of Kolmogorov-Chaitin, I suggest The Limits of Mathematics by Greg Chaitin.
For Shannon's info theory, take a look at http://oak.cats.ohiou.edu/~sk260695/skinfo.html, which has a lot of links.
The creationist trick is to mix the two theories in a nonsensical way to create a false conclusion.
Shannon's theory deals with communication. He was working for Bell Labs, and his fundamental interest was in communication over noisy links. (The foundational paper was his 1948 "A Mathematical Theory of Communication"; the follow-up "Communication in the Presence of Noise" tackled the noise problem directly.) In Shannon theory, entropy is the randomness introduced by noise: communication over a noisy channel always adds entropy, but it can never add information, because the point is to correctly transmit information from a source to a destination. Noise along the channel cannot add to the information content, because by definition the only information is what the transmitter provided, and anything that happens during transmission can, at best, leave that information intact.
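To make that concrete, here's a minimal Python sketch (the constant-zero message and the 10% flip rate are just illustrative choices of mine): a source that emits a perfectly predictable stream has zero entropy; a noisy channel raises the entropy of what's received, but every added bit of that entropy is noise, not information about the message.

    import math
    import random
    from collections import Counter

    def entropy_bits(seq):
        # Shannon entropy, in bits per symbol, of seq's empirical distribution.
        counts = Counter(seq)
        n = len(seq)
        return sum(-(c / n) * math.log2(c / n) for c in counts.values())

    random.seed(1)
    sent = [0] * 10000  # a perfectly predictable source: zero entropy
    # Simulate a binary symmetric channel that flips each bit with p = 0.1.
    received = [b ^ 1 if random.random() < 0.1 else b for b in sent]

    print(entropy_bits(sent))      # 0.0 bits: no uncertainty at the source
    print(entropy_bits(received))  # ~0.47 bits: the channel added entropy,
                                   # yet none of it says anything about the
                                   # message -- it's pure noise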
Shannon theory is thus the root of the creationist claim that "randomness cannot add information to a system".
Kolmogorov-Chaitin information theory is a totally different topic (and the one I know better than Shannon's). K-C is a version of information theory that grew out of computer science, and it studies what it calls the information content of a string. In K-C information theory, the information content of a string is defined in terms of the string's randomness: a string with lots of redundancy has low information content, while the more random a string is, the less redundancy it has, and thus the more information each bit of it carries. K-C is also interesting in that it counts the size of the "decoding machine" used to interpret a string as part of the measure of that string's information content. And K-C has its own definition of entropy as a measure of information content: entropy measures the randomness of a string, and thus the information content of that string.
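A standard way to get a feel for this (a sketch, not a real K-C computation, since true K-C complexity is uncomputable): use the compressed size of a string as a computable upper bound on its information content. A redundant string compresses to almost nothing; a random one barely compresses at all.

    import random
    import string
    import zlib

    def kc_proxy(s):
        # Compressed size in bytes: a computable upper bound on the
        # K-C information content of s (the true measure is uncomputable,
        # so compression is the usual stand-in).
        return len(zlib.compress(s.encode()))

    random.seed(2)
    redundant = "ab" * 500  # 1000 characters, almost all redundancy
    scrambled = "".join(random.choice(string.ascii_lowercase)
                        for _ in range(1000))

    print(kc_proxy(redundant))  # tiny (roughly 20 bytes): the pattern
                                # compresses away
    print(kc_proxy(scrambled))  # large (roughly 600 bytes): little
                                # redundancy to exploit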
K-C information theory is absolutely fascinating, and it has been put to a wide range of interesting uses. Greg Chaitin has used it as a tool to study some very deep properties of mathematics; theoretical computer scientists have used it to analyze the intrinsic algorithmic complexity of computable problems; and it has been used to discuss the information content of DNA (where the information content is determined not solely by the gene sequence, but also by the machinery that processes it).
The creationist trick is to pretend that the term "entropy" means the same thing in Shannon and K-C information theory. If it did, you could measure the information content of DNA in K-C terms and then argue, on the basis of Shannon theory, that the information content of the DNA can never increase.
The flaw here is actually pretty subtle. K-C says nothing about how information content can change; it only tells you how to measure information content, and what information content means in a mathematical/computational sense. Shannon, on the other hand, works in a deliberately limited setting where there is a specific, predetermined upper bound on information content: the received message can never carry more information than the transmitter put into it. K-C, by definition, has no such upper bound.
Adding randomness to a system adds noise. By Shannon theory, that means the information content of the system decreases. But by K-C theory, the addition of randomness will likely increase the information content. K-C allows noise to increase information content; Shannon doesn't. Mix the two and you get nonsense, but nonsense that can be dressed up to look very deep and very dazzling to people who aren't trained in either form of information theory.
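You can watch the two notions pull in opposite directions in one small experiment (again just a sketch; the message and the 5% corruption rate are arbitrary choices of mine): corrupt a redundant message with random noise, and its compressed size, the K-C proxy, goes up, even though in Shannon's terms the channel has only degraded the transmission.

    import random
    import zlib

    random.seed(3)
    original = b"to be or not to be, that is the question. " * 25

    def corrupt(data, p=0.05):
        # A noisy channel: each byte is replaced by a random byte
        # with probability p.
        return bytes(random.randrange(256) if random.random() < p else b
                     for b in data)

    noisy = corrupt(original)
    print(len(zlib.compress(original)))  # small: redundant text
                                         # compresses well
    print(len(zlib.compress(noisy)))     # larger: the noise *raised*
                                         # the K-C measure
    # Shannon's verdict: the channel degraded the message.
    # K-C's verdict: the string now takes more bits to describe.
    # Both are right, because "information" names different things
    # in the two theories.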
-Mark
-- "There's nothing I like better than the sound of a banjo, unless of course it's the sound of a chicken caught in a vacuum cleaner." Mark Craig Chu-Carroll (mcc@watson.ibm.com) IBM T.J. Watson Research Center