Friday, December 23, 2005
Cognitive-Theoretic Model of the Universe
This is a theory everyone should check out. It can be used as an isomorphic or correspondence theory for any and all fields of thought and ideas. I urge you to read the entire 65-page PDF thoroughly, several times, before drawing premature conclusions. Being highly educated myself, I find it to be the most serious advance in tautological thought in history. I am not steering you wrong. Check it out at: C.T.M.U.
All material at this website © 1998-2005 by Christopher Michael Langan
The Cognitive-Theoretic Model of the Universe:
A New Kind of Reality Theory
Christopher Michael Langan
Paper published September 2002 in Progress in Complexity, Information, and Design, the journal of the International Society for Complexity, Information, and Design.
Introduction
Among the most exciting recent developments in science are Complexity Theory, the theory of self-organizing systems, and the modern incarnation of Intelligent Design Theory, which investigates the deep relationship between self-organization and evolutionary biology in a scientific context not preemptively closed to teleological causation. Bucking the traditional physical reductionism of the hard sciences, complexity theory has given rise to a new trend, informational reductionism, which holds that the basis of reality is not matter and energy, but information. Unfortunately, this new form of reductionism is as problematic as the old one. As mathematician David Berlinski writes regarding the material and informational aspects of DNA: “We quite know what DNA is: it is a macromolecule and so a material object. We quite know what it achieves: apparently everything. Are the two sides of this equation in balance?” More generally, Berlinski observes that since the information embodied in a string of DNA or protein cannot affect the material dynamic of reality without being read by a material transducer, information is meaningless without matter.
The relationship between physical and informational reductionism is a telling one, for it directly mirrors Cartesian mind-matter dualism, the source of several centuries of philosophical and scientific controversy regarding the nature of deep reality. As long as matter and information remain separate, with specialists treating one as primary while tacitly relegating the other to secondary status, dualism remains in effect. To this extent, history is merely repeating itself; where mind and matter once vied with each other for primary status, concrete matter now vies with abstract information abstractly representing matter and its extended relationships. But while the formal abstractness and concrete descriptiveness of information seem to make it a worthy compromise between mind and matter, Berlinski’s comment demonstrates its inadequacy as a conceptual substitute. What is now required is thus what has been required all along: a conceptual framework in which the relationship between mind and matter, cognition and information, is made explicit. This framework must not only permit the completion of the gradual ongoing dissolution of the Cartesian mind-matter divider, but the construction of a footworthy logical bridge across the resulting explanatory gap.
Mathematically, the theoretical framework of Intelligent Design consists of certain definitive principles governing the application of complexity and probability to the analysis of two key attributes of evolutionary phenomena, irreducible complexity and specified complexity. On one hand, because the mathematics of probability must be causally interpreted to be scientifically meaningful, and because probabilities are therefore expressly relativized to specific causal scenarios, it is difficult to assign definite probabilities to evolutionary states in any model not supporting the detailed reconstruction and analysis of specific causal pathways. On the other hand, positing the “absolute improbability” of an evolutionary state ultimately entails the specification of an absolute (intrinsic global) model with respect to which absolute probabilistic deviations can be determined. A little reflection suffices to inform us of some of its properties: it must be rationally derivable from a priori principles and essentially tautological in nature, it must on some level identify matter and information, and it must eliminate the explanatory gap between the mental and physical aspects of reality. Furthermore, in keeping with the name of that to be modeled, it must meaningfully incorporate the intelligence and design concepts, describing the universe as an intelligently self-designed, self-organizing system.
How is this to be done? In a word, with language. This does not mean merely that language should be used as a tool to analyze reality, for this has already been done countless times with varying degrees of success. Nor does it mean that reality should be regarded as a machine language running in some kind of vast computer. It means using language as a mathematical paradigm unto itself. Of all mathematical structures, language is the most general, powerful and necessary. Not only is every formal or working theory of science and mathematics by definition a language, but science and mathematics in whole and in sum are languages. Everything that can be described or conceived, including every structure or process or law, is isomorphic to a description or definition and therefore qualifies as a language, and every sentient creature constantly affirms the linguistic structure of nature by exploiting syntactic isomorphism to perceive, conceptualize and refer to it. Even cognition and perception are languages based on what Kant might have called “phenomenal syntax”. With logic and mathematics counted among its most fundamental syntactic ingredients, language defines the very structure of information. This is more than an empirical truth; it is a rational and scientific necessity.
Of particular interest to natural scientists is the fact that the laws of nature are a language. To some extent, nature is regular; the basic patterns or general aspects of structure in terms of which it is apprehended, whether or not they have been categorically identified, are its “laws”. The existence of these laws is given by the stability of perception. Because these repetitive patterns or universal laws simultaneously describe multiple instances or states of nature, they can be regarded as distributed “instructions” from which self-instantiations of nature cannot deviate; thus, they form a “control language” through which nature regulates its self-instantiations. This control language is not of the usual kind, for it is somehow built into the very fabric of reality and seems to override the known limitations of formal systems. Moreover, it is profoundly reflexive and self-contained with respect to configuration, execution and read-write operations. Only the few and the daring have been willing to consider how this might work: to ask where in reality the laws might reside, how they might be expressed and implemented, why and how they came to be, and how their consistency and universality are maintained. Although these questions are clearly of great scientific interest, science alone is logically inadequate to answer them; a new explanatory framework is required. This paper describes what the author considers to be the most promising framework in the simplest and most direct terms possible.
[more...]