Symbolic Logic:Analysis:Axiom Theorem Systems

From Knowino

Applying a learning technique (such as Symbolic Replacement) to the object model obtained by perception of the world yields rules.

These rules are only "probably true", and the exact probability is too hard to calculate. But the data compression that a rule provides is evidence for the rule.
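The compression-as-evidence idea can be sketched with a crude proxy: the compressed size of a data stream approximates its description length, and data that admits a short rule compresses well. This is only an illustration, using zlib's compressed size in place of a real description-length measure; the byte strings are hypothetical observations.

```python
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a crude proxy for description length (in bytes).
    return len(zlib.compress(data, 9))

# Hypothetical observations: one stream follows a short rule ("repeat AB"),
# the other has no such obvious rule.
regular = b"ABAB" * 64
irregular = b"AQZBXKCPWVMJTRGNLFYEDHSU" * 2

# The regular stream compresses far below its raw length: evidence that
# a short rule generates it.
print(description_length(regular), len(regular))
```

A stream whose description length is much smaller than its raw length supports the hypothesis that a simple rule produced it; a stream that barely compresses offers no such support.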

Once a rule has been formulated, further evidence is gathered for it, and its range of applicability is extended or refined.

In some situations rules may be idealized or abstracted from their real-world origin. In this case the rules are no longer directly related to the real-world situation, and the set of rules becomes of interest for its own sake. Such a set of rules becomes an axiom system.

The system searches for interesting theorems. A theorem is interesting if it is simple (it may be expressed with little data) or if its shortest proof is long. A theorem that is simpler than an existing axiom is a candidate to replace that axiom.
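One crude way to operationalize "simpler" is symbol count. The sketch below ranks hypothetical formula strings by length and flags any candidate theorem shorter than a current axiom as a candidate axiom; both the formulas and the complexity measure are illustrative assumptions, not the system's actual representation.

```python
def complexity(formula: str) -> int:
    # Symbol count (ignoring spaces) as a stand-in for description length.
    return len(formula.replace(" ", ""))

# Hypothetical current axiom and candidate theorems.
axiom = "(p -> (q -> p))"
candidates = ["(p -> p)", "((p -> q) -> ((q -> r) -> (p -> r)))"]

# A theorem simpler than an existing axiom is a candidate new axiom.
new_axioms = [f for f in candidates if complexity(f) < complexity(axiom)]
print(new_axioms)  # -> ['(p -> p)']
```

A real system would also weigh proof length: a short statement with only long proofs is interesting precisely because it packs a lot of deductive work into few symbols.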

The axioms and theorems may be arranged in order of increasing complexity. While searching for interesting theorems, the system also searches for proofs of more complex axioms from simpler ones. If such a proof is found, the axiom becomes a theorem.
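The promotion of an axiom to a theorem can be sketched with a toy deductive closure. Here formulas are either atoms (strings) or implications encoded as tuples, and the only inference rule is modus ponens; the specific axioms are hypothetical.

```python
def closure(axioms, max_rounds=10):
    # Bounded deductive closure under modus ponens:
    # from A and ("->", A, B), derive B.
    known = set(axioms)
    for _ in range(max_rounds):
        derived = {f[2] for f in known
                   if isinstance(f, tuple) and f[0] == "->" and f[1] in known}
        if derived <= known:
            break
        known |= derived
    return known

# Hypothetical simpler axioms, and a "more complex" axiom q.
simple_axioms = {"p", ("->", "p", "q")}

# q is derivable from the simpler axioms, so it is demoted to a theorem.
print("q" in closure(simple_axioms))  # -> True
```

In a full system the search for such proofs is open-ended; the round bound here merely keeps the toy closure terminating.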

The search also looks for proofs that the negation of a more complex axiom is true. The negation is implied when the axiom leads to a paradox, such as Russell's paradox. The detection of a paradox may lead to the refinement of an axiom to remove the paradox.

Failure to find a proof or a paradox does not prove consistency, because the search may never end. An independence proof shows that an axiom is independent of a core set of axioms: either the axiom or its negation may be added to the axiom set without creating a paradox.
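In the propositional case, independence can be checked by brute-force model search: an axiom is independent of a core set if both the core plus the axiom and the core plus its negation are satisfiable. The sketch below does this over two variables; the core axiom and candidate are hypothetical examples.

```python
from itertools import product

def satisfiable(formulas, variables=("p", "q")):
    # Brute-force model search over all truth assignments.
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(f(v) for f in formulas):
            return True
    return False

# Hypothetical core axiom (p or q) and candidate axiom (p),
# each encoded as a predicate on an assignment.
core = [lambda v: v["p"] or v["q"]]
axiom = lambda v: v["p"]

# Independent: both the axiom and its negation are consistent with the core.
independent = (satisfiable(core + [axiom])
               and satisfiable(core + [lambda v: not axiom(v)]))
print(independent)  # -> True
```

Beyond propositional logic no such exhaustive check exists; independence proofs there require constructing models, as with the parallel postulate or the continuum hypothesis.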

The ideal axiom system has independent axioms.

Self-reference, and the limitations on axiom systems

Clearly, proofs about the consistency and independence of axioms are difficult. But the situation is worse than that: Gödel proved that any consistent axiom system strong enough to express arithmetic contains true statements that are not provable within the system. What this actually means is quite technical.

The layman's version is that any statement that refers to itself is suspect. Unfortunately, this self-reference may be hidden.

An intelligence must be flexible enough to deal with problems that are unsolvable. It cannot allocate all its resources to solving one problem, and must use heuristics to measure the progress and value of each task.
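Resource-bounded search of this kind can be sketched as interleaved tasks with a progress heuristic: each task is run a little at a time, and tasks whose measured progress stalls are abandoned rather than starving the others. The task names, generators, and threshold below are all illustrative assumptions.

```python
def solve_all(tasks, budget=10, min_progress=0.01):
    # Round-robin over tasks (name -> generator yielding a progress score);
    # drop any task that finishes or whose progress falls below threshold.
    active = dict(tasks)
    for _ in range(budget):
        for name, gen in list(active.items()):
            progress = next(gen, None)
            if progress is None or progress < min_progress:
                del active[name]  # finished, or judged not worth pursuing
    return active

def steady():       # hypothetical solver that keeps making progress
    while True:
        yield 1.0

def stalled():      # hypothetical solver whose progress dries up at once
    yield 0.0

remaining = solve_all({"good": steady(), "dead": stalled()})
print(list(remaining))  # -> ['good']
```

The point is not the scheduler itself but the shape of the policy: no single unsolvable problem, such as an endless proof search, can consume the whole budget.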
