Are we, as far as our thinking processes go, nothing but 'Computers made of Meat'? Is it possible that Scientific Laws will be discovered that totally explain human behaviour? Lucas' Theorem, due to the great Oxford Philosopher John Lucas FBA, explains why both these hypotheses are impossible.
Note for the rigorous/pedantic: if by computers you mean 'any entity that can do a computation' then of course we are, in that rather vacuous sense, computers. (My mother was actually employed as a "computer" in the Admiralty Research Lab in the 50s.) What Lucas' Theorem shows is that, unlike artificial computers (or anything equivalent to a Turing Machine), we are not automata.
But whilst equations like 3x + 5 = 7 can be solved by rational numbers (x = 2/3), others proved more intractable. Consider x^2 = 2. x = 7/5 is fairly close: 49/25 is only slightly less than 2, and 141/100 is better still. Clearly, by using larger and larger denominators you can get closer and closer to the answer.
Then a pesky mathematician showed that, however closely you can approximate the answer by rational numbers, you can never get there. For suppose there were a rational number a/b such that (a/b)^2 = 2. Then there must be a way of expressing the fraction in its lowest terms, as c/d say, where c and d have no common factor. But if (c/d)^2 = 2, then c^2 = 2d^2, and thus c^2 is an even number. Since the square of any odd number is odd, c must be an even number: 2f say. Hence (2f)^2 = 2d^2, so 4f^2 = 2d^2, so d^2 = 2f^2, so d^2 is even, and hence d is even: 2g say. But then c and d share the common factor 2, which is impossible because c/d was a fraction in its lowest terms. Hence there is no rational number which solves the equation x^2 = 2.
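The proof above can be cross-checked mechanically for small cases. The following sketch (Python; the search bound of 200 is an arbitrary choice of ours, whereas the proof covers all fractions) confirms that no fraction in lowest terms with denominator up to 200 squares to exactly 2, while the approximations mentioned above come close:

```python
from fractions import Fraction
from math import gcd

def no_rational_sqrt2(max_den=200):
    """Search for a fraction a/b in lowest terms with (a/b)^2 == 2."""
    for b in range(1, max_den + 1):
        for a in range(1, 2 * b + 1):   # sqrt(2) < 2, so a <= 2b suffices
            if gcd(a, b) == 1 and Fraction(a, b) ** 2 == 2:
                return (a, b)           # a counterexample (never found)
    return None

# The approximations mentioned in the text get close but never reach 2:
print(Fraction(7, 5) ** 2)      # 49/25
print(Fraction(141, 100) ** 2)  # 19881/10000
print(no_rational_sqrt2())      # None
```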
Note 1: it is not enough to say "ah well, let's take f/g as the new solution", because the proof can also be applied to the 'new fraction' f/g. Because this proof applies to any possible fraction, it shows that no fraction has the required property. Changing the fraction halfway through to 'avoid the contradiction' does not work.
It is said that the first mathematician to prove this was put to death for his un-orthodoxy, and it was only many years later that other mathematicians brought the result to the attention of the educated public, so that it became an undoubted result.
Now all the 'arguments' for scientific determinism and "the brain is a computer" go along the lines of "of course we don't have anything like a complete theory yet, but we are continuing to make advances and so eventually we will have the full picture". You might have thought that the advocates of this line, who are all (presumably) aware of the above proof, must realise that this argument is completely fallacious, even if the gullible public to whom they offer their pronouncements does not. But in fact, Lucas' Theorem shows that, not only are the 'arguments' for Scientific Determinism and The Brain is a Computer fallacious, but the hypotheses are logically impossible. (In addition, it is now known that even if the components of the brain were completely deterministic, which is far from certain, the brain as a whole would not be. See here for details.)
John Lucas FBA was an Oxford Philosopher until his retirement in the 1990s. In the early days of Artificial Intelligence he proved a general result of fundamental importance to any discussion of the distinction between computers and human beings, announced first in a paper and then in his book The Freedom of the Will (OUP). Computer scientists have been making great strides in the ability of machines to do calculations which mimic the results of the human intellect. Decision making, calculation, symbolic manipulation, speech synthesis, expert systems, neural networks and the like provide closer and closer approximations to given aspects of human behaviour. Similarly, much trumpeted advances in neuroscience provide increasingly deep understanding of the mechanisms of the brain. Surely it is "obvious" that eventually computers will be able to mimic human behaviour totally, and that scientists will be able to explain all aspects of the brain, and hence the mind? Far from being uniquely made in God's image, humans are 'nothing but' computers made of meat?
Define a Deterministic Logical System as a set S of (at least two) states, a set I of inputs, and a logical system L which uniquely determines, given a state s in S and some inputs i in I, what the next state s' of the system will be.
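A Deterministic Logical System in this sense can be sketched directly. The class and the two-state example below are our own illustrative rendering (the names are invented, not from the text):

```python
# A minimal concrete Deterministic Logical System: states S, inputs I, and
# a transition rule giving a *unique* next state for each (state, input).
class DeterministicLogicalSystem:
    def __init__(self, states, inputs, rule):
        self.states, self.inputs, self.rule = states, inputs, rule

    def step(self, s, i):
        assert s in self.states and i in self.inputs
        return self.rule[(s, i)]   # uniquely determined next state

# Example: a two-state toggle (the definition requires at least two states).
toggle = DeterministicLogicalSystem(
    states={"on", "off"},
    inputs={"flip", "hold"},
    rule={("on", "flip"): "off", ("off", "flip"): "on",
          ("on", "hold"): "on", ("off", "hold"): "off"},
)
print(toggle.step("on", "flip"))   # off
```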
Lucas' Theorem (informal statement): There is a non-empty set of human beings HFree such that, for any member h of HFree, there does not exist a Deterministic Logical System which would accurately predict all h's actions in all circumstances.
Proof: Let HLog be the non-empty set of mathematical logicians capable of understanding Godel's theorem (see Defn 7 for a formal definition), and let h be a member of HLog. Suppose there exists a specific Deterministic Logical System (S1,I1,L1) which would accurately predict all h's actions in all circumstances. Since h can do elementary arithmetic, L1 must be rich enough to contain elementary arithmetic, and thus by Godel's theorem there exists a proposition GL1 in L1 whose interpretation is "GL1 cannot be decided by L1". Consider now the circumstance where h (while rational) is asked: "is GL1 true?". h will rationally answer "yes", by reasoning along the following lines: "L1 cannot be contradictory, because if L1 were contradictory then (since in a contradictory logical system, if you can prove P you can also prove not-P) the next state of the system would not be uniquely defined. But if GL1 were false, then it would mean that "GL1 cannot be decided by L1" can be decided by L1, which is a contradiction. Hence GL1 is true." However, by definition, L1 cannot decide GL1 and thus the system will answer "don't know". QED.
Note 2: Some people try to avoid this proof by arguing "you've just given me a procedure that allows me to program a computer to answer appropriately", and so all that would be necessary would be to extend L1 with this procedure (making L2, say). This is exactly analogous to the logical mistake explained in Note 1. Because this proof applies to any possible Deterministic Logical System, it shows that no Deterministic Logical System has the required property. Changing the Deterministic Logical System halfway through to avoid the contradiction does not work.
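This point has a well-known computational analogue in diagonalisation. The sketch below is our illustration, not Lucas' proof: any proposed predictor of an agent's answers can be defeated by an agent that consults the predictor and does the opposite, and 'fixing' the predictor simply invites the same move again:

```python
# Diagonal move: given any predictor claiming to say what answer the agent
# will give, construct an agent that answers the opposite.
def defeat(predictor):
    def contrarian(question):
        return not predictor(contrarian, question)
    return contrarian

predictor1 = lambda agent, q: True   # claims the agent always answers True
agent1 = defeat(predictor1)
print(agent1("is G true?"))          # False -- the prediction fails

# 'Fixing' the predictor (the analogue of extending L1 to L2 in Note 2)
# just invites the same construction again:
predictor2 = lambda agent, q: False
agent2 = defeat(predictor2)
print(agent2("is G true?"))          # True -- fails again
```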
Note 3: Clearly the term "capable of understanding Godel's theorem" is a trifle informal. The specific property we need is that h is capable, for any Deterministic Logical System (S,I,L) of recognising (and verifying, if necessary with the aid of a computer) that GL is a Godel proposition in L, and reasoning rationally about it along the lines indicated. If you want a formal proof, one is given here.
Corollary 1: All rational human beings are members of HFree. Proof: it is absurd to suppose that there is anything epistemologically unique about mathematical logicians, and in principle all rational human beings are capable of being taught to understand Godel's theorem.
Corollary 2: No computer/automaton can completely model the mind of a human being. Proof: a computer/automaton is a Deterministic Logical System, and even if there are random factors in the program, these are ultimately either 'pseudo-random' or depend on random inputs, which can be made inputs to the Deterministic Logical System.
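The point about 'random' factors can be illustrated in a few lines: a program driven by a pseudo-random generator is fully deterministic once its seed is treated as an input (a standard fact about PRNGs, sketched here in Python):

```python
import random

# A program with 'random' behaviour, with the seed made an explicit input:
def run(seed):
    rng = random.Random(seed)                 # seeded pseudo-random generator
    return [rng.randint(0, 9) for _ in range(5)]

# Same seed in, same 'random' behaviour out -- fully deterministic:
print(run(42) == run(42))   # True
```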
Corollary 3: No scientific theory can ever be discovered that completely accounts for human behaviour. Proof: any scientific theory that completely accounted for anything must be a Deterministic Logical System. But we have shown that no Deterministic Logical System can completely account for human behaviour. QED.
Note 4: none of these corollaries denies that you can make increasingly accurate models or theories: they only show that the process will never finish. Similarly, you can make increasingly accurate rational approximations to the solution of the equation x^2 = 2, but you will never get there completely.
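The analogy can be made concrete with the continued-fraction convergents of the square root of 2 (1/1, 3/2, 7/5, 17/12, ...): squaring them gives errors that shrink at every step but never reach zero. A minimal sketch:

```python
from fractions import Fraction

def convergents(n):
    """First n continued-fraction convergents of sqrt(2),
    via the recurrence a' = a + 2b, b' = a + b starting from 1/1."""
    a, b = 1, 1
    out = []
    for _ in range(n):
        out.append(Fraction(a, b))
        a, b = a + 2 * b, a + b
    return out

for c in convergents(6):
    print(c, "squared differs from 2 by", abs(c * c - 2))
```

The error falls from 1 to 1/4 to 1/25 to 1/144 and so on, strictly decreasing but never zero.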
Note 5: It seems that the connection with freewill is fundamentally that a human, unlike an automaton, is able to choose in which logical system to work. The human logician, faced with a contradiction if they were limited to the logical system L, chooses to step outside it to decide the question. An automaton cannot do this: even adding a "rule" which says "when faced with a contradiction in a logical system then step outside it" does not rescue the automaton from the proof, because it simply extends the automaton's Logical System by this rule, making a new logical system L' - and recall that the proof applies to any logical system.
Amazingly, The Freedom of the Will is out of print. But similar arguments were made by the outstanding mathematician Sir Roger Penrose FRS in The Emperor's New Mind and Shadows of the Mind. There is no doubt that the proof is correct, however uncomfortable and counter-cultural it may be to many in the Western intelligentsia.
(for reasons of character sets we write E for "there exists", =/= for "not equal to" and x for the Cartesian product)
Defn 1 A relation R: X -> Y defined on (possibly infinite) sets X and Y is automatic if there exists a logical system LR with a finite number n(LR) of axioms and terms, and a term TLR(x,y), such that R(x,y) iff TLR(x,y) is a theorem in LR.
Obs 1 The join of two automatic relations is automatic.
Proof Let R and S be automatic relations defined on X, Y and Y, Z resp. Form the logical system LRS by uniting LR and LS, re-labelling terms as necessary to avoid conflict of axioms or terms, and adding the term TLRS(x,z) = (E y)(TLR(x,y) and TLS(y,z)); LRS has at most n(LR) + n(LS) + 1 axioms and terms. Now (RoS)(x,z) is true iff (E y)(TLR(x,y) is a theorem in LR and TLS(y,z) is a theorem in LS), in which case, since all theorems in LR and LS are theorems in LRS (with appropriate re-labelling), TLRS(x,z) is a theorem in LRS. Suppose conversely that TLRS(x,z) is a theorem in LRS. Then (E y)(TLR(x,y) is a theorem in LR and TLS(y,z) is a theorem in LS), and hence (E y)(R(x,y) and S(y,z)), i.e. (RoS)(x,z). QED.
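For finite relations (where automaticity is trivial) the join construction used in this proof can be sketched as follows; the sets R and S are illustrative data of our own:

```python
# The join (R o S)(x, z) = (E y)(R(x, y) and S(y, z)), for finite relations
# represented as sets of pairs.
def join(R, S):
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

R = {(1, "a"), (2, "b")}        # R subset of X x Y
S = {("a", "p"), ("b", "q")}    # S subset of Y x Z
print(sorted(join(R, S)))       # [(1, 'p'), (2, 'q')]
```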
Note 6 Not all relations are automatic, since the number of finite sets of axioms and terms is countable, whereas the number of relations defined on even N x N is uncountable.
Defn 2 An (abstract) Actor is a relation R defined on S x C x A where:
S is a set of States
C is a set of Circumstances
A is a set of Actions
If R(s,c,a) is true we say that R in state s and circumstance c would take action a. Note: S, C and A may be infinite.
An actor is non-trivial if (E s1,c1,a1,s2,c2,a2)(R(s1,c1,a1) and R(s2,c2,a2) and s1 =/= s2, c1 =/= c2 and a1 =/= a2).
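Defns 2 and 3 restricted to finite sets are easy to render concretely. The sketch below (illustrative names only) represents an Actor as a set of triples and checks non-triviality:

```python
# Non-triviality per the definition above: the actor behaves differently in
# two situations that differ in state, circumstance and action all at once.
def non_trivial(R):
    return any(s1 != s2 and c1 != c2 and a1 != a2
               for (s1, c1, a1) in R for (s2, c2, a2) in R)

actor = {("awake", "question", "answer"), ("asleep", "noise", "ignore")}
print(non_trivial(actor))                              # True
print(non_trivial({("awake", "question", "answer")}))  # False
```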
Defn 3 An Automaton is an actor for which R is automatic.
Defn 4 An actor R2 predicts an actor R1 iff there exist automatic 'prediction relations' Scan12: S1 x C1 -> S2 x C2 and Map21: A2 -> A1 such that: R1(s1,c1,a1) iff (E s2,c2,a2) such that R2(s2,c2,a2) and Scan12(s1,c1,s2,c2) and Map21(a2,a1) (ie R1 = Scan12 o R2 o Map21).
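Defn 4 can be checked mechanically for finite relations. The sketch below is our rendering, with Scan and Map given as finite sets of tuples:

```python
# R2 predicts R1 via Scan and Map: R1(s1,c1,a1) holds iff some (s2,c2,a2)
# in R2 is reached through Scan and translated back through Map.
def predicts(R1, R2, Scan, Map):
    derived = set()
    for (s1, c1, s2, c2) in Scan:
        for (t, d, a2) in R2:
            if (t, d) == (s2, c2):
                for (a, a1) in Map:
                    if a == a2:
                        derived.add((s1, c1, a1))
    return derived == R1

R1 = {("s", "c", "go")}
R2 = {("S", "C", "GO")}
Scan = {("s", "c", "S", "C")}
Map = {("GO", "go")}
print(predicts(R1, R2, Scan, Map))   # True
```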
Obs 2 The relation predicts is reflexive and transitive.
Pf R predicts R with Scan and Map being equality relations. And if R2 predicts R1 and R3 predicts R2, then Scan13 = Scan12 o Scan23 and Map31 = Map32 o Map21 show that R3 predicts R1, since by Obs 1 Scan13 and Map31 are automatic.
Obs 3 Any Actor predicted by an automaton is an automaton.
Proof If R2 predicts R1 and R2 is an automaton, then R1 = Scan12 o R2 o Map21, and hence by Obs 1 R1 is automatic.
Defn 5 An (abstract) Mathematical Logical Question (MLQ) is a pair (P,L) where L is a finite logical system and P is a proposition in L. The correct answer to an MLQ is known-true if P is true in L, and not-known-true if P is false or undecidable in L.
Defn 6 An (abstract) Mathematical Logician is an actor M: P x L -> Answer whose set of circumstances C is all Mathematical Logical Questions (MLQs) and whose Actions A are giving the answers to MLQs, such that:
(a) M(P,L) is known-true only when P is true in L. NB it may also give the answer not-known-true when P is true in L;
(b) if M(P,L) is known-true then M(not P, L) is not-known-true; and
(c) if A is an axiom of L then M(A,L) is known-true.
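Defn 6 can be illustrated with a toy 'logical system' whose theorems are simply listed (the sets below are stand-ins of our own, not a real logic); conditions (a)-(c) then hold trivially:

```python
# Toy data standing in for a real logical system L.
AXIOMS = {"A"}
THEOREMS = {"A", "A or B"}   # every axiom is a theorem

def M(P):
    # (a) answer known-true only for actual theorems;
    # (b) negations of theorems are not in THEOREMS here, so P and not-P
    #     are never both affirmed; (c) axioms are theorems, so they get
    #     the answer known-true.
    return "known-true" if P in THEOREMS else "not-known-true"

print(M("A"))        # known-true  (an axiom, condition (c))
print(M("not A"))    # not-known-true
```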
Defn 7 A Mathematical Logician M is "Capable of understanding Godel's Theorem" if for any Mathematical Logical Question (GL1,L1), where GL1 is a Godel proposition in L1 whose interpretation is "GL1 is undecidable in L1" and L1 is the logical system of a non-trivial Automaton, M(GL1,L1) = known-true. (This answer is correct, because L1 cannot be contradictory: if L1 were contradictory then the automaton would not be non-trivial. But if GL1 were false, then it would mean that "GL1 cannot be decided by L1" can be decided by L1, which is a contradiction. Hence GL1 is true.)
Thm 1 (Lucas' Theorem) No Mathematical Logician Capable of Understanding Godel's Theorem (MLCUT) is an Automaton.
Proof: Suppose M = R1: P x L -> A were a Mathematical Logician Capable of Understanding Godel's Theorem (MLCUT). If M is an Automaton then R1 is automatic, so (by Defn 1) there exists a logical system LR1 in which R1(P,L) = known-true iff TLR1((P,L), known-true) is a theorem in LR1. Now by Godel's theorem there exists a Godel proposition GLR1 in LR1 whose interpretation is "this proposition is undecidable in LR1". Consequently TLR1((GLR1,LR1), known-true) is not a theorem in LR1 (for if it were, LR1 would thereby decide GLR1). But by the definition (Defn 7) of MLCUT, R1(GLR1,LR1) = known-true, and hence TLR1((GLR1,LR1), known-true) is a theorem in LR1. This is a contradiction. Hence no such automatic R1 can exist. QED.
Thm 2 No Mathematical Logician Capable of Understanding Godel's theorem can be predicted by an Automaton.
Proof: If it could be, then by Obs 3 it would itself be an Automaton, contradicting Thm 1. QED
Defn 8 An Actor R: S x C -> Answer is said to be a Mathematical Logician Capable in principle of Understanding Godel's Theorem with the aid of a sufficiently powerful computer when in a suitable state if there exists (logically) an automaton PC: P x L -> C and a state s1 in S such that the Actor PC o R|s1, defined as [PC o R|s1](P,L) = R(s1, PC(P,L)), is a MLCUT.
Thm 3 No Mathematical Logician Capable in principle of Understanding Godel's Theorem with the aid of a sufficiently powerful computer when in a suitable state is, or can be predicted by, an automaton.
Proof: If R is such an actor then R predicts an MLCUT (using as Scan the automatic relation which relates (P,L) to (s1, PC(P,L)), and the identity as Map). Consequently, by Thm 2, R is not an Automaton. But by Obs 3 it cannot be predicted by an Automaton either, since then it would itself be an Automaton. QED