While I find John Taylor's argument (New Scientist, March 3) interesting, and though I share an interest in neural network techniques, I cannot agree that we are on the verge of a breakthrough in understanding consciousness. One basis for claiming that neural networks will eventually give rise to consciousness is the belief that it is an emergent property of sufficiently complex systems. While Taylor does not explicitly make this claim, his description of the "self-monitoring relational mind" uses the analogous argument that consciousness is essentially epiphenomenal.
He suggests that the sheer weight of "relations" in a person's mind, and the comparison of the current contents of the mind with earlier contents, give rise to the feeling of self. I might accept the argument that a computer program is essentially "self"-monitoring and that it uses associative memory, as do neural nets in some cases, but this in no way explains how subjective experience arises. A program cannot have "conscious and unconscious processing", only processing.
I find the use of language in parts of the article unnecessarily emotive. I have no time for anyone worried about "the desecration of the hallowed ground of human psyche". However, I do not believe that critics of the "strong" approach to artificial intelligence are generally members of any such camp.
They are asking questions that any theory of consciousness must be able to answer before it can be considered a proper scientific theory.
The argument that the human mind can never be modelled by a machine cannot be reduced to the simplistic presentation given in the article; it still poses a serious challenge that needs to be answered. We cannot simply choose to ignore the difficult questions, or try to trivialise them just because they are difficult.
D. W. Salt
Head of division of computing
School of maths and computing