We can start by examining the classical cognitivist thesis that the mind, at a certain level of analysis, is a physical symbol system. Two principal arguments have been offered in its support. The first concerns intentionality, in particular the semantic properties of the mental states that are involved in the production of behavior and that we take to be realized in the mind. The second concerns complexity, in particular the organizational complexity of the mind.
In his books "Computation and Cognition" (1984) and "Things and Places" (2007), Pylyshyn argues that computational descriptions must be understood in relation to the mappings supported by two interpretation functions, the semantic function (SF) and the instantiation function (IF). On the one hand, the semantic function maps discrete functional states onto some domain of interpretation. Such a mapping is necessary because a formal procedure such as a calculation is a rule-governed operation characterized apart from any semantic aspect (Fodor et al. 2002). For this reason, the first assumption of Searle's argument, that computation is characterized at the syntactic level alone, is not quite accurate with respect to the symbol-processing framework.
On the other hand, the instantiation function maps physical states onto computational states.
This is the interpretation function relevant to the interpretability of syntax: its task, then, is to show how syntactic items are physically realized. Unfortunately, Pylyshyn's analysis of the instantiation function does not shed much light on this matter, because it takes the syntactically distinct items that figure in a computation to be not merely ascribed to the system but intrinsic to it. The question becomes more complex once it is claimed that computational items are independent of any particular physical medium; indeed, the multiply realizable physical implementations of a computational function are, as Pylyshyn states, in principle open-ended.
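To fix ideas, here is a minimal sketch (my own illustration, with an entirely hypothetical two-state device and assumed mappings, not drawn from Pylyshyn's text) of the two functions as mappings: the IF takes physical states to syntactic states, and the SF takes syntactic states to a domain of interpretation.

```python
# A minimal sketch of Pylyshyn's two interpretation functions, using a
# hypothetical two-state device; the states and mappings are assumptions
# for illustration only.

# Instantiation function (IF): physical states -> computational (syntactic) states.
instantiation_function = {
    "low_voltage": "0",
    "high_voltage": "1",
}

# Semantic function (SF): syntactic states -> a domain of interpretation
# (here, the numbers the bit strings are taken to denote).
def semantic_function(syntactic_state: str) -> int:
    return int(syntactic_state, base=2)

# A sequence of physical states is first instantiated as syntax, then interpreted.
physical_states = ["high_voltage", "low_voltage"]
syntactic_state = "".join(instantiation_function[s] for s in physical_states)
print(semantic_function(syntactic_state))  # the string "10" is interpreted as 2
```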
Multiple realizability is thus usually regarded as a virtue of the computational metaphor of the brain, for it provides an account of how nothing stronger than token-token identities is required between mental states and physical states.
Searle replies, on the contrary, that the multiple realizability of computational states is merely evidence that the computational properties are not intrinsic to the system but derive from an external interpretation. He observes that a distinction should be drawn between devices whose functions are multiply realizable (e.g., thermostats) yet are characterized by the production of the same physical effects (e.g., the regulation of temperature), and devices whose multiple realizability follows from their essentially formal characterization (e.g., Turing machines).
This distinction between what might be called functional multiple realizability and formal multiple realizability points to another important idea: digitality. One argument commonly offered to explain why physical symbol systems can be multiply realizable is that they are digital systems. There is no settled account among cognitive scientists or philosophers of how to characterize digital systems and their functions, e.g., the processing of information (Pylyshyn 1984; Chomsky 2004), but for our purposes a digital system is one whose states fall into a finite number of well-defined types: for any given type, a token either is of that type or it is not, and variation among the tokens that belong to a given type is not significant. In physical symbol systems, the irrelevant variation occurs at the physical level, while the well-defined types correspond to the syntactic characters of the system. In this way syntactic characters can be freely realized by many different physical properties, and syntactic state-transitions can be freely implemented by many different physical causal regularities. Physical symbol systems are therefore, as a class, not tied to any particular physical medium.
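To make the notion concrete, here is a minimal sketch (my own illustration; the voltage thresholds are assumptions, not from the original text) of digital classification: physically distinct tokens are mapped onto a finite set of well-defined types, and variation within a type plays no computational role.

```python
# A minimal sketch of digitality: a continuous physical magnitude (a
# hypothetical voltage) is classified into a finite set of well-defined types,
# and differences among tokens of the same type are computationally irrelevant.
# The thresholds below are illustrative assumptions.

def classify(voltage: float) -> str:
    """Map a physical token onto one of a finite number of syntactic types."""
    if voltage < 0.8:          # assumed upper bound for the "0" type
        return "0"
    elif voltage > 2.0:        # assumed lower bound for the "1" type
        return "1"
    else:
        raise ValueError("ill-formed token: not determinately of either type")

# Physically different tokens realize the same syntactic type, so the
# difference between them plays no role at the computational level.
assert classify(0.2) == classify(0.5) == "0"
assert classify(3.1) == classify(2.7) == "1"
```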
The other argument, to which we now turn, concerns the organizational complexity of the mind.
Melanie Mitchell (2009) discusses the thesis of organizational complexity in her book "Complexity: A Guided Tour". Mitchell argues that computational levels of description are required for systems whose elements combine combinatorially. In a computer, the state of each elementary component (for example, a binary switch) is independent of the states of the other components; the activity of larger units in the machine, however, is not a simple sum but depends on the state of each component and on their combinatorial organization. The complexity of this combinatorial organization is largely responsible for the medium-independence of computational systems.
The precursor of this thesis about the mind is, of course, the seminal paper by Warren McCulloch and Walter Pitts (1943) entitled "A Logical Calculus of the Ideas Immanent in Nervous Activity". By treating neurons as the functional units of the nervous system and by idealizing neuronal activity as binary, digital, and synchronous (all neurons change state at the same discrete time-steps), McCulloch and Pitts were able to show that the organizational complexity of certain neural networks is sufficient to compute the operations of formal logic (the propositional calculus). This result has been very important for the whole field of cognitive science, because it shows how the processes of the nervous system can be described, at a certain level, in terms of formal logic and, therefore, how the mind can be described as a symbolic machine. Moreover, according to recent studies (Volterra & Meldolesi 2005; Allen & Barres 2009), glial cells may also play a relevant role in the working of the brain and thus in cognition. They are usually smaller than neurons and outnumber them by five to ten times, making up about half the total mass of the brain and spinal cord; there are various kinds of glial cells, such as astrocytes and oligodendrocytes. Their role was undervalued in the past, but they probably play a key role in the organizational complexity of the brain, and it would be useful to develop a "glia science" in biopsychology (since glial cells differ from neurons) and a "computational glia-neuroscience" in AI, integrating artificial neural networks with artificial glial networks.
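Returning to McCulloch and Pitts's idealized units, the following minimal sketch (my own illustration, not their original notation) shows how binary threshold units, suitably wired, compute connectives of the propositional calculus.

```python
# Idealized binary threshold units in the spirit of McCulloch and Pitts (1943):
# a unit fires (outputs 1) iff the weighted sum of its binary inputs reaches its
# threshold. The weights and thresholds below are illustrative choices.

def mp_unit(inputs, weights, threshold):
    """Fire iff the weighted sum of the binary inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return mp_unit([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mp_unit([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return mp_unit([a], weights=[-1], threshold=0)

# Units can be composed to obtain further truth-functions, e.g. implication:
def IMPLIES(a, b):
    return OR(NOT(a), b)

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [IMPLIES(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 0, 1]
```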
Organizational complexity allows a more productive approach to the issues of syntactic interpretability. Suppose, for example, that McCulloch and Pitts's idealized neural networks captured the actual functional organization of the brain. It would still be true that, in devising a computational model of the mind, we would have to map the all-or-none activity of the neuron onto the 0's and 1's of binary notation; but the mapping would, in that case, be non-arbitrarily grounded in an intrinsic property of the brain, namely the role that neuronal activity plays in the operations of the nervous system and the production of behavior. Searle considers this question, but he simply dismisses it by repeating his claim that "syntax is not intrinsic to physics", thereby avoiding the issue altogether.
My reference to McCulloch and Pitts should not be taken as a claim about how computational operations are actually grounded in the operations of the brain. On the contrary, it is well known that real neurons are not mere binary switches, and even though neurons are the material units of the nervous system, the basic functional units are presumably relatively stable patterns of activity in neuronal assemblies (Edelman et al. 2000). Rather, by citing McCulloch and Pitts I want to make the point that, against Searle, there is nothing incoherent in supposing that computational mechanisms could be grounded in the organizational complexity of nervous systems. This is why I take the issue of organizational complexity to support an approach to syntactic interpretability: it critically answers Searle's theoretical argument and, at the same time, leaves open the empirical questions about the syntactic interpretability of complex systems such as the mind. The approach to the argument that I have followed may have theoretical implications for these empirical questions. We have seen that to ground the properties of syntactic interpretability it is not enough to invoke an idealized mapping of the syntactic onto the physical, such as Pylyshyn's IF; we must also appeal to the organizational complexity of the implementing systems. For this reason, I believe that Fodor and Pylyshyn (1988) are wrong when they claim that philosophers such as Hofstadter and Dennett, and connectionists such as Rumelhart and McClelland (1986), should not invoke complexity as a way of distinguishing cognitive from non-cognitive systems. On the contrary, it is the organizational complexity of certain systems that licenses the idea that there are further syntactic levels of description of their behavior.
The new research field known as emergent computation is very promising (Bertelle et al. 2006). Emergent computation is the study of complex systems with three general characteristics: (1) they consist of a collection of agents, each of which follows explicit rules; (2) the agents interact according to those rules and thereby generate implicit, emergent global patterns; and (3) there is an interpretation function that maps the global patterns onto computations. In these emergent computational systems the low-level agents are themselves devices that have only a formal specification, but since they are usually simple (e.g., the on-off cells of a cellular automaton), we can readily conceive of biological analogues.
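As a minimal sketch of these three characteristics (my own illustration under simple assumptions, not an example drawn from Bertelle et al.), consider a one-dimensional cellular automaton whose cells follow an explicit local rule, whose interactions generate an emergent global pattern, and whose final pattern is mapped by an interpretation function onto a computation (here, the logical OR of the initial inputs).

```python
# A minimal sketch: explicit local rules, an emergent global pattern, and an
# interpretation function that reads the pattern as a computation. The specific
# rule and names are illustrative assumptions, not from the source.

def step(cells):
    """Explicit local rule: a cell turns on if it or either neighbor is on."""
    n = len(cells)
    return [1 if (cells[i] or cells[(i - 1) % n] or cells[(i + 1) % n]) else 0
            for i in range(n)]

def interpret(cells):
    """Interpretation function: map the global pattern onto a result.
    After enough steps, an all-on lattice is read as 'at least one input was on'."""
    return all(c == 1 for c in cells)

def emergent_or(inputs):
    """Run the automaton long enough for activity to spread across the lattice."""
    cells = list(inputs)
    for _ in range(len(cells)):
        cells = step(cells)
    return interpret(cells)

assert emergent_or([0, 0, 0, 0, 0]) is False
assert emergent_or([0, 0, 1, 0, 0]) is True
```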
A good biological example could be the mirror neurons discovered by Rizzolatti and his research team (Rizzolatti et al. 2004). These neurons fire both when an animal performs an action and when it observes the same action performed by another; the neuron thus "mirrors" the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primates and other species. In humans, mirror-neuron activity has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex (Kohler et al. 2002). I will call this sort of mechanism the "Neuro-Motor Analogical System" (NEMOANSY), because it is located mainly in the motor cortex and is based on analogy-making, which is considered a very important component of intelligence: the cognitive scientist and philosopher D. R. Hofstadter has argued that analogy is the core of cognition (Hofstadter 2001). Moreover, understanding the actions of others requires not only an inner syntax but also the sharing of that syntax between agents (human and non-human).