Brains, Bytes, And Boundaries: Evaluating The Computational Model Of The Mind.

Characterise The Computational Approach To The Mind. What Are Its Limits?

The Computational Theory of Mind (CTM) views cognition as an information-processing system, akin to a Turing machine, in which mental processes are reducible to computation (Colombo and Piccinini, 2023). Influenced by Turing’s model of computation, CTM underpins much of cognitive science, artificial intelligence and neuroscience (Vestberg, 2017). This essay will first outline CTM’s core principles and variations, including classical computationalism. Next, it will discuss CTM’s strengths, particularly its role in cognitive modelling and AI. It will then argue that CTM faces significant limitations, drawing primarily on Gödel’s Incompleteness Theorem and Turing’s Halting Problem, both of which challenge the claim that cognition is entirely computational. Ultimately, the essay will conclude that, while CTM is a powerful explanatory tool, it fails to capture consciousness, meaning and human reasoning, indicating the need for a broader, non-computational approach to understanding the mind.

The Computational Theory of Mind (CTM) posits that cognition functions through computational processes, akin to how a computer processes information. Mental states are viewed as symbolic representations manipulated by computational rules, providing a structured framework for explaining perception, reasoning, memory, and problem-solving (Colombo and Piccinini, 2023). CTM's foundations trace back to Alan Turing, whose concept of the Turing machine demonstrated that cognition could be understood as algorithmic processing (Rescorla, 2015). His work influenced Jerry Fodor, who developed the classical symbolic model, and Newell and Simon, who applied computational principles to artificial intelligence (AI), reinforcing the view that thought operates through symbol manipulation and rule-based transformations (Horst, 1999). Over time, CTM evolved into distinct variants, including classical symbolic processing, connectionist neural networks, and computational neuroscience, each offering a different perspective on cognition. While CTM has shaped cognitive science and AI, debates persist over whether human thought can be fully reduced to computation (Gibbons, 2015).

Building on Turing’s work, Fodor’s Language of Thought Hypothesis (LOTH) proposed that mental representations function like a symbolic language, governed by structured rules (Fodor, 1975). This aligns with the representational theory of mind, which sees cognition as the computational manipulation of symbolic tokens (Aydede, 2010). Similarly, Newell and Simon’s physical symbol system hypothesis argued that any system capable of symbol manipulation can exhibit intelligence, reinforcing the view of cognition as symbolic processing (Newell, 1980). These contributions shaped the computational paradigm in cognitive science (Knyazev, 2023). While classical computational theories—linked to Fodor, Newell, and Simon—conceptualized cognition as explicit symbol manipulation, alternative models like connectionism and computational neuroscience have since emerged (Knyazev, 2023). Classical computational theories of mind (CTM), treating thought as rule-based symbol processing, remain influential in AI, cognitive psychology, and reasoning theories (Colombo & Piccinini, 2023).

Connectionism challenges explicit symbol manipulation, proposing that cognition emerges from activation patterns across neural networks (Kaplan et al., 1990). Inspired by neuroscience, it uses artificial neural networks to simulate learning, adaptation, and distributed processing, adjusting internal structures based on experience rather than fixed rules (Kaplan et al., 1990). This approach excels in modeling perception, pattern recognition, and associative learning, where symbolic systems often struggle (Colombo & Piccinini, 2023). Computational neuroscience integrates symbolic and connectionist models, viewing cognition as a hybrid process involving both digital and analog computations (Gebicke-Haerter, 2023). It links high-level cognitive theories with low-level neural mechanisms, suggesting cognition is neither purely symbolic nor purely connectionist but operates through layered computation, combining discrete symbol-like processes with continuous neural dynamics (Marblestone et al., 2016).
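The contrast the paragraph above draws, namely learning by adjusting internal connection weights rather than applying fixed symbolic rules, can be illustrated with a minimal sketch. The single-unit network, learning rate, epoch count, and training pattern below are all illustrative assumptions, not a model drawn from the cited literature:

```python
# Minimal connectionist sketch: a single unit learns the logical AND pattern
# by adjusting its connection weights from examples (the delta rule),
# rather than by applying an explicit if-then symbol rule.

def step(x):
    """Threshold activation: the unit fires (1) only above zero input."""
    return 1 if x > 0 else 0

def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # strengthen or weaken connections in proportion to the error,
            # loosely mirroring synaptic adjustment in response to stimuli
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
# after training, the unit reproduces the AND pattern: [0, 0, 0, 1]
```

Nothing in the trained unit stores an explicit rule such as "if both inputs are on, output on"; the behaviour emerges from the learned weights, which is the distributed, experience-driven processing the connectionist critique emphasises.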

Overall, the CTM offers a powerful framework for understanding cognition as a computational process, shaped by foundational contributions from Turing, Fodor, Newell, and Simon (Colombo and Piccinini, 2023). Different variants of CTM, including classical symbolic computation, connectionism, and computational neuroscience, provide distinct models of how cognitive processes are implemented (Colombo and Piccinini, 2023). Marr’s levels of explanation further clarify how cognition can be studied at different levels, emphasizing the functional nature of computational processes (Shagrir and Bechtel, 2017).

 

One of the key strengths of the Computational Theory of Mind (CTM) is its structured and systematic approach to understanding mental processes (Vestberg, 2017). By conceptualizing cognition as computation, CTM has been instrumental in explaining reasoning, perception, and problem-solving, providing a theoretical foundation for both scientific research and artificial intelligence (AI) (Vestberg, 2017). CTM has significantly influenced computational models of human cognition, shaping connectionism and deep learning (Rescorla, 2015). Its strength lies in formalizing cognitive functions for precise modeling and empirical testing (Rescorla, 2015). Additionally, its impact on AI and machine learning underscores its real-world applicability, demonstrating how computational principles can replicate and enhance cognitive processes (Rescorla, 2015; Colombo & Piccinini, 2023).

One of CTM’s greatest strengths is its ability to provide a formalised, systematic framework for studying cognition (Colombo and Piccinini, 2023). By treating mental processes as computational operations, it enables precise modeling of reasoning, perception, decision-making, and memory (Colombo and Piccinini, 2023). Unlike abstract philosophical theories, CTM grounds cognitive science in measurable, testable principles, allowing for empirical research and hypothesis-driven studies (Colombo and Piccinini, 2023). In reasoning, CTM explains logical inference, problem-solving, and decision-making through rule-based symbol manipulation (Colombo and Piccinini, 2023). Classical CTM models thought using if-then rules, propositional logic, and syntactic transformations, illustrating how cognitive agents - both biological and artificial - derive structured conclusions (Colombo and Piccinini, 2023). This approach has been foundational in formal logic, linguistics, and cognitive psychology, providing insights into argument construction and rational decision-making (Rakova, 2006; Colombo and Piccinini, 2023). Similarly, CTM has significantly advanced perception studies by treating visual and auditory processing as computational tasks. David Marr’s computational theory of vision, for instance, describes perception as a sequence of computational transformations from raw sensory input to structured visual representations (Vestberg, 2017). This model has been widely applied in neuroscience, computer vision, and robotics, demonstrating how computational principles can replicate aspects of human perception (Rakova, 2006).
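The classical picture of thought as if-then rules applied to symbolic tokens can be made concrete with a toy forward-chaining sketch. The particular facts and rules below are invented purely for illustration and stand in for the structured inferences the classical model attributes to cognitive agents:

```python
# Toy illustration of classical CTM-style reasoning: facts are symbolic
# tokens, and if-then rules are applied repeatedly to derive new symbols
# until no rule fires (forward chaining). The facts and rules here are
# hypothetical examples, not drawn from any cited model.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # a rule fires when all its premise symbols are present
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["raining"], "streets_wet"),
    (["streets_wet", "freezing"], "streets_icy"),
    (["streets_icy"], "drive_slowly"),
]
derived = forward_chain(["raining", "freezing"], rules)
# derived now also contains "streets_wet", "streets_icy", "drive_slowly"
```

The conclusions here follow purely from the syntactic shape of the rules, which is exactly the sense in which classical CTM treats reasoning as rule-based symbol manipulation.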

Beyond its theoretical impact on cognitive science, CTM has profoundly influenced artificial intelligence (AI), particularly in machine learning, neural networks, and deep learning (Colombo and Piccinini, 2023). Many AI models are rooted in computational principles described by CTM, demonstrating its practical effectiveness in understanding cognition (Colombo and Piccinini, 2023). One major AI approach inspired by CTM is connectionism, which models cognition through artificial neural networks (Sprevak and Colombo, 2019). Unlike classical symbolic AI, which relies on explicit rule-based computation, connectionist models simulate cognition via distributed processing across networks of simple units (neurons). These models learn and adapt by modifying connection weights, mimicking how biological neural systems strengthen or weaken synapses in response to stimuli (Sprevak and Colombo, 2019). The success of connectionist architectures in replicating human learning supports CTM’s claim that cognition involves computation over structured representations or neural activations (Sprevak and Colombo, 2019). A key breakthrough driven by CTM principles is deep learning, which extends connectionism through multi-layered neural networks (Rescorla, 2015). These models have revolutionized computer vision, natural language processing, and autonomous systems, achieving human-like performance in image recognition, language translation, and decision-making (Rescorla, 2015). Convolutional neural networks (CNNs) for visual processing and transformer models (e.g., GPT, BERT) for language tasks exemplify how computational models can replicate complex cognitive functions, reinforcing the idea that cognition - biological or artificial - operates computationally (Sprevak and Colombo, 2019).

Computational models of cognition have advanced cognitive neuroscience by simulating brain activity and providing insights into disorders like Alzheimer’s, schizophrenia, and autism. CTM has facilitated breakthroughs in brain-computer interfaces, neuroimaging analysis, and AI-driven cognitive therapies, demonstrating its real-world impact in medicine and technology. Its integration into AI and neuroscience underscores its practical utility and explanatory power. By providing a rigorous framework for learning algorithms, neural models, and intelligent systems, CTM has shaped modern AI and cognitive modeling. Its strengths lie in its structured, systematic, and empirically testable approach, making it a dominant framework in cognitive science and psychology. Beyond theory, CTM has driven advancements in AI, connectionism, and deep learning. While it faces criticism - particularly regarding its ability to explain consciousness, meaning, and subjective experience - its influence on scientific inquiry and technological innovation remains profound.

 

However, while CTM has the strengths outlined above, including an empirically tractable account of cognition that has informed the development of artificial intelligence models, the theory faces significant limitations that challenge its ability to fully explain human cognition and the mind. Specifically, CTM struggles to account for aspects of reasoning, self-reflection, and understanding that appear to transcend algorithmic processing.

One of the most significant challenges to the computational theory of mind comes from Gödel’s Theorem. Kurt Gödel’s incompleteness theorem, published in 1931, is a landmark discovery in mathematical logic with profound implications for mathematics, the philosophy of mind, and artificial intelligence (Sabinasz, 2017). Gödel demonstrated that any formal system capable of basic arithmetic contains true statements - known as Gödel sentences - that cannot be proven within the system itself (Goldstein, 2006). This means no formal system can be both complete (able to prove all truths) and consistent (free of contradictions) (Goldstein, 2006; Sabinasz, 2017). Gödel achieved this by constructing a self-referential statement that essentially asserts, “This statement is not provable within this system.” If the system could prove the statement, it would create a contradiction, rendering the system inconsistent (Penrose, 1994). Conversely, if the system cannot prove the statement, the statement is true but unprovable within the system (Penrose, 1994; Sabinasz, 2017). Thus, Gödel’s theorem reveals that sufficiently complex formal systems always contain true but unprovable statements and cannot prove their own consistency (Goldstein, 2006). This discovery highlights the inherent limitations of formal systems, challenging the notion that they can fully capture all truths or provide absolute reliability in fields like mathematics and computation (Goldstein, 2006; Penrose, 1994; Sabinasz, 2017).

Gödel’s first incompleteness theorem states that any consistent formal system F capable of performing basic arithmetic is incomplete, meaning there are true statements in its language (for example, statements of arithmetic) that cannot be proven or disproven within the system (Penrose, 1994; Sabinasz, 2017). His second theorem shows that absolute consistency proofs are likewise impossible for axiomatic systems capable of arithmetic. Such systems will always contain statements that cannot be resolved through their formal rules but are nevertheless true (Goldstein, 2006). Thus, Gödel’s Theorem reveals that any sufficiently strong formal system has inherent limitations, with truths that lie beyond its scope of provability. Gödel’s theorem can be challenging to grasp, but it becomes clearer by examining its implications. Consider the analogous statement, “This formula is unprovable in the system.” If it were false, it would mean the statement is provable, leading to a contradiction: a provable statement that claims to be unprovable (Lucas, 1996). In a consistent system, anything provable must be true, so the statement must be true but unprovable to avoid inconsistency (Lucas, 1996). This self-referential logic demonstrates that in any consistent system capable of basic arithmetic, there will always be true statements that cannot be proven within the system itself (Penrose, 1994; Lucas, 1996; Sabinasz, 2017). Gödel’s rigorous proof establishes this without flaw, showing that these unprovable truths exist, and from outside the system, we can recognise them as true (Lucas, 1996; Penrose, 1994).
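The self-referential construction described above can be stated schematically. The notation below, a provability predicate $\mathrm{Prov}_F$ and Gödel numbering $\ulcorner \cdot \urcorner$, is the standard textbook formalism rather than anything taken from the essay’s sources:

```latex
% F is a consistent formal system containing basic arithmetic,
% Prov_F(x) its provability predicate, and \ulcorner G \urcorner
% the Gödel number (code) of the sentence G.
% The diagonal lemma yields a sentence G asserting its own unprovability:
F \vdash G \leftrightarrow \neg \mathrm{Prov}_F(\ulcorner G \urcorner)
% If F is consistent, neither G nor its negation is provable:
F \nvdash G \qquad \text{and} \qquad F \nvdash \neg G
% Hence G is true (it correctly asserts its own unprovability)
% yet undecidable within F.
```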

Gödel’s theorem thus exposes a limitation of the Computational Theory: if the mind were purely computational, it would be equivalent to some formal system, and so would be limited in the same way - yet human cognition seems to go beyond these limits. The argument can be outlined as such: if cognition is computational, the mind corresponds to some formal system, and that system’s Gödel sentence is a truth the mind, operating within the system, could not recognise. But humans apparently can recognise the truth of such Gödel sentences, which suggests the mind operates beyond computation. Therefore, Gödel’s theorem implies that human reasoning includes elements that cannot be captured by formal computational systems, meaning CTM is at best incomplete, if not fundamentally flawed.

 

Paralleling Gödel’s Incompleteness Theorem, Turing’s Halting Problem also presents a fundamental limit on computation, showing that some problems are inherently undecidable by any computational system. If human cognition is purely computational, then it too should be subject to such limitations. However, the ability of humans to understand why the Halting Problem is undecidable suggests that the mind goes beyond computation.

The Halting Problem articulates this point clearly. It shows that no machine can universally determine whether a given computer program will eventually stop running (halt) or run forever, highlighting a fundamental constraint of computation (Penrose, 1994). Consider a hypothetical machine H designed to solve the Halting Problem. H takes two inputs: a description of a program P and its input data D. H outputs “halts” if P stops running on D and “does not halt” if P runs indefinitely (Burkholder, 1987). While this seems plausible, Turing proved it is impossible (Lucas, 1996). The proof involves creating a program P′ that uses H as an input and behaves as follows: if H predicts P′ will halt, P′ enters an infinite loop; if H predicts P′ will not halt, P′ halts immediately. This setup creates a contradiction: if H says “halts,” P′ loops forever, making H’s prediction incorrect; if H says “does not halt,” P′ halts, again contradicting H (Penrose, 1994). This paradox demonstrates that H cannot solve the Halting Problem for all possible programs, revealing an inherent limitation of machines: they cannot resolve problems involving their own predictions (Prokopenko et al., 2019; Lucas, 2021; Penrose, 1994).
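Turing’s diagonal construction can be sketched in a few lines of code. The function and decider names below are illustrative assumptions; the point is that, given any candidate decider H, we can mechanically build the program P′ that contradicts H’s own prediction about it:

```python
# Sketch of Turing's diagonal argument: from ANY claimed halting decider H,
# construct the program P' that does the opposite of whatever H predicts
# about P' itself. The names are illustrative, not a real API.

def build_p_prime(candidate_h):
    """Given a claimed decider candidate_h(program) -> bool, build P'."""
    def p_prime():
        if candidate_h(p_prime):   # H predicts "halts"...
            while True:            # ...so P' deliberately loops forever
                pass
        # H predicts "does not halt", so P' halts immediately
    return p_prime

# Demonstration with one (deliberately bad) decider that always answers
# "does not halt": its counterexample P' halts at once, refuting it.
def always_no(program):
    return False

p = build_p_prime(always_no)
p()   # returns immediately, contradicting always_no's prediction
# Symmetrically, any decider answering "halts" on its P' makes P' loop
# forever. Either way H is wrong somewhere, so no correct H can exist.
```

The code only refutes one fixed decider, but the construction is fully general: whatever H is, `build_p_prime(H)` is a program on which H’s verdict is wrong, which is exactly the contradiction the paragraph describes.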

Vitally, humans, unlike machines, can recognise why the Halting Problem is unsolvable (Penrose, 1994). The contradiction arises because program P′ is designed to act contrary to H’s prediction, and humans can understand this self-referential issue without needing to compute an answer within H’s framework (Penrose, 1994; Lucas, 1996). This ability to "step out" of the formal system and analyse its structure is uniquely human, allowing us to identify why the contradiction occurs and understand the system’s limitations (Lucas, 1996; Penrose, 1994). Machines, constrained by their formal rules, cannot reflect on their own limitations in the same way (Lucas, 1996). This distinction mirrors Gödel’s incompleteness theorem, where a Gödel sentence - a true statement a formal system cannot prove - reveals similar limitations of machines governed by formal rules (Sabinasz, 2017; Goldstein, 2006). While machines are bound by their programming and unable to recognize such truths, humans can reason about the system as a whole and understand why the Gödel sentence must be true without leading to contradiction (Lucas, 1996; Sabinasz, 2017).

Thus, the Halting Problem illustrates that computational machines are fundamentally limited by the rules of their formal systems. They cannot solve certain problems or recognise specific truths (Lucas, 1996; Penrose, 1994). Humans, however, engage in meta-reasoning, using intuition and conceptual understanding to transcend these limitations (Penrose, 1994). This capacity to “step out” of formal systems supports the argument that human reasoning is non-computational and fundamentally distinct from machines (Lucas, 1996).

 

To conclude, Gödel’s Incompleteness Theorem and Turing’s Halting Problem highlight the inherent limitations of formal computational systems (Prokopenko et al., 2019; Penrose, 1994). Gödel’s theorem demonstrates that any formal (computational) system capable of arithmetic will contain true statements that cannot be proven within its rules, while the Halting Problem shows that no algorithm can universally determine whether a program will halt or run indefinitely (Penrose, 1994; Sabinasz, 2017). If human cognition were purely computational, it too would be subject to these limitations: Gödel sentences would not be recognisable to the human mind, yet they are, and the undecidability of the Halting Problem would be opaque to us, yet humans can understand why it holds. This indicates that there must be more to the mind than mere computation, and it highlights CTM’s limitations: the theory fails to fully explain human reasoning, which exceeds the constraints of formal computational systems. Cognition therefore appears to involve more than computation, requiring alternative or complementary models.

 

1) Aydede, M., 2010. The Language of Thought Hypothesis.

2) Burkholder, L., 1987. The halting problem. ACM SIGACT News, 18(3), pp.48-60.

3) Bringsjord, S. and Xiao, H., 2000. A refutation of Penrose’s Gödelian case against artificial intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), pp.307-329.

4) Bory, P., Natale, S. and Katzenbach, C., 2024. Strong and weak AI narratives: an analytical framework. AI & SOCIETY, pp.1-11.

5) Colombo, M. and Piccinini, G., 2023. The Computational Theory of Mind. Cambridge University Press.

6) Vestberg, M.E., 2017. A Compatibilist Computational Theory of Mind (Doctoral dissertation, Oxford Brookes University).

7) Edelman, S., 2008. Computing the Mind: How the Mind Really Works. Oxford University Press.

8) Flowers, J.C., 2019. Strong and weak AI: Deweyan considerations. In AAAI Spring Symposium: Towards Conscious AI Systems (Vol. 2287, No. 7).

9) Fodor, J., 1975. The Language of Thought.

10) Feferman, S., 1995. Penrose’s Gödelian argument. Psyche, 2(7), pp.21-32.

11) Goldstein, R., 2006. Incompleteness: The Proof and Paradox of Kurt Gödel. W.W. Norton & Company.

12) Gebicke-Haerter, P.J., 2023. The computational power of the human brain. Frontiers in Cellular Neuroscience, 17, p.1220030.

13) Gibbons, K., 2015. Penrose against Computational Theories of Mind. Diss., The Ohio State University.

14) Horst, S., 1999. Symbols and computation: a critique of the computational theory of mind. Minds and Machines, 9(3), pp.347-381.

15) Newell, A., Shaw, J.C. and Simon, H.A., 1959. Report on a general problem solving program. In IFIP Congress (Vol. 256, p. 64).

16) Newell, A., Shaw, J.C. and Simon, H.A., 1957. Problem solving in humans and computers. Carnegie Technical, 21(4), pp.35-38.

17) Newell, A., 1980. Physical symbol systems. Cognitive Science, 4(2), pp.135-183.

18) Prokopenko, M., Harré, M., Lizier, J., Boschetti, F., Peppas, P. and Kauffman, S., 2019. Self-referential basis of undecidable dynamics: From the Liar paradox and the halting problem to the edge of chaos. Physics of Life Reviews, 31, pp.134-156.

19) Penrose, R., 1994. Shadows of the Mind. Oxford University Press.

20) Knyazev, G.G., 2023. A paradigm shift in cognitive sciences? Neuroscience and Behavioral Physiology, 53(5), pp.892-906.

21) Kaplan, S., Weaver, M. and French, R., 1990. Active symbols and internal models: Towards a cognitive connectionism. AI & SOCIETY, 4, pp.51-71.

22) Marblestone, A.H., Wayne, G. and Kording, K.P., 2016. Toward an integration of deep learning and neuroscience. Frontiers in Computational Neuroscience, 10, p.215943.

23) Mureithi, V., 2024. Functionalism, algorithms and the pursuit of a theory of mind for artificial intelligence. Critical Humanities, 3(1), p.2.

24) Rescorla, M., 2015. The Computational Theory of Mind. The Stanford Encyclopedia of Philosophy.

25) Rakova, M., 2006. Philosophy of Mind A-Z. Edinburgh University Press.

26) Rogers, H., Jr., 1987. Theory of Recursive Functions and Effective Computability. MIT Press.

27) Searle, J., 1980. Minds, brains, and programs.

28) Sabinasz, D., 2017. Gödel's Incompleteness Theorem and Its Implications for Artificial Intelligence. [online] Available at: https://www.sabinasz.net/godels-incompleteness-theorem-and-its-implications-for-artificial-intelligence/ [Accessed 18 January 2025].

29) Sprevak, M. and Colombo, M. eds., 2019. The Routledge Handbook of the Computational Mind. London: Routledge.

30) Shagrir, O. and Bechtel, W., 2017. Marr’s computational level and delineating phenomena. In Explanation and Integration in Mind and Brain Science, pp.190-214.

 

 

 

 
