The Non-Computational Nature of Language & The Impossibility of True Artificial Intelligence.

Any definition of a non-quantitative concept is best compared to the partial superimposition of several overlapping patches of clouds — perhaps six or seven — whose contours fade and dissolve at their presumed boundaries.

The meaning of such a concept corresponds to the area where the patches overlap, but this area is inherently indefinite and imprecise. From a distance, this overlap may appear to have a distinct edge, much like a cloud seen from afar, creating the illusion of clarity and definition.

This explains why non-quantitative concepts often work well in everyday discourse, where rough approximations of meaning are sufficient for communication. However, when these concepts are subjected to rigorous analysis, they begin to dissolve and evade precise articulation, frustrating the search for exact definitions.

This difficulty lies at the heart of the intellectual discomfort produced by positivism. Logical positivism was built on the assumption that all meaningful reasoning should resemble scientific reasoning — clear, precise, and strictly defined.

But this presumption ran into trouble when applied to non-quantitative concepts, which resist strict definition because they are more like shifting clouds than solid building blocks.

A. J. Ayer himself eventually conceded that nearly all of logical positivism was false, but the deeper reason for its failure was that it sought a precision in language that is only achievable in the quantitative realm.

Language, especially in its most foundational and abstract form, does not operate with the neatness of mathematical symbols or scientific measurements.

Instead of finding sharply defined conceptual atoms, the logical positivists encountered only foggy and ambiguous patches of meaning — linguistic contours that blur and dissolve under close scrutiny.

Words, unlike numbers, do not have crisp edges. To use a word is often like gazing at a forest from a distance: you can clearly discern the shape of the forest as a whole, but when you attempt to focus on an individual tree, it becomes difficult to isolate and define it with precision.

The fact that we can still communicate effectively despite this vagueness suggests that language operates on a different level from scientific reasoning — one that draws on intuitive (non-computational) structures rather than precise logical form.

This brings us to the idea of “Gödelian terms” — fundamental terms such as “God,” “Truth,” and “Reality” — which form the foundation of human language and thought. These terms are Gödelian in the sense that they cannot be fully reduced to smaller constituent parts or fully explained through logical decomposition.

Just as Gödel’s incompleteness theorems demonstrated that certain truths within a mathematical system cannot be proven from within the system itself, these foundational terms seem to point to meanings beyond the reach of formal definition. They are echoes from a deeper, more elusive source of understanding.
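For reference, the theorem being invoked here has a precise standard statement. In a common textbook formulation (using the usual notation, where T ⊬ φ means T does not prove φ):

```latex
% Gödel's first incompleteness theorem (standard formulation):
% if T is a consistent, effectively axiomatizable theory that
% interprets elementary arithmetic, then there exists a sentence
% G_T that T can neither prove nor refute.
% (\nvdash requires the amssymb package.)
T \nvdash G_T
\quad \text{and} \quad
T \nvdash \lnot G_T
```

The sentence G_T is true of the natural numbers yet unprovable within T itself, which is the disanalogy-resistant core of the essay's analogy: the system can express more than it can settle from the inside.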

The real mystery, then, lies not only in the fact that these terms resist definition but in the very fact that we have an intuitive grasp of them at all.

Why do we have a natural, seemingly innate understanding of concepts like “truth,” “reason,” and “God”?

Where does this intuitive resonance come from?

It suggests that the mind is not merely computational — that human reasoning operates on a plane that transcends algorithmic or mechanical processes.

A computer, however advanced, could never engage with these foundational concepts in the way that the human mind does because they are not reducible to computational structures.

The fact that we can even contemplate these terms, that we are drawn toward them and seek to understand them despite their elusive nature, suggests that they reflect a deeper metaphysical reality — an “echo from afar” that calls to us from beyond the reach of logical and scientific reasoning.

If this account is right, the nature of language and human cognition implies that true artificial intelligence — in the sense of replicating human thought and understanding — is fundamentally impossible.

Unlike numbers, which are discrete and well-defined, words possess fuzzy and shifting contours. Their meanings are not fixed or algorithmically determined but emerge through context, intuition, and experience.

Words are not precise symbolic entities that can be processed like numbers; they operate within a realm of ambiguity and subtlety that resists reduction to computational logic.

Therefore, human beings do not learn or understand language through algorithmic processes. Language acquisition and understanding draw upon faculties that are qualitative rather than quantitative, intuitive (non-computational) rather than mechanical.

This fundamental difference between quantitative and non-quantitative concepts creates a gap that AI cannot bridge.

AI systems are built on computation, which thrives on mathematical precision and the manipulation of discrete symbols. But the human mind moves seamlessly between the quantitative and non-quantitative realms — between logic and intuition, between measurable facts and subjective meaning — without any friction.

A human being, for example, can simultaneously grasp the mathematical symmetry of a piece of music or an artwork while also appreciating its emotional and aesthetic resonance.

In the same act of perception, the human mind integrates the quantitative and the qualitative into a single act of understanding.

This reveals that knowledge comes in two distinct forms: the realm of quantities, where precision and calculation govern, and the realm of non-quantities, where ambiguity, metaphor, and intuition reign.

These two realms are not reducible to one another, and any attempt to collapse one into the other creates confusion rather than clarity.

This is why logical positivism ultimately failed — it sought to reduce all human knowledge to the quantitative model of scientific reasoning, ignoring the irreducible nature of the qualitative.

Yet this tension between the quantitative and the qualitative is not a flaw in human cognition; it is a feature.

The human mind is more than computation.
