AI and the Brain | Why Can't Machines Understand Humor and Metaphors?

Defining the Cognitive Gap Between Humans and AI

What cognitive functions are involved in understanding humor?

Understanding humor is a complex process that involves multiple distinct stages of neural processing, not a single cognitive function. The initial stage, incongruity detection, is primarily managed by the prefrontal cortex, the brain's executive control center. This region is responsible for identifying a mismatch between an expected outcome and the actual punchline of a joke. Following this, the brain must resolve the incongruity in a way that is surprising yet coherent. This resolution process activates the temporo-parietal junction, an area critical for shifting perspectives and making sense of unexpected connections. Finally, the limbic system, particularly the nucleus accumbens and amygdala, generates the feeling of amusement and pleasure associated with "getting the joke." This emotional reward response is crucial for the experience of humor. AI systems can often detect incongruity at a semantic level but lack the embodied emotional and social frameworks to resolve it in a contextually meaningful way, which is why they cannot replicate the affective experience of humor.
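The claim that AI can detect incongruity "at a semantic level" can be made concrete with a minimal sketch. A language model assigns a probability to each next word; an unexpected punchline word has low probability and therefore high surprisal. The probability table below is a hand-made toy assumption, not real model output, and `surprisal` is an illustrative helper, not a library function:

```python
import math

# Toy next-word probabilities, a stand-in for a real language model's
# predictions. The words and numbers are illustrative assumptions.
NEXT_WORD_PROBS = {
    ("doctor",): {"appointment": 0.6, "visit": 0.3, "sandwich": 0.01},
}

def surprisal(context, word, probs=NEXT_WORD_PROBS):
    """Return -log2 P(word | context); higher means more unexpected."""
    dist = probs.get(tuple(context), {})
    p = dist.get(word, 1e-6)  # tiny floor probability for unseen words
    return -math.log2(p)

# An expected continuation yields low surprisal...
expected = surprisal(["doctor"], "appointment")
# ...while a punchline-like word yields high surprisal: incongruity detected.
punchline = surprisal(["doctor"], "sandwich")
print(expected < punchline)  # prints True
```

Note what the sketch cannot do: it flags the mismatch, but nothing in it corresponds to resolving the incongruity coherently or to the limbic reward response, which is precisely the gap the paragraph describes.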

How does the brain process metaphorical language?

The brain processes metaphors not as linguistic errors but as a form of conceptual mapping, primarily engaging the right hemisphere. While the left hemisphere is dominant for literal language processing (syntax and semantics), the right hemisphere excels at interpreting context, nuance, and non-literal meanings. When a metaphor like "the lawyer is a shark" is heard, the brain doesn't just analyze the words; it activates sensory and motor cortices associated with the source domain (shark), retrieving attributes like "aggressive" and "predatory," and maps them onto the target domain (lawyer). This requires access to a vast network of semantic and episodic memories, allowing the brain to draw abstract parallels between dissimilar concepts. This process, known as conceptual blending, is a sophisticated cognitive function that relies on embodied experience—understanding what a shark is from a sensory and experiential perspective, not just a dictionary definition. Current AI models lack this embodied grounding, processing metaphors as statistical correlations in text rather than as rich, multi-modal conceptual maps.
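The source-to-target mapping described above can be sketched as a toy program. Everything here is a hand-coded assumption: the attribute store stands in for embodied, multi-modal knowledge, and the set of person-relevant attributes stands in for the contextual judgment of which attributes transfer, which is exactly the part that, per the article, requires embodied grounding:

```python
# Hypothetical attribute store standing in for embodied, sensory knowledge
# of the source domain. All entries are illustrative assumptions.
SOURCE_ATTRIBUTES = {
    "shark": {"aggressive", "predatory", "relentless", "gray", "finned"},
}

# Hand-picked assumption: which attributes plausibly transfer to a person.
# A human selects these effortlessly from context; here they are hard-coded.
PERSON_RELEVANT = {"aggressive", "predatory", "relentless"}

def interpret_metaphor(target, source):
    """Map source-domain attributes onto the target, keeping only those
    relevant to the target's domain (here: people)."""
    attributes = SOURCE_ATTRIBUTES.get(source, set())
    return {f"{target} is {attr}" for attr in attributes & PERSON_RELEVANT}

print(sorted(interpret_metaphor("the lawyer", "shark")))
```

The filter step makes the contrast visible: a statistical system has no principled way to know that "predatory" transfers to the lawyer while "finned" does not, unless that distinction is already encoded in its training data.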

Q&A: The Neurological Basis of Abstract Thought

Why is "common sense" so difficult for AI?

From a neuroscientific standpoint, "common sense" is not a single database of facts but an emergent property of the brain's integrated architecture. It relies on implicit knowledge gained through physical interaction with the world (embodied cognition) and social learning (theory of mind). For instance, knowing that a glass of water will spill if turned upside down is understood not just from reading about it, but from motor memories and sensory feedback stored and processed across the parietal and motor cortices. AI models, trained on text and image data, lack this direct sensory-motor experience. They learn statistical patterns but do not possess the underlying causal or physical models of the world that the human brain constructs effortlessly from infancy.
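The glass-of-water example can be restated as a toy causal model. The point of the sketch is negative: the spill rule must be programmed in explicitly, whereas humans acquire it from sensory-motor experience. The `Glass` class and its values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Glass:
    orientation: str = "upright"   # "upright" or "inverted"
    water_ml: float = 200.0

def turn_upside_down(glass):
    """A hand-coded causal rule: inverting an open container spills its
    contents. A text-trained model has no such rule unless the pattern
    happens to be recoverable from its training data."""
    glass.orientation = "inverted"
    spilled = glass.water_ml
    glass.water_ml = 0.0
    return spilled

g = Glass()
spilled = turn_upside_down(g)
print(spilled, g.water_ml)  # prints 200.0 0.0
```

An infant builds this model from motor memories and sensory feedback; the sketch only works because a programmer supplied the causal rule by hand.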

Does this mean AI can never truly be creative?

Current AI demonstrates generative capabilities, not genuine creativity. It excels at pattern recognition and recombination within its training data, which allows it to produce novel but derivative works. True human creativity, however, often involves conceptual leaps and the deliberate violation of established norms, driven by an understanding of cultural and social context. This requires a "theory of mind"—the ability to model the mental states of an audience to surprise, delight, or provoke them. This function is associated with specific neural networks, including the medial prefrontal cortex. AI lacks this subjective, intentional framework; it generates content without comprehension or purpose, distinguishing its output from the intentional, context-aware creativity originating from the human brain.

Q&A: The Future of AI and Human Cognition

How do neurological disorders affect the understanding of humor and metaphors?

Clinical neuroscience provides compelling evidence that understanding humor and metaphors is tied to specific brain structures. Patients with damage to the right cerebral hemisphere or the prefrontal cortex often exhibit deficits in processing non-literal language. For example, a patient with a right-hemisphere stroke may understand the individual words of a joke but fail to grasp the punchline because they cannot integrate the different pieces of information to resolve the incongruity. Similarly, individuals with certain forms of dementia or frontal lobe damage may interpret metaphors in a strictly literal sense. This condition, known as concretism, demonstrates that abstract thought is not a generalized function of intelligence but depends on the health and integrity of specific neural circuits. These clinical cases underscore that these cognitive abilities are biological functions that AI, in its current form, does not possess. They are not merely about processing data but about how that data is integrated within a complex, embodied, and emotionally resonant biological system.