The Uncharted Territories of Human Cognition in AI
We’ve all heard the bold claims: AI will conquer the world; AI will replace us. With Nvidia and a handful of other tech giants driving most of the S&P 500’s gains in 2023, it’s hard to find genuinely objective perspectives on this new wave of AI. Many experts either have a narrow scope of knowledge or face significant conflicts of interest. They are not to blame; we all share this feeling of being left behind by the swiftly moving train of modern technology. Notably, these new AIs are not being built primarily by the established large companies; the wave has instead given rise to new players such as OpenAI and Mistral, not to mention the numerous startups advancing this family of attention-based models.
These attention-based models demand vast amounts of data and computing power, like their predecessors. Yet, fundamentally, our approach to AI hasn’t shifted; we’ve merely augmented our ability to process and analyze data with superior computational resources. I’ll steer clear of the algorithmic intricacies, given their volatile nature and the difficulty in pinpointing a single algorithm as the primary catalyst. Instead, we’re looking at a collective of algorithms fueled by an ever-increasing supply of data and computational power.
But is this truly the path to achieving intelligence? Clearly not. Human intelligence operates on sparse data, filling gaps with coherent reasoning, abstracting complex concepts, and relying on deeply internalized processes. We benefit from a neuromuscular system, equipped with custom-built sensors, to derive intelligence — a stark contrast to the current AI paradigm. These differences underpin this post and offer yet another reason to question the hype surrounding AI, LLMs, and the other buzzwords emerging from our deep-rooted technocracy. Beneath the surface, we recognize that machines aren’t us. In our eagerness to anthropomorphize machines, claiming they could surpass us or even achieve consciousness, we lose sight of what it means to be a conscious being. We overlook the paradoxical nature of existence, our undefined reality, and our interconnectedness with other beings. I touched on this here and will likely revisit it in future entries.
Distinctions Between Human and Artificial Intelligence
Let’s go back to the topics mentioned earlier and highlight the key differences between today’s AI and human intelligence. Human intelligence is rooted in:
- Sparse data, not the entirety of human-race knowledge.
- Coherency in experience, not hallucinations.
- Being an abstraction engine, not a model with 70B+ parameters.
- Internalized processes, not starting from scratch with each training.
- A co-evolved neuromuscular system, not AI’s isolated advancements.
- Raw and analog data, not pre-defined digital precision.
Finally, humans possess:
- A level of intelligence that enables introspection, metacognition, and self-awareness—qualities that remain elusive for AI.
These facets are intertwined, making it impractical to address them individually. Instead, I will attempt to categorize them broadly.
Sparse Data, Coherency, and Abstraction
In the realm of human experience, we are not flooded with vast quantities of data; in fact, being so would be problematic for leading a fulfilling life. Our interaction with information is characterized by its sparsity, both in its temporal availability and its volume. When overwhelmed with meaningless data, we simply tune out. From infancy, we are not exposed to the entirety of human knowledge; nobody reads us all of Wikipedia. Yet we absorb a remarkable array of knowledge that goes far beyond merely completing sentences. We grasp words, the emotions they convey, their grammatical structures, pronunciation, and intonation. Over time, our learning extends to understanding the appropriate contexts for this newfound knowledge. Importantly, our learning journey is perpetual; there is no distinct training phase followed by an application phase — it unfolds organically.
With sparse data come abstraction and coherent reasoning. To make sense of data, we fill in the gaps with coherent reasoning while simultaneously abstracting topics. This abstraction also applies to our sensory inputs. Have you ever noticed how traveling a new route always seems to take longer or feels more novel? That’s because your mind has already abstracted all the essential information from your usual route, including the time it takes. Every single object on your daily route is still present, but you no longer consciously register it. This method of abstraction applies to everything in life: feelings, daily experiences, and people. These elements become internalized to alleviate the cognitive load on our main intelligence and to encourage us to seek out new experiences, push boundaries, and explore what truly makes us human. (This would lead to a discussion on self-observation, a topic beyond the scope of this post.)
Let’s revisit coherent reasoning. It is a constant in our lives: often beneath conscious awareness, we judge and reason. We continuously monitor our surroundings; when we detect something amiss (thanks to our internalized, observant processes), our main intelligence springs into action to investigate. This instinctual response can manifest as unease, or even chronic stress, when we sense something is wrong but cannot immediately identify the issue. These processes, whether developed through personal experience or embedded in our DNA, enable us to intuit emotions and intentions in others. Such intricate reasoning fills gaps and alerts us to potential dangers, though it also introduces biases — many of which serve little purpose in today’s world (a discussion for another time).
Coherent reasoning and making sense of the world go beyond biases and alerts. They are the source of our creativity, aided by our ability to abstract. Show a human a series of random images with no apparent connection, and they will create meaning. We connect the images into a narrative (e.g., check this) and even feel the accompanying emotions; our mind has created a story. The same process occurs internally, among the abstractions we’ve built throughout our lives, what we’ve recently learned or experienced (i.e., ‘priming’), and this coherence engine. These elements foster creativity — but creativity still needs tending: we must pay attention to it, make it useful, and prune the ideas it produces.
All these processes have the potential to fail us. The act of extracting meaning from life through our abstractions has, at times, led humanity into dark chapters of history. Biases, rooted in our internalized processes and the instinct to follow the crowd (conformity), have made indelible marks that are challenging to erase. Furthermore, relying on sparse data for reasoning is a significant factor behind many erroneous decisions people make.
As for today’s AI, it falls remarkably short of these complexities. The current approach of indiscriminately feeding data into algorithms, in the hope that some new algorithm will perform better, is fundamentally flawed. There is talk of AI achieving consciousness, but honestly, I don’t know where to draw the line for consciousness. Still, perhaps our AI will one day transform into something else, revealing a form of intelligence entirely distinct from our own. In that case, we should probably find another word to describe it.
Neuromuscular System, Internalization, and Data Representation
Our bodies and intelligence have co-evolved, intricately woven together long before the dawn of civilization. Each of our senses—sight, taste, smell, touch, etc.—is uniquely calibrated, resulting in subtle perceptual differences among individuals. Unlike standardized equipment, no two humans perceive the world in exactly the same way; we aren’t equipped with uniform eyes or identical sensory organs. Instead, our minds and senses develop in tandem, forming specialized connections that not only allow us to learn but also to feel deeply. This complex interplay is what endows each person with their unique perspective. Consider the simple act of drawing a house: no two drawings are the same. This diversity highlights a fundamental oversight in current AI development, which focuses predominantly on algorithmic processing, neglecting the nuanced, embodied experience of interacting with the world. In essence, AI should be designed to evolve, much like a sentient agent in the real world, adapting and learning from its environment in a way that mirrors human growth and perception.
This discussion extends to data representation, where the current trend is reduced-precision and block number formats (e.g., BF16, MX formats), mainly for LLMs. Yet our bodies and the natural world have evolved far superior systems and formats for processing information. Unfortunately, we often overlook the sophistication of these integrated systems. In the realm of AI, while we have identified a few effective approaches, we largely neglect the comprehensive capabilities required for an agent to function effectively in the real world. We primarily process visual data through RGB formats for our CNN or ViT models, assuming that any shortcomings lie within the models themselves. In contrast, human vision has been refined (partially through genetic inheritance) to detect patterns, with some processes being internalized and others relayed to various systems within our intelligence network. Often, we react to patterns instinctively, driven by programming both innate and acquired over time, though it is possible to modify these internalized processes to some extent. Furthermore, the diverse sensors we possess communicate in formats far richer than the simplistic binary or numerical systems employed in technology. By honing our focus, we can enhance the data received from these sensors or develop muscle memories that significantly improve our abilities, reducing the need for conscious thought and thereby streamlining our interaction with the environment.
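To make the precision trade-off of such formats concrete, here is a minimal Python sketch (the helper name `to_bfloat16` is my own, not a standard API) that emulates BF16 by truncating a float32 bit pattern. BF16 keeps float32’s 8 exponent bits but only 7 explicit mantissa bits, so roughly two to three decimal digits of precision survive — a crude approximation next to the adaptive, context-dependent encoding our senses perform.

```python
import struct

def to_bfloat16(x: float) -> float:
    """Emulate BF16 truncation: keep the top 16 bits of the
    IEEE-754 float32 representation (sign, 8 exponent bits,
    7 mantissa bits) and zero out the low 16 mantissa bits."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    bits &= 0xFFFF0000  # drop the low 16 mantissa bits
    return struct.unpack('>f', struct.pack('>I', bits))[0]

print(to_bfloat16(3.1415926))  # → 3.140625: only ~3 digits survive
```

The dynamic range (the exponent) is untouched, which is precisely why BF16 is popular for training LLMs: gradients rarely need many mantissa bits, but they do overflow easily in narrower-range formats.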
Metacognition: The Pinnacle of Human Intelligence
In the realm of human cognition lies the remarkable ability of metacognition: the capacity to think about our own thinking. René Descartes famously leveraged this capacity (“I think, therefore I am”) to secure a point of certainty even under radical doubt about whether the world we perceive is an illusion. This aspect of human intelligence is both profoundly strange and fascinating. It suggests that with deliberate effort, we can transform our thought patterns and internalized processes — perhaps not entirely, but significantly enough to affect most aspects that matter in today’s world.
Unfortunately, today’s AI is in a realm far removed from the essence of true intelligence—if that is what it claims to achieve. The pursuit of buzzwords, alongside the allure of money and fame, has led us astray, causing us to lose touch with our inner selves and, unfortunately, with our planet and fellow beings.