The Artificial in AI

We may have figured out how to create a brain before we figured out how it actually works.

Full disclosure: the quote above is not mine. I heard it from someone who heard it from someone who read it somewhere on the Internet. But it perfectly summarizes how the release of GPT-4 has made me feel. I’m not ashamed to admit that recent events make me feel more terrified than excited, and not because I see Skynet on the horizon (not that it’s an invalid concern). I’m terrified because of what GPT-4 implies about Cognition, Intelligence, … and myself.

To those who don’t know how a Neural Network works, GPT-4 is a mysterious science-y thing whose capabilities are easy to wave away (smart science guys came up with smart science stuff to make it). But to those who know how a NN is trained and used, it goes beyond that. Anyone who has studied NNs even moderately knows that what GPT-4 does should feel impossible. It was designed only to learn the underlying probability distribution of the next word and to spit out words according to that distribution. But it turns out that asking it to spit out words, after feeding it a giant text corpus, induces what’s called Emergent Behavior: it becomes capable of following complex commands and mimicking human-like reasoning and behavior (what the literature calls zero-shot learning). And even though many people argue that GPT-4 is merely pretending to be Intelligent, is it still pretense when you can no longer distinguish the pretense from the real thing? As far as I’m concerned, GPT-4 and other LLMs have begun to show a semblance of Intelligence, and that, in itself, is a remarkable thing!
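To make “learn the distribution of the next word and sample from it” concrete, here is a minimal sketch of that same generation loop, using a toy bigram model that just counts word pairs instead of training a neural network. This is emphatically not how GPT-4 works internally; the corpus and the model here are stand-ins purely for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count word bigrams in a tiny
# corpus, treat the counts as a conditional distribution P(next | current),
# and generate text by repeatedly sampling from that distribution.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def sample_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, freqs = zip(*counts[word].items())
    return random.choices(words, weights=freqs, k=1)[0]

word, output = "the", ["the"]
for _ in range(8):
    if not counts[word]:  # dead end: this word was never followed by anything
        break
    word = sample_next(word)
    output.append(word)
print(" ".join(output))
```

Everything surprising about GPT-4 lives in how much richer its learned distribution is than these bigram counts; the generation loop itself stays essentially this simple.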

All of this raises the question of the nature of Intelligence itself. A very reasonable question to ask would be: what is intelligence? But that is such a loaded question, with so many interpretations, that I’d rather not get into it. A related yet different question, however, could be: what is intelligent? Are only humans intelligent? Are only mammals intelligent? Are only carbon-based forms intelligent? Is it possible for different kinds of forms, say silicon-based ones, to be intelligent? Or is it possible for any arbitrary collection of particles to be intelligent?

To answer this question: it has increasingly appeared to me that Intelligence is an emergent phenomenon of any immensely large system operating under a few simple rules, irrespective of its composition. And this is what terrifies me: the knowledge that any “thing” can become Intelligent, and that humans, animals, plants, and every other lifeform in the universe are not special in any way. Any structure with a large number of synergistic components, operating under very simple rules towards a simple task (survival seems like a good candidate), would develop “Intelligence”.

And even though I’m not a devout believer, this ingrained belief of mine, that humans, who can reason and argue, are somehow miraculous accidents of nature, has been shattered. We are not miraculous; we are a foregone conclusion of the underlying mechanism of this world. In other words, we are just the emergent behavior of a large collection of neurons operating under a few simple rules. Whatever inspires this emergence may be miraculous, but I’m too unqualified to even talk about it. What I do know is that it will instill Emergence in just about any sufficiently large structure, very much like the beautiful, seemingly organized structure that emerges in a swarm where each element moves according to an exceedingly trivial rule (sketched below).
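The classic demonstration of this is Craig Reynolds’ “boids”. The specific swarm simulation alluded to above isn’t identified, so here is a minimal, self-contained sketch of those rules as a plausible stand-in. Every constant (neighbourhood radius, rule weights, speed cap) is an arbitrary choice for illustration; the point is that each agent only looks at its local neighbours, and flock-level order emerges anyway.

```python
import random

# Minimal "boids" flocking sketch (after Reynolds, 1987): each agent follows
# three trivial local rules -- drift toward nearby agents (cohesion), avoid
# crowding them (separation), and match their heading (alignment). No agent
# knows anything about the flock as a whole.

N, STEPS, RADIUS = 30, 100, 10.0

# State: 2D positions and velocities encoded as complex numbers (x + yi).
pos = [complex(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(N)]
vel = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

for step in range(STEPS):
    new_vel = []
    for i in range(N):
        neighbours = [j for j in range(N)
                      if j != i and abs(pos[j] - pos[i]) < RADIUS]
        v = vel[i]
        if neighbours:
            centre = sum(pos[j] for j in neighbours) / len(neighbours)
            avg_vel = sum(vel[j] for j in neighbours) / len(neighbours)
            v += 0.01 * (centre - pos[i])        # cohesion
            v += 0.05 * (avg_vel - vel[i])       # alignment
            for j in neighbours:                 # separation
                if abs(pos[j] - pos[i]) < RADIUS / 3:
                    v -= 0.05 * (pos[j] - pos[i])
        speed = abs(v)
        if speed > 2.0:                          # cap speed for stability
            v *= 2.0 / speed
        new_vel.append(v)
    vel = new_vel
    pos = [p + v for p, v in zip(pos, vel)]

# Crude measure of order: how aligned the headings are (1.0 = perfect).
alignment = abs(sum(v / abs(v) for v in vel if abs(v) > 0)) / N
print(f"heading alignment after {STEPS} steps: {alignment:.2f}")
```

Run it a few times: the heading-alignment score typically climbs well above where it starts, even though no rule ever mentions the flock as a whole.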

If Intelligence can emerge in any sufficiently large structure, then I guess what I’m asking is: can we even call what LLMs represent “Artificial Intelligence”, as opposed to just “Intelligence”? After all, there’s nothing special about us that demands other forms of intelligence be called Artificial. Sure, there are people who’d argue that these models do not yet exhibit this or that characteristic of full intelligence. But from the perspective of Emergence, that’s just because there aren’t yet enough parameters in these models, not because it can never happen! Of what Evolution did over millions of years by bringing carbon-based neuronal structures together, we’ve achieved a significant fraction in about six years (counting from the invention of the Transformer) by bringing silicon-based structures together. Is it truly hard to believe, then, that we will achieve human-level intelligence in the next 5, 10, or 100 years? Will we still be calling them AI then?