Is AI the Ultimate Reinvention of the Wheel?
AI works by predicting the most likely next piece of information, based on patterns it has learned from vast amounts of training data. Let's say that the following range of numbers represents that training data:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
When predicting the next word, it can generate novel output that is outside this list, e.g. 5.5, 4.20, 6.9817302. This is really useful, as we can all clearly see from our daily usage of GPT. Theoretically, there is an infinite number of results. But those results are still limited to that 1–10 range.
So if that 1–10 range represents the data that AI knows, then the infinities outside of those two ends represent all data in the universe. But that's out of reach for the AI. So it will never generate e.g. 24. Or 9121.63. Or -500. From its point of view, that knowledge doesn't exist. Just as we don't know what's out there a gajillion light-years away from here.
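To make that concrete, here's a toy sketch in Python. The `predict_next` function is entirely made up for illustration — a real LLM samples tokens from a learned probability distribution, not pairs of numbers — but it captures the point: novel outputs, hard boundaries.

```python
import random

# Toy stand-in for training data -- the only values the "model" has ever seen.
training_data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def predict_next(data):
    """Illustrative only: blend two known values into a novel one.
    The result can be something never seen before (5.5, 6.9817...),
    but it can never escape [min(data), max(data)]."""
    a, b = random.sample(data, 2)
    t = random.random()
    return a + t * (b - a)

for _ in range(3):
    print(round(predict_next(training_data), 4))
# Prints novel values like 5.5 or 6.9817 -- never 24, 9121.63, or -500.
```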
Sure, there is RAG and we use it to dynamically enrich the context of AI beyond its initial training data. But this augmentation still depends on data gathered by us and explicitly made available to the AI by us. So it's just like dynamically adding 11 to that 1–10 range.
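In the spirit of the toy example above, RAG boils down to something like this. `retrieve` and `external_store` are hypothetical placeholders, not a real RAG library:

```python
# Continuing the toy example: "retrieval" is just a lookup into a store
# that a human filled beforehand. All names here are invented for illustration.
external_store = {"recent measurements": [11]}

def retrieve(query):
    # A real RAG pipeline would run a vector search over documents;
    # this dictionary lookup stands in for that step.
    return external_store.get(query, [])

context = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
context += retrieve("recent measurements")
print(context)  # [1, ..., 10, 11] -- the range grew, but only because we put 11 there
```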
To really make AI boundless, its context window shouldn't be limited to data gathered by humans. It should be able to perceive the world and discover information on its own. Just as Christopher Columbus knew Asia and hopped on a ship to find a better route to it, but instead discovered America by accident. AI can't discover things by accident. It gets a filtered and watered-down version of what we have already seen. Then it acts on it through math, not wonder.
At this point, we're talking about something way more than the AI we currently have. Something a lot more human-like. We're talking about AGI. This is where it gets tricky, though.
We still don't fully understand how our brains work, yet they hold the secret sauce that AGI needs. The only thing we know for sure is that they're very complex. On the other hand, we're already having trouble cramming more data into LLMs, while OpenAI's GPUs are melting and costing them a shit-ton of money.
But anyway… transistors on a processor chip are way faster than our sluggish biological brains, right? Well, they are a lot faster than us in many regards, e.g. computing the Fibonacci sequence (see the sketch after this list). But while your brain thinks about Fibonacci numbers, it's also occupied with:
- Running your heartbeat and breathing on autopilot
- Keeping 30+ trillion cells alive and talking to each other
- Predicting what you'll see/hear next so the world feels smooth
- Managing thousands of tiny chemical balances nonstop
- Making sure you take a shit every day
- Remembering if you left the stove on
- …and unbelievably much more.
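For reference, the narrow task where silicon wins looks like this — a minimal sketch of the kind of single-purpose arithmetic a CPU rips through while doing nothing else:

```python
def fib(n):
    """Iterative Fibonacci -- a tight loop with no heartbeat to run,
    no cells to keep alive, no stove to remember."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(50))  # 12586269025, computed near-instantly
```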
So if we level the playing field and throw all of that crap onto your fancy CPU chip, it probably wouldn't end up much quicker than a human brain now, would it?
Let's remember that our brains also work with electrical impulses, much like a processor chip. It's just that a chip has the luxury of computing a lot less, so it makes sense that it's much faster. The more computation you add, the slower it gets.
So by further pushing AI, are we actually improving it? Storing and processing more data has a computational cost, and computation takes time. Isn't this exactly why we are so slow?
Even if we manage to make AGI that works just like our brain, why do we think that electricity running through silicon is going to fare any better than electricity running through flesh? Even more so when we don't fully comprehend the complexity that this flesh is capable of successfully dealing with.
Perhaps the magic of AI is exactly that it's not like us. That it's vastly simpler than our brains and is therefore able to solve a certain category of problems way more efficiently than us. The more data we try to cram into AI, the more compute it'll need. The more compute it needs, the slower it'll become. And the slower it becomes, the fewer benefits it'll have over us.
What if you have to wait not minutes, but hours for a decent AI response? And then pay thousands of dollars for it? At which point are we back to square one, where it proves more efficient to just hire a regular old flawed person to do the job?
By continuing down the current path, are we going to end up with a machine god that is truly more capable than us… or are we going to end up with an overly complicated mechanical human that shares many of the same limitations?