Transcript
TA6tpz2kG3c • Why Scaling LLMs Won't Lead to AGI: Yann LeCun’s Reality Check
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/FoundationModelsForRobotics/.shards/text-0001.zst#text/0056_TA6tpz2kG3c.txt
Kind: captions
Language: en
Right now, the entire tech world is
consumed by one single massive bet.
We're talking hundreds of billions of
dollars with every major player on the
planet all in. And the bet is this: if we
just keep building bigger and bigger AI
models, feeding them more and more data,
eventually they'll just spark into
something truly intelligent. But what if
that entire bet is wrong? What if we're
building a trillion-dollar ladder
against the wrong wall? Today we are
diving deep into the powerful
counterargument from a man who helped
build this world in the first place. AI
pioneer and Turing Award winner Yann
LeCun. And he believes the industry is
chasing a ghost. And he's not afraid to
say it. I mean, just look at this quote.
This is a bombshell dropped right in the
middle of the AI gold rush. It's not
some polite academic disagreement. It's
a direct, no-holds-barred declaration that
the main strategy everyone is following
is a dead end. Now, coming from one of
the godfathers of AI, this is like the
chief engineer at NASA turning around
and saying, "Hey guys, the rockets,
they're pointed in the wrong direction."
What LeCun is arguing is that the very
foundation of today's AI, the large
language model, has hard, unbreakable
limits. He's saying it doesn't matter
how much data you feed it or how many
GPUs you throw at it, it will never get
to the thing we're all actually talking
about, genuine human level intelligence.
So, to really get what LeCun is saying,
we're going to go on a bit of a journey.
First, we'll tear down this myth of just
scaling our way to AGI. Then, we'll hop
in a time machine to look at some of
AI's ghosts of the past to see why a
little caution might be a good idea.
We'll dig into the dangerous gap between
today's investment hype and the tech
reality and what that could lead to. And
then, once we've broken down the
problem, we're going to pivot to the
solution: LeCun's vision for a totally
different path forward and why he thinks
the future of AI has to be open to
everyone. Okay, let's kick things off
with the biggest myth in AI today. This
idea that we can just make our models
bigger and bigger until they magically
become super intelligent. The hype you
hear in boardrooms is that if you just
add enough parameters, these things will
just wake up. That true understanding is
just something that emerges from massive
scale. Well, LeCun says that's not just
wrong, it's a complete fantasy, and it's
based on a huge misunderstanding of what
these systems are actually doing. And
this right here gets to the absolute
heart of it. On one hand, what these AI
models do is amazing. They're basically
autocomplete on steroids. You can think
of them as these giant sophisticated
memory banks of human language. They can
recall and remix information in ways
that sound incredibly fluent. They are
masters of recognizing patterns. But,
and this is the big but, that's all
they are. LeCun's critical point is what
they are not. They are not thinkers.
They cannot reason. They can't come up
with a clever solution to a problem they
haven't seen hints of in their training
data. And maybe most important of all,
they have zero connection to the real
world. An LLM knows the word gravity cuz
it's read it a million times next to
words like apple and fall. But it
doesn't understand that if you let go of
your coffee cup, it's going to smash on
the floor. It's a parrot. A brilliant,
eloquent parrot that can quote
Shakespeare, but still just a parrot.
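To make the "autocomplete on steroids" point concrete, here is a minimal, hypothetical sketch of what an LLM actually computes at each step: a probability ranking over possible next tokens. It uses the small public "gpt2" checkpoint via Hugging Face's transformers library purely as an illustration; the prompt and the top-5 cutoff are arbitrary choices.

```python
# Minimal sketch: given a text prefix, an LLM scores every candidate
# next token. There is no physics model here, only token statistics.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "If you let go of a coffee cup, it will"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the very next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  p={prob.item():.3f}")
```

The model may well rank a word like "fall" highly, but only because those words co-occur in its training text, which is exactly the parrot problem.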
And yeah, LeCun does not mince words. He
calls this popular idea of building a
country of geniuses in a data center
complete BS. Why? Because real genius or
even just regular human intelligence
isn't just about memorizing facts. It's
about understanding how those facts
connect to the real world. It's about
building a mental model of reality and
using it to plan and to reason. Talking to
an AI might feel like talking to a PhD
with a perfect memory. Sure, but it's a
PhD who spent its entire life locked in
a library reading books. It's never been
outside, never felt rain, never learned
that fire is hot. It has knowledge but
no wisdom, no common sense. And that's a
fundamental gap that you just can't fix
by adding more data. So if scaling isn't
actually the path to real intelligence,
then where are all these billions of
dollars going? This is a really key
point. Most of that cash isn't going
into fundamental research to invent the
next generation of AI. It's going into
infrastructure for what's called
inference. Now, put simply, inference is
the live part. It's the cost of AI
actually answering your question. See,
training the model is like designing a
new car. It's really expensive, but you
do it once. Inference is like building a
gigantic factory that spans a continent
just to manufacture millions of those
cars every single day. The investment
we're seeing is about scaling the
service to a billion people, not scaling
the intelligence of the system itself.
It's about operations, not revolution.
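To see why, a hedged back-of-the-envelope sketch helps; every number below is hypothetical, chosen only to show the shape of the economics, not actual figures from any provider.

```python
# Hypothetical economics: training is a one-time cost, inference is a
# recurring cost that scales with the number of users served.
TRAINING_COST_USD = 100e6      # assume: one big training run costs $100M
COST_PER_QUERY_USD = 0.002     # assume: $0.002 of compute per answer
QUERIES_PER_USER_PER_DAY = 10  # assume: 10 questions per user per day
USERS = 1e9                    # the "billion people" from above

daily_inference = USERS * QUERIES_PER_USER_PER_DAY * COST_PER_QUERY_USD
print(f"Daily inference bill: ${daily_inference:,.0f}")   # $20,000,000
print(f"Days until serving exceeds the training run: "
      f"{TRAINING_COST_USD / daily_inference:.0f}")        # 5
```

Under those toy assumptions, the recurring serving bill overtakes the one-time training run within a week, which is why the money flows into data centers rather than into new science.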
Now, let's take a quick trip back in
time. Because for people who've been in
this field for decades, like LeCun, this
current wave of hype feels, well, eerily
familiar. The history of AI is just
littered with these cautionary tales of
spectacular promises followed by
crushing disappointments. We have seen
it time and time again. Incredible demos
that promise to change the world only to
fall apart in the messy reality of
everyday life. And LeCun thinks we're
walking right into that same old trap, a
classic problem known as the last mile.
The last mile problem is actually pretty
simple to understand, but it's brutally
hard to solve. Getting a system to work
90% or even 95% of the time, yeah,
that's often doable. But getting that
last 5% of reliability, making it tough
enough to handle every weird, unexpected
thing the real world throws at it, that
is exponentially harder. Just think
about self-driving cars. We've had these
jaw-dropping demos of cars driving
through cities for almost a decade. It
looked like the future was right around
the corner. And yet, true Level 5, go
anywhere, handle anything autonomy is
still a distant dream. Why the last
mile? The car drives perfectly until a
flock of birds suddenly takes off in
front of it or a kid in a weird
Halloween costume runs into the street.
It's that endless list of unpredictable
stuff that makes the last mile so
treacherous. And you know, maybe the
greatest monument ever built to the last
mile problem is IBM Watson. After it
famously destroyed the human champions
on Jeopardy, the hype was just off the
charts. Watson was going to cure cancer.
It was going to be a digital doctor in
every hospital diagnosing rare diseases.
The demo was flawless. The reality, a
complete disaster. It turned out that
the neat, structured, fact-based world
of a game show has basically nothing in
common with the messy, nuanced, and
incomplete world of medical records.
Watson would make these confident, but
dangerously wrong recommendations. The
project burned through billions of
dollars, was ultimately called a
complete failure, and its health
division was literally sold for parts.
It is a powerful reminder that a great
demo does not make a great product. The
thing is, this isn't the first time
we've been on this roller coaster. The
whole history of AI is a cycle of boom
and bust. Back in the 80s, all the hype
was about expert systems. These were
giant programs made of hand-coded
if-then rules trying to capture the
knowledge of a human expert. The promise
was digital doctors, digital lawyers.
But the systems were incredibly brittle.
If they saw something that wasn't in
their rules, they just broke.
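As a toy illustration of that brittleness (entirely hypothetical rules, in the spirit of those 1980s systems, not code from any real product):

```python
# A miniature "expert system": hand-coded if-then rules.
# It answers confidently inside its rules and breaks outside them.
def diagnose(symptoms: set[str]) -> str:
    if {"fever", "cough"} <= symptoms:
        return "flu"
    if {"sneezing", "itchy eyes"} <= symptoms:
        return "allergies"
    # No rule matches: the classic brittle failure mode.
    raise ValueError(f"no rule covers symptoms: {symptoms}")

print(diagnose({"fever", "cough"}))        # -> flu
print(diagnose({"fatigue", "headache"}))   # -> ValueError, the system breaks
```

Every case the designers didn't hand-code in advance is a crash, which is why these systems never survived contact with the real world.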
By the '90s, the hype was gone, and that led to
the first AI winter. Then the Watson
hype in the 2010s. The pattern is pretty
clear: huge promises followed by a
painful collision with that last mile
problem. LeCun's big fear is that we are
now in the biggest hype cycle of all
time and we're setting ourselves up for
the hardest fall yet. Okay, we're really
just scratching the surface here. If
you're getting tired of all the endless
AI hype and you actually want to
understand what's going on under the
hood, then go ahead and subscribe to our
channel. We're all about giving you the
real story, not just the marketing
headlines. All right, let's talk about
the timeline because the biggest problem
with AI today might just be the huge gap
between what's being promised to
investors and what's actually being
delivered to customers. And right at the
center of this problem is one single
word, reliability.
So, picture an AI assistant that can
write all your reports, analyze your
spreadsheets, draft your emails, and
imagine it gets things right 95% of the
time. In school, 95% is an A+. It sounds
amazing, right? A massive productivity
boost. This is the promise that has
fueled billions in investment. But what
about that other 5%? Well, that 5%,
that's the deal breaker. In
the world of business or engineering, a
95% success rate is a complete
catastrophe. You wouldn't get on a plane
if the engines only worked 95% of the
time. And with these models, that 5% is
especially nasty. It's not just that the
AI makes a simple mistake. It's that it
hallucinates. It confidently, fluently,
and very plausibly just makes stuff up.
It lies to you, and you have no idea
when it's happening. Would you trust an
employee who was a genius 19 days a
month, but on that 20th day just
secretly fabricated all their work? No
way. So, let's make this really
concrete: your AI generates a 100-page
market analysis for your next big
product launch. Now, you know that
statistically about five of those pages
are probably just made up. Which five?
Is it some minor factual error? Or is it
the page with the core financial
projections that your entire business
strategy is now based on? You have no
way of knowing without checking every
single word in every single source
yourself, which of course completely
defeats the whole purpose of using the
AI in the first place. This right here
is the biggest roadblock to widespread
mission-critical AI adoption in business.
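To put numbers on that intuition, here's a hedged sketch using the hypothetical 95% figure above and a simplifying assumption that pages fail independently:

```python
# The math behind the "which five pages?" problem.
# Assumption: each page is independently correct with probability 0.95.
pages = 100
p_page_ok = 0.95

expected_bad_pages = pages * (1 - p_page_ok)
p_whole_report_ok = p_page_ok ** pages

print(f"Expected fabricated pages: {expected_bad_pages:.0f}")        # 5
print(f"Chance the report is fully clean: {p_whole_report_ok:.2%}")  # ~0.59%
```

Under that assumption, a "95% accurate" system hands you a fully trustworthy 100-page document less than 1% of the time, which is exactly why the checking burden never goes away.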
And look, the numbers don't lie. This
isn't just a theory. Right now, almost
every big company is running
experiments, what they call proofs of
concept with AI. But the data shows that
a staggering 80 to 90% of these projects
never actually make it into full
production. They die in the lab. Why?
Because they smash right into that wall
of reliability. They find out the
hallucination problem is a nightmare. Or
they calculate the insane cost of
running these things at scale and
realize the return on investment just
isn't there. It's a fascinating demo,
but it's just not ready for prime time.
So, what happens when that hype starts
to fade? When those promised
productivity gains don't show up on the
timeline investors were sold, well,
this leads us to the most serious risk
of all. The possibility of another and
potentially devastating AI winter. A
time when the funding dries up, research
grinds to a halt, and the dream of
intelligent machines gets put on ice for
a while. And it might be coming sooner
than you think. So, what exactly is an
AI winter? It's what happens when the
hype train completely derails. After the
boom, you get the bust. The grand
promises of AGI don't pan out. Investors
get scared and pull their money. Public
excitement turns into cynicism and the
entire field just goes into a long
hibernation. It happened back in the
'90s after expert systems failed. For a
decade, the very term AI became toxic.
And people like LeCun who lived through
that are warning that by overpromising
what today's tech can do, we are setting
ourselves up for a brutal correction
that could set real progress back for
years. And LeCun has a very direct
message for the investors and VCs who
are fueling this fire. He's basically
telling them, "Look, if your whole
investment is based on a company that
says they're going to get to AGI just by
scaling up their current models, you are
going to lose your money." It's a blunt,
almost brutal reality check for a Silicon
Valley that has poured hundreds of
billions into the "scale is all you need"
philosophy. He's warning them that the
foundations of many multi-billion dollar
companies are built on a flawed
scientific idea. Look, these are
complicated topics, right? There are no
simple sound bites. If you appreciate
getting the real complex story behind
the biggest technological shift of our
lives, then you should probably be
subscribed to our channel. We're not
afraid to ask the tough questions and
challenge what everyone else is saying.
Okay, so we've spent a lot of time
talking about the problems. If just
making things bigger is the wrong path,
then what's the right one? This is where
our story pivots. See, LeCun isn't just
a critic. He's an architect with a
blueprint for what's next. So now we're
going to explore his detailed vision for
a totally different kind of AI. A
smarter, more capable AI that might
actually get us closer to true
understanding. LeCun's vision for the
future of AI is built on four pillars:
four fundamental things that we humans
do without thinking, but that today's AI
systems are completely clueless about. Number
one, and this is the most important, AI
needs to understand the physical world.
It needs an intuitive sense of physics
and cause and effect. Number two, it
needs persistent memory, the ability to
remember things over time, not just in a
short chat. Third, it has to be able to
actually reason, to figure out new
things from what it already knows, not
just repeat patterns. And finally, it
needs to be able to plan, to take a big,
complex goal and break it down into
smaller, manageable steps.
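To ground that last pillar, here is a toy, hypothetical sketch of planning with a world model: imagine outcomes internally, then act. Everything in it (the one-dimensional world, the stub model, the numbers) is invented for illustration; it is not LeCun's actual architecture.

```python
import random

def world_model(state: float, action: float) -> float:
    """Predict the next state. Stub: in a real system this model
    would be learned from sensory data, not hand-written."""
    return state + action

def plan(start: float, goal: float, horizon: int = 5, samples: int = 200):
    """Sample candidate action sequences, imagine each outcome with the
    world model, and keep the sequence that ends closest to the goal."""
    best_actions, best_cost = None, float("inf")
    for _ in range(samples):
        actions = [random.uniform(-3, 3) for _ in range(horizon)]
        state = start
        for a in actions:          # imagined rollout, no real-world steps
            state = world_model(state, a)
        cost = abs(goal - state)   # how far the imagined end is from the goal
        if cost < best_cost:
            best_actions, best_cost = actions, cost
    return best_actions, best_cost

actions, miss = plan(start=0.0, goal=10.0)
print("Planned steps:", [round(a, 2) for a in actions], "| miss:", round(miss, 2))
```

The point of the sketch is the loop structure: the agent breaks a big goal into a sequence of smaller steps by simulating them internally before acting, which is exactly what an LLM's one-shot token prediction doesn't do.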
So, how do we actually do that? The key, LeCun says,
is to break AI out of its prison of
text. A human baby learns more about the
real world in its first two years just
by watching and listening and touching
things than an LLM learns from reading
the entire internet. The next big
breakthrough will come from training AI
on sensory data, especially video. By
forcing an AI to predict what happens
next in a video, you force it to learn
the basic rules of our reality. That
things fall down, that objects don't
pass through each other. This is how AI
will finally get the common sense it's
missing. It's a big research project for
sure, but LeCun thinks we could see the
first practical uses of this new
approach within 3 to 5 years.
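As a rough illustration of that training signal (a minimal sketch with invented shapes and a toy network, not a real pipeline; LeCun's own JEPA line of work predicts in an abstract representation space rather than raw pixels, but the supervision idea is the same):

```python
import torch
import torch.nn as nn

# Toy next-frame predictor: show the model a video frame and penalize
# it for mispredicting the frame that follows.
class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(  # tiny conv net, illustrative only
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(3):
    # Stand-in for real video: random (frame, next_frame) batches.
    frame_t = torch.rand(8, 3, 64, 64)
    frame_t1 = torch.rand(8, 3, 64, 64)
    loss = loss_fn(model(frame_t), frame_t1)  # prediction error is the signal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: prediction error {loss.item():.4f}")
```

With real footage instead of random tensors, minimizing that prediction error is what would push a model toward intuitions like "unsupported objects fall" and "solids don't pass through each other."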
Okay, finally, let's talk about the future
because LeCun's vision isn't just about a
different kind of technology. It's about
a different philosophy, a whole
different way of doing the research
itself. He believes the race to AGI
shouldn't be some secret competition
between a few giant companies, but an
open global collaborative scientific
project. And at the heart of this whole
philosophy is one simple but really
profound idea. There is no magic bullet.
There's no one secret algorithm that's
going to suddenly unlock super
intelligence. This idea that some small
secret team of geniuses is going to have
a Eureka moment and solve AGI. That's a
Hollywood movie, not how science
actually works. Intelligence isn't one
thing. It's a super complex combination
of many different systems and ideas
working together. Real scientific
progress, especially on a problem this
huge, is a marathon, not a sprint. It's
a slow, steady process with thousands of
researchers all over the world building
on each other's work. The entire deep
learning revolution we're living through
right now, it happened because of open
research. People published papers, they
shared their code, and they debated
ideas out in the open. LeCun is a huge
advocate for this model, and he argues
that in science, the community that
shares will always eventually move
faster than the team that works in
secret. Openness isn't just a nice idea,
it's a strategic advantage. And this is
LeCun's final warning, and it ties his
whole argument together. It's a direct
shot at the hype and the secrecy that
has started to creep into the AI
industry. He's telling all of us not to
be fooled by this myth of the lone
genius or the secret breakthrough. The
future of AI will not be built in secret
by one company. It's going to be built
out in the open by the combined effort
of a global community. So, we're left
with this one big question: as hundreds
of billions of dollars pour in to
reshape our world, all betting on the
idea that bigger is better, are we
actually building the foundation for
tomorrow's intelligence? Or are we just
building a more expensive, more
magnificent version of yesterday's tech?
Are we, as Yann LeCun fears, just climbing
higher and higher up a ladder that's
leaning against the wrong wall? The
answer to that is going to define our
future, and it's up to all of us to make
sure we get it right.