Transcript
bcHaNVo0C4k • Training an AI to Reason with Only 13 Parameters? TinyLoRA Explained
Kind: captions
Language: en
Welcome back to the explainer. Today we
are diving into something that honestly
sounds like science fiction. We're going
to talk about how researchers can now
teach a massive AI model, one with
billions of parts, how to perform
complex mathematical reasoning by
changing a piece of its code so small it
is frankly almost unbelievable. All
right, so here's our game plan. We're
going to start with this wild puzzle,
this seemingly impossible claim. Then
we'll look at how we used to do things
the old school way. After that, we'll
uncover the secrets, the new teaching
method, and the new tool that made this
all possible. We'll look at the
jaw-dropping results. And then we'll
wrap up by talking about what this all
means for the future of AI. Okay,
section one, the 13 parameter puzzle.
And really, the big question here is how
on earth can something so tiny make such
a huge difference? This whole thing
starts with a number that just doesn't
seem to make any sense. So this is the
number at the heart of everything today.
13. And no, I didn't misspeak. I don't mean 13 million or even 13,000. I mean 13. One, three. Thirteen. This is the key to the whole
thing. Okay, let's really let this sink
in. On one side, you have this massive
AI model, right? 8 billion parameters.
You can think of that as a giant control
board with 8 billion tiny little knobs
and dials. And on the other side, to
teach this enormous system a whole new
complex skill, you only need to tweak 13
of them. Just 13. The other, what, 7,999,999,987? Well, you get the point. They don't even get touched. And
get this, those 13 parameters translate
to just 26 bytes of data. 26. That is
nothing. It's less data than the text in
a short tweet. It's like digital dust.
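To sanity-check that figure: 13 parameters stored in a 16-bit format (two bytes per parameter, which is an assumption about the precision used) works out to exactly 26 bytes.

```python
# Back-of-the-envelope check of the "26 bytes" claim.
# Assumes the 13 parameters are stored in bf16/fp16 (2 bytes each).
n_params = 13
bytes_per_param = 2
update_size = n_params * bytes_per_param
print(update_size)  # 26
```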
When I first read the paper this is based on, I seriously had to double
check the numbers. It just doesn't sound
real, does it? So, yeah, that's the big
mystery we're unpacking today. How in
the world can you teach an AI advanced
math by changing a file smaller than a
desktop icon? Well, that's exactly what
this incredible paper from researchers
at Meta, Cornell, and Carnegie Mellon set
out to prove. And spoiler alert, they
figured it out. All right, moving on. To
really appreciate just how revolutionary
this is, we got to take a quick look at
how things used to be done, the old way,
the brute force methods that have been
standard for a while now. So, for the
longest time, the main method was
something called full fine-tuning. Let's
imagine our AI is a brilliant student.
If you want to teach the student a new
subject, full fine-tuning would be like
performing brain surgery to rewrite
every single neuron related to that
topic. I mean, yeah, it works. The
student learns the new skill, but it's a
total brute force approach. It's wildly
expensive, eats up a ton of energy, and
takes forever.
Then things got a bit smarter with
something called LoRA. This was a really big deal. Instead of that messy brain surgery, with LoRA it's more like you freeze the student's existing brain and just give them a special little notebook for the new subject. The core knowledge stays the same. You're just adding a small, efficient new layer.
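That "frozen brain plus notebook" idea can be sketched in a few lines. This is a minimal illustration with made-up sizes, not the paper's actual configuration: the pretrained weight stays frozen, and only two small low-rank matrices get trained.

```python
import torch

torch.manual_seed(0)
d, r = 512, 8                      # hidden size and LoRA rank (illustrative numbers)

W = torch.randn(d, d)              # frozen pretrained weight: the student's "brain"
A = torch.randn(r, d) * 0.01      # trainable low-rank factor: the "notebook"
B = torch.zeros(d, r)              # zero-init, so training starts from W unchanged

def forward(x):
    # Frozen path plus the small trainable correction B @ A.
    return x @ (W + B @ A).T

# Trainable parameters: 2 * d * r = 8,192, versus d * d = 262,144 frozen ones.
trainable = A.numel() + B.numel()
```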
This was awesome. It took the number of
things you had to change from billions
down to millions. A huge step up for
sure. But you know, millions is still a
long way from 13. So that's the big
leap, right? How do you get from
millions all the way down to just 13?
Well, here's the first big clue from the
paper. It turns out it's not just about
what you teach the AI, but how you teach
it. Okay, so the typical way of teaching
is called supervised fine-tuning or SFT.
This is basically rote memorization.
You show the AI a perfect example of a
solved math problem and you tell it,
"Look at this. Do exactly this." The AI
gets really good at copying the style,
the words, the structure, but it's not
really learning the underlying
principles. It's like memorizing the
lines for a play without understanding
what the play is about. A lot of the
information it's taking in is just
noise. But these researchers did
something different. They used
reinforcement learning or RL. And the
best way to think about RL is, well,
it's like training a dog. You don't give
your dog a PowerPoint presentation on
the physics of sitting, right? No, you
just say, "Sit." And when it finally
does, you give it a treat. That's RL in
a nutshell. You let the AI try to solve
the problem on its own, and you just
give it a super simple signal back. Yep,
that was right. Or, nope, that was
wrong. A thumbs up or a thumbs down. And
this simple feedback forces the AI to
figure out the why for itself. It has to
discover the actual principles of
success, not just copy someone else's
work. And this table really breaks it
down beautifully. With SFT, the learning
signal is dense. You're giving it the
whole perfect answer, but the
information density is actually low
because that signal is packed with all
this extra fluff and stylistic noise.
So, the AI needs a ton of parameters to
try and store all that detail. But with
RL, the signal is sparse, just a simple
yes or no. But the information density
is incredibly high. It's pure signal, no
noise. The AI's goal is just to figure
out what it needs to do to get that yes.
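That thumbs-up-or-thumbs-down signal is simple enough to sketch directly. This is a hypothetical `reward` helper for exact-match answer checking, not the paper's actual implementation:

```python
def reward(model_answer: str, correct_answer: str) -> float:
    # RL here only needs a binary verdict: 1.0 for a correct final
    # answer, 0.0 for anything else. No style, no fluff -- pure signal.
    return 1.0 if model_answer.strip() == correct_answer.strip() else 0.0

print(reward("42", "42"))   # 1.0 -- treat
print(reward("41", "42"))   # 0.0 -- no treat
```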
And for that, you don't need a lot of
parameters. Okay, so piece one of the
puzzle is the teaching method,
reinforcement learning. Its clean signal means you don't need millions of
parameters. But now you need the right
tool for the job. You need an
architecture that can actually use that
hyper-focused lesson. And that is where TinyLoRA comes in. Okay, the journey
to get here is pretty wild. You start
with LoRA, which uses millions of parameters. Then LoRA-XS gets it down to thousands. But TinyLoRA, man, this
is where the magic happens. It makes two
just absolutely brilliant moves. First,
instead of a big trainable matrix, it uses one single tiny trainable vector. Okay, that vector itself is tiny, but it gets projected through a huge, fixed random tensor. Now, that sounds complicated, but think of it this way. The big random tensor is like a really complex, unchangeable machine, and the tiny vector we're training is like the one single master dial on that machine. By just learning the perfect
setting for that one dial, you can
control the entire machine's output in a
really sophisticated way. And then
here's the second genius move, parameter
sharing. They use that exact same dial,
that same single vector on hundreds of
different layers throughout the entire
AI model. So, it's one master control
that's harmonizing the whole system.
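Those two moves together can be sketched like this. The shapes, scaling, and initialization here are illustrative assumptions, and the paper's exact construction may differ, but the structure is the point: one tiny trained vector, big frozen random tensors, and the same vector reused everywhere.

```python
import torch

torch.manual_seed(0)
d_out, d_in, k = 64, 64, 13        # k = size of the tiny trainable vector

# Move 1: one single tiny trainable vector -- the "master dial".
v = torch.zeros(k, requires_grad=True)

# Each layer owns a big FIXED random projection tensor that is never trained.
n_layers = 4
P = [torch.randn(d_out, d_in, k) / (d_in * k) ** 0.5 for _ in range(n_layers)]

# Move 2: parameter sharing -- the SAME vector v is reused in every layer.
def delta_weight(layer):
    # Contract the frozen random tensor with v to produce a full-size
    # weight update for this layer, shape (d_out, d_in).
    return P[layer] @ v

# Everything else stays frozen; only these 13 numbers are ever updated.
trainable = v.numel()
```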
It's just so elegant. And I mean,
just look at this table from the paper.
This really puts it into perspective.
This isn't just a step forward. It's a
complete demolition of the old scale.
You go from billions to millions to
hundreds and then TinyLoRA comes along
and says, "How about one?" The
difference is just it's hard to even
comprehend. And if the numbers don't do
it for you, this visual should. I mean,
look at this. The bar for full
fine-tuning and even LoRA would be skyscrapers. LoRA-XS would be a tiny little shack next to them. And TinyLoRA, you literally can't even see it. It's a rounding error. It's
completely insane. By the way, if you're
enjoying these kinds of deep technical
breakdowns that actually make sense, do
me a favor and hit that subscribe
button. We do this all the time. Okay,
so we've got the teaching method, RL, and we've got the tool, TinyLoRA. The
theory is beautiful, but does it
actually work in practice? Let's look at
the results. Wow. Okay, this chart right
here is the money shot. What you're
seeing is how a 7 billion parameter
model does on a standardized math test.
That bottom line, that's the model's
score right out of the box. No extra
training. The top line, that's the best
you can possibly do with old school full
fine-tuning. And that blue line that
just shoots straight up to meet the top
line, yep, that's TinyLoRA. It gets
you basically to the gold standard
performance, but with an almost
hilariously small number of trained
parameters. It's just incredible. And
here is the headline straight from the
paper. They hit 91% accuracy on this
tough math benchmark while only
fine-tuning 13 total parameters. That's
a 15-point jump in performance from the
base model. And just to really hammer
this home, to get a similar score using
the old SFT method, you'd need to train
over a million parameters. So we're
talking 13 versus a million for the same
result. That's not an improvement.
That's a different sport altogether.
Okay. And if your mind isn't blown yet,
get ready for this. It gets weirder. You
would think a bigger, more complex model
would be harder to teach, right? Need
more tweaking? Nope. It's the opposite.
The researchers found that the bigger
the base model is, the fewer parameters
you need to train to teach it a new
skill. Look at the chart. As the model
gets bigger, the update size needed
actually gets smaller. It's like the
smarter and more capable the AI gets,
the easier it is to give it new
instructions. It's a whole new kind of
scaling law. So, there you have it. The
puzzle is solved. It's the one-two punch of reinforcement learning's clean signal combined with TinyLoRA's hyper-efficient architecture. But, okay,
what's the big deal? What does this
actually mean for the future of AI?
Well, the implications are just huge.
First, think about personalization.
Right now, it's way too expensive to
have a custom AI model for every single
person. But if the custom part is only
26 bytes, suddenly a single massive AI
running on one GPU could serve
thousands, maybe millions of users, each
with their own perfectly personalized
version. An AI that knows your specific
coding style or your unique writing
voice. That's a total game changer. It
also suggests a future where we just
build one absolutely gigantic model and
then we create these tiny bite-sized
skill packs to adapt it for millions of
different jobs. And maybe most
profoundly, it forces us to ask, what is
even happening when we fine-tune an AI?
Are we really teaching it something new
with just 13 parameters? Or is something
else going on? Maybe we're not teaching
it, but just unlocking it. And that
brings us to the final really
mind-bending question the researchers
leave us with. What if the knowledge is
already in there? What if these giant
models, having read basically the entire
internet, already know how to do
advanced math and physics and everything
else? What if fine-tuning isn't about
teaching at all, but just about learning
the secret knock, the 13 parameter
password that unlocks an ability the AI
had all along? It reframes these models
not as empty vessels we have to fill,
but as vast sleeping giants of
capability just waiting for us to figure
out how to wake them up. It's an
incredible thought. What else is in
there that we just don't know how to ask
for yet? Anyway, thanks for coming along
on this deep dive. If you enjoyed it,
please subscribe for more explainers
that try to make sense of this wild
future we're building. See you in the
next one.