Transcript
Gbyv770eg6g • DeepSeek Model 1: The AI Revolution You Didn't See Coming
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/BitBiasedAI/.shards/text-0001.zst#text/0289_Gbyv770eg6g.txt
Kind: captions
Language: en
You're probably paying $20 a month for
ChatGPT Plus or shelling out for Gemini
Advanced, thinking that's just the cost
of using powerful AI.
Well, a Chinese startup just built
something comparable for a fraction of
what the big players spent, and they're
giving it away for free.
I've been digging into DeepSeek Model 1,
and what I found might make you rethink
everything about AI pricing and
accessibility. Welcome back to BitBiased AI, where we do the research so you don't have to. Join our community of AI enthusiasts with our free weekly newsletter: click the link in the description below to subscribe. You'll get the key AI news, tools, and learning resources to stay ahead. So, in this video, I'm breaking down DeepSeek Model 1: what it is, who's behind it, and how it stacks up against GPT-4, Grok, and Gemini.
We'll look at what this means for
everyday users like you and whether this
signals we're getting closer to AGI or
if it's just more AI hype.
First up, let's talk about the company
behind this model because their approach
is completely different from what you're
used to.
The company: DeepSeek's unconventional approach. DeepSeek isn't your typical AI company. Founded in 2023 by Liang Wenfeng, who also co-founded the High-Flyer hedge fund, this Hangzhou-based startup has a philosophy that sounds almost naive in today's tech landscape. Liang publicly stated that the goal is not to lose money nor to make huge profits, but to push technological frontiers.
Yeah, you heard that right. Not about
the money. Now, here's where it gets interesting. DeepSeek claims they trained GPT-4-level models for under $6 million.
Let me put that in perspective for you.
That's a tiny fraction of what OpenAI, Google, or Anthropic spend on their models. We're talking orders of magnitude less. And before you think they're cutting corners, they're actually doing something the big tech companies won't: releasing everything under the MIT license. That means fully open source. Anyone can download, modify, and use their models.
Liang argues that being open-source actually provides an edge because it attracts talent and community contributions. It's a radically different approach from the locked-down proprietary models we're used to from Silicon Valley. Think about it. This is a well-funded team with serious math and machine learning expertise, quietly building large language models with efficiency and transparency as core principles, not afterthoughts.
What is Model One? The leaked flagship.
So, what exactly is Model One? Well,
that's part of what makes this story
fascinating.
Model 1, sometimes written Model One, is actually a leaked code name for DeepSeek's next flagship AI. It first surfaced in January 2026 when developers spotted references in DeepSeek's GitHub, especially in something called the FlashMLA library.
There's no official paper or
announcement yet, so everything we know
is pieced together from code leaks and
analyst writeups. But here's what we can
piece together, and trust me, it's
impressive.
Model 1 incorporates significant infrastructure improvements rather than just throwing more parameters at the problem. DeepSeek's engineers completely revamped things like the key-value cache layout, improved sparse attention handling, and added FP8 (8-bit floating point) decoding with serious memory optimizations. These aren't flashy features that make good marketing copy. These are low-level optimizations that make the model much more efficient per token processed. At the heart of all this is FlashMLA, DeepSeek's custom library that provides highly optimized attention kernels. In plain English, that means they're squeezing more speed and context out of each GPU.
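To make the memory argument concrete, here's a toy sketch of why 8-bit storage shrinks a key-value cache. It simulates plain scaled int8 quantization with made-up numbers, not DeepSeek's actual FP8 format or the FlashMLA kernels (those internals aren't publicly documented); it just shows the core trade of precision for memory.

```python
# Toy sketch: why 8-bit storage shrinks a key-value cache.
# Simulated scaled int8 quantization, NOT DeepSeek's real FP8
# format or FlashMLA kernels; values are invented for illustration.

def quantize_int8(values):
    """Map floats to int8 codes plus one shared scale factor."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate floats from the 1-byte codes."""
    return [c * scale for c in codes]

kv_slice = [0.81, -0.33, 0.05, 1.27, -0.98]   # pretend cached keys
codes, scale = quantize_int8(kv_slice)
restored = dequantize_int8(codes, scale)

# Each value now costs 1 byte instead of 4 (fp32): a 4x cache saving,
# at the price of a small rounding error.
max_err = max(abs(a - b) for a, b in zip(kv_slice, restored))
print(codes, max_err)
```

Real FP8 keeps a floating-point layout (sign, exponent, mantissa) rather than a single linear scale, but the payoff is the same: each cached key and value takes a quarter of the memory, so more context fits on one GPU.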
They're working smarter, not just bigger. We don't have a confirmed parameter count yet, but DeepSeek's recent models are enormous. Their DeepSeek-V3 base was a 671-billion-parameter mixture-of-experts model, and their thinking versions hit around 685 billion parameters. Model 1 is likely in that ballpark or larger. And wait until you see this: DeepSeek's models support multimodal inputs.
We're talking text, images, audio, and video. Much like Google's Gemini or OpenAI's GPT-4 Vision, Model 1 will probably accept all these rich input types, too. On the training side, DeepSeek's previous top models used novel reinforcement learning pipelines to improve reasoning. Model 1 will likely continue that approach, though they haven't publicly shared their training data. What we do know is previous models were trained on massive corpora of web text and code. The practical implications.
Model 1 should handle even longer context windows and more complex tasks faster than its predecessors. The FlashMLA optimizations are specifically designed to reduce infrastructure costs, deliver faster response times, and lower serving latency. Translation: this thing should answer questions and generate text with fewer delays while handling larger documents or entire codebases without breaking a sweat. The showdown: Model 1 versus the giants.
Now, let's get to the comparison everyone wants to see. How does DeepSeek Model 1 stack up against GPT-4, Grok, and Gemini? Because here's the thing: it's entering an incredibly competitive field where every major player is claiming to have the best model.
Let's start with performance. DeepSeek's current top models already rival GPT-4. In fact, their R1 model achieved performance comparable to OpenAI's o1 on math, coding, and reasoning tasks. It even outperformed GPT-4 in one medical reasoning study published in Nature Medicine.
But the competition isn't standing still. Grok 4 from xAI reportedly set new records in July 2025, dominating hard benchmarks like ARC-AGI and reaching 50% on something called Humanity's Last Exam, feats that beat previous leaders. Google's Gemini 2.5 Pro tops many leaderboards, too, outperforming all major models on math, science, and reasoning benchmarks, according to Google's own reports. So, every major player has a powerful contender. Model 1 will need to match or exceed these results to compete at the bleeding edge.
But here's where things get really interesting, and this is where DeepSeek has a clear advantage: licensing and access. DeepSeek's models, including their R1 and V3 series, are open-source, MIT-licensed. That means anyone can download, modify, and run them locally. Compare that to the alternatives.
GPT-4 requires ChatGPT Plus or API access. You're paying to play. Grok is available as part of X Premium Plus and through xAI's API. Gemini is in Google's ecosystem with a free tier, but paid features are locked behind subscriptions. DeepSeek Model 1, if they stay consistent, will likely be widely accessible via their platform and as downloadable checkpoints. They even offer free web chat access to their latest V3.2 model right now. On features and usability, all four models are multimodal and conversational. GPT-4 supports image inputs and has that rich plugin ecosystem in ChatGPT. Grok 4 has integrated real-time web search using X's data and even a voice interface with camera vision in voice mode.
Gemini 2.5 Pro offers an unprecedented 1-million-token context window, with 2 million coming soon, plus Google's strong integration with their knowledge graph and developer tools. DeepSeek will surely support text, and likely images, audio, and video, given their existing VL models. But unlike the others, it'll have a fully open ecosystem. Developers can self-host it, build custom apps, or use it through DeepSeek's API.
The trade-off: GPT-4, Grok, and Gemini are polished consumer products backed by massive tech companies. DeepSeek's offering is open, but may require more DIY work from developers. But wait, here's the kicker: cost. This is where DeepSeek absolutely crushes the competition. DeepSeek's API pricing is extremely low.
Their R1 reasoning model costs about 14 cents per million input tokens and 55 cents per million output tokens. Compare that to OpenAI's GPT-4, which runs about $5 and $15 per million tokens, respectively. Even Grok 4.1, at around $0.20 and $0.50 per million, is more expensive than DeepSeek.
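The per-token rates above are easy to sanity-check with a little arithmetic. This sketch plugs in the prices quoted in this video (treat them as snapshots, since providers change rates often); the 2,000-in / 500-out token counts are just an assumed "typical chat turn", not anything measured.

```python
# Sanity check on the per-token prices quoted above.
# Rates are USD per 1M tokens as cited in the video; they are
# snapshots and will drift as providers update pricing.

RATES = {                      # (input, output) per 1M tokens
    "DeepSeek R1":    (0.14, 0.55),
    "GPT-4 (OpenAI)": (5.00, 15.00),
    "Grok 4.1":       (0.20, 0.50),
    "Gemini 2.5 Pro": (1.25, 10.00),
}

def cost(model, input_toks, output_toks):
    """Dollar cost of one request at the quoted rates."""
    inp, out = RATES[model]
    return (input_toks * inp + output_toks * out) / 1_000_000

# An assumed typical chat turn: 2,000 tokens in, 500 tokens out.
for model in RATES:
    print(f"{model:16s} ${cost(model, 2_000, 500):.6f}")
```

At these rates a GPT-4 request costs roughly thirty times the same request on R1, which is the gap driving the "fraction of the cost" claim.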
Gemini 2.5 Pro falls in between at roughly $1.25 and $10 per million. If Model 1 maintains DeepSeek's trend, it could deliver comparable power at a fraction of the cost. That could pressure the entire market to lower prices or offer more free tiers. When DeepSeek launched their models, some reports noted it laid waste to US tech stocks because investors worried that Chinese open-source AI could supply similar quality at dramatically lower cost. What this means for you. So, what does all this mean if you're not a developer or AI researcher? How does DeepSeek Model 1 actually impact everyday users?
First, accessibility. DeepSeek's open approach means powerful AI can reach people who might be left out by closed platforms. Schools, nonprofits, hobbyists: they can run DeepSeek models on their own servers or low-cost cloud instances. Something that's literally impossible with GPT-4 or Gemini.
Right now, DeepSeek offers a free chat interface to their V3.2 model. If Model 1 is similarly accessible, students and developers at smaller companies worldwide can experiment with cutting-edge AI without subscriptions.
Second, open-source innovation. Because DeepSeek publishes weights and encourages what's called distillation, creating smaller, specialized versions, it fuels community innovation. The DeepSeek R1 release spawned six distilled models ranging from 1.5 billion to 70 billion parameters. Some of these actually outperform GPT-4-class models on math tests. Developers can take Model 1 and build new tools or tailor it to niche tasks. DeepSeek already has specialized open models like DeepSeek Coder for programming, DeepSeek-VL for vision, and DeepSeek-Math.
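For the curious, "distillation" means training a small student model to imitate a large teacher's output distribution. This toy sketch shows only the loss that drives that process, a KL divergence between softened softmax outputs, with made-up three-token logits; real distillations like DeepSeek's R1 students fine-tune billion-parameter networks against the teacher's outputs at scale.

```python
# Toy sketch of the distillation objective: push a small "student"
# model's token distribution toward a big "teacher" model's.
# Logits are invented; real runs use billions of parameters.

import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened outputs."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(t, s))

teacher = [4.0, 1.0, 0.5]   # big model's logits for 3 tokens
close   = [3.8, 1.1, 0.4]   # student that roughly agrees
far     = [0.5, 4.0, 1.0]   # student that disagrees

# A student that agrees with the teacher gets a much lower loss,
# so gradient descent on this loss pulls the student toward it.
print(distill_loss(teacher, close), distill_loss(teacher, far))
```

The temperature softens both distributions so the student also learns the teacher's "near-miss" preferences, not just its top pick; that's a standard choice in distillation recipes, not something specific to DeepSeek.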
Model 1 or its derivatives could power personalized assistants, translators, coding aids, or educational tutors.
Third, cost to consumers. Those lower
API costs mean cheaper or even free
products.
Imagine apps that use model 1 for
homework help or writing assistance
where developers can afford to offer
free or ultra- lowcost subscriptions.
This could seriously undercut pricier services and force the market to become more competitive. Fourth, privacy and offline use. Because it's open-source, organizations can run it entirely offline.
Hospitals dealing with patient data,
governments with sensitive information,
businesses with proprietary documents,
they could use it on premises.
Plus, developers can audit the model for
biases or errors, increasing trust.
And the use cases? All the familiar territory: writing and editing text, generating code, tutoring in math, summarizing documents, answering questions, creative tasks. DeepSeek's emphasis on math and coding reasoning suggests Model 1 will excel in technical fields. If it's truly multimodal, it could power image captioning, voice assistance, and more.
Bottom line: Model 1 arriving in the ecosystem likely means more AI-powered tools at everyone's disposal. Smartphone apps, web services, educational bots, many of which could be free or open. The AGI question. Now, does a new model like DeepSeek Model 1 mean AGI, artificial general intelligence, is just around the corner? Most experts would say not necessarily. And this is important to understand because there's a lot of hype in the AI space right now. DeepSeek's innovations are genuinely impressive, but they mainly push the frontier of what we call narrow LLM capabilities. These are still specialized language models, even if they're incredibly powerful ones. Surveys of AI experts consistently put AGI a decade or more away. For example, major surveys find a 50% chance of AGI between 2040 and 2050, with 90% certainty by 2075. A 2025 AAAI panel found that 76% of researchers doubt that merely scaling up today's techniques will achieve AGI. In other words, while Model 1 might surprise us with new capabilities, it's still essentially a bigger, faster language model.
It doesn't solve fundamental challenges like real-world reasoning, common-sense understanding, or robotics integration. Even DeepSeek founder Liang emphasizes innovation and talent over building one massive breakthrough model. So rather than moving the AGI goalposts from 2050 to 2026, DeepSeek's arrival will probably just intensify the debate. Some will point to it as evidence that breakthroughs are accelerating faster than expected. Others will note that the broad expert consensus of mid-century AGI remains unchanged.
Model 1 will influence expectations by
lowering cost and increasing access to
powerful AI.
If everyday applications suddenly become
much smarter and more accessible,
people's imagination for AI's future
might expand.
But most AI leaders would caution that useful AI does not equal general intelligence. It'll take sustained advances in many areas, from architecture and learning algorithms to embodiment in physical systems and alignment with human values, to approach AGI.
For now, DeepSeek Model 1 is a sign of fierce competition and rapid progress in narrow AI, not a guarantee of imminent superintelligence. DeepSeek Model 1 is shaping up to be one of the most exciting developments in AI this year. It represents a new wave of what you might call open-science philosophy, combining high performance with transparency in an industry that's often dominated by secretive, closed labs. We've seen that DeepSeek is a well-funded startup with a vision of cheap, open models. Model 1 is rumored to pack advanced engineering like FlashMLA, sparse attention, and FP8 optimizations under the hood. And its competitors GPT-4, Grok, and Gemini each have their own strengths in different parts of the ecosystem.
For everyday users, an open, powerful model could mean more free AI tools, dramatically lower costs, and broader access to cutting-edge technology. But for those watching the AGI timeline, it's still one more step on what will be a long road. It's a sign of rapid progress, absolutely, but not an overnight miracle. What do you think? Will DeepSeek Model 1 live up to the hype, or is this just another flash in the pan? How do you see yourself using these new AI capabilities? Let me know in the comments below. I'm genuinely curious what you all think about this shift toward open-source AI and whether it'll actually force the big players to drop their prices. And if you found this deep dive useful, please hit that like button and subscribe for more AI analysis. Your support helps us keep covering the fastest-moving developments in this space.
Thanks for watching and I'll see you in
the next video.