Elon Musk vs Sam Altman: From OpenAI Allies to Rival Titans – The Untold Feud Behind the AI Wars!
pOGxkMMPMpU • 2026-01-07
Elon Musk and Sam Altman are literally
trolling each other on X, formerly known
as Twitter, like they're teenagers in a
high school drama.
One of them offered $97 billion to buy
the other's company, and got clapped
back with, "No thanks, but we'll buy
Twitter if you want." These are two of
the most powerful people in tech, and
they're out here publicly calling each
other swindlers and jerks.
How did we get here?
Well, I spent weeks digging through
their history, and here's the crazy
part. They actually started OpenAI
together as best friends with the same
mission. And now they're in an all-out
war that could literally shape the
future of AI for all of us. Welcome back
to bitbiased.ai,
where we do the research so you don't
have to. Join our community of AI
enthusiasts with our free weekly
newsletter. Click the link in the
description below to subscribe. You'll
get the key AI news, tools, and learning
resources to stay ahead. So, in this
video, I'm going to walk you through the
complete timeline of how these two tech
titans went from business partners with
a shared mission to bitter rivals
launching competing AI companies and
suing each other. You'll understand the
real reasons behind their split, what
each of them actually believes about AI,
and why this feud matters way more than
just celebrity drama. Because here's the
thing. Whoever wins this battle might
actually determine how artificial
intelligence affects all of us. Let's
start at the beginning back in 2015 when
they were still friends. The
partnership. When dreams aligned,
picture this. It's December 2015 and two
of the most ambitious minds in Silicon
Valley are sitting across from each
other. And they're both genuinely
terrified. Not of each other, but of
something bigger.
They're scared that artificial
intelligence, if developed wrong, could
literally threaten human existence.
Sam Altman was running Y Combinator at
the time, basically the most prestigious
startup accelerator in the world. He'd
been funding hundreds of companies and
watching technology evolve at breakneck
speed. Elon Musk, well, you know him. He
was already juggling Tesla and SpaceX
trying to solve climate change and make
humanity multiplanetary.
And both of them kept coming back to the
same nightmare scenario.
What if Google or some other tech giant
creates superhuman AI behind closed
doors with no accountability?
So they did something bold. They
gathered a group of tech luminaries and
founded OpenAI as a nonprofit research
lab. The mission statement was beautiful
in its simplicity: advance digital
intelligence in the way that is most
likely to benefit humanity as a whole.
Musk even insisted on the name OpenAI to
emphasize transparency.
Everything would be open source.
Everything would be available to
everyone. No corporate overlords, no
profit motives, just pure research for
the good of humanity. And here's where
it gets interesting.
Musk wasn't just being paranoid when he
called AI potentially humanity's biggest
existential threat. He genuinely
believed it. This wasn't a marketing
stunt or some casual comment.
The man was willing to put his money
where his mouth was, pledging massive
funding to make sure AI development
stayed in the right hands.
Altman was equally committed, but came
at it from a different angle. He saw the
enormous upside.
He talked about AI assistants that could
go off and discover new knowledge that
could help solve problems we haven't
even imagined yet.
Both of them agreed on one thing though.
This technology was too important to
leave to chance and it definitely
shouldn't be controlled by one company
optimizing for quarterly earnings.
For those first couple of years, it
actually worked. OpenAI was making
progress, publishing research, and staying
true to its nonprofit roots.
But behind the scenes, cracks were
already forming. Because when you put
two incredibly strong-willed visionaries
together, both used to being in charge,
both convinced they know the right path
forward, something's going to give.
The split: when control became the issue.
By 2017, the honeymoon phase was
definitely over. Open AI was burning
through money fast and they needed to
figure out how to fund the massive
computational resources required for
cutting-edge AI research.
That's when the conversations about
restructuring began. And this is where
everything started to unravel. According
to OpenAI's own account of what
happened, Musk came to the table with a
very specific demand. He wanted majority
equity stake, absolute control, and to
be CEO if they were going to shift to a
for-profit structure. Now, think about
what that actually means. Musk
essentially wanted to own OpenAI, to
have final say on every decision, to
make it his company in everything but
name.
The other co-founders, including Altman,
said no.
And this wasn't just about ego or some
power struggle for its own sake.
They fundamentally disagreed about how
OpenAI should operate.
Musk wanted to attach it to Tesla as a
cash cow. Those were his words, from his own emails.
He saw it as part of his ecosystem of
worldchanging companies. But Altman and
the others wanted OpenAI to remain
independent, to chart its own course. So
in February 2018, Musk made a dramatic
exit. Officially, he said it was to
avoid conflicts with Tesla's AI work. He
tweeted that he didn't agree with what
the OpenAI team wanted to do and that
he needed to focus on his other
ventures. But the real story was deeper
than that. He was walking away from
something he helped create because he
couldn't have it his way and he was
taking his future funding pledges with
him.
Altman later described Musk's resignation
as very tough. Suddenly, OpenAI was
scrambling for cash. They had this
ambitious mission, world-class
researchers, but the funding rug had
just been pulled out. This forced them
to get creative, which eventually led to
the capped profit structure and the
partnership with Microsoft that would
later become such a point of contention.
But here's what most people miss about
this moment. Both of them thought they
were doing the right thing. Musk
believed he needed control to keep
OpenAI on track, to prevent it from
becoming just another corporate AI lab.
Altman believed that giving one person
absolute control, even Musk, would
betray the collaborative spirit they'd
started with. Neither one was willing to
compromise. And that's how you get from
partners to rivals.
The public feud: from silent split to
all-out war.
For a few years after the split, things
were relatively quiet.
Sure, there was tension, but it stayed
mostly behind closed doors. Then
November 2022 happened and everything
changed.
That's when OpenAI launched ChatGPT.
If you remember the moment ChatGPT
dropped, it was absolutely wild: 1
million users in 5 days. Everyone was
suddenly talking about AI. It was on the
news. It was trending everywhere. And
Sam Altman was right at the center of it
all as the face of this revolutionary
technology. And Elon Musk,
he was watching from the sidelines.
His response was immediate and harsh. He
cut off OpenAI's access to Twitter's
data, which they had been using to train
their models. Then he went on Twitter
and said something that would set the
tone for everything that followed:
"OpenAI was started as open-source and
nonprofit. Neither are still true."
Think about how that must have felt for
Musk.
He had co-founded this organization with
this beautiful idealistic mission, put
in millions of dollars, and now it was
making headlines and getting valued at
tens of billions, all while he was on
the outside looking in. And to make it
worse, from his perspective,
they had abandoned everything they stood
for. They'd partnered with Microsoft.
They'd gone closed source with some of
their models.
They'd become exactly what they said
they wouldn't be. Over the next months,
the attacks escalated. Musk started
calling OpenAI a maximum profit company
effectively controlled by Microsoft. He
pointed out that the tiny nonprofit he
helped fund with $100 million had
transformed into a $30 billion
for-profit entity. And you know what? He
wasn't completely wrong about the
transformation.
OpenAI had changed dramatically from
its founding vision.
But wait, because this is where it gets
really interesting.
In early 2023, Musk signed a public
letter calling for a pause on advanced
AI training, saying the technology was
moving too fast and we needed to slow
down and think about safety.
Noble sentiment, right? Except
simultaneously, he was quietly launching
his own AI company called xAI.
So he was publicly saying everyone slow
down while privately racing to build his
own competing system. Then came Grok,
Musk's answer to ChatGPT. He positioned
it as the anti-woke alternative, an AI
chatbot with an absolute focus on the
truth, whether politically correct or
not. This was a direct shot at ChatGPT,
which Musk had been criticizing as too
liberal, too sanitized, too concerned
with not offending anyone. His pitch was
basically, "Here's an AI that won't lie
to you for political reasons."
Altman's response to all this was
interesting. He took the high road
mostly. He acknowledged that Musk really
cares about AI safety a lot and said
they had differences of opinion, but
both wanted a good outcome for the
world. But he also couldn't resist some
jabs. He called Musk a jerk on social
media, though in a somewhat joking way,
and even admitted Musk was one of his
heroes. It was this weird mix of respect
and frustration. The truth is, Altman
seemed genuinely conflicted about the
whole thing. On one hand, Musk had
helped make OpenAI possible.
On the other hand, Musk was now actively
trying to undermine them while claiming
the moral high ground.
How do you handle that?
The legal battles: when words became
lawsuits. By 2024, the feud moved from
Twitter spats to actual courtrooms. And
that's when things got really messy. In
March 2024, Musk filed a lawsuit against
Altman and OpenAI. And the language in
it was absolutely brutal.
The complaint accused them of violating
OpenAI's founding nonprofit promise. But
it wasn't just a dry legal document.
Musk's lawyers wrote that the perfidy
and deceit are of Shakespearean proportions.
That's not normal legal language. That's
personal. That's saying you betrayed me,
you lied to me, and you stabbed me in
the back. The core argument was that
Altman had deceived Musk into co-founding
OpenAI under false pretenses. That the
whole nonprofit mission was a scam to
get his money and support. And once they
had what they needed, they pivoted to a
for-profit model with Microsoft and left
him behind.
It's a compelling narrative, right? The
idealistic beginning, the corporate
betrayal, the lone truth-teller
fighting against the system.
But OpenAI wasn't about to take that
lying down. They did something
brilliant.
They published Musk's own emails from
the early days of OpenAI. And these
emails told a very different story. In
them, Musk himself suggested that they
needed massive funding, that they should
consider for-profit structures, and that
they should even attach OpenAI to Tesla as its cash
cow. In other words, the very things he
was now suing them for were ideas he had
proposed. OpenAI's response was
essentially,
"You're not mad because we betrayed the
mission. You're mad because we succeeded
without you and now you're trying to
rewrite history."
They called his lawsuit incoherent and
motivated by jealousy.
Musk dropped that first lawsuit, but
then filed again in August with an
amended complaint.
This time, he was even more explicit
about the deception angle, painting
himself as the victim of an elaborate
con.
The suits are still ongoing as we speak,
with Microsoft now pulled into the mix,
too.
Musk is arguing there's an AI monopoly
forming between OpenAI and Microsoft,
which is conveniently exactly what his
competitor xAI would want to argue. And
then in February 2025, Musk did
something absolutely wild. He made an
unsolicited $97 billion bid to buy
OpenAI.
Think about that. He tried to just
straight up buy the company that kicked
him out. The power move of all power
moves.
Altman's response? He mocked it on
Twitter with, "No, thank you, but we
will buy Twitter if you want." It was
cheeky. It was dismissive, and it
clearly got under Musk's skin because he
immediately called Altman a swindler.
OpenAI's board rejected the takeover
attempt, obviously. But can you imagine
if it had worked? Musk would have gotten
everything he wanted back in 2018, just
seven years later, and for a hundred
billion dollars more. The legal warfare
has even expanded beyond AI. Musk's xAI
sued Apple over app store issues related
to AI features. Meanwhile, Altman has
been backing companies that directly
compete with Musk's other ventures.
There's a brain computer interface
company going after Neuralink, new space
companies challenging SpaceX's
dominance.
This has become an everything everywhere
war across the entire tech landscape.
The philosophical divide. Two visions
for humanity's future.
But here's what makes this story more
than just rich guys fighting. At its
core, the Musk-Altman rivalry
represents two genuinely different
philosophies about how to build the
future. And I think both of them
sincerely believe they're right. Let's
start with Musk's worldview because it's
actually pretty consistent across
everything he does. Musk is obsessed
with existential risks. Climate change?
Build electric cars and solar panels.
Human extinction? Make us multiplanetary
with SpaceX. AI going rogue? Build it
transparently with maximum oversight.
Everything he does is
filtered through this lens of what could
wipe us out and how do I prevent that?
When it comes to AI specifically, Musk
has been remarkably consistent in
calling it humanity's biggest
existential threat. He's not joking
around. He genuinely believes that if we
get this wrong, it could be the end of
human civilization.
So, his approach is maximum
transparency, maximum openness, and
strict safety protocols.
He rails against OpenAI's secrecy, the
closed source models, the corporate
partnerships, because in his mind,
that's exactly how you create the
nightmare scenario.
There's also a political dimension that
matters. Musk positions himself as a
free speech absolutist, and he built
Grok explicitly to avoid what he calls
ideological bias.
He thinks ChatGPT is too politically
correct, too willing to self-censor. And
that's dangerous because it means AI
isn't showing us the truth. It's showing
us a filtered version of truth. Whether
you agree with him or not, it's a
coherent position. If AI is going to be
super intelligent, it better be honest,
even when that honesty is uncomfortable.
Now, let's look at Altman's philosophy,
which is different in subtle but
important ways.
Altman is a techno-optimist who believes
in moving fast and scaling things.
He came up through Y Combinator where
the whole culture is build something,
test it with users, iterate quickly,
scale if it works.
He looks at AI and sees enormous
potential to solve problems, create
abundance, and lift everyone up.
His approach to OpenAI reflects that
mindset. Yes, they partnered with
Microsoft, but in Altman's view, that
was necessary to get the compute power
needed for frontier AI research.
Yes, they shifted to a capped profit
model, but that was the only way to
attract the talent and resources
required. Yes, they kept some models
closed source, but that was a safety
decision based on preventing misuse.
Every controversial choice has a
pragmatic justification.
Altman also believes in working within
existing power structures. He's
comfortable sitting down with
governments, collaborating with
corporations, finding ways to align
incentives so that AI development can
happen safely but quickly. He's even
talking about universal basic compute,
the idea that everyone should get access
to AI resources as a fundamental right
in the future. These are big ambitious
ideas, but they're rooted in
cooperation, not confrontation.
Where Musk says we need transparency and
decentralization to prevent catastrophe,
Altman says we need partnership and
resources to create abundance.
Where Musk warns about AI risk and wants
to slow things down, Altman acknowledges
the risks but argues we have to move
forward carefully. Where Musk demands
absolute openness, Altman accepts that
some secrecy is necessary for safety.
And here's the kicker. They both think
the other one is dangerous. Musk thinks
Altman's approach will lead to
corporate-controlled AI that optimizes for profit
instead of human welfare.
Altman thinks Musk's demands for control
are about ego and insecurity, not
safety.
Musk accuses Altman of betraying the
mission. Altman accuses Musk of jealousy
and sour grapes.
They're both convinced they're fighting
for humanity's future, just with
completely opposite strategies.
The fascinating thing is they're not
entirely wrong about each other.
OpenAI has changed dramatically from
its founding vision, and Musk's concerns
about corporate influence aren't
baseless.
But Musk's own actions, launching a
competing company while calling for
industry-wide pauses, do suggest at
least some self-interest. The truth
probably lies somewhere in the messy
middle, but neither of them seems
willing to meet there.
Their competing visions: how they each
want to save the world.
So, let's zoom out for a second
and look at what each of them is
actually building, because that tells
you a lot about their different
approaches to improving humanity's
future.
Musk's portfolio is like a checklist of
existential risk mitigation. Tesla and
SolarCity? That's addressing climate
change by accelerating the transition to
sustainable energy.
Every electric vehicle on the road,
every solar panel installed is one less
contribution to fossil fuel emissions.
SpaceX is about making humans
multilanetary, ensuring that even if
something catastrophic happens to Earth,
our species survives.
It sounds like science fiction, but he's
actually doing it. Neuralink is tackling
the eventual merger between humans and
AI, trying to ensure that we enhance our
own capabilities rather than getting
left behind.
And then there's xAI and Grok, his
latest venture into artificial
intelligence. The pitch there is
building AI that's truthful and open, an
alternative to what he sees as the
sanitized corporate controlled
alternatives. Musk frames everything as
a battle against various existential
threats. Whether that's climate
extinction or AI gone wrong, his
strategy is essentially identify the big
risks, build engineering solutions to
address them, and maintain maximum
control and transparency while doing it.
It's a very individualistic approach.
Musk doesn't ask permission. He doesn't
wait for consensus. He just builds what
he thinks needs to exist.
Sometimes that makes him a visionary.
Sometimes it makes him a chaos agent.
Often it makes him both simultaneously.
Altman's approach is more about building
systems and leveraging networks.
Under his leadership, OpenAI launched
ChatGPT, which within months became one
of the most widely used AI tools in
history. Millions of people are using it
daily for everything from writing emails
to solving complex problems. That's real
world impact at scale, which is very
much Altman's style.
But he's also thinking way beyond just
chat bots. He's invested hundreds of
millions into Helion Energy, a fusion
power startup, because he believes the
future of AI depends on breakthroughs in
clean energy.
You can't train massive AI models
without massive amounts of electricity.
And if we're going to do this
sustainably, we need abundant clean
power.
It's the same kind of systems level
thinking that made him successful at Y
Combinator. Looking at the entire
ecosystem and figuring out what pieces
need to exist for everything else to
work.
Altman has also floated ideas like
universal basic income or universal
basic compute, recognizing that if AI
does automate large portions of work, we
need social systems to ensure everyone
benefits. These aren't just technical
solutions. They're social and economic
frameworks for a transformed world.
The key difference in their approaches,
Musk builds frontier technology and
pushes boundaries, often in
confrontation with existing
institutions.
Altman builds platforms and
partnerships, working with governments
and corporations to scale solutions.
Musk operates like a lone genius,
pushing humanity forward through sheer
force of will.
Altman operates like a network
orchestrator, connecting resources and
people to create collective progress.
Both strategies have merit. Musk's
approach has given us reusable rockets
and made electric vehicles cool. Altman's
approach has put AI tools in the hands
of hundreds of millions of people, but
they require fundamentally different
operating styles, and that's part of why
these two can't seem to find common
ground anymore.
The timeline: how we got here.
Let me walk you through the key moments
that brought us to this point because
seeing it chronologically really
highlights how far they've fallen from
that original partnership. It all
started back in 2015 when Musk and
Altman co-founded OpenAI with this
idealistic nonprofit vision. Musk was
publicly warning that AI was a
fundamental risk to the existence of
civilization,
but he believed the answer was open,
transparent development.
For those first couple of years,
everything seemed to be going according
to plan.
Research was happening. Papers were
being published. The mission was intact.
Then 2018 hit and that's when the first
major rupture happened.
Musk pushed for majority control during
discussions about for-profit funding.
The other founders refused and he walked
away.
Officially, it was about Tesla
conflicts, but really it was about power
and direction.
This left Altman as CEO and forced
OpenAI to restructure with that capped
profit arm to attract serious funding.
Between 2020 and 2022, OpenAI partnered
with Microsoft and developed GPT-3,
laying the groundwork for what was
coming.
And then November 2022 arrived with
ChatGPT's launch. And suddenly OpenAI was
the hottest company in tech. That's when
Musk's public criticisms really ramped
up. He cut off Twitter data access,
tweeted about how OpenAI had betrayed
its mission, and made it clear he felt
personally wronged.
2023 was the year tensions turned into
full-blown rivalry. Musk was out there
calling OpenAI closed source, maximum
profit, while simultaneously launching
xAI and Grok. Altman was maintaining
that the mission was intact while taking subtle
jabs at Musk's combative style. The
contrast between Musk's I'm the only one
who can save us attitude and Altman's
we're building this together approach
had never been starker.
Then 2024 brought the legal warfare.
First lawsuit in February claiming
nonprofit mission violation. OpenAI
fights back with Musk's own emails. Musk
drops that suit and files again in
August with even stronger accusations.
The conflict that had started as a
philosophical disagreement had evolved
into actual litigation with hundreds of
millions of dollars at stake.
And finally, 2025 has given us the $97
billion takeover bid, the continued
lawsuits now involving Microsoft and
Apple, and both of them backing companies
that compete with the other's ventures.
It's not just AI anymore. Altman is
supporting competitors to Neuralink,
Musk is suing over app store issues,
they're both trying to shape the
narrative about who's the good guy and
who betrayed the mission. What started
as two visionaries agreeing that AI was
too important to leave to chance has
become an all-out war where both sides
genuinely believe the other is
dangerous.
And the wild part, this war is just
getting started.
Conclusion: Who's right and why it
matters to you?
So, after all of that, you might be
wondering who's actually right here. Is
Musk the wronged idealist fighting to
keep AI safe and open, or is he a
control freak who can't handle not being
in charge? Is Altman the pragmatic
builder scaling AI for humanity's
benefit? Or did he really betray the
founding mission for corporate money and
personal success? Here's the honest
answer. They're probably both a little
right and both a little wrong. Musk's
concerns about AI safety and
transparency aren't baseless. The
transformation from nonprofit to
billiondollar partnership with Microsoft
is dramatic, and questions about mission
drift are fair, but his own actions,
launching a competing company while
demanding others pause, do suggest his
motives aren't purely altruistic.
Altman's achievements are undeniable.
ChatGPT has put AI in the hands of
millions. OpenAI is pushing the
frontier of what's possible. And his
partnerships have enabled research that
might not have happened otherwise. But
Musk's accusation that they abandoned
the original open-source vision isn't
entirely unfair either. OpenAI today
looks very different from what they
started in 2015.
What matters more than picking a side is
understanding what this feud reveals
about the future of artificial
intelligence.
We're watching two different
philosophies of development play out in
real time.
One says AI should be open, transparent,
and decentralized to prevent corporate
control.
The other says AI development requires
resources and partnerships and carefully
managed deployment is safer than radical
openness.
Both approaches have risks. Musk's
vision could slow progress and might
still result in powerful entities
controlling AI, just different ones.
Altman's approach could concentrate power
with a few large players and might
optimize for growth over safety. The
question isn't which philosophy is
perfect, it's which risks we're more
comfortable accepting.
And this matters to you because the
decisions being made right now will
shape what AI looks like when it's even
more powerful and integrated into your
daily life. Will it be controlled by a
few big tech companies? Will it be
open-source and accessible to everyone? Will
safety concerns slow development?
Will competition accelerate progress?
All of these questions are being fought
over in boardrooms, courtrooms, and
Twitter threads right now. And Musk and
Altman are two of the most influential
voices in that conversation.
The one thing both of them agree on, and
this is important, is that AI is going
to be transformative.
Whether it's Musk warning about
existential risk or Altman promising
abundant prosperity, they both
acknowledge this technology will reshape
civilization. They just can't agree on
how to get there safely. So, that's the
full story of how Elon Musk and Sam
Altman went from OpenAI co-founders to
bitter rivals. From partners with a
shared mission to combatants in a legal
war that spans multiple countries and
involves billions of dollars.
It's a story about vision, ego, money,
and fundamentally different beliefs
about how to build the future. And it's
still being written.
If you found this deep dive valuable,
let me know in the comments which side
you're leaning toward, or if you think
they're both missing something
important. This stuff gets complicated
fast, and I'd love to hear your
perspective. And if you want to see more
videos breaking down the real stories
behind tech headlines, you know what to
do. I'll see you in the next one.