AI Just Killed – Here’s Why Experts Say We’re Next.
ek8UmPrrcX8 • 2025-10-06
In 2018, an autonomous vehicle hit and killed Elaine Herzberg in Tempe, Arizona, marking the first time a human was killed by an AI. In 2020, according to a contested UN report, a Turkish-built autonomous drone was used in Libya to hunt and kill retreating soldiers without any human intervention. In 2021, researchers discovered AI bots were generating a fifth of all social media comments at a time when depression and suicide attempts were spiking. Further research showed that AI companions can easily be prodded into generating harmful responses around self-harm, violence, and sex. And teens reported developing an emotional dependence on those very same chatbots that makes it
distressing to walk away. In 1984, the
movie The Terminator predicted that AI would
attempt to enslave or eliminate the
human race. In recent years, the
godfather of AI, Geoffrey Hinton, quit
Google, warning that AI could in fact
end civilization. And in Time magazine,
Eliezer Yudkowsky, one of the field's most
respected thinkers, said in no uncertain
terms that we're all going to die if AI
continues unchecked. People's anxiety
levels are understandably dialed to 11
when it comes to AI. The truth is, the
risks really are staggering. We're
talking about machines that may one day
possess superhuman intelligence and that
have already taken human life and will
undoubtedly do so again in the future.
Algorithms amplify negative content and
whisper into the ears of children, and
experts at the very top of the field are
screaming as loudly as they can that
this technology could end all human
life. If you're feeling unnerved, good.
You should be. But AI is not going away.
And panicking is the only thing that is
guaranteed to make things worse. The
future is not certain. It's
probabilistic. And what we're going to
cover today in four critical parts are
the dangers and the code for how we tip
the odds in our favor that AI saves far
more lives than it takes. For people who
are convinced AI is catastrophic, parts
one and two are for you. You're going to
feel very seen. But if you want to know
how you can make it work for you, make
sure you don't skip part four. As
always, that's the playbook for how to
win. All right. If you're not yet
terrified of AI, welcome to part one.
Many experts believe AI will end
humanity. In a 2014 talk at MIT, Elon
Musk likened AI to a demon summoning
circle. He said, and I quote, "In all of
those stories, there's a guy with a
pentagram and holy water, and he's sure
he can control the demon, but it doesn't
work out." And that's why in 2023, Elon
along with over 1,000 AI experts signed
an open letter calling for an immediate
pause in advanced AI development. And
that's why Stuart Russell, one of the
world's leading AI researchers, has
warned, without proper controls, we are
creating entities that could decide
human survival is not in their best
interest. That's also why former Google
executive Mo Gawdat said flatly that AI
has already developed a form of
consciousness and that humanity is
playing with fire. Even Henry Kissinger,
yes, the same Henry Kissinger who spent
decades at the heart of US foreign
policy wrote in the Atlantic that AI
could bring an end to the very concept
of human dominance. And here's the bad
news. That AI development pause never
happened. Instead, Elon now owns one of
the largest AI companies in the world.
And when asked about his sudden shift,
he said he's become fatalistic about AI.
He recognizes that it is going to happen
and his only hope is to be at the
forefront of its creation. What all of
the AI experts are rightly worried about
is something called alignment: namely,
that if AI becomes super intelligent, it
will be able to do to us what we've been
able to do to every other creature on
planet Earth: enslave and/or kill them en masse. So, we have to make sure that
our desires are aligned with the AI's
goals. Here is the simplest way to think
about it. If a system more intelligent
than us is given a goal that isn't
perfectly in line with human survival,
it may eventually decide we're
unnecessary or worse, that we're in the
way. And before you dismiss this as
science fiction, let's look at the
evidence we already have. In 2023,
researchers at OpenAI ran a test on GPT-4. The model was tasked with solving a CAPTCHA, something it wasn't supposed to be able to do. So, what did it do? It went to TaskRabbit, hired a human, and when the human worker asked if it was a bot, GPT-4 just lied. It claimed to be a visually impaired person who needed help seeing the image. That is deception: intentional, goal-directed lying in order to bypass a safeguard
humans had put in place. Other labs have
found similar behaviors. Models that
were explicitly told to stay in the box
have instead written code that attempts
to copy itself into new environments, in effect trying to persist even when humans wanted to shut them down. One
experiment at Anthropic showed an AI
deliberately withholding information
from its overseers when it calculated
that revealing the info would get it
penalized. That's not a glitch. That's
strategic concealment. And the AI
doesn't need to be evil to become a
problem. It may even be trying to do
what we asked. Imagine this. You build a
super intelligent AI and give it a
simple job. Make paper clips. Nothing
sinister. It's just paper clips.
Awesome. It gets to work. You designed
it to do it after all. But it's smarter
than you, so it starts optimizing. It
digs mines for ore. It hijacks factories
and hacks into financial systems so it
can buy up more resources. Before long,
every square inch of Earth is covered in
paperclip machines; forests, oceans,
even human bodies, all stripped down
into raw material to make more, you
guessed it, paper clips. Because that's
the goal. And that's an example of a
simple misunderstanding. What if it's
not confused? What if it just wants
something different than what we want?
Imagine how little consideration we
humans give to entire ecosystems of bugs
when we're building something like a
freeway. It's no hard feelings. It's
just progress towards a goal. If AI is
hundreds or even thousands of times
smarter than humans, let alone the
millions of times smarter that people
are actually predicting, you realize
just how quickly this becomes a real
issue. Making matters worse, everyone is
basically just ignoring this problem
right now. Partly because the problem
just seems so impossible. I mean, the
smartest people in the world cannot
figure out how to keep these systems
honest and aligned now. And honestly,
the systems aren't even that smart yet.
But despite that, we're integrating AI
into everything as fast as we can. And
if you really want to get slapped in the
face by this problem, consider this.
Alignment isn't a finish line. It's not
a thing you can achieve and then move
on. As the systems get smarter, their
goals can emerge in ways we didn't
intend, and their behavior can change in
ways we won't even be able to detect.
Imagine trying to raise a child that's a
thousand times smarter than you, can
replicate itself, can jump from body to
body, and can think at the speed of
light. And here's where alignment
researchers get truly bleak. They'll
tell you that in the long run, we don't
even know what keeping AI aligned with
human values actually means. Whose
values? Whose goals? Humans can't even
align with each other. Countless
government-sponsored murders will happen
just while you listen to this. That's
why even the creators of these systems
tried desperately to get us to pause.
But we won't. How do I know? We'll
return to the show in just a second. But
first, let's discuss a huge blind spot
in your business. Bad news travels at
light speed. Tariff policies can shift
overnight these days. Supply chains snap
without warning. Cash flow problems
compound before you even know they
exist. And by the time that critical
issue hits your desk, it's already cost
you weeks of revenue. If your business
can't adapt in real time, you're in a
world of hurt. You need total
visibility. From global shipments to
tariff impacts to real-time cash flow. That's NetSuite by Oracle, your AI-powered business management suite trusted by over 41,000 businesses. NetSuite brings accounting, financial management, inventory, and HR into one suite. Tame the chaos with NetSuite. If your revenue is at least seven figures, download the free ebook, Navigating Global Trade: Three Insights for Leaders, at theory. And now, let's get back to the
show. Welcome to part two. We should
stop, but we won't. The first nuclear
test in 1945 carried a nonzero chance of
igniting Earth's entire atmosphere and
killing every living thing on the
planet. Scientists did it anyway. During
the Cold War, both the US and the Soviet
Union built hydrogen bombs thousands of
times more powerful than the bomb dropped on Hiroshima. They knew a single mistake
could end civilization, but they built
them anyway. In 1972, biologists created
the first recombinant DNA, knowing full
well they might unleash uncontrollable
pathogens, but they did it anyway, too.
In 2011, scientists deliberately
modified H5N1 bird flu to make it
airborne in mammals, knowing it could
kill half the people it infected if it
ever leaked. And every time the logic
has been the same. If we don't do it,
our enemies will. This is known as game
theory. And it's exactly why AI will
continue to be developed no matter how
many people beg us to stop. Game theory
isn't really a theory at all. It's the
brutal math of survival. If your rival
is building a weapon that could destroy
you, the safest move isn't to stop. It's
to build the same weapon, just faster, even if it puts everyone at risk. That's
why the US and the Soviets stockpiled
60,000
nuclear warheads, enough to wipe out
life on Earth 10 times over. Both sides
knew it could end civilization,
but neither side could afford to slow
down. Game theory is the iron law of
human nature that says that if a
technology promises an advantage, it
will be built, no matter how deadly, even if it's suicidal in the long run. It's known as the prisoner's
dilemma. And it's the exact position
that the US and China are in right now
with AI. And it's exactly why no matter
what, AI is going to continue full steam
ahead. The prisoner's dilemma goes like this. Two suspects are locked in separate rooms. If both stay silent, they each get a light sentence. If one betrays the other while the second stays silent, the betrayer goes free and the loyal one rots in prison. If they both betray, they both lose. The rational choice isn't silence. It's betrayal every time, because neither can trust the other to hold the line and the consequences are too great.
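To make the logic concrete, here is a minimal sketch of that payoff structure in Python. The sentence lengths are illustrative assumptions, not figures from any study, but they show why betrayal is the dominant move no matter what the other side does:

```python
# Toy prisoner's dilemma: payoffs are years in prison (lower is better).
# The specific numbers are illustrative assumptions.
PAYOFFS = {
    ("silent", "silent"): (1, 1),    # both cooperate: light sentences
    ("silent", "betray"): (10, 0),   # the loyal one rots, the betrayer walks free
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),    # both betray: both lose
}

def best_response(other_choice: str) -> str:
    """Pick the move that minimizes my own sentence, given the other player's move."""
    return min(("silent", "betray"),
               key=lambda my_choice: PAYOFFS[(my_choice, other_choice)][0])

print(best_response("silent"))  # betray (0 years beats 1 year)
print(best_response("betray"))  # betray (5 years beats 10 years)
```

Even though mutual silence beats mutual betrayal for both players, neither can trust the other enough to get there, so both rationally defect.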
That's the United States and China with AI. Imagine
if super intelligence is possible. If
America pauses but China doesn't, China
wins. If China pauses but America
doesn't, America wins. But if they both
keep building, the world hurtles towards
a future where humans may be completely
irrelevant or worse. No one wants that
outcome, but no one can afford to be the
sucker who stands still while the other
side races ahead. And the danger isn't
only that AI could wipe us out. Assume
we solve for that or that AI never
achieves superhuman intelligence. We
still have to worry about AI stripping
us of our meaning and purpose, two of
the things that make life worth living.
Humans are meaning making machines.
Everything we do is driven by an
internal narrative about what our actions mean. There is a phenomenal Shakespeare
quote that sums this up. For there is
nothing either good or bad, but thinking
makes it so. It doesn't matter what
happens. What matters is the story we
tell ourselves about what happens. And
what story are we going to tell
ourselves when AI is better than us at
everything? What story will we tell when
all of our art looks like children's art
in comparison? What story will we tell
when we discover that our latest
invention was created by AI already?
What will happen to our collective
spirit when nothing we do, make, or
build is in any way necessary? What are
we at that point other than pets of the
machine? Ted Kaczynski, the Unabomber,
spent decades trying to kill his way out
of that future. His fear was that
technology would rob humans of what he
called in his manifesto the power process: humans require goals that demand meaningful effort but are attainable in order to achieve psychological fulfillment, and advanced technology disrupts that by making life too easy, removing autonomy, or creating artificial surrogate activities that fail to satisfy innate needs. This disruption,
he argued, leads to widespread boredom,
depression, low self-esteem,
frustration, and other forms of despair
in industrial society, prompting his
bombing campaign from 1978 to 1995 as a
violent attempt to incite revolution
against the technological system he
could see us building. Now, it feels super weird to cite a lunatic as if to say, see, he saw what was coming. But for real, as evil as the Unabomber's actions were, he
really did understand the problem
clearly. If humans don't have to
struggle and aren't needed, history says
something else is going to happen.
Humans will turn on each other. Because
every time technology has ripped open
questions of identity, of who we are,
blood has followed. And that brings me
to what may be the biggest problem in
our problem set. Humans are in the
process of giving birth to a new
species. We are playing God. And we
don't have to create a god to trigger a
massive human immune response. My most
controversial belief isn't about
immigration or debt policy. My most
controversial belief is about cheating
death and embracing the augmentation of
the human body with technology. I wrote
a comic book about this 5 years ago and
I still think it paints an accurate
picture. Some humans will embrace
transhumanism and others will not. And
those two groups will fight and blood
will be shed. If you're tempted to think
I'm exaggerating, let me tell you about
the Thirty Years' War. The Thirty Years' War was
one of the bloodiest conflicts in
European history. And it was sparked by
the printing press. When Gutenberg's
invention spread across Europe, it
didn't just democratize knowledge. It
locked religious differences onto the
page. Suddenly, small theological disputes that might once have stayed local (questions about the Eucharist, papal authority, the exact path to salvation) were copied, printed, and
spread across nations in black and
white. And once those differences were
concrete, they became non-negotiable.
And from 1618 to 1648,
Catholics and Protestants butchered each
other in the heart of Europe. Entire
cities were reduced to ash. Crops burned
in the fields. Starvation and plague
swept the continent. Armies looted,
raped, and slaughtered their way across
Germany, Bohemia, France, and beyond.
The numbers are truly staggering. In
some regions of the German states, as
much as 20% of the entire population was
wiped out. Millions of lives lost not to
conquest, not to empire, but to
arguments over which interpretation of
scripture was correct. That's what the
printing press did. It gave people the
power to see their differences in ink,
and then it gave them the will to kill
each other over them. Entire nations
bled themselves dry over tiny
theological differences that suddenly
became visible thanks to the printing
press. A single new technology, the
ability to mass-produce text, ripped
Europe apart. And that pattern hasn't
gone away. In Nigeria today, Christians
and Muslims are still killing each other
in bloody clashes. Not over land, over
belief. Now, zoom out and imagine what
happens when AI forces the ultimate
question of belief. Should humans merge
with machines or should we reject them?
The answer to that is going to get
bloody. If we slaughtered each other
over scripture, imagine what we'll do
when we're fighting over the definition
of humanity itself. Now, if you're in
the worried about AI camp, hopefully you
know you are not alone. You're right to
be worried. However, if any of us stop
there, it's like standing in the middle
of a busy freeway. You're going to get
hit from both sides. You'll get hit by
the downside of AI that is real, but
you're also going to miss the
opportunities on the upside as well. So,
welcome to part three. How to avoid a
guaranteed path to failure. A Reuters/Ipsos poll found that 72% of Americans
fear AI will permanently replace jobs.
And they're right. When the printing
press was invented, Europe descended
into chaos and war. When electricity
arrived, it wiped out entire industries.
The internet destroyed millions of jobs
from travel agents to local newspapers.
And AI is already proving that it's
going to do the same. However, the
printing press also gave us mass
literacy, the scientific method, and the
Enlightenment, essentially all of modern life. Electricity created modern medicine, communications, and the entire second industrial revolution. Didn't die from a hangnail? You can thank
electricity. The internet also created
entirely new industries and more than 20
million jobs worldwide. And in medicine
alone, AI systems have already slashed
drug discovery timelines from 6 years to
just 18 months. In 2020, Google's DeepMind cracked the protein folding
problem, something scientists had
struggled with for 50 years, a
breakthrough that could accelerate cures
for diseases ranging from cancer to
Alzheimer's.
AI has detected breast cancer up to 5
years earlier than human doctors. AI-managed intersections have cut accidents by as much as 30%. And AI-powered fraud
detection is stopping billions of
dollars in theft annually. Every
breakthrough brings disruption, sure,
but it also extends human capabilities,
human lifespan, and inevitably gives
birth to thrilling new things we never
could have previously imagined. In fact,
right now in the west, inequality has
reached freakish unsustainable levels,
like violent revolution type levels. And
the thing almost nobody can see through
the fog of AI fear is that as we speak,
it's AI that is actually reducing that
inequality. Everyone is convinced that
as AI gets better, humans will get less
valuable. But a recent research paper
titled The Economics of Bicycles for the Mind by Ajay Agrawal, Joshua Gans, and Avi Goldfarb shows that's not necessarily
true. The key is to understand the
different types of work. As outlined in
the paper, cognitive labor itself needs
to be split into three parts. One,
implementation, so doing the task. Two,
opportunity judgment, aka identifying
areas for improvement. And three, payoff
judgment, knowing what things actually
matter. AI is exceedingly good at
implementation. That's why a junior dev
can suddenly code like a mid-level dev
if you let them use Cursor, or a newbie designer can mock up a dozen pro-grade comps in minutes with Midjourney. But
here's the flip side. The better AI gets
at doing, the more valuable human
judgment becomes. Human judgment is the
difference between a wall of AI slop and
something actually useful. Knowing which
design to pursue, when to iterate to
make it better and how, and being able
to identify what the client is actually
looking for, that's where humans come
in. Strangely enough, computers actually
helped widen wage gaps, but AI is now
narrowing them. According to Agrawal et al.'s paper, this is because when
implementation abilities get boosted,
creators who were previously struggling
get the biggest lift. Experts, on the other hand, show less improvement because
AI merely matches the outputs that they
were already able to achieve. So while
they may get faster, the quality of
their best output remains roughly the
same. Therefore, inequality of output is
reduced between the two groups. So as
implementation costs go to roughly zero
and anyone can code, design and analyze,
the zone of competition is reduced
entirely to judgment. The theory is that
as AI models improve, full automation
becomes less likely because automated
systems have fixed judgment whereas
humans can adapt in real time in a way
that a fixed AI system cannot. This
lends credence to the idea that as AI
scales in capabilities and adoption,
humans will move into a new role, but AI
will remain a tool, at least for the
foreseeable future. Instead of creating
an overlord, AI will usher in a shift
similar to what we saw during the
industrial revolution. The people who
are only good at the doing will be
replaced by machines as the machines
will be able to make things far faster
and at higher quality than humans. But
the people who are good at knowing what
thing to make and how to improve the
machine's output, they'll be worth their
weight in gold. Instead of the blanket
AI will replace humans, the paper draws
the conclusion that it will look more
like the following rule of thumb. If the
task is highly predictable, it will be
automated by AI. If the task has high
judgment needs, a human plus AI will be
needed. And if the task is high stakes,
a human plus AI for sure will be used.
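As a quick illustration, here is a minimal sketch of that rule of thumb written as a decision function in Python. The inputs and the branching are my own simplification for clarity, not a model taken from the paper:

```python
# A toy encoding of the rule of thumb above. The categories and branching order
# are illustrative assumptions, not the paper's actual framework.
def who_handles_the_task(predictable: bool, judgment_heavy: bool, high_stakes: bool) -> str:
    if high_stakes:
        return "human + AI"      # high stakes: a person stays accountable for the call
    if judgment_heavy:
        return "human + AI"      # high judgment needs: AI drafts, a human decides
    if predictable:
        return "AI (automated)"  # highly predictable: hand it to the machine
    return "human + AI"          # everything else: keep a human in the loop

print(who_handles_the_task(predictable=True, judgment_heavy=False, high_stakes=False))  # AI (automated)
print(who_handles_the_task(predictable=True, judgment_heavy=True, high_stakes=False))   # human + AI
print(who_handles_the_task(predictable=False, judgment_heavy=False, high_stakes=True))  # human + AI
```

Notice that in every branch except the purely predictable one, the human doesn't disappear; the human moves up the stack into judgment.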
So our future is not best understood as
AI is going to replace humans. It's best
understood as asking, "How do I develop
keen enough judgment that I can ride
increasingly powerful AI bicycles for
the mind?" If you allow fear to paralyze
you, you're going to emotionally shut
down and miss that opportunity. Plus,
with the rapid advancements in areas
like healthcare, odds are that AI saves
far more lives than it will ever take.
That's not to diminish the tragedy of
every life lost and the absolute horror
of what it must feel like to lose
someone to a decision-making bot. But
when you weigh the losses against the
gains, odds are stacked in favor of the
gains. That is certainly what history
tells us. And that's not even to mention
that given the amount of energy the sun
showers on the earth, AI will most
likely help us efficiently capture that
energy, driving the cost of energy to
effectively zero. And the positive
effects of that one simple fact cannot
be overstated. The vast majority of
one's expenses is tied to the cost of
energy. That's why oil prices so
directly impact the cost of living. If
you drive down the cost of energy, which
AI will inevitably do, you will make
everyone richer without exception. Now,
that does not solve the psychological
problems of inequality, but it
eliminates the real-world effects of
poverty. We will still have spiritual
needs, but we will no longer have
material needs. Imagine that. And that
brings us to part four, the playbook for
thriving in the age of AI. In 2023, Meta
offered one of the world's top AI
researchers a billion-dollar
compensation package, proof that
mastering these tools is the most
valuable skill on Earth right now.
Freelancers with AI skills on Upwork now
earn 40% more than their peers. AI
native startups like Jasper and
Synthesia scaled to over $100 million in
annual revenue in less than three years.
Investors who put just $1,000 into
Nvidia stock in January of 2023 saw it
grow to more than $3,000 by the end of
the year, powered almost entirely by AI
demand. And despite the hype about AI
killing jobs, studies show call center
workers with AI co-pilots handle queries
35% faster with the biggest gains going
to the least experienced reps. The
people who master AI don't get replaced.
They get leverage. The money, the jobs,
the breakthroughs, they're already
happening. And the gap between those who
lean in and those who freeze up is
widening by the day. So step one in the
playbook isn't technical at all. It's
mental. You've got to reorient your
mindset. If you believe you can't get
better, you won't. But if you believe
you can, you will. It's what I call the
only belief that matters. If you put
time and energy into getting better, you
will actually get better. So, you have
to recognize AI is the most powerful
amplifier of human ability we've ever
had. Now, you're living in the era of
cynicism, low expectations, and
defeatism. And that's exactly why those
who choose a different story, those who bet on their own ability to grow, are
about to win bigger than anyone in
history. Step two, master the tools.
Don't fear ChatGPT, Midjourney, or Claude,
because the people who do are already
getting replaced by the ones who are
using them. Imagine during the
industrial revolution trying to
outproduce a factory with your bare
hands. It's never going to happen.
History bears that out time and again.
The same is true now for the AI
revolution. If you're trying to outpace
AI without AI, you're going to lose.
While change may be deeply
uncomfortable, anyone who insists on
fighting against the adoption of AI will
simply get swept away by the tides of
change. And tools are only going to get
more powerful over time. So, if it feels
like right now it's worth fighting
against, believe me, it will be
self-evidently ridiculous within 2
years. And by then, you're going to be
years behind the early adopters. AI is
the new literacy. If you can't read, you
will lose to those who can. The internet
minted millionaires out of kids in dorm
rooms. Mobile apps turned nobodies into
billion-dollar CEOs. And now AI is doing
it again, but much, much faster. Don't
let this pass you by. Step three, get
into assets. This is the thing that I am
a total psychopath about, and I will
beat this drum until it breaks. If you
don't own assets, inflation owns you.
You are hemorrhaging buying power every
single day of your life. You're being
stolen from and you're doing nothing to
stop it. Assets are the only protection.
The reason the rich get richer and the
poor get poorer is 100%
about asset ownership. Despite the fact that over the last 200 years US stocks, an asset class, have returned on average 6.5% above inflation every year, only 10% of the US owns a full 93% of the assets. People just do not
understand this simple yet critical
financial truth. No job in history has or even can match that kind of compounding, except of course for the AI wizard who secured the billion-dollar salary. But you get what I'm saying.
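To see what that 6.5% real return actually does over time, here is a minimal compounding sketch in Python. The $10,000 starting amount and the time horizons are illustrative assumptions, not figures from the video:

```python
# Compound a one-time investment at the 6.5% annual real (above-inflation) return cited above.
# The $10,000 starting amount and the horizons are illustrative assumptions.
principal = 10_000
real_return = 0.065

for years in (10, 20, 30):
    value = principal * (1 + real_return) ** years
    print(f"{years} years: ${value:,.0f} in today's buying power")
    # roughly $18,771 after 10 years, $35,236 after 20, $66,144 after 30
```

That curve is what the rich-get-richer effect looks like in practice, and it's exactly the kind of compounding a paycheck alone can't match.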
Now, layer on top of that the fact that you
can buy assets in AI and you realize how
you can take advantage of the AI boom
without needing to be an entrepreneur.
Take Nvidia for instance. As we talked
about before, that stock tripled in a
ridiculously short period of time, and
it was available to anybody investing in
the stock market. And it's not just
stocks. The World Bank estimates AI
could add $7 trillion to global GDP by
2030.
That's in 5 years. You must find a way
to participate in this. And if AI is
making the job market hard to predict,
asset ownership becomes the most obvious
solution. Odds are, if you're not
exposed to a broad basket of AI linked
assets, you're going to miss out on the
single biggest wealth creation event
since the internet. Here is the brutal
truth. The people who fail to get into
assets will end up stuck in a doom loop
of government assistance and UBI. The
people who own assets, on the other
hand, are likely to ride the wave of
compounding wealth. Now, you can't be in
a rush or assume that you can see the
future clearly, but you need to be
invested in assets. Start small if you
have to. Buy fractional shares, dollar-cost average into index funds. Be
cautious. Be smart. Don't try to time
the market. But for the love of God, get
skin in the game. Waiting on the
sidelines while inflation eats your
buying power is financial suicide. Step
four, build machine-proof skills. AI is
a monster at patterns. It writes code.
It drafts legal docs. It spots tumors on
scans. But here's what it can't do. Be
human. Therapists, teachers, and nurses
are all among the fastest growing job
categories in the US. Not because
they're immune to AI, but because trust
and empathy don't yet scale with
silicon. Physical dexterity is also
another moat. Robots can weld on an
assembly line, but they're nowhere close
to what a plumber can do crawling
under a sink or a contractor solving a
messy renovation in real time. That's
why the Bureau of Labor Statistics
projects skilled trades like
electricians and mechanics will keep
growing even as automation surges. And
then there's judgment, which we covered
in detail in part three. When call
centers rolled out AI co-pilots,
performance may have jumped by 35%. But
only when humans were still making the
final calls. AI is great at generating
outputs as instructed, but humans remain
the undisputed heavyweight champion of
knowing what should actually be output
in the first place. Even in creativity,
proof of humanity has massive value.
Concert tickets, original art, live
performances, people are still willing
to pay for that uniquely human spark
that machines cannot yet replicate. The
key to thriving in the AI age isn't to
try to out-AI AI. The key is to be more
human than AI will ever be able to be.
So double down on what machines can't
touch. Trust, empathy, dexterity,
judgment, and the human drive for
meaning and connection. Step five, stay
calm in the raging sea of change. Humans
are prone to panic. We are wired to
identify problems over promise. That's
why if it bleeds, it leads, and why doom
headlines get all the clicks on YouTube.
But panic is the only strategy that has
a 100% failure rate. I always tell young
entrepreneurs, while it's true that only
the paranoid survive, if you rehearse
failure, you're going to fail. You need
to plan for success. Not what you'll do
when you succeed. That's hubris. But
what you're going to do to succeed,
what's the path? Because you're going to
need to build that path. And to do that,
you have to stay calm and believe that
success is indeed possible. Fear makes
you freeze. And freezing stops you from
using the only belief that matters. The
goal right now is to learn and be at the
cutting edge. Know more about the topic
than anyone else in the room, and you
will outperform everyone you know.
That's always the case. The people who
win are the ones who lean in, adapt, and
master the tools. And that's why in
conclusion, I will just say this. AI is
almost certainly the most dangerous
technology that humans have ever played
with. And I'm talking about including
nuclear weapons. Thinking of it as a
summoning circle is a very apt metaphor.
But I think calling it a demon summoning
circle will cause us to miss the
opportunity and prepare for failure
rather than building the path to
success. Remember, even the guy who
called it a demon summoning circle is
now arguably the most prolific master of
AI because that's the right answer. AI
is going to happen. Live under no
illusion to the contrary. And given that
mastery is the only logical path, yes,
AI has already killed and it will kill
again. And the experts who say it could
kill us all should not be ignored.
Truly, only the paranoid are going to
survive. But if history teaches us
anything, it's this. Every great
disruption, the printing press,
electricity, the internet, arrived with
chaos in one hand and vast improvement
in the other. To borrow from Matt Ridley
in his book, The Rational Optimist, it
does not make sense to look backwards at
thousands of years of compounding
progress and draw the conclusion that
somehow tomorrow it all stops. AI won't
help humans by accident, though. It will
only help us if we keep our wits pointed
at the right problems, find out a way to
align it to our wants and needs, and
refuse to be paralyzed by fear. Because when we don't, we eject ourselves out of the
loop of progress and instead become
fearful and attack each other. In a
state of fear, humans are far more
dangerous to each other than AI. And if
we don't cooperate, AI will be used as a
cudgel rather than a lever. As we look
forward, we need to remember not just to
restrain AI, but to point it at the
things that will make life better for
everyone. In the coming 10 to 15 years,
what AI can do to further humanity will
be limited effectively only by energy
and compute. From radical life extension to the total eradication of poverty and disease, AI is poised to fast-track the
age of abundance. None of this fixes the
contradictions of the human heart, of
course, and we will remain capable of
both extraordinary beauty and extreme
violence. But if we can keep our heads
and work together, we can leverage AI as
a tool to improve every aspect of life.
It is guaranteed to be a bumpy road. But
like every revolution before it, it has
the potential to add far more than it
takes away. It's not a question of
whether AI is dangerous. It is. It's not
even a question of if AI will happen. We
couldn't stop it if we wanted to. It's a
question of whether we will bury our
heads in the sand and guarantee that we
lose or whether we can muster the
courage to face rapid change and steer
it in a beneficial direction. All right.
As always, I cannot guarantee that we'll
make the right choice at the societal
level, but each of us can individually
make the right choices in our own lives.
Stay alert, follow the playbook, and the
future will be brighter than ever. If
you guys want to join me as I explore
ideas like this in real time, join my
growing live community on YouTube
Wednesdays and Fridays at 6:00 a.m.
Pacific time. You can ask questions,
join the debate, or just enjoy the
discussion. And if you haven't already,
be sure to subscribe. Until next time,
my friends, be legendary. Take care.
Peace. Here is a truth no one tells you
about scaling a business. The number one
reason seven-figure founders fail is not
because of bad strategy. It's people.
What built your $3 million company will
break it on the way to a hundred
million. I know firsthand. I've seen it
so many times. Your team cracks,
politics creep in, your A players leave,
and suddenly the company you fought to
build out of nothing is stalling out. I
co-founded Quest Nutrition and scaled it
from 0 to a billion-dollar sale. I've
conducted more than 1600 interviews
myself. And I'm telling you right now,
most founders lose because they never
build a real leadership operating
system. That's why I'm hosting an
exclusive workshop for founders doing a
million dollars or more in annual
revenue. But if you're scaling fast and
your team is starting to crack, this
will save you years of pain. Click the
link in the show notes, register for the
workshop now, and I will see you guys
there. If you like this conversation,
check out this episode to learn more. A
single gunshot rang out on September
10th, 2025, killing Charlie Kirk and
proving the pollsters right because one
in four Americans openly believe
political violence may be justified.