Transcript
Rtv-W7IE4Mw • Truth About Elon Musk vs Sam Altman: AI, Immortality, War, Power & Simulation Theory | Bryan Johnson
Kind: captions
Language: en
you're on the precipice of an artificial
super intelligence Revolution that will
alter your day-to-day life in ways that
none of us can quite imagine but no
matter how much human creativity is
unlocked by AI or how disruptive it will
be to the traditional economy no change
would be as dramatic as unlocking
immortality today's guest believes the
rate of progress in AI means our only
goal should be to stay alive long enough
for AI to take over he believes that
algorithms May hold the key to living
forever here we go for round three with
Bryan
Johnson do you think that we as a
species should hand over our
decision-making process entirely to AI
when it's ready yeah I think we need
a new form of cognition for ourselves
and paired with AI for the species and uh
if you look at homo sapiens as an
independent source of intelligence we're
pretty good at some things we're
terrible at other things and the things
we're terrible at include our
willingness to engage in
self-destructive behaviors or planetary
destruction or even destroying each
other instead of trying to build
something uh that is much superior to
our own uh self-interests and So
currently we are the best form of
Intelligence on this planet but we need
to evolve to something Superior so I've
tried to demonstrate with blueprint that
yes I think that as an
opening conversation we should be
mindful that it may be the time that we
graduate to a new form of intelligence
okay those sound slightly different to
me so what I took away from a ton of
research on you is that basically to
your point we know that we can't think
through these problems well we know that
we're on the cusp of artificial
intelligence that is almost certainly
going to be way more powerful than us
unlike anything that we could ever
imagine that immortality becomes just
one of the problems that we'll never
solve for ourselves but AI could
potentially solve for us I think you've
even said Evolution gave us man and man
will create God yeah so if we're taking
that lens on AI are we as humans
evolving into something or have we
created the thing that we would want to
sign over our decision-making process to
for our own benefit I think is how you
see it so I think your question is
correct I'm saying that if we lay out
let's say that we do 10,000 things as
individuals in that we seek out food we
seek out shelter we seek out love we try
to solve problems we learn we list
out all the things we do as humans and
then you start applying
computational intelligence to each one
of these things and say can an algorithm
do this better than we can you know can
it do uh can it perform mathematical
computations better than humans yes it
can you know much better than we can in
our heads much better than we can on paper
it's faster it's better and then if you
size up and say of the things that AI
can do better than humans now what
remains where humans are better than AI
what is the time frame in which it's going
to close that Gap so that's one way to
look at it and it's reasonable to say in
any one of those functions AI is
probably going to be better than every
single one of us on every single one of
those functions it's hard to short that
bet and then if you say okay so what
things might be imminent and what I was
trying to prove with blueprint is
one of the most sacred things about
being human which is my autonomy and
free will to make decisions about what
I'm in the mood for or what I want to do
my preference in any given moment and I
started with health and wellness so I
said Can an algorithm decide what I eat
can an algorithm decide when I go to bed
can an algorithm decide what exercise
protocol I should do and I ceded
control and I said yes I'm going to let
the AI do this and I'm going to believe
or trust in this process and I'm going
to see the results can it actually be
better at being me than I can myself and
three years in the answer is yes that I
would much rather have an algorithm take
care of my health and wellness than I
would myself I am prone to
self-destructive behavior and so I look
at the bigger model and I say it seems
like the most obvious conclusion I could
make in looking at the world right now
that algorithms are going to be
superior to us in all things
therefore uh it would be much more
constructive for me to adopt this
framework and build with it rather than
try to resist it okay so uh having had
the privilege of reading your book don't
die I know where some of this goes so I
know that your punchline is people need
to have that aha moment for themselves
but I'd love to walk through what those
building blocks are because I I am way
Pro AI like I could not be more excited
about a future with AI uh however I'm
equally paranoid that this all goes
wrong I have a huge problem with
authority so I have just terrifying
fears of authoritarianism which I feel
knocking at the door right now uh for us
certainly in the west so my initial
reaction to this is hey I love being
able to opt out I love being able to use
algorithms that may have insights but I
want to be definitively in control of
whether I do or don't um engage with
this and one of the sort of playful
questions that you will ask people is if
you couldn't opt out of the AI would it
still make sense to do it so if we
couldn't opt out of the AI would you
still want us to do
it yes because probabilistically I think
it increases our chance of uh survival
and so what I'm really saying is that
what I find is in
conversation most people will
find call it five to 10 good arguments
on why what I'm saying shouldn't be
considered you know why it
immediately steps into authoritarianism
or it steps into some kind of dystopic
environment or blank blank blank and I
concede that they're reasonable
contemplations and my framework is I
remove myself from the next few
years and even the next few decades and
I say I'm only going to look at our
moment from the perspective of the 25th
Century that's the only thing that gives
me Clarity because if you are trying to
extract wisdom in the moment of this
noise it's almost impossible and you
hear basically an infinite number of
opinions that all Log Jam the situation
and so I guess when I say when I look at
it from that perspective I can say well
okay on a few-hundred-year time scale
I'm going to think it's reasonable to
say that algorithms are going to get
better at a rate than our native
abilities that uh that speed of
improvement is probably going to even
outstrip our paired abilities like I
don't think I'm going to get a you know
a chip in my brain that is going to make
me a super intelligent species right it
may augment a few things here and there
but it's not going to rival um a
computational system with millions of
servers it's just not going to do it so
I'm not going to become that super AI
myself I'm going to live in this larger
framework of intelligence and so if I
say if that's the case and we're heading
down this trajectory right now we need
to think about it from that kind of
multi 100-year time scale if we want to
survive ourselves and that's really the
point I'm trying to make and if
you take that frame it opens up the mind
because like you're saying and you make
good points there's so many ways to
kill this idea right so I'm basically
saying um an algorithm takes better care
of me than I can
myself we are facing multiple
existential crises as a species
we're on the eve of creating super
intelligence what do we do as a species
I'm asking that question in the most
sober form possible if we create super
intelligence and we drop it into the
games we play right now as humans we're
going to say okay are we going to become
better at War are we going to use it to
make more money are we going to use it
to get more social media followers like
how do we use our new super intelligence
and I'm saying if you take the Super
intelligence and put it into current
games that homo sapiens play we increase
the likelihood that we kill
ourselves and so we have to basically
look at this from A New Perspective and
what I'm suggesting is as we create
super intelligence the the new game of
existence is eliminating all sources of
death for humans and the planet that we
try to eliminate death for us
individually we eliminate sources of
death for the planet we eliminate
sources of death from all causes across
every function of society that's the new
game we play as super intelligence okay
so let me give people a mile marker so
you and I did an episode about a year
ago where you walked us through what the
blueprint for health is you striving for
immortality you've been on my radar for
years and years but recently you've
really rocketed to prominence because
what you've done
to
demonstrate that we may be able to slow
aging possibly reverse aging has
caught the attention of a lot of people
some people think it's stupid other
people are so inspired they're
completely embracing it but the idea was
and correct me if I go wrong anywhere
here um but this will help people
understand your point about ceding
control to an AI uh you say I'm made up
of trillions of cells um me being an
authoritarian Overlord does not work
well because I eat ice cream I eat late
I'm 60 lbs heavier than you are now
certainly a lot less muscle mass losing
my hair going gray like all the things
I would not want to do people need only
look at the photos of where you started
uh I'm going to use the Democratic model
and I'm going to let my cells speak up
yes and so the AI is just reading the
will of the people as it were the will
of the cells and it's saying ooh for
your liver to be happy you need to eat
this extract right now uh for your heart
to be healthy you need to go run on the
treadmill on and on and on yeah you were
the most measured man in history uh we
will certainly get into the measurements
of your penis which is amazing it's
utterly fascinating uh but that's what's
driving this so you as you said you've
been doing this for three years you've
seen the results in just your
physicality um quality of your mental
state because you're getting perfect
sleep scores and all that stuff okay so
now if people hold that in their mind
this is somebody who's run the very
first leg of this test of can algorithms
actually make my life better and that
begins driving the thinking um why do
you think because you've held a bunch of
um to use a nice fancy word salons where
you invite people to your house and you
throw this idea would you give yourself
over to the algorithm and you said that
people go through this very predictable
course of no basically and they have a
reaction if it would make our lives
better why do we push back yeah people
push back for uh reasons that are
99% common
they perceive that their
autonomy is better than happiness so the
beginning of the question is
you get the best physical mental
spiritual health of your life like the
best you've never felt better in your
entire life and people are willing to
give that up so that they can maintain
autonomy for some unknown reason it's
basically the autonomy is going to make
them more miserable but they'd rather be
miserable and have autonomy than the
best version of their life but they
just can't let go of control of autonomy
and usually behind the autonomy it's a
few perceptions one is I can't have my
vices anymore I'm not going to be able
to make that you know spur the moment
decision to eat the cookie or the ice
cream or whatever or you may miss out on
social norms you want to participate in
you think I can no longer go out
with friends or I can no longer do this
thing but they're viewing it from a a
framework of loss and that loss aversion
is so significant that they don't care
they're giving up the best version of
their existence and so that's like a
first set of common reasons and another is
just deep distrust there's good reasons
why people in society
distrust organizations distrust others
so it's a deep distrust and so I
understand why people say no because
it's rational and reasonable uh the
larger context is that if there's a loss
aversion to this thing on the flip side
to those who are open to
gain if you live you know a
few thousand years ago say
you're Alexander the Great what's the
most ambitious thing you can do in that
moment or one of the most ambitious
things you can do you can raise an army
conquer territory and establish an
Empire if you fast forward a few hundred
years uh maybe you can start playing
with mathematics maybe you can start
playing with poetry you know like
Society has progressed and in any given
age you can kind of in your time in
place Express the most ambitious thing
possible and so the question in 2024 is
what what is Peak expression of
ambition like the absolute Max you know
for Magellan it was sailing around the
world in the 1500s and for Armstrong it
was going to the moon so in 2024 what
is it and so it used to be that there's
three levels of ambition uh
start a company start a country start a
religion you know and those are on time
scales because you build a company it
may do well for a certain time but then
it goes out of business and it kind
of fades countries usually have
longer durations than companies and
religions usually Outlast them all maybe
thousands of years now there's a four
and five so now you'd say number four is
don't die number five is hodas or become
a God and so that's you know number four
and number five are things that people
have always dreamed of doing all
throughout history but it's never been
practical you've always had to make up
stories like in this religion if you
obey these rules you get this afterlife
where this amazing thing happens or I'm
going to go to the jungle and drink this
Elixir because of whatever it's been in
the imaginations but it's never been
practical and so what I'm suggesting is
this thought experiment teases out where
we're at in time and place but I'm
suggesting as a species we have not yet
internalized the ambition right before
us we don't understand that death can be
conquered it is a reasonable thing to
say that's possible and if you do that
then the idea of becoming some kind of
expansive
omnipotent kind of intelligence we don't
know what the limitation is and so it's
an interesting goal and that's what I'm
trying to do is I hit at this from why
we're scared and then also trying to
flip around and say Here's the
opportunity which we don't really know
exists yet all right this is another I
think really important mile marker for
people uh and that mile marker would be
we're at a pivotal moment in history and
once you put you and everything you're
saying into the context of before that
this would not have made any sense
living the blueprint life making all
these sacrifices leaning into
asceticism where you're just not doing
all the fun stuff that people think of
drugs drinking party um you're not going
to do that stuff because if you forsake
it in this very unique moment in time
you can cross a Chasm to Super
intelligence and we have faith that the
super intelligence will solve these
problems yes okay that's the first half
of this mile marker the second half is
this idea of the only people you're
trying to impress live in the year 2400
the 25th Century yeah and you're making
the assumption that this really is the
pivotal moment that we think it is that
it is a moment upon which history
turns and how we handle this moment is
going to determine whether the future
looks back on us as the people that
messed up this moment or they look back
on us as the people that that really
laid the groundwork so that this new um
AI fueled hopefully amazing future can
actually come into existence well said
okay so uh therein lies the
framework what will people in
2500 respect about us in this moment
what do we need to pull off they will
say we
are eternally grateful that homo sapiens
in the early 21st century when they
first saw Sparks of super
intelligence they were wise enough to
realize that the games they were playing
which was primarily capitalism of how
much money you could make the status you
could achieve the power you could
acquire territory conflict they acquired
the wisdom to see past a moment of time
and they said we are going to direct
every bit of our energy individually
and collectively and Conquer death that
means uh individually we're not going to
kill each other we're not going to kill
the planet and we're going to take the
Super intelligence and align it for the
sole objective of eliminating death
across all of
society that was the rallying cry
and so what they did this
is the 25th century what they did was
interesting they were so clever they
saw that don't die was the most
played game on planet Earth that they
had all these factions of different
religions and nation states and
you know ethnic groups and genders they
were at war with each other over every
conceivable divide you could imagine and
they just fought Non-Stop and they were
able to unify themselves and say you
know what we're all playing the Don't
Die game every second of every day it's
the most played game in that time and
place even more so than capitalism and
they were wise enough to wake up and say
you know what this is the moment We join
together on the one thing we all agree
upon let's sort this thing out and they
did that and now
intelligence is thriving throughout the
Galaxy because they were at that
critical point where intelligence went
on this you know exponential curve of
growth throughout the solar system
do you think that compassion is innate
to intelligence I hope but do you have
any reason to believe that's true no no
reason whatsoever yeah that's what
scares me so I love your thesis I want
your thesis to be true and And yet when
I think about what humans are really
like we play Don't Die there is no
doubt but you and I are both history
Buffs and so we know that there are
these horrifying spikes of kill and
Conquer yes and that is native to
the human mind it seems to me the battle
that don't die is up against uh and I
don't know if you like the idea of
categorizing don't die as a new religion
but I will very much say that's what it
looks like from the outside
um so you've got this new religion
here's what you're going to be up
against the human mind has a tremendous
capacity for compassion and so I get
what you're trying to tap into and what
you want people to breathe life into but
we are also hyper-tribal that was
necessary to survive to get us here we
have these what I call evolutionary
algorithms that are implanted in our
brains that make us the way that we are
that make us desire autonomy that make
us desire the dopamine feedback loop
that social media takes so much
advantage of that Oreos take advantage
of the whole uh big agriculture uh
industrial complex takes advantage of
kill and Conquer you've got the um
military industrial complex taking
advantage of it but all of them are
riding on the back of a thing that
already exists in our mind and so you've
talked about don't die as being like hey
this might be the one you don't say
religion I'm going to use that word
anyway by all means tell me to stop but
uh that we are living in a time where
this religion instead of taking a
thousand years to catch on that this
could take off this could have that
rapid
acceleration if you set aside super
intelligence forcing us to adopt it or
inspiring us to adopt it either way but
if you take that off the table I don't
see the signs that we will do that
naturally do
you yeah
um this is why I tried to be the example
myself I approached this and I said I'm
not a holy being you know like somehow
above the Primal instincts that we all
have so I know if I have in my house bad
food I'm probably going to eat it and I
know if I put myself in certain
situations I'm probably going to make
bad decisions and so this is why I said
I'm going to willingly build an
algorithm that takes better care of me
than I can myself and so then when I
squawk inside and I'm like I don't want
to do this anymore I want to do
something else I'm Bound by the
algorithm I mean this is a story as old
as you know Ulysses being tied to the
mast right like he knew he wanted to
hear the siren song but he told his
mates to tie him to the mast so that
when he could hear it he
couldn't say anything he put wax
in their ears so that they couldn't hear
him uh give the command to release
him and so I was doing the same thing
and so um I concur with what you're
saying the idea that humanity is going
to
cheerfully walk into this
scenario it's hard to see now having
said that sometimes these things uh the
principles I'm talking about arrive in
benign ways so Ozempic is an example of
an algorithm that takes better care of
you than you can
yourself so Ozempic works really well for
weight loss now it has a whole bunch of
side effects it's not an ideal drug like
it's very complicated that said
it almost can't be bought it's in that
high of demand because you take it and
it just turns off these bad parts of
your brain and allows you to lose weight
and become the person you want to become
being able to communicate effectively is
critically important whether you're a
manager a CEO an entrepreneur grammarly
can help you have a greater impact at
work with better and faster everyday
communication grammarly is an AI
writing partner that's trusted by tens
of millions of professionals and 96% of
them report that grammarly helps them
craft more impactful writing from
ideating on video titles or product
names to summarizing Long documents and
replying to client emails better and
faster than ever before all of these AI
features are for free and they work
where you work it literally Works across
over
500,000 apps and websites make your
point and have a greater impact with
grammarly sign up now and download
grammarly for free at grammarly.com
impact Theory that's grammarly.com
impact Theory and so in that regard
people are willing to take an algorithm
that does something for them they can't
do themselves to achieve an outcome they
want and so it doesn't have to be a top-
down control scenario it could just be
very benign ways where we find
algorithms actually help us achieve
the things we already care about and we
willfully do these things now for me I
wanted to see if I could get to age
escape velocity that was a goal I wanted
so I'm willing to make that
trade-off so I think you can take what
you said you can adjust it just a little
bit and see all the examples in life
where we all of us already willfully
partake of the situation and so we run
our mouths and say how scared we are how
we're never going to do it how it's
dystopic meanwhile we're all
already doing it in so many ways so I
think it can actually work itself
out we need to be thoughtful that this
is the case I don't think we want to
walk into the
future blind about what we're doing I
think it's worthwhile having this
discussion and for everyone to say let
me tell you all the reasons why I hate
it we can cycle through it together and
then we can think through how we might
execute okay so uh the big problem for
me is alignment so for people that
aren't familiar with the idea of AI
alignment um one I will Point people to
Elon Musk is now suing Sam Altman uh who
founded OpenAI Elon went on just an
absolute World Tour trying to convince
world leaders including the US Congress
and Senate please slow AI down nobody
would listen to him and so finally he
developed a fatalistic Viewpoint and his
way forward was let's develop an AI
company that's open so at least
everybody has access to the same thing
so that it can't be leveraged against
people Sam Altman then turns it into a
for-profit company closes it so people
can't see how it's working they
don't have access to it elon's now suing
him uh what do you take away from that
and then what do you think about
the alignment problem how should people
think about it there's so many layers to
that situation
um OpenAI is the most powerful and most
successful AI company in the world
there's a lot of money at stake um
Elon's building a competing AI
product uh there if you peel it
back just from the headlines there's so
much happening behind the scenes that's
not part of the widely told story so
it's more nuanced it's more
complex there's another element where
humans are going to be humans and
they're going to play the game of
thrones game that's also there um
there's also a legitimate conversation
on how we use these AI
systems and are they open are they
closed do governments regulate uh does
the US position itself differently
relative to China so it's this layered
complicated nuanced topic that I think
is very difficult to speak coherently to
because there's so many competing
interests and In This Moment uh you know
I or anyone else could probably say a
hundred things about the situation and
it's why I go back out to 21st century
to say how could I try to sober myself
up to say anything meaningful about this
moment and so these systems are going to
feed into how Humanity currently does
what it does we go to war we have
violence we go for Domination we try for
power
and I'm trying to suggest
that we need to go after a Zeitgeist
shift a Zeitgeist like a cultural shift
that is unimaginable to us right now
like right now if I say to you
we're going to point all our attention
on trying to solve all things that cause
death for us collectively that's
unimaginable and you put forward one
argument you said that's dystopic
whatever uh it's also just
practically what does that even mean are
people going to do it and I don't think
in our current mindset that's going to
happen what I am suggesting is if you
look at
Covid how the world behaved in response to
Covid was also unthinkable that the entire
world would shut down within weeks over
a virus the entire world rebuilt itself
around one thing within a few weeks and
what I'm hypothesizing is that AI
progress is going to move along at a
certain speed and there will be certain
demonstrations and certain realities
that introduce existential crisis for
the human race it will maybe call into
question uh who do we trust
uh with news yeah welcome to 2024 yeah
who do we trust who is uh who in the
government is in charge of what uh who
is in charge of knowledge who is in
charge of identity verification who is
in charge of you know like you take all
these basic functions of society and you
now have this new question who's
actually in charge and who do we trust
that starts breaking down all these
layers of society that has kept
things relatively stable and
predictable and when that happens we're
just freestyling as a species again and
we kind of have to rebuild from scratch
and say okay what are these basic sturdy
building blocks of society on how we
keep Law and Order of how we keep Trust
of how we actually build and so we're
going to
experience decades or centuries
equivalent of change in the coming years
now someone may say like I'm being too
ambitious or AI is not real fine say
it's 10 years say it's 20 like whatever
like take whatever time frame you want
for all intents and purposes if you're
thinking about it from a 25th-century
perspective it's right now it doesn't
matter if it's one year or five or 10 or
20 or 50 it's all right now it's about
the future of the species and it's about
this contemplation of we're probably not
going to make it through this moment if
we uh if we can't figure out how not
to annihilate ourselves when you say
that we might annihilate
ourselves what odds are you giving that
like do you feel that we are on the
precipice where there is a meaningful
percentage chance that we actually don't
make it through this moment which I want
you to Define like you worried about
climate change the most are you worried
about AI the most worried about war yeah
you know an asteroid wiped out the
dinosaurs and so we have that kind of
risk we have solar flares as problems we
have um like we have all kinds of
existential risks that are outside of our
control or less so then we have the
ones that are in our control so will we
use weapons of mass
destruction um will we point AI to
cause unmitigated harm will an
AI crisis happen because it just runs
away out of our control um
will our climate change so much that the
Earth becomes so difficult to live on
that our supply chains break down
and we crawl back to being
hunter-gatherers like we saw that
when the supply chains broke down
with covid Society kind of broke down
you couldn't get computers
companies couldn't get their
supplies and the world just kind of
broke down from one little virus and now
you've got this complicated situation
where the climate is making food
supplies challenging and transport and
rivers dry up and like all the things
we're seeing uh it might render us
basically neutered to operate as a
species in some kind of effective way
which renders us kind of powerless and
so what I'm saying is yes the
situation is pretty serious and we've
done pretty well we've had nukes for
decades and you know
gratefully we've only used them twice
sadly we used them but um
only a limited number two times and
so what I'm saying is um if we're having
breakfast and a tsunami is on its way
what we have for breakfast kind of
matters but kind of doesn't like we've
got a bigger problem to handle what I'm
suggesting In This Moment is the
situation right in front of us is
sufficiently serious that we should be
reallocating all of our attention to
figure out how we make this thing
through this is not a normal day and um
so like when you bring up you know like
this drama between this and that company
sure um fine it's a drama that
people understand it's a drama people
want to comment on it's good fun but um
it does redirect our attention away from
this bigger problem that I'm trying to
address okay in your book so I won't
feel bad pushing you on this in your
book one of the characters says uh a
number will do we can debate later I
want to get a sense of what what in fact
that same character goes on to say like
we have to be able to agree with the
stakes are and so this is where Sam
Harris who I love and think is amazing
he believes I don't understand his
thinking he has very great graciously
agreed to come back on the show which I
can't wait for um I think I do
understand his is thinking and I think
the disconnect is I don't believe that
Trump in this case is an existential
threat meaning that we are at risk of
human the human race ending um so
because we disagree on the stakes we
disagree on all the follow on to-do
items yeah now what I'm trying to
understand is at the very beginning of
this interview you said yes we should
when AI is ready we should give
our decision-making over to that entity
and I'm saying
okay are you saying that because you
actually believe this unique moment in
time is existential in nature or is this
just well there's always asteroids
there's always those things is this
moment uniquely dangerous yes there's an
element of danger more
importantly we are still the architects
of intelligence we are the
ones building AI we are pointing it to
do certain things we're training it to
become good at certain things we're
giving it
feedback we're giving birth to Super
intelligence that is more omnipresent on
my mind than
anything and what I'm saying is the way
in which we give birth like the way we
raise super intelligence is the most
critical thing because I think on the
time scales that we're talking about I
don't think whether I'm going to live to
the year 200 or 120 you know be 120
years old or 200 years old I think
that this will be brought to a head in
the next few years five 10 years I think
it's imminent now if I'm wrong like we
have more time great but it's something
that when you're gambling the future of
intelligent existence and we don't know
if it's happened before in the Galaxy we
haven't found any evidence of it maybe
it's out there maybe it's not we don't
know we are so fortunate to have it on
this planet and we as a species
are willing to gamble our
existence and it's a representation that
death has always been inevitable and
when death is inevitable you kind of
don't care like you're willing to
just say fine like it's going to end for
everyone so why not when death becomes a maybe
and maybe we can extend our lives it's a
different reframe on how much we value
consciousness and so what I'm saying is if we
could shift our framework and this is
what I've been trying through the
blueprint I'm trying to say I've been
trying to
say death may not be inevitable and if
it may not be inevitable that may give
us something to Aspire to so that we
want to solve these imminent problems in
front of us but that hasn't been a clean
thing because when I say death may
not be inevitable people will say I
don't want to live forever I never said
forever I just said I I don't want to
die because people say then I uh I'm
going to get bored or you know they come
up with all these reasons and so this um
this is why back to your first question
you know do I think
algorithms should we as a species begin
adopting these
algorithms in many ways yes
because as a species our intelligence is
pretty Limited in acting in our own best
interest from the small things like binging
to the big things like discounting
the future that we've never even
experienced it's it's idiotic to say I
don't want to live some duration of time
we have no idea what it's going to be
like we've never been in this situation
before so for any human to foreclose
that opportunity is beyond foolish but
yet we do it we
stop that thought process and we stop
the conversation and kill the will
to live it's a really weird
attribute of our own intelligence so like we
have all these weird things where we're
both brilliant and idiotic and it's
very hard to tease out where one begins
and the other ends if we get AI
engineering
wrong could it wipe out all of humanity
certainly give me a percentage chance no
idea no one knows nobody can say
anything intelligent about that question
so if I said um
that should be the number one thing
we think about is AI alignment because
there's any chance that it's existential
would that feel like the right base
assumption to operate from exactly okay
uh that makes sense to me um now going
back to this was all nested inside of an
idea and I just wanted to make sure that
we got all of that clear so we're
building from the base assumption that
if we don't engineer or birth in your word
AI correctly it has some chance however
slight to completely annihilate humanity
and therefore you have to take this
extraordinarily um seriously step number
one if I can get people to uh recognize
believe not sure which of those is more
apt but if I can get them to recognize
that not dying we're drawing a line
right now though I think we need to come
back to this uh between not dying and
living forever but if I can get people
to understand that for the first time
ever not dying is a real possibility uh
that that will hopefully create the
fundamental shift in the way that they
think about um Humanity going forward
that's going to be necessary to do AI
right to align it well okay now all of
that was nested inside of this idea that
covid happens we have this unbelievable
response we um marshal forces quickly
in a way that we never thought we could
I assume this is tied to the idea of how
don't die as a religious movement could
sweep that AI could present an
existential threat a glimmer of
something hey if we don't address
this immediately and with the
forcefulness of the entire world in
unity we're in real trouble so that sets
the table now the question becomes you
and I may look at what happened during
covid differently I am
horrified by the
authoritarian tap dance to grab power
that happened during that time and that
is literally why I am super in love with
AI and deploying it as fast as I can and
cannot wait to see it come to fruition
and at the same time I'm like reading
about Mao's China I'm reading about
Stalin's Russia I'm like just really
getting more and more Paranoid by the
day about how often and quickly this
goes wrong yeah so do you have a
different take like was covid only a
beautiful thing from your perspective
and it like showed how we can come
together and that you look at that as
the blueprint for AI yeah I mean so yeah
covid was an unmitigated disaster for
everyone and uh for me
it played out exactly how you would
expect it to play out among warring
humans warring humans yes I didn't see it
coming I thought it was going to be
beautiful and first few weeks when it
felt beautiful and everyone came
together I was like oh my God this is
amazing and then it rapidly fell apart
it's just it's so algorithmic I mean
it's exactly how humans behave in Mass
there's zero surprises about the whole
thing but what I'm suggesting is I'm
just introducing the idea that we oftentimes
think that something is
impossible until it's not and what I'm
suggesting right now is in this moment
you and I could be talking about a
thousand different things like you know
fashion or politics or drama or like
whatever is happening that can fill the
air and what I'm suggesting is that we
do this and it consumes our attention at
the cost of realizing this bigger
picture phenomena which you just said
aligning with AI and making sure our our
Earth is actually inhabitable and that
we humans don't annihilate ourselves is
the most
significant situation on planet Earth it
is much more important than anything
else in existence we could be talking
about
and you were saying you don't
trust others uh you
don't trust Authority you don't
trust individuals technically I said I
have a problem with authority and I
don't trust individuals but slightly in
my mind two different things yeah okay
so yes on both those things I don't
trust
myself I don't I don't trust what my
mind says I don't trust what my mind
wants to do I don't trust what it thinks
it wants I don't trust my mind and my
entire life has been a process of
learning to not trust my mind and so a
thought experiment for me that really
helps make this clear is if I travel in
time and I hang out with Homo erectus a
million years ago so they have an axe in
their hand and we say Homo erectus where's
food where's shelter and where's
danger we listen to their answers
because they know then we say what is
the future of the species and we laugh
because there's no way they're going to
imagine all the things we have today
this Set uh the technology we have our
ability to travel outside of the Earth's
biosphere in this moment I think it's
reasonable to imagine that we are the
equivalent of homo
erectus that's how primitive we are in
our thoughts we we have nothing
intelligent to say about the future the
only thing and I've as I've gone through
this thought process I've come back to
this observation the only intelligent
thing I can say right now in this moment
is that I don't want to die even one
layer above that I don't
know and that's never been the case uh
as a species we we've never had a wall
of fog right in front of us and so um I
think it's possible
that we could be steps away from the
most extraordinary existence to ever
happen in the
Galaxy that our Consciousness could be
more
expansive than we have imagination to
contemplate that requires us to sober up
a little bit and realize what's
happening right now and Rise Above
ourselves because
the way we do things now is probably
going to lead to to some undesirable
outcome if we don't resituate
ourselves on a new goal okay so I'll
call that human alignment how do we
align humans step number one you're
trying to get people excited you might
be able to live forever you're hoping
that causes a shift is that your only
card or is there another card to play
yeah so I'm suggesting that every single
one of us should become the problem and
so if I say that I don't want to die
individually that's what I built
blueprint for is to say how can I
scientifically not die as an individual
if I say how can we stop the Earth from
being uninhabitable so how do we
actually take on climate change now
typically the Mind goes to I'm going to
recycle my Amazon box I'm going to vote
for somebody who does this I'm going to
you know like these are the paths our mind
typically thinks in no one thinks I'm
going to become climate change and so
the way you do that is you realize that
we treat planet Earth the same way we
treat our bodies that relationship is
identical we pollute the Earth however
we want just like we pollute our bodies
however we want there's no constraints
on what we do and so if I'm going to
become the climate change problem I'm
going to adopt this uh no not die
infrastructure in my own life like I'm
going to try to go to zero death across
all aspects if I want to align with AI I
think of myself as 35 trillion cells and
so what I'm trying to say is uh yes I'll
finish that thought I'm 35 trillion
cells I need to get 35 trillion
intelligent agents to want one thing and
it's not have fun with friends it's not
meaning making it's not you know
whatever it's don't die I'm reduced to
one goal above all goals and so in that
way I'm acknowledging that I'm
one part in an 8 billion person game
and that there's power in each one of us
doing these things individually and that
prevents us from saying instead of
wanting something and taking action I'm
going to blame everyone else I'm going
to point my finger and say everyone else
is doing something wrong or they should
be doing this or that I'm trying to say
every single one of us owns this problem
that we have to solve don't die by
ourselves first and then collectively
so it's a mass call for all of us to be
that and so to also to answer your
question on don't die don't die is a
recipe book don't die is a medical
protocol don't die is an engineering
guide don't die is an economic plan don't
die is philosophical don't die is
religious like don't die is all things
it's applicable to every domain so if
you're an AI engineer and you're
building AI what are you building it to
do what are its attributes how do you
train it what's the feedback mechanism
who do you sell to what do they do with
it these are all very practical things
if you're a don't die-er building that
you've got guardrails now on what you're
going to do other than just
maximize how much money you're going to
make with this given
thing okay um the area that I am deeply
concerned by in that analysis is in
the idea of becoming the problem I'm not
sure you're
seeing um what actually drives human
beings so you talked about we have this
wall of fog and the only thing I know is
don't die everything even one step above
that doesn't make sense I'm not going to
be able to comment you might not have
wisdom to offer but there is a thing
that's driving you so this goes back to
my thing about you have evolutionarily
planted algorithms running in your mind
I don't think there's a way around that
and this is I think the the thing that
um is the problem to solve so when
people talk about Ai and they say the
problem to solve is alignment when I
think about humans the problem to solve
is everyone you included in my
estimation are slaves to those
algorithms so for instance you you are
saying that I don't think you would do
the things the algorithm was telling you
to do from diet sleep all of that if it
made you hate your life and feel worse
about yourself it makes you feel great
you say I've never been happier and you
say that as a way of saying see it works
you're not saying um it's just about
the clock I'm miserable I hate it but I
can see that my aging clock has slowed
down and therefore I'm going to keep
doing it it's I'm happier I feel better
I sleep better I have better cognition
that is a response not to truth that is
a response to the evolutionary algorithm
running in your brain makes thinking
clearly and being well rested feel
awesome and so I would say you're
pursuing feeling awesome so when I think
about becoming climate change for
instance people pollute because of the
tragedy of the commons it is
exactly the same as money
printing the government prints money
because it socializes losses it says hey
everybody we're all going to share we're
all going to get diluted by printing
extra money so no one class or group
uh me as a giant corporation I'm pumping
this into the river because the river
carries it away and so now it's like huh
it's all of our problem everybody
has to worry about it so it's not just me
anymore I still have to deal
with it but it's now manageable for me
because I'm only one of all of us that
has to deal with that and so I have a
profit motive and a distribution of pain
and suffering that incentivizes me to do
that
so that to me those are algorithms at
work for better or worse the only way to
get so far the only way to get humans to
act as a collective one we have an
evolutionary algorithm as a social
species so there is massive amounts of
cooperation no doubt but we need a
religious element to really get us there so I
get why zero or uh don't die has that
element to it that tends to take a long
time and it's only ingroup and religions
kill a lot um so that becomes the thing
that I worry about is what you end up
seeing is to get something done at the
population
level an authority has to come in and
force it upon you and there was an
awesome meme going around it says what
um authoritarian leaders claim to be
doing and it shows the Care Bears and
like love shooting out of their hearts
and it says what they're actually doing
and it shows a real photo of a young
woman being um she's kneeling facing a
wall with her hands behind her head and
somebody has an AK-47 pointed at the
back of her head now for fans of History
you will know that I can't fathom how
many millions of people have died in
that kind of state sponsored government
violence uh you need look no farther
than Pax Mongolica for just like an
unimaginable amount of brutality or hey
go back to Hitler anyone heard of him so uh
all of this stuff goes just horrifically
wrong because of the algorithms running
in the human mind
yeah Crossing that
Chasm
seems impossible if I'm honest like I
don't know how we use this will be good
for you as a way to get everybody to
fall in line yeah yeah I hear you on it
feels
impossible and I guess the question is
what could possibly happen to make that
impossible
possible I have one answer yeah which
you and I were talking about this off
camera you and I approach this very
differently my thing is I like to show
people how I think even though I know
no one should listen to me on how to
align AI I just do not know enough
about the problem however it's fun to
walk through here's how I try to think
up from first principles so I may change
my mind in an hour and realize that
there was something I missed but on AI
alignment you have to get AI aligned the
only way I can see to do that is to make
AI completely agnostic to outcome it
cannot want something because the moment
it wants life over death the moment it
wants to make sure that you get the
outcome that you asked for now all of a
sudden you get into deranging territory
where you get the paperclip maximizer
problem you ask the AI with all the good
intentions in the world hey make us more
paper clips become efficient at making
paper clips and then it suddenly looks
at you and goes oh man the atoms in your
body would be way better more useful
configured as paper clips uh so I think
that problem is so pernicious the only
thing that you can do is make AI
completely agnostic such that when asked
to stop it will stop immediately now
does
that carry
some fourth fifth order consequence that
I'm not looking at almost certainly but
that is me doing my best to
think up from first principles anything that wants
something if its sole reason
for existing is to optimize for that then
I know eventually I become an anthill in
the way of it reaching its
goal and so it will just become if it's
a billion times smarter than us it just
will be so
indifferent and it will know what you
want doesn't really
matter because I have to think as the AI
I have to think population level M so
anyway a kill switch is basically my
punchline that is designed to do no harm
that it doesn't want one thing over
another when it's told to stop it stops
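the kill-switch property Tom describes can be made concrete with a minimal toy sketch (my own illustration, not anyone's actual proposal): the agent's loop checks an operator-owned stop signal before every action, and nothing in its objective rewards resisting or anticipating shutdown

```python
import threading

class CorrigibleAgent:
    """Toy sketch of the 'when it's told to stop it stops' idea:
    the agent has no objective term for staying alive, and checks
    an external stop signal before every step, halting immediately
    when it is set."""

    def __init__(self):
        self.stop_flag = threading.Event()  # the operator-owned kill switch

    def stop(self):
        # operator-controlled; the agent never clears this flag itself
        self.stop_flag.set()

    def run(self, task_steps):
        completed = []
        for step in task_steps:
            if self.stop_flag.is_set():
                break  # indifferent to interruption: no penalty, no resistance
            completed.append(step())
        return completed

agent = CorrigibleAgent()
steps = [lambda i=i: i * i for i in range(5)]
agent.stop()               # told to stop before doing anything
print(agent.run(steps))    # → []
```

the key design choice in the sketch is that the stop check lives outside anything the agent optimizes, so stopping is never weighed against the task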
yeah yeah this is I guess if I if I try
to piece certain things together in my
mind to make sense at this moment
because everything you said they're good
arguments they're reasonable arguments
they they're based upon this reality
when I read history and I look back
at momentous times
almost always the solution to these
problems were unimaginable to those who
existed in that
time it it just came from nowhere that's
happened repeatedly can you give us some
examples uh like um so
discovering that microscopic objects
that were beyond the resolution of the
eyes these things called germs were
responsible for infection and death and
that simple things like washing one's
hands and cleaning instruments between
surgeries would lessen that death rate
lower mortality and increase
lifespans the idea that microscopic
objects could be an influential thing in
our lives was
absurd absolute
bonkers turned out to be
true and that's just happened throughout
history and so in this moment I think
it's reasonable to say just like
history has been full of these moments
that the solution to our problems is
probably unimaginable to us
currently you know no one thought that
when when in early New York when horses
were the primary mode of transport and
manure was the biggest problem in New
York polluting the Hudson and making
everyone
sick no one thought the Model T would be
the solution to horse
manure it just wasn't on people's minds
and so if I stack this together and say
okay um we may not know the solution and
all of us can make really compelling
arguments about this moment if we also
say it's possible that our intelligence
relative to AI in the very near future
that we are like Homo erectus absolutely
primitive in our
cognition and we start layering those
two on top now we have to now we can say
maybe it's a
case the solution is
unimaginable maybe it's a case that we
are as primitive as cavemen in our own
ability to contemplate the future then
what do we say like what is what what
can we possibly say that is intelligent
if those are true and this is where I
come back to don't die
our we are currently transitioning as a
species humans are the stewards of all
knowledge we have discovered things
we've acquired knowledge as a
species we learn it we memorize it we
regurgitate it we reinforce it with each
other we own all knowledge AI is now
becoming the steward of all knowledge it
is much better at discovering in
many places and becoming better every
day and it's much better at maintaining
that knowledge than any one of us
individually we can query that knowledge
base but we're no longer the stewards AI
is becoming the steward so we don't even
have that as our Authority anymore it's
going to be soon passed on somewhere
else now if in that
situation what is the manifestation of
our intelligence and that's where I
arrive at this conclusion is in this
moment I'm much better off saying yes to
an algorithm that improves my being at
the speed of an
iPhone I want the algorithm to be
pulling from Discovery and from insight
and improving me at this rapid rate
rather than me being this silly form of
intelligence making the same mistakes
again and again and again now I realize
that we have fear of uh government and
power and people that's all true and I'm
suggesting as a species we are not
trustworthy we know this to be true this
is not something that should be taken as
an offense I'm not trustworthy you're
not trustworthy governments are not
trustworthy no one's trustworthy and we
shouldn't trust anyone we should have a
new form of societal structures that
does not require this trust
because we're just not we're not worthy
of it in this in this moment there's a
better form of intelligence now ai has
its limitations it needs to grow and
mature it's got its own problems
certainly so both forms of intelligence
have the strengths and weaknesses and
I'm suggesting if you look at the two
forms of intelligence it's much more
reasonable to bet on the Improvement of
AI than it is the Improvement of The
Human Condition it's a much better bet
and if I'm going to look at one of those
forms of intelligence is which one takes
us into this extraordinary future I'd
much rather bet on this one as being the
the one that is able to help us mature
beyond our current primitive
selves okay so if those are the two
paths you see before us uh AI helps us
evolve into what we can become or humans
have to get themselves there which do
you think I would bet on it probably
depends on whether you perceive yourself to
be in control of the algorithm or
whether you perceive another entity to
be uh in control of the algorithm I
think if you perceived the algorithm to
be within your control that is solely
after your own benefit and acting as
your in your best interest and also as a
guardian of your best interest in
relation to others I think you'd say yes
if you felt like the algorithm was
influenced by another form of power that
could somehow hurt you for their benefit
you would say no that's really helpful
to see how you see my perspective um
that's very close so what I would say is
it seems
self-evident that the AI is our only
hope of us transcending human nature
we've seen human nature play out over
the last how many thousands of years so
I think we can feel good looking forward
that we know what that looks like
barring something like uh and we
actually have never killed ourselves
despite the radical changes in climate
that we've already lived through uh
despite now us messing with the climate
on top of the fact that the climate is
already just unstable over long enough
periods of time asteroids on and on and
on uh so yes AI I think is your only
hope of transcending that muck if you
love coffee and a little bit of caffeine
but hate the Jitters and that afternoon
crash that comes with it there is
finally a coffee replacement that you've
got to try it's from Peak and it's
called Nanda Nanda is made from the
highest quality ingredients and claims
to activate your metabolism promote
healthy levels of testosterone
production and provide sustained energy
without the Jitters or crash with slow
release caffeine that comes from
fermented probiotic teas Nanda delivers
an energy boost that lasts throughout
the day so you can stay focused and
productive Peak has well over
15,000 five-star reviews and for a
limited time you can get up to 15% off
plus a free rechargeable frother and cup
with my link Peak
life.com impact click the link in the
description or go directly to Peak
life.com impact to get up to 15% off
plus two free gifts
now I also think that humans
have if it isn't basically
zero% like a very low true existential risk because
even if we nuked each other
back 10,000 years which would be
horrific I don't think that it would be
existential not to downplay it it's
something worthy of dramatic concern but
I'll put us is like low probability very
low probability of a true existential
threat AI on the other hand is is either
going to be Utopia or just
total existentially wiping us from the
face of the Earth or from the universe
uh so that's the one where I'm like okay
we have to be really careful of this
thing that we're playing with so I I ask
you that question only so that you know
as I lay out the the following argument
that it is we both agree that AI is our
only shot of overcoming what I will call
um we're humans we we're animals we are
very encumbered by The evolutionary
things that run in our brain as evidenced
by obesity right it's just the most basic
thing we can't escape yeah so okay we
want AI to work but the bad news is that
it is going to be I would say near
impossible to get humans to use it well
to not use it as a weapon to not just
make it another thing that we add to the
Arsenal of problems and so what I'm
trying to figure out is what we're going
to do to cross the chasm so let me ask you
what I think is a very interesting
thought experiment that will help map
out um maybe our value systems not sure
exactly what this is
going to reveal but it will
help let's Flash Forward seven
years AI is really taking off we
certainly have general intelligence we
may have the Bottom Rung of super
intelligence it's changed your life in
ways that are already unimaginable um
the don't die religion is really taking
off and everybody's lives that are in
that religion are better let's say it's
the fastest growing religion of all time
and then there's Tom Bilyeu and he just
won't shut the [ __ ] up about AI
dangers and I'm slowing the adoption of
don't die and the AI says Brian
listen love Tom great
kid bad
news he is causing I mean I just know
the math he's going to cause uh 42
million. 7842 13 million deaths yeah um
just because he's he's going to delay
this by 72 days and that's how many
people we're going to lose um so I'm
just going to need you to kill him would
you I I'll take um a
small detour around the thought
experiment oh come on Johnson I'll come
back to it okay so if you look at the
history of trading initially we're talking
Silk Roads here uh yeah just like you
you know people exchange objects you
know they trade these physical objects
then you introduce currency then
you know going forward and
now you've got algorithms running you
know
high-speed machine learning algorithms
trading stocks at time scales that we
humans can't they have access to more
data uh they can make decisions faster
than humans a significant portion of our
economy is run by
algorithms that same process is going to
happen with our own health and wellness
and even our own cognition and So
currently we have biological processes
happening at multiple time scales within
our body proteins and genes and blood
and delivery of nutrients all these
things and um we operate at a certain
time scale on reality as we get better
with
technology to have it inside our body
and outside our body our bodies and our
health and wellness is going to be like
a high frequency trading stock market
you're going to have things doing
protein folding and misfolding
exchange it's going to be real time and
it's going to be faster than our
cognition and so these algorithms are
going to do this and you Tom are going
to say yes to these things because it's
going to allow you to stay alive so
you're going to have these algorithms
that increasingly take better care
of you than you can yourself now as part
of that it's probably going to help you
sleep and it's probably going to help
you focus and it's probably going to
help you problem solve and at some point
the line between your thought and the
algorithm thought is probably going to
be
blurred because you're becoming the
synthesis of all these different systems
and so your thought experiment
assumed that we've moved forward in the
future but you are the exact same person
as you are now with the same cognition
the same proclivities the same perceived
Free Will and I don't know if that's
fair that's the same thing that Blade
Runner did Blade Runner marches forward
all the way I think it's 2049 or
something like that yeah everything is
futuristic the droids are the
environment is except for humans human
behavior human thought and Human Action
so that's the thing moviemakers have
to do because otherwise it's not
relatable you can't make an
unintelligible unintelligent creature and
make it be a big hit and so your thought
experiment assumes these things stay the
same about you but in that moment I
would imagine your cognition is
unimaginable to your stance right now
there's no way you could occupy your
mind state in seven years from now when
you're now in this mesh of
technology and in this bigger web I'm
suggesting that we do goal alignment
today in society like we have rules like
you can't be violent you know if you do
you're going to have to pay a penalty
and you're probably going to jail so we
have these rules these
guardrails our systems of goal alignment
are going to get more robust to a point
where maybe nobody can hurt anyone under
any circumstance that's just how the
trajectory of our computational goal
alignment
works and so we can't hurt ourselves we
can't hurt each other and in that moment
so this right now this thought
experiment people will be like that
sounds [ __ ] dystopic but when we
arrive there we'll be like this makes
total sense why would I want to commit
self harm why would I want to hurt
someone else and you just eliminate all
the problems of humans throughout
history this is going to sound
unimaginable to most everyone listening
to this
why couldn't it be the case if
we just walk down the natural steps of
our integration with
technology by definition we can't Define
what those mental States will be so I'm
neither right nor wrong it's just a
contemplation but what I'm suggesting is
that most likely the future is
unimaginable to us and anytime we
presuppose anything we are probably
wrong because we we can't imagine right
or wrong so in that scenario I don't
even know what to say because I don't
know how I would feel because I can't
presuppose I even know my disposition
towards you in seven years from now and
you can't imagine how you feel this is
what I'm saying it's like this is why I
try to reduce all of reality to the only
intelligent thing I can say which is
don't die I can literally say nothing
intelligent above that I'm absolutely
constrained as an intelligent being in this
moment I can run my mouth and say a
thousand things they're all foolish
they're all just this time and place
that over the period of time the 25th
Century will not care about it has no
meaning whatsoever okay uh then I'm
gonna have to trap you so I'm gonna ask
you a question I've heard you answer
before uh because here's what
I think the big problem is I have um I
have a base assumption that we as a
society the thing we're actually going
to have to deal with is the chasm where
the algorithms are headed in the right
direction but we need people to change
for the algorithms to keep moving in the
right direction that Chasm is where all
the action is and your answer at least
to that question is well you just have
to assume on the other side of that we
can't even imagine how we think so I'm
not even going to comment on it but we
are going to have to cross that Chasm
and from where I'm sitting and that is
that is what we have to think through
yeah okay so the question I'm going to
ask you to trap you in your own answers
uh I can't wait is if the algorithm were
to whisper in your ear uh people
shouldn't have
kids and remember yeah it knows what's
better for you for the planet for the
species everything and it just says yeah
don't have kids would you then not
have kids and tell other people don't
have kids yeah it depends on the
maturity of the algorithm okay sure yeah
so I know for example from building this
algorithm for the past three years with
my health and wellness we've built uh
solid intuition observing the accuracy
of this algorithm so you take a given
measurement of my body you look at the
scientific evidence you do the
intervention and then you measure again
and so we have been able to witness the
accuracy of this entire process again
and again and it is
superior to any doctor I've seen in the
same time period so in this moment if an
algorithm were to make a recommendation
on my health and wellness I would trust
the algorithm over any human
doctor so in that regard yes it's there
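[The closed loop Johnson describes — take a measurement, apply the evidence-backed intervention, re-measure, and watch the accuracy over repeated cycles — can be sketched roughly like this; every name and number here is illustrative, not from the actual Blueprint system:]

```python
# Rough sketch of the measure -> intervene -> measure loop described above.
# All function names and values are invented for illustration.

def run_protocol_cycle(baseline, intervention, measure, rounds=3):
    """Apply an intervention repeatedly and record each measured result."""
    history = [baseline]
    value = baseline
    for _ in range(rounds):
        value = intervention(value)   # do the evidence-backed intervention
        value = measure(value)        # re-measure the biomarker afterward
        history.append(value)
    return history

# Toy example: an intervention nudging a biomarker toward a target of 100.
improve = lambda v: v + 0.5 * (100 - v)
noisy_measure = lambda v: round(v, 1)  # stand-in for a real measurement

print(run_protocol_cycle(70.0, improve, noisy_measure))
```

[The point of the loop is that accuracy is witnessed, not assumed: each cycle produces a new data point against which the intervention's claim can be checked.]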
now if you take it into other
applications you know like children uh I
I don't I don't have I have no data on
whether that would be an algorithm I
would trust or not so it reaches certain
thresholds like for example you know uh
if you're on a ship and you're using a a
depth finder on your ship that's pretty
good technology and you can probably trust
the depth finder is telling you the
proper depth and you're not going to run
ashore uh same thing with you know a
radar gun like is it clocking the speed
of a baseball or a car at the proper
speed yes so certain Technologies
certain levels of confidence we know how
to trust them so I'd say I look at the
confidence interval of whether that
thing is superior to human intelligence
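[The trust test described here — defer to a technology only once its observed accuracy clearly exceeds the human baseline — might be sketched as follows; the margin and the accuracy figures are invented for illustration:]

```python
# Sketch of the trust decision described above: defer to the algorithm only
# when its observed accuracy clearly beats the human baseline. The numbers
# and the 5% margin are made up for illustration.

def should_trust(algorithm_accuracy, human_accuracy, margin=0.05):
    """Trust the tool once it beats the human baseline by a clear margin."""
    return algorithm_accuracy >= human_accuracy + margin

# Depth finder vs. eyeballing the water: an easy call.
print(should_trust(algorithm_accuracy=0.99, human_accuracy=0.60))  # True
# A child-rearing oracle with no track record: no data, no trust.
print(should_trust(algorithm_accuracy=0.50, human_accuracy=0.55))  # False
```

[The threshold differs by domain — a depth finder has decades of observed reliability, a life-advice oracle has none, which is the distinction being drawn.]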
and this is the entirety of my argument
is that human intelligence has been the
superior form of intelligence and it is
now being surpassed by AI in more Fields
than ever contemplated which gives rise
to not to something scary but to the
opportunity that we can say amazing a
new version of intelligence is here it's
Superior to our own we're going to adopt
it and become a better species but to do
that we kind of have to be willing to
reconcile leaving behind the inferior
aspects of our Tech of our own selves
and that creates fear of loss it creates
fear of dystopia it creates all the
fears that we've been talking about so
it's a very scary process to go through
so I understand why
people uh grapple with these notions it's not
easy to say yes especially when we live
in a society that is fundamentally not
trustworthy in this
moment so the way that you answered the
question in the past and I'm certainly I
don't want to make you feel like you
have to be consistent with that if
you've changed your mind but what you
said in the past was yes I'm completely
open to rewriting the way that I think
about the world yeah um and this is
where this is part of that crossing the
chasm getting other people to adopt it I
think we are it's very much like the
trolley problem which seems like this
really abstract thing until you realize
you have to program self-driving cars to
make a decision do I kill a cat a baby a
human a pregnant woman like what it if
there are four corners that it could
choose but on each one of them is a
different thing that it's going to most
likely kill you have to tell it how to
make that decision right and we are
going to be in that zone of what do we
tell people in terms of how to get the
cultural you said we need a big cultural
change we're going to have to tell the
culture this is the new value system you
should adopt which is really what I'm
trying to figure out right now is what
your value system is so for instance
I'll I'll answer the question see how
this sits with you um I have a value
system that may be deranged to your
point I'm very open to overtime
realizing oh my God a better
intelligence comes along shows me that
this is just a dumb way to think about
it but I so value people being able to
make a
choice especially if they're
not doing anything sorry
they're not hurting anyone in fact this
is something that we have to Define you
said something that really freaked me
out a few minutes ago where you were
like we may get to the point where we
don't let each other hurt each other now
we're living through a moment where what
people are saying is is violence is
absurd yeah to me so now we're we're
seeing people um attacking speakers on
campus because they think that their
words are violence and that their words
are genocide I mean like crazy [ __ ] man
and they will actually harm that person
because they think that their words are
hurting other people now I disagree
aggressively with their assessment of
harm and hurt and violence and all that
so that that's where I'm like this
problem has to be dealt with this we're
watching humans right now in real time
grapple with this and so again I'm with
you on the when the AI is there and when
people adopt it now we've got something
amazing I want to get to that side of
the chasm I'm with you but to cross
that chasm uh we're going to have to Define
terms like that we're going to have to
decide if Tom is slowing down the
adoption of don't die and people the the
algorithm just runs the math and is like
I know to just a 100% degree of
certainty that this many extra people
will die because I can see the rate at
which he slows down the adoption of this
ideology and therefore killing that one
person just makes more sense than
killing the 47 million and
what I'm trying to get to is from my
value system you can't do that like even
though it's one life versus 47 million
with my value set because that person is
just saying what they believe to be true
and I value I mean I'll wrap it in free
speech I'm not a free speech absolutist
but I'm extraordinarily close I like the
way we say you can't incite violence I
think those things are good um but even
there you have to be careful like in
terms of giving people the power to say
well this does incite violence because
that person whatever simple thing from
my perspective they said that other
people think is is violence um
so that's my value set what I
get that you're saying on the other side
that you'll think completely differently
I would think completely differently and
all that but just from a value
standpoint today Brian Johnson what do
you think is a and I use the word ought
on purpose ought you kill me or ought
you let me
live uh first on your trolley
problem
again so we've
created this thought experiment that
basically
creates a conundrum for our current
morals and ethics there's no answer that
is satisfactory every answer is bad and
that's the value of the thought
experiment is you say we we have
technology and it basically leads to
loss in every direction because you
can't reconcile morals and ethics so
it's a really it's a really big problem
and so if you think about the future of
what might the future of don't die look
like you could say well Society is a
tapestry of goal alignment where it's
structured in a way where no moving
object can ever be at risk of killing
another being the physics of that thing
stopping
can never be in excess of its ability to
stop so by by definition human harm by a
moving object is impossible because of
the computational tapestry of sensors on
the human sensors on the machine the
physics of the stopping you know the
backup systems behind it now in that
regard you you have a solution that no
one thought of so it's like well we had
this thought experiment it broke
everyone's minds and then the solution
was actually you just solve it by
physics and so these are the kinds of
things where I'm saying that we we
stumble around with these things and
then we arrive in the future it's like
Ah that's pretty simple we solve that
and but oftentimes people just get stuck
in thinking we can't move forward
because this problem is unsolvable in
reality there's absolutely a solution
for it with with your thought experiment
on killing you or not and why why are
you making me a
murderer the the uh I mean first of all
do I even have authority to make this
decision in in the thought experiment
yes wait how did I get it uh well
because you are doing a very good job of
shepherding this movement into existence
um and the AI in this scenario doesn't
have opposable thumbs yet and so it
needs it needs a little human
intervention and because you so believe
in the accuracy of its assessment of the
number of people that will die based on
my words and my ability to persuade them
to a worse way of life uh it knows that
you would be um the most likely
candidate to do the right thing even
though it is very hard
yeah yeah I think I was talking to a
friend of mine the other day you're gonna hate
me for this
answer uh he builds AI
and we spent a few hours talking about
new mathematical models for goal
alignment so John Nash built the Nash
equilibrium you know with
Beautiful Mind exactly a gorgeous
mathematical
model on how humans negotiate and arrive
at Solutions and we were talking about
mathematical models on how you would
actually build alignment and I'm not a
mathematician he is and I found it to be
enlightening that the elegant ways in
which
math is a substrate of intelligence that
is superior to my own right wrong
cognition and in that
conversation I saw how limited my own
brain is because my brain wants to
structure things into good and bad it
wants to create tribes like this tribe
and that tribe it wants to create so my
mind has a a limited set of dimensional
structures on how it resolves
conflict and math is much more expansive
and even creative in its ability as a model
you know paired up with these
machine learning models and I walked
away from that
conversation feeling
Overjoyed that we could be leveling up
as a species in our own language of how
we understand each other how we resolve
our differences how we repair the things
that are broken we have such a limited
set I mean so many of our fights in
society are born from misunderstandings
of our own cognition my inability to
understand you and your inability to
understand me and our ability to
construct language that is conciliatory
even when we're trying our best we just
can't quite inhabit each other's
perspectives and I think these new
mathematical models could that's what
gives me hope for the
future and so in this scenario you
know the reality you occupy in this
future time State and the reality I
occupy in my
state you know there's like a reality
which we perceive there's a reality
which is running the computational
substrate beneath
us do they are they the same thing are
they different things is my perceived
reality the same as my lived reality
I don't know like we kind of get into
some really fun territory of what is
reality what is perception what is my
lived experience what is my real
experience and this is where I think
we're going we we we are that primitive
in contemplating this future we're that
blind to it
and so every time I come up my in my own
thinking of a limitation of some
unsolvable problem I just have to give
myself a reminder a few of these
examples of how primitive I am like oh
yeah I'm Homo erectus like we're
definitely walking into this much more
elaborate and beautiful way of being so
I think uh if if it's the scenario Tom
where you had those thoughts and that
that was your honest reaction to reality
I would have confidence that we could
reconcile it and we'd find an outcome
it wouldn't require murder it wouldn't
require me making a decision we'd figure
it out like we would sort and we'd
probably have stability and like we
don't want to die maybe our versions of
different of don't die are different
that's fine but we can find a solution
here that's why the AI wants to deal
with you um I think you're eminently
reasonable uh while I am admittedly sad
uh that I I got such a nuanced answer
because I do I think ultimately we are
going to have to program these AIs AI is
good at optimization if you don't tell
it what to optimize for you will never
get there now I'm not privy to the math
that you're talking about but anyway I
love how nuanced you're being um I think
that's really important now the one
thing I will say though I think there is
a reality to be faced which is that
there are no Solutions there are only
tradeoffs and the thing I'm trying to
get out with my question I ended up
writing a few questions like this that I
wanted to ask like the algorithm tells
you to do this do you do it the
algorithm tells you to do that do you do
it um because it Maps out what we will
ask the AI to optimize for it Maps out
our value system it Maps out and I get
it you're saying look the AIS are going
to change us they're going to change the
way we think and I think all of that's
true but that will not remove the
reality that everything is a trade-off I
wrote a comic book called neon
future um and I want to play this clip
back in like seven years because this is
what's going to happen I am certain of
it okay the world's going to bifurcate
and there will be people that adopt the
algorithms and there will be people that
flat reject it plain and simple and
you're going to see like a pure human
movement it'll be something like that
and there will be extreme conflict
between the two groups and what I I was
contemplating at the time was uh people
are going to start actually putting
technology inside their body and there's
going to be a violent reaction against
that a very religious tribal reaction
yeah
and
we were playing with AI in it I wish
had made AI like the central Focus it's
more of a focus on people realizing in
the early episodes for something that we
haven't revealed yet but anyway um given
that that from where I'm sitting is
inevitable it is The Human Condition
it's just the way the people work Ray
Bradbury's Fahrenheit
451 uh you have the people that live off
in the woods because they're just not
willing to give up reading that was the
first time that that really put on my
mind oh yeah like people will create
these alternate societies
um I I have yet to see on planet Earth
with humans
where two different tribes decide to
just let the other one live in
peace okay so let's pose this question
to
Homo erectus the species that came before us uh
is Homo erectus
capable of
Imagining the presence of 8 billion
people on this
planet and you know several billion of
them cooperating on a shared
infrastructure the internet every
day and like could Homo erectus just
imagine that kind of scale of human
cooperation as a structure where there's
still conflict so I agree with you if
you map today's reality to the Future
you probably you reasonably conclude
those things that's algorithmic that's
how humans have always behaved the the
difference here is
that the intelligence we're giving birth
to will be so much more intelligent than
us
it's very hard to model that
out um I agree with your premise though
that conflict is inevitable that we're
going to go through a rough patch as a
species and it's an open question
whether we make it
through it's doubtful that it's going to
be Kumbaya so yeah it's it's hard for me
to to offer up a
tangible rebuttal to that
uh it seems likely but at the same time
uh you know I mean like okay so like if
we give this a thought give it a minute
to breathe like let's imagine we're
giving birth to
um super intelligence and we find out
that we can dramatically slow the speed
of aging and not and almost reverse it
so people are basically achieving
immortality or perceived immortality or
like some radically extended life like
something that that changes this the
frame from Death being inevitable to us
having some unknown time
Horizon
and so if you lean into the systems you
get access to this and just say it's
accessible if you don't you want to go
out and live in the forest and die like
a homo sapien that's your
prerogative uh and that simultaneous to
this being possible that the authority
structures of society
of our ability to do Mass harm to each
other are arrested that basically the
majority let's just say I'm just making
stuff up like the majority of our
systems right
now uh have computational control over
access to uh Annihilation level weapons
and let's imagine that our computational
systems arrest those abilities in some
form or fashion now again this is sci-fi
this is like a movie plot this is right
these these are things all very hard to
believe and In This Moment uh in any
other moment it would be reasonable for
someone to say that's silly to make that
contemplation process it's not and this
in fact is like the
Genesis of blueprint is I was I was
posing this question what would the 25th
Century respect what what is seemingly
impossible and are we blinded to in this
moment and so
I'm I am open-minded to The Impossible
being the most likely
scenario in the future I don't I don't
think actually that's okay so I I assume
the impossible is much more probable
than the likely there we go that's what
I would say on this one okay uh let me
say that back in different words and
make sure that I understand it given the
power of AI to upgrade our thinking and
to and we're now going to have to define
zeroth principle thinking it's AI's
ability to pull forth something we've
never thought of yeah Lee Sedol when he
played the AI that beat him at Go said
it plays like an alien yes and that
always gave me the chills like that
sense of whoa It's approaching the game
from an angle so
oblique that even though this game's
been played for a thousand years and the
most brilliant human Minds have all gone
into this game and played it none of
them approached the game from such an
unexpected
angle uh so you walked us through some
zero principle things earlier but that
idea that the unknown unknown will
become known yeah thanks to AI okay
so given that that reality is inevitable
it is far more likely that we do what we
now think of as impossible than that the
human algorithm that we've all been
dealing with for
Millennia is the thing that has enough
energy to push
forward it's clear yeah I want to
believe it I still think that we have to
cross a Chasm I still think that that's
a bloody nightmare but that's a very
clear statement yeah okay can you walk
us through please zeroth principle thinking I pride
myself on thinking from first principles
um whatever success I've had in life is
brought to you by first principles
thinking yeah um not getting trapped in
well this is how it's always been done
and therefore that's how we do it it it
really is powerful yeah um but zeroth
thinking which I hate that that's the
real way to say that by the way zeroth
zeroth yeah um is three syllables yeah I
don't know it just feels weird but
anyway uh what is it and can we do it or
is it reserved for AI yeah all of us are
capable of zeroth principle thinking only
a few humans have ever have ever
achieved
it so um the Earth is not the center of
the universe it orbits a sun in a larger
Galaxy is a zeroth principle
understanding from where we were at
where the center the Earth is the center
of the universe
do you use first principles
to get to a zeroth principle Insight
sometimes yeah sometimes it's entirely a
new primitive like Einstein's discovery
of special theory of relativity that
there's a new dimension building on
Newtonian physics it's hard to get there
through first principles thinking or you
know uh from Euclid's Elements to Cartesian
geometry so sometimes these ones to
zeros are very hard to get to you just
you're plucking them out of another
dimension
so I mean like with Go you know could
humans have discovered the
moves that AlphaGo played probably
did they no was AlphaGo able to
discover more game-changing moves in the
game of Go faster than any human yes you
know like a stunning degree and so yeah
first principles thinking is you learn
everything you can you make the fewest
number of assumptions given a time frame
and you act with that knowledge and so
that's how you go from version one to
version two to version three it's a
systematic process to improve based upon
what you
know zeroth principle thinking you're
working with unknown unknowns so you
don't know what you're missing
so just like if you're a physicist and
you're working in Newtonian physics you
don't know another dimension exists in
the model of physics until Einstein
discovers it and rebuilds our
entire model of physics if you're a
doctor in the
1870s and your patients are dying at a
high
rate you through first principal
thinking you're not discovering these
microscopic objects called germs causing
infection for you to discover there's this
microscopic world of germs is a zeroth
principle insight and so typically these
kinds of insights have come every
you know few centuries or a century
or a few decades like they've been
pretty sporadic throughout human
history and AI is a zero
manufacturer it manufactures zeroth
principle insights and so humans
were good at first principle thinking
every once in a while we do a zeroth
principle Insight you bring AI online
and it just starts manufacturing zeros
and what happens when zeros hit Society
is it changes our understanding
of reality and we humans have a natural
time to adopt new understandings of
reality we're not very fast at it
sometimes it takes a lifetime sometimes
a few lifetimes sometimes a few a few
generations and so what the observation
is on zeroth principle thinking is that
AI is going to move Society forward at a
speed that will be unintelligible to us
and be faster than we can imagine and
will uh basically make us dizzy
and
nauseous because we won't be able to
keep up with change and so this goes
back to this idea that we are the
stewards of knowledge we are first
principles thinking species zeros come
we incorporate it into our knowledge we
rebuild reality but we get ourselves
resituated for the next insight as AI
drops more and more zeros it becomes not
only the steward of first principle
thinking but of the new zero
knowledge and of building new knowledge on
top of that and so we're moving from a
species of knowing to not
knowing that is such a
significant change in our understanding
of reality because you we our existence
is premised on us knowing every time we
talk we presume we know or we're
offering something we know or we're
sharing
knowledge and we're moving to a
state where we don't know we've got to
query these intelligent systems to know
and yeah so zeroth principle Insight
I personally was
struggling I was obsessed about thinking
about the future of existence and I I
saw in my own mind I would walk up to
the frontier and go as far as I could
and then I hit a wall of fog and my
intelligence couldn't punch past the
wall of fog and I got so frustrated with
my limitations of my own intelligence I
went to bed one night and I was thinking
how do I solve this problem and I had a
dream about zeroth principle thinking and
uh it was like one of one of the most
joyous moments of my entire life where
it just broke this for me that my
intelligence could push beyond the fog
through the framework of zeroth principle
thinking and then um yeah just it's
become a pillar of my understanding of
reality and much of what we've talked
about today is practicing zeroth principle
thinking of this idea
that uh the trolley problem or the
thought experiment of you know do we
kill Tom or not um first principles
would lead you to arrive at a certain
observation but zeroth principle
thinking takes you down an entirely
different path just opens up so much of
more optionality or the new math of goal
alignment that is a leveling up of our
cognition past our simplistic thinking
of good and bad and tribes and you know
the contracts we have right now okay so
so I wish I were smart enough to really
understand zeroth principle thinking um
because when I go to reach for it what I
come up against is I understand the
descriptions that Einstein gave about
his thought experiments that led him to
have the insights that he had but they
sound just like
intuition they they aren't I don't
understand the building blocks well
enough to say whether he was or wasn't
building from first principles like I
know these things to be true and
therefore if those are true the
following has to be true like quantum
physics is first principles thinking
from Einstein's equations now the great
irony is that Einstein himself did not
believe his own equations and he has
since been proven um incorrect on one
thing for sure so far uh which is our
universe is not locally real that's
crazy I don't want to derail the
conversation on that but I will just say
that is the one thing that makes me go
maybe we really are in a simulation um
but is so you said that zeroth principle
thinking takes you down a path can you
actually walk us down that path or is it
that it triggers an intuition and people
smart enough to have these kinds of
intuitions yeah I mean so if we just
there's a great book that it's my
favorite book uh Zero The Biography of a
Dangerous Idea uh so it's about zero and
the number zero the number zero yeah we
assume zero is obvious we have we've had
it our entire lives but that wasn't true
for most of human history the number
zero wasn't really needed you didn't
need zero fish or zero bread and so uh
zero had to be discovered and it
took humans hundreds of years to
understand zero as it relates to math as
it relates to philosophy as it relates
to art as it relates to every aspect of
society zero is the the number zero and
the uh concept of zero is
the biggest revolutionary to exist in
society it's not immediately uh
intuitive for people I saw the interview
that you did on
flagrant and you stopped pushing the
point you said I concede your point
because Andrew Schultz was like um look
people understand they don't have fish
like they get it they know they don't
have any you know whatever uh they're
not getting laid they have zero women
whatever it was he gave a bunch of
different examples and finally you were
like look I concede the point but I
still think the idea may be more radical
than that yes um in that yes people had
an idea of um not having a thing but
they didn't think of it as something you
could put in a
ledger is how I've always read it like
there's no way to use it in math kind of
thing
um why is it a dangerous idea why is it
one of the biggest breakthroughs ever I
actually don't understand I have not
read that book I mean so philosophically
zero was accepted in the East and
rejected in the west because in the East
their religious uh ideas were open to
nothingness and in the west that was the
most fearful thing to to think about
existentially or spiritually and so the
West rejected zero as a concept of
nothingness there's always something and
the East embraced it and so you divided
these religious uh spheres
in the Renaissance the vanishing point
that allows you to do Dimensions is a
zero you can't do dimensions in art
until you have a vanishing point right
and so it if you look throughout history
whether it be in philosophy whether it
be in mathematics whether it be in art
whether it be in the language of
computers of zeros and ones you can't
have the modern world without the number
zero
that's really interesting I'm going to
say it in another way tell me if you
think this gets at the heart of it um
because I can feel this be the kind of
thing that
would bring forth cries of heresy which
in a modern lens doesn't seem like a big
deal and then you flash back 500 years
and the guy you killed yeah um that
there is a difference between I don't
have any apples but apples still
exist versus nothingness yeah no
Universe no God nothing nothing nothing
that's really interesting I definitely
never thought of that before and even
going from like basic math like Euclid's
Elements to Cartesian geometry I don't
I'm too stupid in math I don't can you
walk me through the audience May tune
out but I'm actually really Keen yeah
there's several chapters there's
actually I think it's my favorite part
of the book there's several level UPS in
mathematical constructs because of zero
not worth getting into it now but math
hit certain points in history and then
they hit a wall and nobody could punch
through and you had these breakthroughs
of building our understanding of math
over time which has allowed us to do
these inconceivable things with
math today that previous generations
couldn't so I'd say it's worth a read
because again like we are born
into the world and we have these elegant
mathematical models at our disposal to
do all things but Humanity had to
struggle very hard to punch punch
through these barriers at certain times
and zero played a critical role okay um
so uh but the path of thinking in a
zeroth principle way what so in first
principles in fact maybe this is a good
place to start so here's how I think
from first principles
um how do I escape analogy how do I get
down to what is so if you think about
even just building up from the physical
world or like what is marketing
marketing is psychology oh [ __ ] so
marketing isn't funnels yeah marketing
is understanding the human mind wanting
them to do a thing and then getting them
to do that thing so now it's like oh I
could send an email I suppose or I could
go on social media right and so people
that understand that marketing is
psychology they don't get tricked into
thinking it's a phone call or it's a
billboard th those are just ways in
which you can reach the person whom
you're trying to influence but if this
is a game of influence now it becomes
anything could become a song it could be
getting a basketball player to wear a
pair of shoes I looking at you right and
so now all of a sudden when you're
thinking from first principles you're
not trapped by what came before you you
can actually lead because you understand
the essence of the thing at its most
fundamental level where again you're not
thinking from analogy okay so that's
first principles you just think up from
that what is this thing at its Essence
and then boom how can this play out okay
so if that's first principles what is
zeroth principle what's the path I walk
yeah I'll make this related to our
blueprint and our conversation on the
algorithm uh so so if you take the 13
colonies and America it was run by the
British Monarchy and so from a first
principles perspective if the 13
colonies are
unhappy with the Monarch you say first
principles how are we going to make the
Monarch better we need better
information flows we need better
decision- making we need committees we
need like whatever you can come up with
a whole long list of how do you be a
better Monarch to these 13 colonies and
the 13 colonies came up with and there's
the there's a gradient between one and
zero like some it's not that something
is an absolute zero and something is an
absolute one you can be somewhere in the
Spectrum the 13 colonies said we kind of
have this zeroth idea where our own
citizens are going to manage our
Affairs now we don't we can't fully
understand how Bonkers that
was because that you know monarchy was
the only way to to manage something like
the idea that you and your fellows are
going to run your own institution was
something no one had ever done before
and they did it and so that was a
radical departure from the monarchy so
first principle you just try to
perpetually improve the monarchy zero
you're like no no we're going to figure
out an entirely new form of governance
with these representation structures and
people voting and like all these
different things and we've been sorting
that problem for the past 200 years I
did the same thing with my body I said
okay if I want to be in Better Health I
can
read books listen to blogs I can you
know do this I can do that uh I can use
my mind to do it I can exercise
willpower if I want to eat a dessert I
can say I'm not going to do it or I'm
going to exercise every day that's uh
the Monarch running me and I said I'm
going to do a zero and I'm going to
build an algorithm that runs me because
it's going to be better at being me than
I am and so I flipped it on its head and
so that's such a radical departure
from what is and if you think about if
you listen to How Humans talk about
health Wellness decision-making
self-improvement anything we do as
humans it's entirely assumed that our
mind is the best form of intelligence
always we always know what we want to do
we always trust what we want to do we
always trust the decision and we always
have an excuse for why we did it and I
said that's out like my mind is an
inferior form of intelligence relative
to this new technology I built to take
better care of me than I can myself that's
a zero and then the same is true for
what I'm suggesting for the species
which is we've always thought our minds
were the best thing which has been true
it's the best thing we've had but now
we're on the precipice of something being
better and the idea that our minds
wouldn't be in charge of all things all
the time under any circumstance and then
we can tell stories about whatever we
did to make sense of it is Unthinkable
to people and this is what my
thought experiment provokes it
demonstrates the unthinkability of doing
anything different than what we have
right now and the most basic assumption
that we all have about life is that our
mind makes every decision in every
moment no matter what and the future of
our existence may be the exact opposite
of that thing which is which brings back
to the point of the future is most
likely the thing we cannot imagine and
and least likely to be the thing we
imagine okay very interesting I'm going
to see if there is a line between first
principles thinking and zeroth thinking
or if zeroth thinking is really first
principles thinking when it yields a
thing that is so radically
transformative that it feels alien just
to use that word okay so uh AI is going
to be a zero
Factory Lee Sedol going back to the Go
Champion that lost to the
AI um says playing this AI felt alien
now I have a feeling that what the AI
was doing and I I have not looked at
this so if you know it better please let
me know but I have a feeling um because
I know how the game played out that it
what the AI did was just something that
the human mind it can't do and it said
if I do this move there's a
0.012% edge over a 50% likelihood that I will
win this game and so it just Stacks
moves like that where there's just a
slight Advantage slight Advantage slight
advantage and because the human mind
can't daisy chain them like that yeah um
it goes for more blunt instruments if I
do this move you know my percentage of
winning goes up by a much larger
number that's easier for a human mind to
track because it isn't as many daisy-chained
items and
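the edge-stacking described here can be sketched in a few lines of Python — the numbers are hypothetical illustrations of the idea, not AlphaGo's actual values or search algorithm:

```python
# Toy sketch of the idea above: an engine that always takes whichever move
# gives the best estimated win probability, even when the edge is tiny,
# and daisy-chains those tiny edges over a whole game.
# All numbers here are made up for illustration.

def pick_move(win_prob_by_move):
    """Return the move whose estimated post-move win probability is highest."""
    return max(win_prob_by_move, key=win_prob_by_move.get)

# A human-style "blunt instrument" prefers one big visible jump; the engine
# happily takes a 0.1-point edge every turn and lets the edges compound.
p = 0.50                       # start from an even position
for _ in range(200):           # roughly the length of a long Go game
    p = min(1.0, p + 0.001)    # each move adds a barely-visible edge

print(pick_move({"big_jump": 0.55, "tiny_edge_chain": round(p, 2)}))
# -> tiny_edge_chain
```

the point of the sketch is that a long chain of barely perceptible edges (0.50 up to 0.70 over 200 moves) ends up beating the single large jump a human mind can comfortably track.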
so again the AI is using first
principles and what makes it feel alien
is that it's doing a thing that humans
can't do exactly um the invention of
zero is that something that is
truly unique I don't know that's where
my intellect begins to break down and I
can't tell um because if if I were to
walk away from here and I had to think
in a zeroth way yeah what I would do is
anchor on first principles thinking and
simply remind myself that there's an
answer that may be so unanchored to anything
I've ever thought of before um and hope
that that remembrance yeah allows for an
intuition and I use that word on purpose
um to come in that I might not have
otherwise had but I wouldn't know how to
force myself to invent a zero yeah yeah
it's and again it's uh it's helpful to
relax the definition between zero and
one because if you get too firmly caught
up in zero and one it gets confusing and
so there's like certain levels of
obviousness and so when I went through
after I sold Braintree
Venmo I was in search of a zero I
didn't know to use that name at the time
and so I convened a whole bunch of
dinners with my most capable friends and
I would pose a thought experiment so
we're in 2017 imagine we arrive at 2050
we're thriving as a species like we're
just blown away at how marvelous
existence is what did we focus on in
2017 to make 2050 so magnificent and
then I would just intently listen to
everyone's perspectives what were they
working on what did they believe what
were their models what were they
anticipating and I created a map of
basically all the knowledge of people
who I trusted and really respected and
then I said okay all of this is off
limits I can't do anything in this
circle what's outside the circle what is
no one focusing on what is the zero and
that's when I I determined I'm going to
solve death death individually death
collectively and if you solve
death you open up opportunity to some
infinite Horizon because if if the goal
of intelligence is continued existence
all the energies we previously allocated
towards deathlike activities get put
back into the system for expansive
activities that's when you have an
explosion a thriving of intelligence in
this part of the Galaxy and so for me it
was a fun journey to go from what can I
map to what doesn't exist to how to
identify that and then build out the
scaffolding to do it you mentioned the
Galaxy a few times do you think that
there is intelligent life are there
aliens Brian Johnson outside of our
galaxy you know like there's there is an
economy of podcasts that basically
speculate on this answer right like it's
there are thousands of people having
this
conversation if I add my words to this
conversation I'm just like thousands of
people if I stop myself and say I have
no [ __ ] idea I'm different and I
think I actually size up appropriately
my level of intelligence now I know that
people when they watch these
conversations are like goddamn that's
interesting what a fun perspective and
how amazing would it be there's like
sure like go have your fun do your thing
the role I'm playing in society I'm
trying to be I'm trying my very best to
be the voice of reason of wisdom and of
insight from the 25th Century I'm not
trying to play any game that's happening
today no conversation really interests
me the politics don't interest me
current dramas don't interest me none of
it does I want the species to thrive to
unimaginable levels that's the only
thing I'm trying to do and if I
otherwise go back you know I can't
distinguish my thought process from
anyone else's I'm just blurred in this
mesh of
opinion it's
interesting uh why do you prize being
different so much throughout
history I've observed that certain
people can identify impossibly hard to
see
things and see them through which become
normal for the
future and we live an existence that
benefits from all their
contributions I see myself in a chain of
people endeavoring to do that I may fail
but I am that's what I admire of people
in the past it's what I hope people in
the in in the future will admire of our
time and that's the role I see myself
trying to play it's really I mean this
is a thing I find interesting if you're
a if you're an
intelligent resourceful ambitious person
and you're trying to train your
attention on a given thing what do you
do and every Century has had its
examples of people who go through that
thought process and they say a given
thing
and I'm trying to raise to our
Collective awareness that this is our
shared
moment to push aside our
pettiness our primitive dispositions and
raise our sights to understand what's
right before us it's
extraordinary and like there's so many
powers that keep us glued to this
present moment and we blind ourselves to
these
opportunities but man Consciousness is
extraordinary I love to
exist so much I don't know what it's
like to be dead I can find out at some
future point right now the sacredness of
conscious existence is something we
should fight for with everything we have
and we're pretty fickle right now about
that and I think it's just a snapshot of
our time and place in the future I think
we will value Consciousness with a level
of sacredness and care that we can't
even contemplate it's I think it's
just uh I think that's where we will be
on how we understand our privilege of
being on this ball floating in space as
conscious
beings I so take for granted that being
dead is exactly
like it was before I was born do you
have an intuition that they are
different I mean that's zero right zero
is nothing
so we were a zero right the idea is you
become a zero and then at some time in
between zero and zero you're something
you said I have no idea what it's like
to be dead yeah which is a zero but
when you think about before you were
alive yeah that doesn't scratch
that itch for you it's unknowable to me
on on either side of the spectrum it's a
zero I mean zero is both zero is
nothingness and zero is infinity
zero is nothingness and zero is infinity
I don't know that I can track
that it's infinite in both in both
directions it's just
absence I mean you're going from zero to
positive you're going from zero to
negative zero is on the scale of minus infinity
to infinity I guess what I'm getting at is
you draw a distinction between the time
before you were alive which feels
palpable to me in its it feels palpable
and soothing in its just absence there
just is nothing yeah uh which look I get
I'm not a religious person and so uh
maybe some people find anxiety in that
the I I have tremendous anxiety around
leaving my wife alone through death I
have tremendous anxiety around dying
painfully that does not sound fun but I
have zero anxiety about not existing
none and the reason that I have zero
anxiety about no longer existing is the
just absolute sense of peace I get when
I think about what my life was like
before I was born yeah there's just
nothing yeah is there a difference for
you between before you were born and the
thought of dying yeah so let me
reframe it a different way so let's
imagine right now I'm speaking to Tom in
this exact moment in
2024 uh you are the Authority for you
now let's imagine in a different version
of reality where baby Tom from day zero
day one to your death let's imagine I'm
speaking to the collection of all those
Toms at the same time and I'm inquiring
of your opinion Tom and so you just just
play with me in imagination that you're
this emergent phenomenon of all these
Toms over that duration of time so all
those people's interests are represented
so it's hard for you let's imagine you
can live to the age of 142 that
Technologies get good enough or you can
do that let's now imagine that Tom at
140 and you died by an accident not
because of old age by 142 you're
actually younger than you are right now
now because of Technologies I love it
already so 141 Tom is now in this
conversation offering his perspective
about your wife about existing not
existing about new developments that
have happened in our understanding of
intelligence of existence
Etc if you can channel that thought
experiment you can realize how narrow
our contemplations are at any given
moment because it we can't reach into
the future and ask all the future
versions of ourselves their enlightened
dispositions and we forget the previous
versions of ourselves of their state of
beings and this is why it's so dangerous
for us in this moment this is like what
humans have always done but going
forward this is why it's so dangerous
for us to give ourselves too much
Authority at any given time when
death is
inevitable all you have is the
moment if death is not inevitable
you have the
expanse and when you have the expanse
there's more stakeholders than just you
in this moment that already seems true
to me in terms of I am trying to be kind
to My Future
Self uh I don't think about my past
selves a lot but I certainly try to be
kind to my future self I try to make
choices now where my knees aren't going
to regret it uh later or you know
whatever I'm I'm not going to mess
myself up in a thousand ways um you just
brought up the expanse so you said
something really early in the
conversation about you're very careful
to tell people this isn't about living
forever this is about don't die uh is
that because people have that negative
reaction to that because I want to talk
about the expanse like that's the thing
so um as somebody who just gave birth to
a religion Congratulations by the way uh
I think you have to you or someone in
the movement has to paint a picture of
what that looks like right so uh you
grew up Mormon and you said it was
actually a wonderful childhood follow
these rules and you get credits towards
this everlasting life and that was
really awesome that felt really good
super simple to follow and you got this
huge upside um I think people are going
to need at least in the early days
before they're really integrating with
the AI and it's changing the very way
they think uh they're going to need that
Vision what does living forever what
does that expanse look like to you yeah
there's a a few layers why I use the
negative because a very common reaction
to someone who hears don't die is they
say why the negative right like why not
the positive like live long live forever
live great whatever because when you
make the positive statement of live long
for example the person's going to
immediately say [ __ ] yeah I'm doing it
everyone does if you say don't
die you invite the person to collapse
everything they understand about reality
and have to rebuild from scratch you if
you take this the don't die admonition
seriously you have to reconstruct
everything about your existence you have
to reconcile that you believe death is
inevitable you have to believe that your
choice to engage in debauchery is living
life you have to right you everything is
about it's so people are not consciously
aware of How Deeply programmed death is
into their existence and therefore it
sits blindly so live long [ __ ] yeah they
just carry on with life and give it no
thought don't die they have to say I
either believe that we're heading
towards a path where we may not die or I
have to say I am going to die but
you create a fork for decision-making
and it creates a real psychological and
practical conundrum do you continue to
debauch or do you say I'm going to try this
new thing and so I love it that it it
cracks reality in a way that forces
people to reconcile their existence and
number two is don't die is omnipresent
practical you and I are doing don't die
right now we breathe every few seconds
whereas the future is so far away of you
know what is the expanse what is this
and what is that our minds immediately
discount it because we don't know if
it's true we don't know if it's is going
to arrive we think it's something of
sci-fi it may be a fun imagination but I
can't act on that now and so we do
future discounting where it's like I'm
just going to have a beer and chill and
so it's not actionable it's not on the
present and so don't die
creates a crack where they have to
reconcile existence and immediately
actionable one second later for the
decision they make
next okay but the expanse what about the
expans so the the expanse um so there's
a few easy ones for us to
imagine if an algorithm has access to
our body to do readouts and it has the
ability to to deliver
interventions and our AI is you know now
doing scientific discovery at speeds
that we haven't been able to do it's
pretty easy to imagine us stepping into
a future where our body is doing repair
in real time and getting access to
Therapies in real time that we become
the Improvement rate of our technology
so just like we get new releases on our
our smartphones all the time we're
getting constant updates all the time
through our technology so you can
imagine something that's reasonable to
say we are in near perfect health all
the time that's just how our systems
run that way and then you can easily
take the next step and you say well if
our bodies are maintaining themselves in
their perfect health then you could easily
say now what does enhancement look like
yeah buddy right so that's very easy to
do uh we know where that goes and then
if you take the next step where you say
what kind of controls are we going to
have on our cognition well as a baseline
we have the ability to engineer atoms
and molecules and organism like we can
design the physical and biological
elements of our reality including our
conscious States like when we you know
when we drink alcohol or when we take a
psychedelic or when we fall in love
these are biochemical states that we
have we can start constructing
Consciousness and now when you pair
because it's just biochemical States and
when you pair AI in doing that where you
have this real-time feedback loop you
then have this really fun
question how big is
consciousness can you say how big what
do you mean like right now you know what
it's like to sit down and have a
conversation you know what it's like to
follow in love you know what it's like
to have a fight you know do you mean how
many variations are there of
interpreting stimulus
of versions of reality like you
start off as a kid and you go through
life and you have all these experiences
like they're slightly different right uh-huh
what is the what is the scale of
Consciousness like if you said it take
this back in time and you ask a human
how big is reality they can say well I
have five senses I can see I have
surroundings around me I can see there's
like something in the sky there's stars
like you're trying to size it up and
then we arrive at the modern day we
say actually the microscopic is near
infinite like when you go down to the
smallest things and then you say you go
out to the edge of the universe how big
can you go out and so then you create a
map and say how big and how small is
reality it's gigantic like so big it
boggles our mind so we've discovered
through the scientific
process that reality is really
really big we haven't had the tools to
play with our own consciousness so if
you were to pose the same question like
right now I know what it's like to feel
drunk and to be happy and sad of
these certain varieties of fall in love
but how big is consciousness is it are
we going to have the same kind of
expanse so if our bodies are in perfect
health if we're now playing with
enhancement and now we're simply playing
games of
cognition could we walk into a scenario
where our minds
are
absolutely unimaginably different than
what we have right now like so far
removed from our current state of
consciousness that we are unrecognizable
creatures I don't know why that wouldn't
be
possible in the same way like Homo
erectus had some form of
cognition it had some similarities to us
but definitely not exactly like us and
not with the expansive tools we have of
what we can do today so we've been on
this evolutionary track and now we have
much more powerful tools to expand our
conscious States and so in the same way
like when someone does a
psychedelic they'll report that you know
they they experienced dimensions and
experiences and possibilities they never
thought
possible can I um
so there's a quote you actually have it
in your book I think his name is
Frederik Pohl uh and he said the job of
a science fiction writer is not to
predict the automobile but instead to
predict the traffic jam right let me
predict a traffic jam tell me what you
think about this so uh I have never
thought about how big Consciousness
could be that is a really interesting
idea my head goes somewhere dark
immediately um so I am really entranced
by the hypothesis that we could be
living in a simulation and my the first
question I had though was why would
somebody simulate something that takes
so much time and then uh the answer was
well what if all of this we have a
perception of time that's you know a day
is a day and we all know what that feels
like but if uh a million years to the
person watching the simulation only
takes a week now it's like you wake up
every day oh my god look nuclear weapons
how cool uh and it becomes a very
meaningful simulation for them to run if
the scales of time are dramatically
different and you'll hear people talk
about I was in surgery and it felt like
I was there for three years or whatever
um one of the most terrifying things I
ever heard about the future of digital
worlds is that you would have hackers
that would hack somebody's mind and slow
their sense of time down so that they
feel like they've been trapped for a
thousand years yeah and you'll see that
pop up every now and then in sci-fi uh
but that is really really interesting if
you were going to predict a traffic jam
which I know you're going to be hesitant
to do because of course you can't
accurately do it I get it I get it uh
but it is so interesting are there
traffic jams that you've predicted I
think the most motivating goal we could
have as a
species is consciousness
exploration and the reason why I think
that is most of us do that right now we
do that when we seek out novel
experience we do that when we learn new
things we do that when we have children
we're experiencing new things we're
exploring new feelings emotions
intellect like we're all of us are
basically experiencing the conscious
mind and some of us do it more actively
than others but that's what we're
basically doing when we watch
entertainment you know watch new movies
or or scrolling we're trying to explore
stimuli and currently uh that system is
set up as a drug it's drugged us into
this addictive closed-loop thing um in
other things we've imagined our greatest
Adventure is going to Mars so we've been
to the Moon we're saying we're going to
go one step further and go to this
place um Consciousness is not yet on the
radar as something that we can explore
with the same
Vigor and so I would say the potential
traffic jam
is if we can sort through getting don't
die as the guiding philosophy
for the 21st century that we lock in as
a species and say let's just stay alive
and let's walk into this super
intelligent future together because it
may be pretty interesting to do and to
kill each other anymore would be really
primitive like that's not our thing and
so then let's say that super
intelligence is expanding at a certain
speed and we bump up against some
critical limitations of what our brains
are capable of so even if we do things
like we're playing with the biochemicals
even if we have access to all these
different molecules like I say we're
inventing brand new molecules we're
inventing brand new psychological States
we're inventing brand new appendages the
body like we're doing all these things
enhancement how big is that space and so
the traffic jam may be that we bump up
against Discovery limitations on a new
place to explore that satisfies our
wants so if we
say don't die then we're going to say we
want to play and then we say if we want
to play What's the the playground look
like and the playground needs to be
sufficiently interesting for us to play
do you know um there's a name for
it but like the fact that aliens have
never come despite how big it is oh God
do you remember the name of that
the Dark Forest no no but that's also
that's very very interesting
there's a name for it the equation
as to why nobody or the paradox is it
Drake's equation anyway the um the fact
that we haven't found aliens despite the
absolute just sheer massiveness of the
universe uh and why is that one
hypothesis that I've had about that is
and I think this is actually true and
it's interesting that you bring up
exploring Consciousness is that I have a
feeling it is far easier and probably
ultimately more satisfying to collapse
inward inside the mind into virtual worlds
that are piped directly into our brains
than it is to go out and deal with
radiation and all that stuff and to have
to generate enough energy to open a
wormhole like it just seems like
civilizations that get sufficiently
Advanced would end up looking inwards
yeah agreed yeah like I think it's like
actually tangibly representative where
our brain is hidden on the other side of
our eyes M and so we look forward and we
see everything where we want to go we
forget what's right behind it and so I
think the the two things I'm personally
betting on that will be the exact
opposite of what most people think one
is I think that the next era of
exploration is not out it's
in and number two is it's not our minds
that will run reality it's the algorithm
and our minds will will sit in some
relationship to these
algorithms even though algorithms are
running us we may feel like it's us we
may not know the difference and so I I
had this experience where I was talking
to this woman when I was building kernel
who had a brain implant uh for her uh
her neural degenerative condition and
she had her cognitive experience before
she got the implant
and the implant changed her cognition
whoa and she spoke about it though that
um it was a unified thing that was her
it wasn't that the chip and the
algorithm was one thing and that she was
another she was it she was merged with
it whoa whoa whoa uh me and people
listening are like that sounds like the
technology is so much farther
advanced than I
realized what is the chip capable of
doing there's 100,000 people in the
world who have brain implants they've
been around yeah it's a it's a thing
that's been around for I think maybe two
decades and they do what uh most of the
implants are for Parkinson's it's you
know people have these uncontrollable
convulsions and it helps them steady
state and so it delivers the electrical
stimulation and so it brings them back
to you know totally or semi-normal
function and so yeah brain implants have
been going on for quite some time and
but when she says that it's changing her
cognition is it reading as uh the
Tremors felt like they were outside of
me but now that I have this thing inside
my brain I feel like I'm capable of
calming myself and that's where she
feels like it is her yeah she was
commenting that before she was in this
uncontrolled state where her body was
doing something that she didn't want it
to do that made living very hard she got
this implant and it changed her body to
perform as it normally would and she
didn't view that as other she viewed
that as self and so even though there
was an actual mechanical object
implanted in her brain doing the
stimulation it's a pretty crude
technology right like just doing
stimulation it's not like it's this
elaborate mesh in the Brain still for
her it was fully her conscious self
existing it wasn't like there was some
other intervention by a mechanical chip
doing a certain thing and so she
fully embraced the technology she became
became one with it and there was no
change to her conscious existence so
when I talk about this thing of like
blueprint running
me like actually making decisions for me
on my sleep and my diet and all these
things I do the same thing I don't see
it as other I just see it as
me and so it sounds dystopic to someone
else of like I don't want to do that
that sounds like the worst thing ever
it's a a knee-jerk reaction that we have
to new ideas but once you experience it
it's like oh of course like and I'm so
much better because of it and this is
why I am bullish on this situation of
that we run our mouths because we're
terrified of new ideas it threatens our
identities it gives us it creates loss
aversion where we want to hold tight to
our vices and how things are and we're
scared and we don't know what we're
getting in exchange or it's even scary
so everything about us wants to shut it
down as fast as possible it's very human
but you know when you get to the other
side of it it's like yep all
good how do you keep yourself so open to
new
ideas I
assume that everything my
mind responds to any given prompt anything
that my mind generates is probably wrong
so I watch the first three or four
thoughts land and I'll look at them and
say that's wrong in this Direction
that's wrong in that direction so it
takes my mind about five or six thoughts
to kind of get stabilized and then I'll
come up with something maybe interesting
but the first thought's always wrong
same with the second same with the third
the brain just vomits in
response and it's like a knee-jerk
reaction to a stimulus and so
when you say that something is wrong
compared to what do you have a like I
base everything off a goal so something
is right or wrong based on this is my
goal did it move me towards my goal if
yes it's right if no it's wrong yeah do
you have something like that that you
use well like the trolley
problem the trolley problem uh presents
a
quandary where there can be no correct
answer in our current framework of
morals and
ethics no one can make the argument that
you should kill the elder man over
the mother and a child you can't right I
mean if you say we're going to sacrifice
age for youth that's just not acceptable
to us and so in that moment um I have to
take stock that my brain has to wrestle
with this problem for a certain duration
of time and it's going to misfire
several times and so I just know it and
it's like every time I get to a new
idea I make a rule to myself I can't
express an opinion for at least the
first few thoughts because once you form
a
conclusion you're locked in and your
only incentive is to then justify why
you made that observation
and so I don't feel any need whatsoever
for internal consistency if I need to
contradict myself that's great like I
want to find where I've been wrong in
previous versions of myself and you'll
see like in this
conversation the majority of the
comments you know in the section
will be people's initial reaction
watching this they'll hear a first few
minutes and they'll say something like
this guy's a quack this guy's a cult
leader um this guy is not living life
this guy like they're they're so
predictable uh because that's the
knee-jerk reaction that a new idea
landed in their inbox and their mind
wanted to violently crush it and lower
my status and Power in the world by
insulting me in the comment section and
then others want to hit the thumb up
thumb up on the content because they
further want to harm
me and lower my status and my ability so
the insults would be pretty vicious and
they'll stack on each other there'll be
a few people who will
say maybe like maybe this guy has
something interesting to say I agree on
this I disagree on that like but the
comment section is like 95% predictable
to this kind of
conversation uh and just to recap
because they're afraid of new ideas they
shut it down like why want to lower your
power in the world is that just humans
yeah because they if this is why I don't
die if if don't
die is
possible it invites them to reconcile
their existence because everything they
do right now assumes death is inevitable
every Vice they have every life decision
they've made every thought process they
have assumes life is it is death is
inevitable if don't die is a reality
they have to reexamine their life and
that is
uncomfortable you've got to stare down
change you've got to stare down your
thought processes you have to stare down
your friends you have to stare down
everything you you are about yourself
which I think maps to why the reaction
to me has been so violent in the world
over the past year yeah it's been
hysterical it's totally hysterical
and it's predictable because it is so
uncomfortable to grapple with so people take
cheap shots at me which again is great
like I love it it's fine I'm very happy
about it it doesn't bother me one bit
it's algorithmic so I don't blame them
they're just dealing with the brains
they have and they're trusting that the
thoughts their brains are dropping are
you know the correct assessment and so
they're just going through their own
process it's fine but yeah it's really
is don't die is the most threatening
thing so it's ironic don't die is the
most threatening thing you can say to a
human in this moment that is hilarious
uh one of our mutual friends wanted me
to ask you in the face of all that
criticism how does it not bother you
emotionally oh I love it not not only
does it not bother me I have an absolute
love of it that seems impossible even
though I know your punchline on this it
just seems like do you use a negative
response as like a uh Jiu-Jitsu move to
remind yourself oh this is hilarious and
I'm looking at myself in the perspective
of the 21st century or when you read the
most vicious unending wall of [ __ ]
thrown at you your actual first
emotional response is this is
hilarious I legitimately am playing for
the respect of the 21st century I'm
playing for the most magnificent
existence in the Galaxy these ideas are
sufficiently powerful that it
evokes that reaction over that kind of
sustained time period it's an absolute
victory dance it's the best thing to
happen the worst thing would be
silence no one cares that is the
absolute worst that's what would be
death so that would be the most painful
thing to me because it means no one
cares it means that the ideas are wrong
it means somehow that the ideas won't
make an impact it means that somehow
this does not bridge to anyone's
reality and so this is why
I've had so many people approach me
about the haters because it bothers them
so badly it just it wrecks their day it
wrecks their life they can't even look
at the comment section it just ruins
them psychologically and these are
really smart capable people that are
doing things that have the power to
change the world and then they're
diminished by you know 20% 60% 80%
because of how beaten down they feel
from those comments they want the respect
but they can't get it and it's just
sad and so I've become like this
therapist of sort of walking people
through like how they can remap their
disposition to evaluate whether they're
getting
enough hate now you don't want to seek
it out and be silly you don't want to
seek it out just to create it you want
to tune your ideas and your project so
they're sufficiently Innovative that
they actually move the needle forward
and hate is a wonderful indicator on
this space you're creating for
Innovation it's almost like you're
playing on the zero to one scale like
you want to really move that
back and forth and so you want an
appropriate level of hate for anything
you're doing again like don't go out and
be an [ __ ] to create
hate create hate with the quality of the
idea how would you feel if the 21st
century clowned on you I give it my best
like I give it my best shot would that
sting though no like I am doing my very
best as an intelligent being I feel
primitive I can see the limitations of my
own intelligence I've given it my very
best and so if they clown on me I'm
satisfied I have genuinely given my
very best shot of everything inside of
me to be useful to intelligence
to be useful for intelligence that's
interesting um talk to me about
what's real today what's possible today
so you founded a company called OS Fund
I think uh you invested a hundred million
into the most cutting edge building
blocks building with atoms kind of stuff
um how much can we engineer our reality
today yeah
I think it's actually a more
revealing question to ask what can't we
do we're that far
along I don't have an interface with
that reality so yeah so help me
understand because that I want that to
be true the Sci-Fi writer in me wants
that to be true um don't get me wrong
the modern world is
unbelievable yeah
but okay so I'll give you uh a
couple practical examples um
I'll give you an example so your
left ear yeah your
left ear is at the age of a 62-year-old
if I remember correctly and you can't
find a way to fix it
yes yeah that's true that does not seem
like a reality that we can just engineer
our way out of oh I see yep um I mean if
you just break it down to First
principles so you ask how do auditory
signals enter the ear how do
those get converted into electrical
signals that the brain then understands
as
sound it's all right it's a very
understandable process mechanically
biologically like we know the mechanisms
and we know well we just we know exactly
how it works and we know how you would
need to repair it so it's just getting
to work on solving it now
there's a space where we don't know I guess
how you actually engineer a solution
is not known but in terms of can we
um can we construct mechanical objects
to perform the function yes can we
change biology to accommodate the
situation yes can we program genes for
a certain situation yes like we can do
all those things so it's just a matter
of time of applying our efforts to try
to solve it but it's not like we're up
against uh the speed of light where it's
like can you travel faster than the
speed of light physics currently doesn't
have an answer for that here we know
there's a solution we can
program it we just need some time to
figure it out and this is why I'm so
bullish on don't die we're not up
against the law of
physics we increasingly understand these
systems of biology we have the
mechanical physical world infrastructure
and the biology to do these solutions so and
then you bring AI online and now you've
got more and more intelligence all the
time coming into play these are solvable
things what's the most shocking thing
you've been able to repair on your body
that you were just like whoa is it uh
follistatin is that the banger or
is it something else yeah so I have
my first longevity gene therapy and what
this company did minicircle is gene
therapy is a thing that's going to break
through the ceiling of our 120ish
lifespans because you can only get so
far with sleep diet and exercise are you
reprogramming your genes no so what they
did is so typically you can change
your actual genes uh you can you know
use CRISPR or some other technology
with this one you're getting a plasmid
injected as the delivery vector and then
the protein follistatin just sets up shop
inside the nucleus and it produces more
follistatin for people that don't know the
myostatin gene controls the amount of
muscle that you have uh if you've ever
seen a double muscled cow you can do a
search for it uh it's
crazy bodybuilders some percentage of
them a huge number of them are going to
have myostatin deficiencies you will
starve to death if you have too much
muscle on you from an evolutionary
perspective and there's no CVS around the
corner uh so not putting on too much
muscle has been an evolutionary
challenge you had to solve for it that's
the Mya Gene follow STA and as far as I
understand it mitigates the dampening
effects of myostatin so that it's easier
to put on and maintain muscle yes yes
and yeah and a bunch of other stuff and
so what's cool about bunch of other
stuff that matters or like the what this
company is seeing from some of the
participants it's not these are not the
study results yet but from some of the
anecdotal reports they're seeing a slowing
of the speed of aging they're seeing
telomerase increase they're seeing telomeres
increase they're seeing epigenetic age
reduce they're seeing other effects so um
there's all kinds of benefits they're seeing
but the thing like even this therapy was
great but what I love about it is that
it introduces
lowcost therapy gene therapy for a whole
bunch of other applications so
take out follistatin and put in
uh Klotho or put in hTERT to extend telomeres
what uh hTERT so it's to
increase I heard something else it's to
increase the telomeres your endcaps on the
chromosomes okay but what I like about
it is it's a
gene therapy
delivery system that is safe it has a
kill switch and it can be applied to all
sorts of technologies and so what I
like is it introduces the idea that we
could now routinely get gene therapy for
a variety of things
testosterone brain function muscle like
you name it we can play with it and so
once you get these platforms built it
just opens up this new possibility
and so what I like about the field is
we're now getting to a point with these
therapies where we're starting to really
push up over and outside of diet
sleep and exercise or reprogramming with the
Yamanaka factors so it's getting really
close and so we haven't yet had a moment
in longevity where it's like that's it
like we just did The Impossible thing in
a human and these results are
incontrovertible and so prior to getting
there I'm suggesting
what I'm trying to say is we don't
need to prove radical life extension
before fully adopting don't die we can
do them at the same time it's not
contingent but yeah I mean that the
stuff we're seeing in the field right
now just last
month there was a paper where a company
did some cell reprogramming they took a
mouse that was 124 weeks it was going to
die at 128 weeks it extended its life
by
109% uh with this yeah in its 124th
week uh with the cell reprogramming so
they're taking like so wait uh maybe I'm
just bad at math so if the average human
life is 84 or whatever you're saying it
lived double that no it
doubled its remaining lifespan got it
yeah so 124 was going to die at 128
but extended that by 109% got it got it
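A quick sketch of the arithmetic being confirmed here, assuming the 109% refers to extending the remaining lifespan (the function name and framing are mine, for illustration only):

```python
# Hedged sketch: what "extended remaining lifespan by 109%" means.
# Assumed numbers from the conversation: treated at week 124,
# expected to die at week 128.

def extended_death_week(treatment_week, expected_death_week, extension_pct):
    """New expected death week after extending the remaining
    lifespan by extension_pct percent."""
    remaining = expected_death_week - treatment_week       # 4 weeks left
    new_remaining = remaining * (1 + extension_pct / 100)  # ~8.4 weeks
    return treatment_week + new_remaining

print(extended_death_week(124, 128, 109))  # roughly 132.4 weeks
```

so "extended by 109%" means the 4 remaining weeks became roughly 8.4, not that the total lifespan doubled.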
because for this experiment
they're just taking the very end of life and
saying what can you do at the very end of
life by reprogramming these cells for
that specific study what were they
reprogramming the cell to do uh so yeah
this is um you take cells back to iPSC
you take cells back to their pluripotent
state where they can then differentiate
into a whole bunch of different cell
types so you're you're enabling the most
interesting cellular reprogramming where
you're taking them back to a more
youthful state and so the Yamanaka factors
and so I think we're actually in a
really interesting spot that we could
see some pretty big breakthroughs in the
coming years that get people excited
about this will we ever be able to make
men
taller I mean that's like dude the more
I get into like modern dating and stuff
it's crazy that [ __ ] matters yeah she
didn't but like you just get filtered
out yeah yeah I mean I don't know why
we'd say
no what uh what would we say no to what
what are the limitations that we can
overcome and why are the limitations I
think is the more relevant question
though so there's the fantasy of like
hey AI is going to solve it all so sure
but is there like I did uh an episode
not too long ago I was literally
shocked to find out you can actually
enlarge a penis I was like what like I
legitimately uh my wife said
absolutely not which I was very sad
about cuz I was like if this shit's real I'm
going ham she was like no absolutely not
she reacted so violently
negatively that uh yeah I was saddened is
the honest answer but uh why was I sad
why was she I don't know well I mean
just from the perspective of I my penis
is nothing to write home about I'll just
be very honest with you but it fits
perfectly with the person that I'm
married to and so she is not
enthusiastic about more does she have
data has she AB tested she
has AB tested yeah yeah
so uh for what it's worth
if it were possible I would have
one I could throw over my shoulder so
I'll just be very honest that sounds
awesome but my wife is not impressed by
the idea so but anyway I didn't know it
was possible I really thought that was
like total BS absolute make believe uh
had a urologist on the show she's like
no it's real um did a bunch of research
because honestly when I first started
researching her I was like there's no
way she's making this up and a bunch of
people like yeah it's actually real
there's studies I read the study and I
was like God I can't believe this is
actually real yeah um so that's like a
today thing yeah that I was shocked to
find out but going back to like should
men really care about that one even though
guys care a lot about it from a get-a-mate
perspective if you're heterosexual
it ranks so low on what
women care about but women really do
care about height so that one seems like
if there was something that we could do
you'd have a lot of people uptake now
you can literally I mean correct me if
I'm wrong you can break the bone and it
will grow back and you will get taller
but whoa that sounds rough yeah I mean I
think like getting taller sounds like a
from first glance a much easier thing to
solve than say for example increasing
the number of neurons in your brain like
you know you got we have very practical
limitations of skull size of neuron
density like it's the brain's a much
more complicated organ so I mean if you
if you pose the question what is
possible you probably could back
into the constraints and then rank them
like how hard each one is to imagine
being able to overcome not that that's a
conclusive thing but yeah but getting
taller doesn't seem to me like it would
be among the top of the hardest things
to do maybe that is incredibly
interesting uh so here is a question
that most of my guests can rest assured
I will not ask but tell me about your
penis
yeah yeah I think it's the most
Quantified penis in the
world that is very interesting so what
are we quantifying and to what end
why care what is the direction
that we want to take it yeah so if
you take it to where we've been of how
do you create the most extraordinary
existence you know possible and you
go back and back and back and back and
back it ends up with penis
quantification as one would expect
yeah so one day with the team
I said hey you know we've been doing
assessment of heart health lung Health
brain health why don't we do penis
health next and what would it take
to have the most Quantified penis in the
world and we went out and we looked at
all the scientific literature we said
here's all the validated ways to measure
a penis like to quantify a penis and one
of the things we did is we measured
nighttime erections and so that's just
something that people don't realize is a
thing as a man you realize you
have nighttime erections as a kid you
get a lot of them as you get older you
get fewer of them and so I did my
Baseline measurement uh 2 hours and 12
minutes was my Baseline which was
roughly average for my age
46 and then I did a few therapies focused
shockwave therapy and Botox and my
recent measurement was 179 minutes to
put that in context the movie Titanic is
just over 3 hours so I'm having
nighttime erections about the same
duration of time as the Titanic every
night and that is better than that is
quite the analogy or the comparison
of it that is better than the average
18-year-old and so I reversed the
functional biological age of my penis
from roughly a 46-year-old to better than
the average 18-year-old in six months
time okay uh all algorithm-led or is this
you guys um going rogue yeah we did the
same Process Measurement scientific
literature therapy measurement and we
just repeat again and again and again
and it probably helps you know of course
that my sleep
is routine that my diet is tuned in my
exercise is consistent so it probably
helps that I have these other basics in
place but yeah nighttime erections are
a significant indicator a biological
age predictor of cardiovascular health
psychological health and sexual function
so it's actually really it's not
a glamour stat it's actually a
reliable Health marker for men and
that's because the size of the
vasculature in the penis is very small
and so if you have a problem there it's
a very early indicator that you're going
to start having problems in the bigger
veins down the road uh so what are the
things that you do that help yeah I mean
so for example as
evidence of it being a function of
health or of
well-being be sleep deprived for one
night or even a few nights nighttime
erections
vanish is that from something
going wrong with your vasculature or is the
brain just like we don't have time to
play with that I just have to deal with
everything else the body yeah the body
is prioritizing other essential
functions huh and so yeah it it
evaporates
almost instantaneously when the basics
are not in place and so you really need
to be in top tier Health uh so it's a
good yeah it's like a weather gauge
for how you're doing with your health
and then overall for your other sexual
function psychological function and so
what was your question what do you do to
improve it oh um we did two therapies so
we did focused shockwave which is a wand
that you put on the penis it can be used
for the entire body so when people
tear an ACL and they're doing rehabilitation
you can uh use focused shockwave it
improves healing processes so you can
use it anywhere in the body by creating
micro damage basically yeah okay you can
do it for pain management yeah yeah
exactly people use it for pain
management and so you put it on the penis
you can do it on the shaft and the tip
and it's pretty painful I mean
there's different levels of
power you can go low or go high and it's
especially painful on the tip yeah that
sounds less fun we have people off
camera even chuckling on that one
yeah uh so you've talked about one of
the things you have to think about as
you get older is you begin losing
sensation that includes in the penis
um regardless of body part what do you
do to try to get that sensitivity back
is it blast it with shock waves I mean
that seems a little counterintuitive
yeah so um I don't know the answer on
this one I've been measuring nerve
sensitivity on my body for the past
couple years and specifically my heel so
we do this test where I'm laying
down I can't see anything and then
there's a Contraption where you're
measuring you have two points and you
increase in distance until I'm sorry you
start wide and you narrow and then
you get to a point where I can't
distinguish that it's two points and not
one so it's two one two one and so I'm
seeing if I can discern so I've been
measuring my nerve sensitivity to see if
I can distinguish between two points on
my heel and the same thing is true with
the penis so you can basically do the
same nerve sensitivity test doing that
and as you age you become less sensitive
you lose your feeling like for an
80-year-old I think the data was
something like they can only distinguish
two points from one at something
like two or three centimeters it's a
pretty big difference I didn't realize
how much uh loss they would have at that
age but yeah so that was a new
measurement for me so same thing on the
tip of the penis so your sexual
satisfaction over time diminishes
because your sensitivity also goes down
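The descending two-point test described here can be sketched in a few lines (a hedged illustration of the general procedure, not the actual protocol used; `respond` is a hypothetical stand-in for the blinded subject's "one point or two?" answer):

```python
# Hedged sketch of a descending two-point discrimination test:
# start with the probe points far apart, narrow the gap stepwise,
# and report the smallest gap the subject still felt as two points.

def two_point_threshold(respond, start_mm=40.0, step_mm=2.0):
    distance = start_mm
    threshold = start_mm
    while distance > 0:
        if respond(distance):        # True -> felt two distinct points
            threshold = distance     # still discriminable at this gap
        else:
            break                    # gap too small to tell apart
        distance -= step_mm
    return threshold

# Example: a subject who can discriminate gaps down to 10 mm.
print(two_point_threshold(lambda d: d >= 10))  # -> 10.0
```

the figure quoted for an 80-year-old would correspond to a threshold of roughly 20 to 30 mm on this scale.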
okay all the things you do um there is a
lot of self-denial there is a lot of I
mean getting your penis shock waved uh
it's not fun like there's a lot of pain
management there's um to be measured as
frequently as you are constantly drawing
blood I mean it's just a lot a lot a lot
uh the idea of handing over my life
choices to an algorithm the one thing
this all comes down to for me is I
assume you're optimizing for a good life
but then the question becomes what is a
good life and are you living it
now can you program that into AI so that
the AI actually knows based on the data
it can collect I actually am steering
you to yeah the good life yeah if you
pose the question what is a good life
throughout all of time and let's just
say you take data points for every year
for the past 2024 Years you'll get an
answer that's different I mean they'll
change over time and in the year
for example 37 ad or the year 137 ad
you'll get a different answer on what a
good life means and in any situation
their response is not truth it's a
mirror to their time and place and so
there is no such thing as a good life
it's just simply a phenomenon that
emerges in time and place with a given
culture and when death is
inevitable you play a good life game
because we're all going to die when
Death Becomes potentially not
inevitable it becomes a different thing
and so I'm not playing the 2024 version
of a good life I'm playing for the
infinite Horizon and this is why
everyone dings me because they say this
guy has not conformed to the norms of
the 2024 good life which include you
know the following things and that's why
they want to dunk on me and they
want to minimize me because they want
their version of a good life
to be superior to the game I'm playing
which is the infinite Horizon and so
it's just different we just see
reality
differently okay I imagine though you're
just trying to get past this moment
would you want to play the don't die
game forever or do you believe
that don't die gets you to and now the
good life begins yeah so this is the
thing like we may lock in as a
species on don't die in a shockingly
short period of time like this could be
I introduced the concept of don't die in
November of or December of
2023 it's only three or four months old
and if you look at history
over time of when ideas were introduced
when they were adopted into some kind of
mainstream and then they became
continuous typically hundreds of years
for them to kind of find their way
through Society this idea could find its
way to be the guiding philosophy of the
species in a matter of a decade or
shorter and this is because AI is going
to create an existential crisis it's going
to call into question everything we
understand about reality it's going to
open up an opening where
something's going to jump in as a new
thing and I'm playing for when crisis
happens for don't die to drop in as what we
play as a species and so could it
happen in a year or three years or five
years yes and if we lock in on don't die
and we say oh my God that's insane we
were talking about this the other day
like it was impossible and it's just
here and it's omnipresent and let's just
say we lock in as a species and we're
playing it we get really good at it in
a couple years and then seven
years from now we're off to a different
reality like who knows what the game is
like don't die is just now part of the
fabric of existence like zero is we
don't even think about it like no one's
committing self harm no one's harming
each other we have a sacredness for
conscious existence we've never had
before and now we're off playing brand
new games of how big is consciousness
that's where the brightest
minds are what are the boundary
conditions for Consciousness and how do
we punch through those
limitations religions need uh rituals
they need things that Bond people
together they need um movements that you
share things that really make you feel
like a tribe you guys don't have that
unless you're pointing to AI as a
deity which maybe you are um you don't
have the obvious things that Catholicism
you know had hundreds of years to Cobble
together and
um how are you going to make this sticky
oh we do yeah so I'm working right now
on trying to build a don't die nation
state so if you take this don't
die idea
seriously yes then you say okay I'm in
what do I do now no government in the
world is helping its citizens not
die and so a government a nation state
needs to actually perform the basic
functions including testing and medical
services and therapies and you know
access to all kinds of things you can't
get and so I want to build it it'd be
amazing if I could do something like a
20 million person don't die nation state
in less than a year where you have 20
million people locked in as Citizens and
they say we don't want to die but to do
that we need proper sleep we
need exercise we need blood draws we
need medical care and so I want to
provide the
basic infrastructure for people to take
care of themselves in ways that their
governments don't allow
currently okay uh that is a massive can
of worms so let's Dive Right In are you
doing a nation state or are you doing a
network state à la Balaji Srinivasan yes
both so I mean I use nation state so
yes network state I say nation
state because it's a more understandable
word yeah but it also comes with like if
you're really trying to build a
nation state my next question is
geography are you taxing uh what are
you doing to fight off the Nations that
are going to crush you for trying to
start a nation yeah Network State uh for
people that don't know Balaji uh he's been
a guest on the show I highly encourage
you to watch that episode but he
believes that now and even more so into
the future people will be
non-geographically tied through
basically a belief system they use
cryptocurrency as like the way that
they interact on financial rails uh and
that even though they're distributed all
over the world they will have a unifying
identity code of ethics so on and so
forth
um okay so nation state you say just so
people sort of get what you're talking
about but you really mean Network State
yeah Balaji is a friend yes I listened to
your episode I didn't know how well you
knew each other but yeah yeah and we're
working together on this as
soon as you said that I was like okay
wait that Balaji episode makes a lot
more sense now we see the world in
very similar ways and we both see the
new trinity being um
AI on-chain and don't die like that
that's the structure that builds Society
on-chain you get the computational
scaffolding to systematically build don't
die gives you the directional intention
of what you're actually playing for and AI
is the intelligence fuel in the system
that's the new structure of reality and
so the chain gives
you a reliable Foundation to build
methodically and mathematically and
allows you to imagine a situation where
you have goal alignment that is
computationally aligned right like right
now we willy-nilly decide what we're
going to do and say to someone else
it's like a pretty random process in
our brains and our behaviors when you
have this mesh it's a different game and
so yeah it's basically a network State
and we can walk into this to say we
share these things together in common of
these basic uh Health practices and then
you walk up to increasingly more
sophisticated things as a network state
so uh initially no geography but there's
no reason why we couldn't spin up
geography and multiple regions there's
no reason why we couldn't be negotiating
with other uh governments to do various
things so it's just a baby step but
it's
a tangible step to say if we
contemplate don't die as a philosophical
exercise which is Fun the next step is
what time am I going to bed this evening
and what am I going to have for
breakfast and how do I address some kind
of medical problem I have and so you
just need to be extremely practical you
know like
religion has not served the purpose of
solving practical problems religion
solves a problem of soothing you that
your practical problems are
insignificant in the larger scale of
things false okay
uh don't eat pork why
trichinosis there are reasons why um so if
you can buy into the idea that
Traditions are experiments that worked
religion is the medium by which the
meme spreads so you need a way and look
maybe it's not serving us in a modern
context and I am not religious but I get
why they have worked for thousands of
years and I get why there is a new
movement that I see coming um that I'm
calling the tradicalization of people
where people are going to try to go
backwards they're going to try to
reintroduce religion as a thing that
people ought to embrace and I understand
what they're trying to address which is
you know just to channel a little Nietzsche
uh God will die we will have killed him
and there will be so much blood on our
hands we will never be able to wash it
away meaning that it does damage when
people no longer have the medium by
which the meme can spread so it isn't
like the only way that you can be moral
is to believe in religion and this is
one thing I meant to bring up earlier
and didn't get a chance um all of the
things that you're saying presume that
people have your level of intellect
you're north of 130 for sure you might
be north of 150 uh dude most people just
can't hang like they need that
propagation medium of religion for them
to orient to the world to know oh I
don't do this thing because God told me
not to like they need that right um
religion as far as I can tell is the
only thing that works for hyper
intelligent people and for people that
are um that struggle and I don't
even want to be derogatory there but for
the grace of God go I um so that to
me like religion certainly was for a
very long time hyper practical but for
the same reasons that I'm paranoid about
AI I'm paranoid about religions they
kill a lot of people but they do like a
thing like they have
existed you even said that they tend to
outlast nation states for a reason yeah
yeah oh yeah it's like um it's one of
if not the most ambitious
thing that anybody could have done in
the past few
centuries to start a religion yeah oh
yeah it's interesting you anchor that
around ambition for me the the more
interesting thing about religion is just
how it
galvanizes gigantic groups of people
yeah and will give them a
fervor that can be amazing or terrifying
yeah I mean this is so we're saying the
same thing so what I'm saying is if you
if you look
at uh Intelligence on this Earth and
let's just say you're doing some form of
calculation of like
what's the impact of influence on
all
Intelligence on this
Earth and you're looking at uh influence
in the way of like where this species
moved what it did how it thought whether
it progressed or
not in that larger contemplation
religion is one of the most influential
things on the body of Intelligence on
planet Earth
because it feeds into state it
feeds into culture it feeds into
invention it feeds into everything and
so countries and companies come and go you
know individuals come and go uh but
religion has this staying power which is
unique and so if
you're someone who's playing
the biggest game of ambition to say how
do you move this body of Intelligence on
this
planet you wouldn't rule out
religion as a mechanism for doing it right
you would actually say that actually is
probably the most powerful technology
ever created by the humans until the
arrival of AI and so now we have this
new point in time where you're saying if
you're ambitious and you're playing with
the body of Intelligence on planet Earth
what do you do and so a lot of people
look at don't die and they immediately
want to tag it as a cult and they want
to say it in a demeaning way and
they want to call it religion in a
demeaning way I don't say it's a
religion I don't say it's not a religion
I'm saying it's something practical and
wherever you want to take it with your
definition of whether you want to be
demeaning or supportive I don't care
like it's whatever you interpret it to
be I'm talking about it from something
extremely practical that informs what I
eat for
breakfast how I understand ideas how AI
Engineers engineer intelligence it's
full stack and so my comment to you was
religions have not tried to practically
solve
death it was never in their power to do
so the technology has never been present
they've been trying to soothe the
reality that death is inevitable it's
interesting that is a very valid
interpretation I get why you're saying
that but in some ways if they believe
the book the religion at a um
organism level if you think of all
of the people in the religion all the
people that contributed to the books all
that if you think of that as an organism
the organism has tried to solve for
death by telling you that if you believe
when you die you will live on now I
think you and I both think that that is
just inaccurate factually
but I mean that's really pretty impressive
when you think about it I don't know if
you've ever felt this but I um let's say
that something happens to someone I love
I don't believe in God and I wish I
did because I want to pray I want there
to be someone I can appeal to to help
that situation you know it's like such a
powerful impulse and so if you're about
to die or someone you love is about to
die that impulse is so strong yeah that
it is wild yes it's
soothing um it's soothing by
creating a place in consciousness going
back to your earlier thing that exists
and is real and that you maybe can't
access it the way you want um but it's
pretty interesting I remember because I
used to believe when I was a kid and
when my grandmother died I really had
this profound sense of she's watching
you and it got weird when I wanted to
masturbate but it was cool to know that
Grandma was alive you know what I mean
um yeah yeah it's a trip the way that it
works on our minds let me ask so
how do you think governments are going
to respond to network States it's I mean
you could model it out and I can't
imagine it wouldn't be predictable yeah
it doesn't I'm worried onchain worries
me um yeah I mean this is so your
question is how are existing
power structures going
to uh goal align or how are they going
to reconcile with new
power uh conflicts yeah I think
governments are going to get upended yeah
and they're going to freak out sure yeah
and freak out means shoot people yeah
yeah I mean so that's what humans do
yeah yeah yeah and this is again why
don't die is so important is because you
know if you again zoom out to say what
are the patterns at play there's so few
surprises in our reality we humans are
so
predictable in our thoughts in our
responses in our behaviors
we are individually we are collectively
it's like we think we're living
this novel experience we're not we're
patterns and we're repeatable
patterns so you just zoom out
far enough and you see the patterns it's
not novel so of course that's what's
going to happen and so that's what I'm
saying is like
it doesn't take a lot of creativity
to imagine how we as a species are going
to play out with increasingly more
powerful weapons at our disposal or
our willingness to commit
self-destructive harm and ruin our
planet it's like we can't stop our
vices we can't take corrective action we
can't save ourselves from death it is a
limitation of our intelligence and if
we're sober enough we can acknowledge
that and so something has to change if
we're going to stop ourselves from
Annihilation now you can take a
different approach and say be optimistic
like we as a species we've always done
it before we're going to do it again
like okay like I'm open to that but the
probabilities to me like the the
situation we're in with with climate
change like where we may already be past
the point of no return we don't know we
cannot model these weather systems with
a whole lot of accuracy no like we have
been repeatedly surprised over the past
few years at how fast and how
significant the changes have been we
don't know like this idea is like if we
can turn things around before 2030 we
may be okay we don't know and so as a
species if we were actually intelligent
we would stop everything we're doing and
say first job is take care of our planet
if we don't have an inhabitable home
we're all dead but we don't we
prioritize making money and economic
interest above all things so as a
species we're extremely foolish and we
just we need to reconcile with that and
just accept it for what it is and then
address the problem because we're not
going to change until AI arrives Brian
Johnson man has this been interesting
where can people follow you I hang out
on all the social channels I'm on
YouTube Instagram
Twitter hit me up yeah there it is all
right everybody if you haven't already
be sure to subscribe and until next time
my friends be legendary take care
peace if you enjoyed this episode be
sure to check out this other
conversation with Peter Diamandis the
bowhead whale the largest mammal can
live 200 years uh Greenland shark can
live 500 years and have pups at 200
years old and the question is if they
can live that long why can't
we