"Life Will Get Weird The Next 3 Years!" - Future of AI, Humanity & Utopia vs Dystopia | Nick Bostrom
OCNH3KZmby4 • 2024-08-06
Kind: captions
Language: en
If at one extreme you had an AI that was exactly functionally identical to a human, one that had lived for 80 years, that had a human-like body and human-like memories, with an artificial brain structured very much like a biological brain, then I think in that case there would be a very strong moral case that we should treat it as a moral subject as well: that it would be wrong to mistreat it, to be cruel to it, and so on.
Nick Bostrom, welcome to the show.

Happy to be here.

All right. Written language gave rise to nation-states, because they could track things like laws and taxes. The printing press gave rise to religious persecution and wars. The internet gave rise to decentralized media and the age of conspiracy. What will AI give rise to?
I think there are several possibilities there. One is that the future is simply shaped and dominated by AI minds that have ultimately disconnected themselves from their human origination, in roughly the same way that we have disconnected ourselves from, say, the great apes, or the Neanderthals.

But if we imagine a human society with these AI tools, there are certainly dynamics that could increase centralization and make it more extreme. Right now, if you have a totalitarian system, the dictator can't rule on his own. Even if you're a dictator, you still need the buy-in of some fraction of the population: at minimum the security forces, the military, maybe some key families. Maybe you need 10% support or something like that in order to rule. But with the automation of police forces and military forces, you could imagine an even tighter concentration of power, and better abilities to surveil what is going on out in the land, to keep track of what everybody's opinion is about the ruler and what they are doing, their political sentiments. So that could enable increasing levels of centralization of power. That's one possible dynamic.

Another is that AI amplifies current dynamics in our memetics, which just become more powerful: we develop hyper-stimuli that hijack our minds, as it were, super-memes or virtual-reality worlds so compelling that people check out of real reality to spend all their time there. We kind of already do, to a significant extent, with television, and now with people spending hours in front of their social media feeds. And this could be kicked to the next level with a higher level of technology. So those would be some of the negative dynamics that one could worry about.
So you wrote a book called Deep Utopia. It's a very interesting exploration of what happens if AI goes right. When I encounter the ideas, though, I do start to ask whether I would want to live in that kind of utopia, and whether utopia would be a positive thing at all. Before we get to deep utopia, can you define utopia, so we can contrast it with this idea of a deep utopia?
Yeah. In utopian literature, it's usually people coming up with a blueprint for the ideal society. They have some vision of what a perfectly fair society would be like, where everybody has enough and everything is nice. And this kind of utopian literature has fallen a little bit into disrepute, partly for good reasons, in that oftentimes when people with these social visions actually gained the power to implement them, they created a trail of havoc and misery. So people became a little bit skeptical, in the second half of the last century in particular, after the Soviet experiment and the Nazi experiment and some other regimes that ran with crazy ideologies and produced massive human tragedies. People thought: well, maybe this grand-vision-for-society thing is actually really dangerous. So that's what utopia traditionally means.

Now, deep utopia: I really just use the word because I am interested in the philosophical question of what a great human life would be like if you abstract away from a lot of the constraints that currently limit what we can do. So imagine you had super-advanced technology: AIs and robots that can do all the work, super-advanced biotechnology that gives us unprecedented control over our bodies and minds and psychological states. And let's suppose also that somehow government works really well, so we don't have wars and oppression. Just wave the magic wand and imagine a really great society.
All right, so under those conditions, what would a really great human life look like? And I'm glad you started thinking about the question of whether you would actually want to live in this kind of world, because once you think about it, a lot of what we base our sense of dignity and worth on, and fill our lives with, exists because our efforts are currently needed. You might pride yourself on being a breadwinner, or on making a positive contribution to the world at large, or maybe just within your family: you're a valued person who contributes something. And to the extent that we define ourselves by our ability to make some instrumentally useful contribution, then in this world where AIs can do everything better than we can, that is a kind of threat to our sense of self-worth. We would certainly need to rethink a lot of the fundamentals of our values if we're moving closer to this world of human redundancy.
Do you think those values are malleable, or are they an echo of what I'll call an evolutionary algorithm running in our brain?

Our values are what they are, but I think you can distinguish superficial values from the deeper values underneath that justify or underpin them. My hope is that although a lot of the superficial values we have will need to be given up, because they no longer make sense in this kind of solved world, there are deeper values that could be more fully realized than is possible today if we remove some of these constraints.
So you can go through a bunch of different candidate values that people might have and see whether they would be instantiable in this kind of solved world. Take the simplest value first: purely hedonistic subjective well-being, pleasure, having fun in the sense of being in a certain subjective state. That would be trivially easy to realize in a solved world, in that you would have very advanced neurotechnologies. If all you really wanted was pleasure, you could have a super-drug without side effects that gave you as much pleasure as you want, or maybe more direct ways of interfacing with the human brain. So you could check that box: if it's about actually enjoying life and feeling good, that's a check mark right off the bat.

You can then go through a bunch of others. Where it starts to get more problematic is when we get to values like meaning, purposefulness, significance. If all problems can be better solved by machine, then what would give us purpose in our lives, if there's nothing we need to do? So to the extent that you think your life goes better if it has meaning or purpose in it, then maybe to that extent it would actually be a worse life in this solved world, because it wouldn't be useful.

Now, here you have to distinguish between the subjective and the objective sense of, say, purpose. The subjective sense, of feeling imbued with motivation and drive, of there being something you're striving towards that you really want and that energizes you, would again be trivial if you have these very advanced neurotechnologies. But some people think there is an additional element, what you might call objective purpose: not just that there is something you feel you want to do, but that there is something that actually needs doing. And it is a lot less clear whether you could have that in a solved world, because (a) most problems would be solved, if it's really a utopia, and (b) the problems that remain would be better solved by machines than by you, in a technologically mature society.
All of my anxiety around AI hinges on one idea that I believe there's no way for humans to get around unless we rewire our biology, and that is: it isn't about the pursuit of happiness; happiness derives from pursuit. I think, from an evolutionary standpoint, that we have been designed over God knows how many hundreds of thousands of years, of generations, to have to do very hard things in order to survive. And evolution only has pleasure and pain to get us to do those things. So I think when you work hard in pursuit of something that's valuable not only to you but to other people, and you feel like you're about to be successful, even before you're actually successful, that moment is the greatest thing that life has to offer. When people are in pursuit of something and it gives them the meaning you were talking about, that's when it feels good. And the second you're either not working hard, you're just being given things, or you are working hard but it doesn't matter because everybody already has everything they need and there is literally nothing you could contribute that would make the group better off, then you ask a fundamentally corrosive existential question: why do I matter, why exist at all? If we end up with a social structure that leaves people asking that question, we are really in trouble. Do you see a flaw in that fundamental base assumption?
I think there is some truth to that as a psychological observation about our current minds: that some of the malaise in modern society might come from the absence of certain kinds of survival pressures, or opportunities, that were always there in our evolutionary past. Just as obesity is probably a function of refrigerators and plentiful food and fast food (it wasn't really a problem when you were a hunter-gatherer), there is a kind of mismatch between our physiology and our current circumstances. And I think that's also true mentally to some extent, although it's amazing just how adaptive humans have been, that we can still thrive as much as we can in environments so different from the ones we evolved in.

But I think a key question here is whether we value this purposeful striving you describe because it creates mental health and good feelings, or whether we value it for its own sake. Right now it's not important to differentiate these, because the only way you can get the mental health and well-being is by actually doing the hard things you describe and then feeling the satisfaction. But in this hypothetical that I explore in Deep Utopia, these two elements could come apart. You could have the perfect drug that induces exactly the same sense of satisfaction and fulfillment, the same energized relaxedness or whatever it is that hard effort produces, but without actually having to make any effort.
Another way to get at it, maybe, is to ask to what extent an artificial purpose would be a good enough substitute for the natural thing. Real purpose might be: you're being chased by a tiger, and you really have to run as fast as you can, otherwise you get eaten. There are very real stakes there. It's not something you just randomly made up, that you happen to like running away from the tiger; once the tiger is there chasing you, you know what you have to do. It's a given, and it's a very real purpose. Contrast that with somebody who is playing a game. Maybe they really want to win, but in some sense the game itself is an artificial purpose. If you're playing golf, there's no reason why the ball has to go into this sequence of holes, other than that we just decided to try to do it. We make up this random goal, and then, once you accept the goal, you have the purpose of trying hard to achieve it. People can work for decades to perfect their golf game and find a lot of satisfaction in it. But in some sense the whole thing is kind of made up. It's artificial.
Arbitrary.

It is. However, run the thought experiment: imagine you could play golf, but no one would ever know about it, no one would ever see it, no one would ever know that you were better than they are. I think it's a proxy for social status: you are trying to rise within a hierarchy. And so the question becomes, well, there are really two things here we have to put on the table, because there's a huge hurdle before what I'm about to get to. But let's say that everything's taken care of. Utopia is here, as you defined it earlier: we don't want for anything, everything's equal. Will people now be interested enough in status games that it will be fun, or will they say, "But this is all just a status game," and feel a deep emptiness? I think we have clues right now that answer that question. Right now you have video games that are unbelievable. As somebody who discovered Minecraft in my 40s, I will just tell you that game is unbelievable; I cannot believe that kids get to grow up in a world where that game just exists. But nonetheless, if I'm playing it and it doesn't feel like it's going anywhere other than that I'm playing it, it does have an emptiness, which I think is a small part (this is a huge problem, but it's a small part) of the sort of male sense of meaninglessness that is certainly sweeping across the West. OK, that's the question you're putting forth, and I think we have enough of an inkling to say it probably doesn't pan out as well as we would want it to. But I think there's probably a more important question that has to be asked before we even get to that, which is: humans with their current value set, their current brain wiring, are they going to accept, or radically push back against, AI when they see that it will lead us to a world where they are irrelevant by today's standards?

If I had to guess, I think it's more likely we'd be seduced by it.
And the displacement: in part this is economic, people losing their jobs, or downward pressure on wages as automation advances. Interestingly, in this case it might initially hit certain kinds of white-collar work more. Traditionally, automation mostly affected lower-skilled workers, but this current language-model technology seems to hit right at mid-level white-collar work, people who summarize documents for a living and things like that. Interestingly, too, if AI succeeds at automating a wide range of jobs, there will also be massive economic growth, which to some extent might offset the economic impact of unemployment. If you have a booming economy, there are more tax revenues, and there's more demand in other sectors that haven't yet been automated; people can spend more money on hiring gardeners, or nurses for their grandparents, or whatnot. But it still leaves this question about meaning and purpose and social status.
I'm wondering about social companions in this context, AI social-companion bots. This might become another kind of...

Is that a very nice way of saying "sex bot"?

Well, it would encompass that, but it could also just be friends, and fans, and all the different elements of social interaction. Maybe even some sort of fake status, like you were saying: people want to feel that they are high-status, and in the real world maybe they aren't, and it's just a frustrating experience. Rather than working for years and decades to get one notch up on the status hierarchy, by, I don't know, stressing yourself out in the gym to get a slightly better body, or educating yourself (all these achievements are hard and take a long time), imagine if you could instead tap into a virtual world where you have perfectly realistic virtual characters and where you are the king or something like that, with these admiring digital characters. If that becomes good enough, it would be extremely compelling to people, for the same reason drugs are really compelling, often precisely in the cases where the alternative they offer seems more attractive than your real life. So I think this whole AI social-companion technology will advance very rapidly over the next couple of years.
Do you see that being like online dating, where at first people are weird about it and then it just becomes the norm?

Yeah, I wonder. On the one hand, there is something slightly dystopian-seeming if you imagine a world where more and more of our time is spent not with real people but interacting with these bots. On the other hand, it might be one of those generational things, where I'm the old fuddy-duddy grandpa who doesn't get it, and the people who grew up with it say: yeah, of course, it's just much more interesting, this AI bot is much wittier and really pays attention, these humans are kind of a drag, we're all just using AI bots now. And if that happens, is it because this whole generation will have made a big error, or is it just that they have more familiarity with it and have, on its merits, chosen to spend their lives that way rather than hanging out as much with their fellow humans? It's easy to have opinions about these things, but it's hard for those opinions to actually be grounded in some kind of objective truth, as opposed to merely reflecting your own personality or upbringing.

Agreed.
I think there's certainly value in both. But one of the more interesting things about you and the way you've approached these problems is the anthropic principle, and finding ways to at least ground things in probabilities. I'd like to talk about your probabilistic look at what AI does in the next three to five years, and then what AI does in the long run. Obviously we're talking probabilities here, but as you've used that to great effect, I'd love to see how you think through it using anthropic principles.
Well, I don't know about using anthropic principles, but I do feel that AI timelines appear relatively short from this point on. We are really far along the path towards AI already. If you had asked people 20 years ago about the things that are now possible, I think many would have assumed: well, if you can do these things, if you can actually have an AI that can hold a conversation with you in ordinary language, such that you can't even tell whether it's an AI or a human unless you're really an expert who knows exactly how to probe it, that seems like AGI. Computers that can write code at the level of maybe an entry-level programmer.

We have this tendency, I think, with each advance in artificial intelligence, to move the goalposts and to immediately discount and take for granted what has been achieved. The same thing happened when Deep Blue beat Garry Kasparov in chess. Before that, people saw chess as this great game of the human mind: the most complex thing the human mind could do was learn to play chess at a high level, really deep logic. Then, after computers could do it, we said, ah, it's just a game of chess, there are simple rules. The same happened with Go, and then when AIs could look at pictures and actually visually understand what's in them, and now with natural language.

I feel there are not that many of these steps left before you have AI that can do all kinds of AI research better than humans can, at which point I think you have an intelligence explosion, because then you have the AI research being done by digital minds at digital timescales. You then get a very rapid feedback loop: with each subsequent improvement, the force doing the improving gets stronger, and you might then have some kind of singularity.

Now, exactly how many years away that is, is hard to tell. But I think we are no longer in a position where we can be confident it couldn't happen even within some very short period of time, like a year or two. I'm not saying it will, but we are not in a position where we can be really sure that it won't. Somebody might make another breakthrough on the level of the Transformer architecture, and applying that to the already really large models we have might be enough to unlock a lot of latent potential. Or maybe we will need two or three more such advances, or more scaling up of the size of the data centers. We just don't know exactly, but I think we are close enough that we can't be confident it couldn't happen at any time.
What would you advise somebody who is a junior in high school now? These are American terms, a junior in high school. They've got to get really serious about where they're going to go to college, or whether they're going to go to college, and what they're going to study. If we are, and I heard you, this is not a guarantee, but if we are potentially within a year of AGI, how can somebody even plan for the future? It just seems like such a big question mark.

Yeah, though to be clear, it could also be 10 years or 15 years. I'm always wary of giving general advice to everybody. I feel that's like giving advice on what's the best shoe size: what's good for one person isn't good for another. Some people are maybe too hard on themselves, and good advice to them might be to ease up a little bit, go easy on yourself; for other people, that might be exactly the wrong advice. They might actually need a stern message: you really need to pull yourself together here, you're just wasting your time, discipline. So the same message might be completely right for one person and wrong for another, depending on how they are currently going wrong. And similarly with career advice: it depends a lot on what your talents are and what your passions are in life.
More than looking for something specific, I'm looking for a guiding philosophical principle. I know that you used to run the Future of Humanity Institute, so I'm sure you've thought a lot about where we go and how we deal with it well. So yes, we're not going to say "you should be a dentist," but I'm guessing you have a framework that people would benefit from, in terms of facing such a rapidly changing environment.
rapidly changing environment yeah I mean
so it depends like so there's like a
small fraction of people might actually
you know be be looking to directly
contribute by researchers or AI
scientists and stuff like that that's
like one
Avenue um I think in general probably
useful to familiarize onel with the
current tools and and the Next
Generation so that you kind of know
roughly where things are and what they
can and cannot do to be
adaptable um but then for other people
who are not really technically minded uh
um I mean it might be that going in the
opposite direction you know being really
developing your skill with people I
think uh there's like enormous needs for
various Care Professionals I think like
say with elder care like if we just had
more resources in in theory every old
person should have like their own
full-time person Al uh would be great
right some some like younger person who
could live with them fulltime and just
help them if they fall help them up like
we we we can't afford that but like in
principle the need is kind of unlimited
there um I would also say that don't
forget to actually enjoy life um right
now like I wouldn't sort of plan on a
4year career and make big sacrifices now
for 10 or 20 years in the hope of then
it paying off like when you're in your
50s and 60s because you know maybe the
future doesn't exist um at that point um
it would it would risk being a kind of a
um yeah I would maybe focus a little bit
more on short-term strategies when you
When you say the future may not exist, what do you mean?

Well, I mean several different things at the same time. One thing I meant was that if this AI revolution happens within the next five or ten years or so, then the long-term investments in human capital that we might make now, with a payback time of 20 or 30 years, might not pay off, because by then human capital may have depreciated, as a result of AIs supplanting us across the board. So that's one sense in which the future would not exist. There are other senses in which it might not exist as well, related to the simulation hypothesis, which we don't need to get into.

But yes, enjoying things now. With a college education, for instance: if you would really enjoy your time at college, that's one thing, then maybe do it. But if it's just something you have to drag yourself through for the sake of getting a diploma, I would seriously consider whether there are not ways to cut out those three or four years and get straight to what you want to do. And similarly with PhD programs, which in the US can take five or six years. That's a long time, and I think in many cases it may be too long to be worth it.
These days, just because the rate of change is so accelerated?

Yeah, because the timelines might be shorter. Suppose you had the view that there was a 10% chance every year that the world will blow up and be destroyed. Then you wouldn't really make 20-year investments; you'd focus on things that have a shorter payback time. So having a de facto higher interest rate, or hurdle rate, for your own long-term investments maybe would make sense in this picture. Now, I would hedge a little bit, because this could all be wrong. If the AI thing doesn't happen, or if it's banned, or it stalls out, you don't want to end up completely dry either, where you have nothing: you're 30 years old, you lived for the day, planning on the AI revolution; somehow it fizzled out, or there was a global ban, and now you're a 30-year-old with nothing, no skills, no job, nothing. So, depending on what your social safety net is, you might want to hedge your bets a little bit there.
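The arithmetic behind this point can be made concrete. A minimal sketch (the function name and the specific figures are just illustrations of the 10%-per-year example from the conversation, not anything Bostrom specifies):

```python
def expected_payoff(payoff, horizon_years, p_doom_per_year):
    """Expected value of an investment whose payoff arrives only after
    `horizon_years`, when each year carries an independent probability
    `p_doom_per_year` that the payoff never arrives at all."""
    p_survive = (1.0 - p_doom_per_year) ** horizon_years
    return payoff * p_survive

# At 10% per year, a payoff expected in 20 years retains only about
# 12% of its face value, which is why long-horizon investments stop
# looking attractive under that assumption:
print(round(expected_payoff(100.0, 20, 0.10), 1))  # ~12.2
```

The annual hazard acts like an extra discount rate of roughly 10% per year on top of ordinary financial discounting, which is the "de facto higher hurdle rate" being described.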
Now, I know that a lot of people have what they call a p(doom) number: how likely you think we are to basically blow up the world, whether with AI or something else. As I'm listening to you, it does raise the question: what is your p(doom) number?
Yeah, I don't actually have a specific number. But maybe one way to think about it is to divide it up. There are ways in which things could go really badly: we blow up the world, or some dystopia. Then there are the more utopian scenarios: we cure a lot of diseases, wonderful prosperity. Each of those would have some probability. But I think there's a third bucket in the middle, which is perhaps the most probable: that the world turns out such that even if you could actually see what would happen, even if you had a little binocular so you could look at the future and study it, you wouldn't really know whether to count it as a success or a failure. It would maybe be very different from the current condition, better in some ways, worse in some, strange. There are some kind of minds doing stuff there; they are not exactly human minds, but they're sort of doing a little bit of the same things. Do you count that as there being humans around? Are we all dead and replaced, or did we grow into this new life form? So I think it's not obvious that the future would be such that, if we could see it, we would necessarily know what to think of it.
You can think of an individual life. Right now we have children; say a four- or five-year-old who eventually becomes a 25-year-old. The 25-year-old is quite different from the five-year-old in many ways, mentally. They have different interests; they're no longer interested in the toy train. They are interested in their romantic partner, or their job prospects, or US politics, or whatever it is, the Roman Empire. In many ways, what was there at age five is all gone, and yet we don't think it's bad for the child to grow up. In fact, most of us would probably think it would be something sad and unfortunate if a five-year-old never grew up to become a 25-year-old, if they remained at the level of a five-year-old.

So I wonder if there is a similar thing where we now are basically children, in the sense that none of us ever gets the chance to truly grow up, because we just biologically develop for 20 years and then stagnate, and then we sort of rot away and die after a few more decades. Biologically, we can't live for 500 years, continuously growing and expanding and learning new things. We are kind of cut short, and maybe 80 years is just not enough to really fully realize our inherent potential; we are kind of zapped by our rotting biology. There might be different kinds of lives that would become possible if you could live for a million years and gradually upgrade your capabilities. That might be really wonderful, but it would maybe change us as much as the five-year-old is changed when he or she grows into a 25-year-old, or more.
more what that perhaps suggest is that
especially when we're zooming out and
thinking about these more radical
scenarios, we should not
really focus so much on comparing two
states, like the current state and some
later state, but maybe think more in
terms of
trajectories leading out from the
current state, and then evaluate how
desirable those are. Maybe it's fine
if ultimately
we end up in a very different, weird,
posthuman condition 10,000 years
from now, if we went there
slowly and we sort of had a chance to
grow into it properly. That kind
of trajectory might, I think, be more
attractive than one where we just remain
humans and keep doing the human-like
things for 500,000 more years, or I don't
know, five million more years. At
what point is enough? At some point
you'd want to maybe unlock the
next level, right? It's like playing the
same level of a computer game: at some
point you need to move on. And
maybe similarly, the kinds of values
and lives that can be lived with our
current human physiology are a
limited set of all the possible values.
Maybe we haven't yet exhausted it; we
might want to spend some more time and
go slowly through the level rather than
just skip to the final
level, right? That might be another
mistake. But still, thinking in terms of a
trajectory that eventually leads to
greater forms of development, including
ones that ultimately take us out of the
human role. Okay, you're playing with a lot
of ideas here, and I want to start
pinning some of them down. So one is
the idea of trajectories, and I think
people right now, today, are going to care
a lot about that. Through
regulation, through what people end up
pursuing as entrepreneurs, we're going to
have a tremendous amount of influence
over what gets developed and what doesn't
get developed, and so I think that's the
big question of today: what trajectory
do we want to see this go on? So I'm
very curious to hear your take on how
much we can control the trajectory, and
do you see an ideal
trajectory? Most of the uncertainty about
how AI pans out is uncertainty about
how hard the challenge is that we will
confront, rather than uncertainty
about the degree to which we will
get our act together and make a good
effort. We don't know how hard
this is; we've never had a machine
intelligence transition before. We
haven't studied a million other planets
where some human-like species developed
AI, so that we could study the
statistics. We're
coming to this afresh; we have no idea
whether it's relatively easy or
fiendishly difficult. Obviously we can at
least nudge
the odds in a better direction if we
really make a good effort, we work on
this collaboratively, we are
really smart about it, we study hard
and are careful; then we can improve
the odds a bit. But most of it is
just, I think, baked in. So in that sense
I'm kind of fatalistic. You could
say I'm a moderate fatalist, the
moderate coming from the fact that we
can still affect the odds at least a
little bit on the margin by getting our
act together, but fatalist in the sense
that for the most part it's probably
just baked into our situation and
the technology
itself. When you say it's baked in, what
do you mean? That the outcome
for humanity is set, for
example whether
AI kills us all, or we achieve
alignment and manage to align it to
human
values?
I
think some of those things might be
baked in, in the current situation.
Elon Musk has said that he thinks of AI
as a demon-summoning circle and that we
should be very careful about what we
wish for. I'm hearing tones of that in
what you're saying now, and he said his
life got a lot better when he became
more fatalistic about AI. What do you
think about his take? Is his level of
anxiety about AI warranted or not
warranted? Yeah, it seems
warranted. He is also the founder of
xAI, an AI startup, as
well as Tesla, which has major AI
operations, and one of the original
investors in and founders of OpenAI. So
I guess his attitude is
complex. I think he recognizes that
there will be big dangers, but it
doesn't necessarily follow
that the conclusion is that each person
should unilaterally remove themselves
from the race. Okay, so when you look at
this, you have a similar take: this is going
to happen, this stuff is baked in. I
see a world where we end up bifurcating
as a species. I consider myself
wildly technologically optimistic; I
have a natural bent towards believing
things will just work out. But I also
look at what I can feel brewing in
culture right now, which is a massive
resurgence of religious fervor: people
reconnecting to and re-finding faith,
accounts that are focused on faith on
podcasts and YouTube
starting to dramatically increase in
popularity. And I think in many ways this
is a response to a hyper-technological
world where even just us humans are
using technology a lot, whether it's
Ozempic and losing weight, whether it's
anti-depression medication, whether
it's AI. They see this influx of
what I think many will read as
antihuman things, and there's this
feeling, a desire to connect with
something traditional and certainly
divine. I see that creating a
bifurcation in society. And what I
predict, the timeline gets a little bit fuzzy
because it's all going to be predicated
on the rapidity with which AI disrupts
our normal life, but on whatever
time scale that is, I think what you will
see is a group of people spring up
that I'll call Puritans, who will not
want to use AI. They won't engage with
art that's created by AI; they won't
support companies that use AI to create
their product or their marketing. And
then other people will sign up
for Neuralink when it becomes available
and literally augment
themselves; they will use AI whenever and
wherever humanly possible; they will
fantasize about free energy and the
utopia that AI is going to bring. And I
think over time those two groups will end up
pulling apart, especially if AI helps
some people augment themselves. And you
could be augmenting yourself directly,
you could be augmenting your children
just through genetic selection, let alone
gene editing. Do you think that's
plausible, likely,
delusional? Yeah, I think the debate is
likely to become polarized if we're
talking about the sort of public debate
about what should be done about this
We're already seeing it a little bit:
on the one hand the sort of
doomers, and then the e/acc,
go-forward-with-maximal-speed-
on-everything crew, sort
of dividing themselves up into two
different tribes that can now start to
hate on one another.
And I think maybe broader segments
of society will be recruited into this
debate as the impacts start to be
more widely
felt.
I could
see... yeah, it's interesting to
think about how the speed of development
might impact the degree to which this
happens. I think maybe there are
actually three different regimes. If it's
extremely sudden and
fast, like superintelligence is
invented next week and just comes out
of the blue, then there
won't be much more polarization than
there currently is, because
people won't have seen it coming in time.
And I think also maybe if it's
extremely slow and happens over many
decades, then it might be the
boiling-frog phenomenon, where
people
like are using this technology and of
course every little increment makes it
better. If you're going to have
a medical diagnosis bot, surely you
want it to make slightly fewer errors
rather than more errors, and so
every little step along the way will
just be better. If you have a self-driving
car, you want it to be slightly smarter
so it crashes less often. For every
application it's clear that more capable
means better, and so if you just follow
that long enough you eventually end up
with superintelligence, but at no point
is there a clear jumping-off point or
an alarm signal. Then there's the
intermediate scenario, where you have
turbulence: people feel
dislocated because every other
month there's a new thing, and now a
big sector of workers has been laid off,
and now there's this other thing that
has created propaganda bots that
are running around, and then there are
deepfakes, and then
some big disaster happens because the
AIs were running the power grid and it
all malfunctioned, or drone swarms
come in and kill a bunch of people
in war. In that kind of world you can
imagine that turbulence creating
increased resistance.
Now, I think you were also asking not
just about the conversation around
this, but also whether different
communities will form, sort of like
the Amish deciding to only use certain
technologies, and whether many
people will opt out of this AI
technology. Very much so, yeah. I don't
know.
Unless you go really
hardcore about it, like some of these
communities where you don't
even want cars and
stuff, it's all pretty
integrated into the modern economy. If
you're
using Google, you're using AI. In
the future every car will have AI, the
electricity grid will be optimized with
AI
algorithms, all these
different systems that you interact
with. Like the doctor: they will
probably use some AI bot, and if you have
some weird mold they'll take a
picture of it and scan it, and then some
skin cancer diagnosis system will
look at it and flag it. It will
just be everywhere. So it
might not be easy to opt out unless
you're really willing to
completely tear yourself out of
the fabric of modern
society. Do you see that happening? That
seems self-evident to me, that that's
going to happen.
well the question is on what scale right
so I mean there are people who live Off
the Grid or who are Amish and stuff but
they are still a small fraction of the
world
population, though with a higher
growth rate, because their fertility rates
are higher. So if you imagine rolling the
tape forward hundreds of years, then
eventually those groups would expand
and others would dwindle
into nonexistence unless they changed
their ways. But I'm just thinking the
time scales for that kind of population
dynamic to play out are
multigenerational, whereas the technology
is moving forward year by year, and so
I'm thinking, yeah, there
will not be
enough time for these slower processes,
would be my guess, to really
have a big impact. So here's how I see it,
and maybe you can pull me back off
the brink, which I would love.
But ultimately, when humans feel
either emotionally distressed or
financially distressed, and usually the
two are intertwined, they will go all
the way to killing their fellow humans
with absolutely no problem
whatsoever. Take the French
Revolution: things got bad enough
economically, they just pulled people
into the streets and started beheading
them. I don't think we are
fundamentally different from that
version of humanity. And I think if AI
begins to disrupt enough jobs and
creates enough turmoil, it's not
like the Industrial Revolution, where
yeah, you had a generation that had it
kind of rough because they weren't able
to rapidly change, but there was just
such an economic boom that
the people who were winning from it far
outweighed the people who were losing
from it, and so it ended up being fine.
But I think what you're going to see is
a disruption that happens so quickly and
touches on the one thing that, if you
break it, you're going to have a real
problem, which is meaning and purpose. The
only hope we have, and this is ultra-
dystopian, is that we have enough
entertaining things that people are
numbed to the fact that they're no
longer climbing, that life isn't going to
be better for them than it was for their
parents, that they've lost their job or
whatever, and so they drink, do drugs,
watch online porn, play video games, and that
just becomes a sort of get-by
existence, and they just sort of give up.
That's the hopeful outcome.
But I think the more likely outcome
is that this becomes a political divide,
where the battle ends up being drawn
along the lines I was describing before:
people who utterly reject it and just
want to absolutely shut down AI, put it
back in the bottle, and people who
want to develop it. And just another
terrifying twist: if AI comes out slowly
enough that we see, let's just say, China
make a major advance, but not a big
enough advance that we would
automatically lose in a war, I could see
a preemptive escalation of violence
to shut it down, to make sure we're
able to hit parity with them,
either by elevating ourselves or by
tearing them down. It's very hard to
predict these kinds of sociopolitical
dynamics and cultural dynamics. We don't
have the kind of scientific theory
that can tell us how social sentiment
will change over the course of five or
10 years. I think in the past
a lot of revolutions were driven by hunger,
by a lack of food to eat, and that
hopefully would be relatively easy
to supply with some degree of political
mobilization, especially in these rapid-
growth scenarios. So you could have bread
and
circuses.
The meaning-and-purpose
issue might be harder to
remedy. But then, maybe the line is, well,
let's be honest here: most people's
lives today, just how grandiosely
purposeful are they really? You
go in, you make a paycheck, and then you
spend the rest of your hours
relaxing, or having fun, or playing with
the kids. Most people are not really
trying to change the world, or imagining
that they are some historical
figure bestriding humanity to shape its
destiny; that's just not reality.
And
so if you didn't have to go in and work
for eight hours every day doing
some pretty boring stuff that maybe
you don't really want to do, and you could
instead sleep in and have fun, you know,
would that really be such a tragedy? You
got the same paycheck, let us say,
but without
doing these chores. That seems like
a win,
potentially. Hopefully a lot of the
potentially uh hopefully a lot of the
energy that people put into work could
then be put into instead building up
leisure activities like to have clubs
and hobby organizations that that that
create sort of activities for people who
now have more free time to to do things
um and so there would have to be this
cultural reset that that seems like a
maybe better outlet for the Surplus uh
time and energy than uh trying to tear
everything
down it's interesting so uh have you
read Brave New World?
Mhm. What do you think about that? Because
it feels like you have dueling
dystopias. You've got on the one hand
1984, massive suppression; you could think
of this as an AI tool that's watching
you all the time. If you reread 1984
imagining AI doing the
surveillance, it suddenly becomes super
real. So you have that version of
dystopia, where even wrongthink gets you
punished, and then you have over here
the other version: just keep taking
your drugs, feeling good, being
blissed out all the time. Both read as
dystopias. Did you take Brave New
World to be dystopic? Is there something
I'm
missing in that
interpretation? Yeah, I think it's
missing some elements that, if they were
added, would make the world a lot better.
It's been a long time since
I read it, but I think there's no real romantic
love, for example, in Brave New
World, no appreciation for true art
and beauty at the higher level, as
opposed to easy distraction and
shallow
flimflam. So if you imagine
a Brave New World-like scenario, but
where people actually had a lot of free
time that they spent being with people
they loved, cultivating hobbies,
appreciating great literature,
cultivating the art of
conversation, maybe taking
art classes to deepen their
appreciation of great art, all kinds
of things, also less cerebral things:
some people might
be doing more sporty things, or being
into nature, or whatever it is. A
society where people were
focused on
developing a high culture of living
well, I think, could be pretty utopic. The
other thing with Brave New World that
I think casts a kind of dystopian pall
over it is the rigid stratification of
their society, where people are
destined from birth to be a particular
class. Most people in Brave New World
have various degrees of engineered
mental retardation; I think they add
alcohol to
the fetus to deliberately
brain-damage certain people so that they
will be suitable to work as
elevator operators or in menial jobs.
So that
obviously makes it pretty horrific.
Whereas if you imagine a society
where everybody were
allowed, encouraged, to grow to their