Transcript
rfKiTGj-zeQ • Nick Bostrom: Simulation and Superintelligence | Lex Fridman Podcast #83
Kind: captions
Language: en
the following is a conversation with
Nick Bostrom a philosopher at University
of Oxford and the director of the future
of humanity Institute he has worked on
fascinating and important ideas in
existential risk simulation hypothesis
human enhancement ethics and the risks
of super intelligent AI systems
including in his book super intelligence
I can see talking to Nick multiple times
in this podcast many hours each time
because he has done some incredible work
in artificial intelligence in technology
space science and really philosophy in
general but we have to start somewhere
conversation was recorded before the
outbreak of the coronavirus pandemic
that both Nick and I I'm sure will have
a lot to say about next time we speak
and perhaps that is for the best because
the deepest lessons can be learned only
in retrospect once the storm has passed I
do recommend you read many of his papers
on the topic of existential risk
including the technical report titled
global catastrophic risks survey that he
co-authored with Anders Sandberg for
everyone feeling the medical
psychological and financial burden of
this crisis
I'm sending love your way stay strong
we're in this together we'll beat this
thing this is the artificial
intelligence podcast you can enjoy it
subscribe on YouTube review it with five
stars on Apple Podcasts support it on patreon
or simply connect with me on Twitter
at lexfridman spelled F R I D M A N as
usual I'll do one or two minutes of ads
now and never any ads in the middle that
can break the flow of the conversation I
hope that works for you and doesn't hurt
the listening experience this show is
presented by cash app the number-one
finance app in the App Store when you
get it
use code lexpodcast cash app lets you
send money to friends buy Bitcoin and
invest in the stock market with as
little as one dollar since cash app does
fractional share trading let me mention
that the order execution algorithm that
works behind the scenes to create the
abstraction of fractional orders is an
algorithmic marvel so big props to the
cash app engineers for solving a hard
problem that in the end provides an easy
interface that takes a step up to the
next layer of abstraction over the stock
market making trading more accessible
for new investors and diversification
much easier so again
you get cash app from the App Store or
Google Play and use the code lexpodcast
you get $10 and cash app will also donate
$10 to FIRST an organization that is
helping to advance robotics and STEM
education for young people around the
world and now here's my conversation
with Nick Bostrom at the risk of asking
the Beatles to play yesterday or the
Rolling Stones to play satisfaction let
me ask you the basics what is the
simulation hypothesis that we are living
in a computer simulation
what is the computer simulation how
we're supposed to even think about that
well so the hypothesis is meant to be
understood in a literal sense not that
we can kind of metaphorically view the
universe as an information processing
physical system but that there is some
advanced civilization who built a lot of
computers and that what we experience is
an effect of what's going on inside one
of those computers so that the the world
around us our own brains everything we
see and perceive and think and feel would
exist because this computer is running
certain programs do you think of this
computer as something similar to the
computers of today these deterministic
sort of turing machine type things is that
what we're supposed to imagine or we're
supposed to think of something more like
a like a like a quantum mechanical
system something much bigger something
much more complicated something much
more mysterious from our current
perspective so the ones we have today
would just need to be bigger certainly
you'd need more memory and more
processing power I don't think anything
else would be required now it might well
be that they do in addition maybe
have quantum computers and other things
that would give them even more
capability but I don't think it's a
necessary assumption in order to get to
the conclusion that a technologically mature
civilization would be able to create
these kinds of computer simulations with
conscious beings inside them so do you
think the simulation hypothesis is an
idea that's most useful in philosophy
computer science physics sort of where
do you see it having value kind of
as a starting point in terms of a
thought experiment is it useful I
guess it's more informative and
interesting and maybe important it's not
designed to be useful for something else
okay interesting
sure but is it philosophically
interesting or is there some kind of
implications of computer science and
physics I think not so much for computer
science or physics per se certainly it
would be of interest in philosophy I
think also to say cosmology or physics
in as much as you're interested in the
fundamental building blocks of the world
and the rules that govern it and if we
are in a simulation there is then the
possibility that say physics at the
level where the computer running the
simulation exists could be different from
the physics governing phenomena in the
simulation so I think it might be
interesting from the point of view of
religion or just for a kind of
trying to figure out what the heck
is going on so we mentioned the
simulation hypothesis so far there is
also the simulation argument which I
tend to make a distinction between so simulation
hypothesis we are living in a computer
simulation the simulation argument is this
argument that tries to show that one of
three propositions is true one of which
is the simulation hypothesis but there
are two alternatives in the original
simulation argument which which we can
get to yeah let's go there by the way
confusing terms because people will I
think probably naturally think
simulation argument equals simulation
hypothesis just terminology wise but
let's go there so simulation hypothesis
means the hypothesis that we're living in a
simulation and the simulation argument has the
three complete possibilities that cover
all possibilities so what yeah so it's
like a disjunction it says at least one
of these three is true yeah although it
doesn't on its own tell us which one so
the first one is that almost all
civilizations at our current stage of
technological development go extinct
before they reach technological maturity
so there is some great filter that makes
it so that basically none of the
civilizations throughout you know maybe
vast cosmos will ever get to realize the
full potential of technological development
and this could be theoretically speaking
this could be because most civilizations
kill themselves too eagerly or destroy
themselves early or it might be super
difficult to build a simulation in the
available span of time theoretically it could
be both now I think it looks like we
would technically be able to get there
in a time span that is short compared to
say the lifetime of planets and other
sort of astronomical processes so your
intuition is that building a simulation is not
the hard part well so this is an interesting concept of
technological maturity it's kind of an
interesting concept for other
purposes as well we can see even based
on our current limited understanding
what some lower bound would be on the
capabilities that you could realize by
just developing technologies that we
already see are possible so for example
one one of my research fellows here eric
drexler back in the eighties studied
molecular manufacturing that is you
could analyze using theoretical tools
and computer modeling the performance of
various molecularly precise structures
that we didn't then and still don't
today have the ability to actually fabricate
but you could say that well if we could
put these atoms together in this way
then the system would be stable
and it would you know rotate at
this speed and have these
computational characteristics and he
also outlined some pathways that would
enable us to get to this kind of
molecular
manufacturing in the fullness of time
you could do other studies we have
done you can look at the speed at which
say it would be possible to colonize the
galaxy if you had mature technology we
have an upper limit which is the speed
of light we have sort of a lower current
limit which is how fast current Rockets
go we know we can go faster than that by
just you know making them bigger and
have more fuel and stuff and and you can
then start to describe the technological
affordances that would exist once a
civilization has had enough time to
develop at least those technologies that
we already know are possible then maybe
they would discover other new physical
phenomena as well that we haven't
realized that would enable them to do
even more but but at least there is this
kind of basic set of capabilities
can you linger on that how do we jump from
molecular manufacturing to deep-space
exploration to mature technology like
what's the connection well so these
would be two examples of technological
capability sets that we can have a high
degree of confidence are physically
possible in our universe and that a
civilization that was allowed to
continue to develop its science and
technology would eventually attain you
can Intuit like we can kind of see the
set of breakthroughs that are likely to
happen so can you walk through what you
call the technological set with
computers maybe is easiest I mean the
one is we could just imagine bigger
computers using exactly the same parts
that we have so you can kind of scale
things that way right but you could also
make processors a bit faster if you had
this molecular nanotechnology that
eric drexler described he characterized a
kind of crude computer built with these
parts that would perform you know
at a million times the human brain while
being
significantly smaller the size of a sugar
cube and he made no claim that that's
the optimal computing structure like
probably you know we could build faster
computers that would be more efficient
but at least you could do that if you
had the ability to do things that were
atomically precise yes
that means you can combine these two you
could have this kind of nanomolecular
ability to build things at the bottom
and then say at the spatial scale
that would be attainable through space
colonizing technology you could then
start for example to characterize a
lower bound on the amount of computing
power that a technologically mature
civilization would have if it could grab
resources you know planets and so forth
and then use this molecular
nanotechnology to optimize them for
computing you'd get a very very high
lower bound on the amount of compute so
sorry define some terms so
technologically mature civilization is
one that took that piece of technology
to its lower bound
what is technologically mature well
yeah so that's a stronger concept than we
really need for the simulation
hypothesis I just think it's interesting
in its own right so it would be the idea
that there is some stage of
technological development where you've
basically maxed out where you've developed
all those general-purpose widely useful
technologies that could be developed or
at least kind of come very close to the
max you know 99.9 percent there or
something so that's an
independent question you can think
either that there is such a ceiling or
you might think it just goes the
technology tree just goes on forever
where is your sense on that I would
guess that there is a maximum that
you would start to asymptote towards so
new things won't keep springing up new
ceilings in terms of basic technological
capabilities I think that yeah there's
like a finite set of those that can
exist in this universe moreover I
mean I wouldn't be that surprised if we
actually reached close to that level
fairly shortly after we have say machine
super intelligence so I don't think it
would take millions
of years for a human originating
civilization to begin to do this I
think it's more likely to
happen on historical timescales but that
that's that's an independent speculation
from the simulation argument I mean for
the purpose of the simulation argument
it doesn't really matter whether it goes
indefinitely far up or whether there is
a ceiling as long as we know we could at
least get to a certain level and it also
doesn't matter whether that's gonna
happen in a hundred years or five
thousand years or 50 million years like
the timescales really don't make any
difference for the simulation argument
can you linger on that a little bit
like there's a big difference
between a hundred years and ten million
years you know so does it really not
matter because you just said it doesn't
matter if we jump scales to beyond
historical scales so to restate that
so for the simulation argument sort of
doesn't it matter that if it takes
ten million years it gives us a lot more
opportunity to destroy civilization in
the mean time yeah well so it would
shift around the probabilities between
these three alternatives that is if we
are very very far away from being able
to create these simulations if it's like
say billions of years into the
future then it's more likely that we
will fail ever to get there there's more
time for us to you know go
extinct along the way and similarly for
other civilizations so it's important to
think about how hard it is to build a
simulation in terms of yeah
figuring out which of the disjuncts is true
but for the simulation argument itself
which is agnostic as to which of these
three alternatives is true okay
it's like you don't have to settle which one the
simulation argument would be true
whether or not we thought this could be
done in five hundred years or it would
take five hundred million years so for
sure the simulation argument stands I'm
sure there might be some people who
oppose it but it doesn't matter I mean
it's very nice that those three cases are
covered but the fun part is at least not
saying what the probabilities are but
kind of thinking about intuitive
reasoning about what's more likely
what are
the kind of things that would make some
of the arguments more or less likely so
but let's actually I don't think we went
through them so number one is we destroy
ourselves before we ever create simulations
right so that's kind of sad but we have
to think not just about what might
destroy us I mean today there could be
some whatever disastrous meteor
slamming the earth a few years from now
that could destroy us right but
you'd have to postulate in order for
this first disjunct to be true that
almost all civilizations throughout the
cosmos also failed to reach
technological maturity and the
underlying assumption there is that
there is likely a very large number of
other intelligent civilizations well if
there are yeah then they would virtually
all have to succumb in the same way I
mean then that leads off another I
guess there are a lot of little
digressions there that are interesting
yeah keep dragging us back there are
these there is a set of basic questions
that always come up in conversations
with interesting people yeah like the
Fermi paradox like there's like you
could almost define whether a person is
interesting by whether at some
point the fermi paradox comes up
well so for what it's worth
it looks to me that the universe is very
big I mean in fact according to the most
popular current cosmological theory is
infinitely big and so then it would
follow pretty trivially that that it
would contain a lot of other
civilizations in fact infinitely many if
you have some local stochasticity and
infinitely many it's like you know
infinitely many lumps of matter one next
to another there's a kind of random
stuff in each one then you're going to
get all possible outcomes with
probability one infinitely repeated so
so then certainly there would be a
lot of extraterrestrials out there
maybe short of that if the universe is
very big there might be a finite but
large number if we are literally the only
one then of course if we went
extinct then all civilizations at our
current stage would have gone extinct
before becoming technologically mature
so then it kind of becomes trivially
true that a very high fraction of those
went extinct but if we think there
many I mean it's interesting because
there are certain things that plausibly
could kill us if you look
at existential risks and it might be
that the best answer
to what would be most likely to kill us
might be a different answer than the
best answer to the question if there is
something that kills almost everyone
what would that be because that would
have to be some risk factor that was
kind of uniform over all possible
civilizations yeah so for
this first case you have to
think about not just us but like every
civilization dies out before they create
this simulation yeah or something very
close to everybody okay so what's number
two well so number two is the
convergence hypothesis that is that
maybe like a lot of some of these
civilizations do make it through to
technological maturity but out of those
who do get there they all lose interest
in creating these simulations so they
just they have the capability of doing
it but they choose not to yeah not just
a few of them decide not to but you know
you know out of a million you know maybe
not even a single one of them would do
it and I think when you say lose
interest that sounds like unlikely
because it's like they get bored or
whatever but it could be so many
possibilities within that
I mean losing interest could be
it could be anything from it being
exceptionally difficult to do to
fundamentally changing the sort of the
fabric of reality if you do it as
ethical concerns all those kinds of
things could be exceptionally strong
pressures well certainly I mean yeah
ethical concerns but not really too
difficult to do I mean in a sense that's
the assumption that you get to
technological maturity where you would have
the ability using only a tiny fraction
of your resources to create many many
simulations so it wouldn't be the case
that they would need to spend half of
their GDP forever in order to create one
simulation and have this like
difficult debate about whether they
should you know invest half of their GDP
for this it would more be like well if
any little fraction of the civilization
feels like doing this at any point
during maybe their you know millions
of years of existence then there would
be millions of simulations but but
certainly there could be many conceivable
reasons for why there would be this
convergence many possible reasons for not
running ancestor simulations or other
computer simulations even if you could
do so cheaply by the way what's an
ancestor simulation well that would be
the type of computer simulation that
would contain people all like those we
think have lived on our planet in the
past and like ourselves in terms of the
types of experiences they have and
where those simulated people are
conscious so it's like not just
simulated in the same sense that a
non-player character would be simulated
in a current computer game where it
kind of has an avatar body and
then a very simple mechanism that moves
it forward or backwards but
something where the simulated being
has a brain let's say that is simulated at
a sufficient level of granularity that
it would have the same subjective
experiences as we have so where does
consciousness fit into this do you think
simulation like are there different
ways to think about how this can be
simulated
just like you're talking about now do we
have to simulate each brain within the
larger simulation is it enough to
simulate just the brains just the minds
and not the rest of the simulation not the
big universe itself like are there
different ways to think about this yeah
I guess there is a kind of premise in
the simulation argument rolled in from
philosophy of mind that is that it would
be possible to create a conscious mind
in a computer and that what determines
whether some system is conscious or not
is not like whether it's built from
our organic biological neurons but maybe
something like what the structure of the
computation is that it implements so we
can discuss that if we want but I think
my view would be
that it would be sufficient say if you
had a computation that was identical to
the computation in the human brain down
to the level of neurons so if you had a
simulation with 100 billion neurons
connected in the same way as the human
brain and you'd then roll that forward
with the same kind of synaptic weights
and so forth so you actually had the
same behavior coming out of this as a
human with that brain would have done then
I think that would be conscious now it's
possible you could also generate
consciousness without having that
detailed simulation there I'm getting
more uncertain exactly how much you
could simplify or abstract away
can you linger on that what do you mean I
missed where you place consciousness
in that second case well so if you are a
computationalist do you think that what
creates consciousness is the
implementation of a computation some
emergent property in the
computation itself yes that's the idea yeah you
could say that but then the question is
what's the class of
computations such that when they are
wrong consciousness emerges so if you
just have like something that adds 1
plus 1 plus 1 plus 1 like a simple
computation you think maybe that's not
gonna have any consciousness if on the
other hand the computation is one like
our human brains are performing where as
part of the computation there is like
you know a global workspace a
sophisticated attention mechanism there
is like self representations of other
cognitive processes and a whole lot of
other things that possibly would be
conscious and in fact if it's exactly
like ours I think definitely it would
but exactly how much less than the full
computation that the human brain is
performing would be required is a little
bit I think of an open question you asked
another interesting question as well
which is would it be sufficient to just
have say the brain or would you need the
environment right that's a nice way in
order to generate the same kind of
experiences that we have and there is a
bunch of stuff we don't know I mean if
you look at say current virtual reality
environments one thing that's clear is
that we don't have to simulate all
details of them all the time in order
for say the human player to have
the perception that there is a full
reality and you can have say
procedurally generated virtual worlds that
might only render a scene when it's actually
within the view of the player character
and so similarly if this
environment that we perceive is
simulated it might be that all of the
parts that come into our view are
rendered at any given time and a lot of
aspects that never come into view say
the details of this microphone I'm
talking into exactly what each atom is
doing at any given point in time might
not be part of the simulation only a
more coarse-grained representation so
that to me is actually from an
engineering perspective why the
simulation hypothesis is really
interesting to think about is how
difficult is it to
sort of in a virtual reality context I
don't know if fake is the right word but to
construct a reality that is sufficiently
real to us to be immersive in the
way that the physical world is I think
that's actually probably an
answerable question of psychology of
computer science of where's the
line where it becomes so immersive that
you don't want to leave that world yeah
alright that you don't realize while
you're in it that it is a virtual world
yeah those are actually two questions
yours is the more sort of the good
question about the realism but mine from
my perspective what's interesting is it
doesn't have to be real but how
can we construct a world that we
wouldn't want to leave oh yeah I mean I
think that might be too low a bar I mean
if you think say when people first had
the pong or something like that like I'm
sure there were people who wanted to
keep playing it for a long time because
it was fun and they wanted to be in this
little world I'm not sure we would say
it's immersive I mean I guess in some
sense it is but like an absorbing
activity it doesn't even have to be but
they left that world though that's the
so like I think that bar is deceivingly
high so they eventually look so they you
can play pong or Starcraft or would have
more sophisticated games for hours for
four months you know Wow well the
Warcraft could be in a big addiction but
eventually they escape that ah so you
mean when it's absorbing enough that
you would yeah
choose to spend your entire life in
there and then thereby changing the
concept of what reality is but as your
reality your reality becomes the game
not because you're fooled but because
you've made that choice yeah and it may
be different people might have different
preferences regarding that some
might even if you had a perfect
virtual reality still prefer not
to spend the rest of their lives there
I mean in philosophy there's this
experience machine thought experiment
have you come across this so Robert
Nozick had this thought experiment where
you imagine some crazy super-duper
neuroscientist of the future has
created a machine that could give you
any experience you want if you step in
there and for the rest of your life you
can kind of pre-program it in
different ways so your you know
fondest dreams could come true you could
whatever you dream you want to be a
great artist a great lover like have a
wonderful life all of these things mmm
if you step into the experience machine
your experiences would be constantly
happy but you would kind of disconnect from
the rest of reality and would float
there in a tank and nozick
thought that most people would choose
not to enter the experience machine I
mean many might want to go there for a
holiday but they wouldn't want to check
out of existence permanently and so he
thought that was an argument against
certain views of value according to which
what we value is a function of what
we experience because in the experience
machine you can have any experience you
want and yet many people would think
that would not be much value so
therefore what we value depends on other
things than what we experience so ok can
you take that argument further
what about the fact that maybe what we
value is the ups and downs of life so
could have up and downs in the
experience machine right but what can't
you have in the experience machine well
I mean that then becomes an interesting
question to explore but for example real
connection with other people if the
experience machine is a solo machine
where it's only you like that's
something you wouldn't have there you
would have this subjective experience
that would be like fake people yeah but
if you gave somebody flowers there
wouldn't be anybody there who
actually got happy it would just be a
little simulation of somebody smiling
but the simulation would not be the kind
of simulation I'm talking about in the
simulation argument where the
simulated creatures are conscious it would
just be a kind of smiley face that would
look perfectly real to you so we're now
drawing a distinction between appear to
be perfectly real and actually being
real yeah so that could be one thing I
mean like a big impact on history maybe
it's also something you won't have if
you check into this experience machine
so some people might actually feel the
life I want to have for me is one where
I have a big positive impact on how
history unfolds so let's see if you could kind
of explore these different possible
explanations for why you wouldn't
want to go into the experience machine
if that's what you feel and
one interesting observation
regarding this Nozick thought experiment
and the conclusions he wanted to draw
from it is how much of it is a kind of
status quo effect so a lot of people
might not want to jettison current reality
to plug in to this dream machine but if
they instead were told well what you've
experienced up to this point was a dream
now
do you want to disconnect from this and
enter the real world when you have no
idea maybe what the real world is or
maybe you could say well you're actually
a farmer in Peru growing you know
peanuts and you could live for the rest
of your life
in this world or would you want to
continue your dream life as lex
fridman going around the world making
podcasts and doing research so if the
status quo was that they were
actually in the experience machine
all along a lot of people might prefer to
live the life that they are familiar
with rather than sort of bail out into
something else the change itself the leap
yeah it might not be so much the
reality itself that we're after but it's
more that we are maybe involved in
certain projects and relationships and
we have you know a self-identity and
these things that our values are kind
of connected with carrying that forward
and then whether
it's inside a tank or outside a tank in
Peru or whether inside a computer or
outside a computer that's kind of less
important to what what we ultimately
care about yeah but still so just linger
on it it is interesting I find maybe
people are different but I find myself
quite willing to take the leap to the
farmer in Peru especially as virtual
reality systems become more realistic I
I find that possibility and I think more
people would take that leap but so in
this thought experiment just to
make sure we understand in this
case the farmer in Peru would not
be in a virtual reality that would be
the really real that your
life like before this whole experience
machine started well I kind of assumed
from that description
you're being very specific but that kind
of idea just like washes away the
concept of what's real I mean I'm still
a little hesitant about your kind of
distinction between real and illusion
because when you can have an illusion
that feels I mean that looks real
you know I don't know how you can
definitively say something is real or
not like what's what's a good way to
prove that something is real in that
context well so I guess in this case
it's more a stipulation in one case
you're floating in a tank with these
wires by the super-duper neuroscientists
plugging into your head giving you Lex
Friedman experiences in the other you're
actually tilling the soil in Peru
growing peanuts and then those peanuts
are being eaten by other people all
around the world via exports and
that's two different possible situations
in the one and the same real world that
you could choose to occupy but just
to be clear when you're in a vat with
wires and the neuroscientists you can
still go farming in Peru right mmm but
like well you could if you
wanted to you could have the experience
of farming in Peru but then there
wouldn't actually be any peanuts grown
well but what makes a peanut so
so a peanut could be grown and you could
feed things with that peanut and why
can't all of that be done in a
simulation
I hope first of all that they actually
have peanut farms in Peru I guess we'll
get a lot of angry comments otherwise
I was with you up to the point but you should
know you can't grow peanuts in that climate now I
mean I think in the
simulation there's a sense the
important sense in which it would all
be real nevertheless there is a
distinction between inside the
simulation and outside the simulation or
in the case of Nozick's thought experiment
whether you're in the VAT or outside the
VAT and some of those differences may or
may not be important I mean that that
comes down to your values and
preferences. so if the experience machine only gives you the experience of growing peanuts, but you're the only one in the experience machine — versus others being able to plug in within the experience machine — those are different versions of the experience machine. so in fact you might want to distinguish different versions of the thought experiment. in the original thought experiment maybe it's only you, right, just you — and you think I
wouldn't want to go in there well that
tells you something interesting about
what you value and what you care about
then you could say well what if you add
the fact that there would be other
people in there and you would interact
with them well it starts to make it more
attractive right then you can add in
well what if you could also have
important long-term effects on human
history in the world and you could
actually do something useful even though
you were in there that makes it maybe
even more attractive like you could
actually have a life that had a purpose
and consequences and so as you sort of
add more into it it becomes more similar
to the the baseline reality that that
you were comparing it to yeah but I just
think inside the experience machine and
without taking those steps you just
mentioned you still have an
impact on long-term history
of the creatures that live inside that
of the quote-unquote fake creatures that
live inside that experience machine and
that like at a certain point you know if
there's a person waiting for you inside
that experience machine, maybe your newly found wife, and she has desires, she has fears, she has hopes, and she exists in that machine. when you unplug yourself and plug back in, she's still there going on about her life oh
well in that case yeah she starts to
have more of an independent existence. but it depends I
think on how she's implemented in the
experience machine. take the limiting case where all she is is a static picture on the wall, a photograph. so you think, well, I can look at her, but that's it. then you think, well, it doesn't really matter much what happens to that, any more than with a normal photograph — if you tear it up, it means you can't see it anymore, but you haven't harmed the person whose picture you tore up. but
if she's actually implemented say at a
neural level of details so that she's a
fully realized digital mind with the
same behavioral repertoire as you have
then very plausibly she would be a
conscious person like you are and then
you would what you do in in this
experience machine would have real
consequences for how this other mind
felt so you have to like specify which
of these experience machines you're
talking about I think it's not entirely
obvious that it will be possible to have
an experience machine that gave you a
normal set of human experiences which
include experiences of interacting with
other people without that also
generating consciousnesses corresponding
to those other people that is if you
create another entity that you perceive
and interact with that to you looks
entirely realistic not just when you say
hello they say hello back but you have a
rich interaction many days deep
conversations like it might be that the
only
possible way of implementing that would be one that also, as a side effect, instantiated this other person in enough detail that you would have a second
consciousness there I think that's to
some extent an open question so you
don't think it's possible to fake consciousness? well, it might be — I
mean I think you can certainly fake if
you have a very limited interaction with
somebody you could certainly fake that
that is if all you have to go on is
somebody said hello to you that's not
enough for you to tell whether that was
a real person there or a pre-recorded
message or you know like a very
superficial simulation that has no
consciousness because that's something
easy to fake we could already fake it
now you can record a voice recording and
you know but but if you have a richer
set of interactions where you're allowed
to answer ask open-ended questions and
probe from different angles, that couldn't easily be faked — you couldn't give canned answers to all of the possible ways that you could probe it. then it starts to become more plausible that the only way to realize this thing, in such a way that you would get the right answer from whichever angle you probed it, would be a way of instantiating it that also instantiated a conscious mind. yeah maybe
on the intelligence part but there's
something about me that says
consciousness is easier to fake like I
I've recently gotten my hands on a lot
of Roombas, don't ask me why or how — they're just a nice
robotic mobile platform for experiments
and I made them scream and/or moan in
pain so on just to see when they're
responding to me and it's just a sort of psychological experiment on myself, and I think they appear conscious to me pretty quickly — to me at least, my brain can be tricked quite easily. if I introspect, it's harder for me to be tricked that something is
intelligent so I just have this feeling
that inside this experience machine just
saying that you're conscious and having
certain qualities of the interaction
like being able to suffer like being
able to hurt, like being able to wonder about the essence of your own existence — not actually wondering, but creating the illusion that you're wondering about it — is enough to create the illusion of consciousness and
because of that create a really
immersive experience to where you feel
like that is the real world so you think
there's a big gap between appearing
conscious and being conscious or is it
not just that gets very easy to be
conscious I'm not actually sure what it
means to be conscious all I'm saying is
the illusion of consciousness is enough
for this to create a social interaction
that's as good as if the thing was
conscious meaning I'm making it about
myself right yeah I mean I guess there
are a few differences one is how good
the interaction is which might mean if
you don't really care about like probing
hard for whether the thing is conscious
maybe maybe it would be a satisfactory
interaction whether or not you really
thought it was conscious now if you
really care about it being conscious — like inside this experience machine
yes how easy would it be to fake it and
you say it sounds easy easy yeah then
the question is would that also mean
it's very easy to instantiate
consciousness — that it's much more widely spread in the world than we have thought;
it doesn't require a big human brain
with a hundred billion neurons all you
need is some system that exhibits basic
intentionality and can respond and you
already have consciousness like in that
case I guess you still have a close coupling, though. the case where they could come apart would be where we could create the appearance of there being a conscious mind without there actually being another
conscious mind I'm yeah I'm somewhat
agnostic exactly where these lines go I
think one observation that makes it
possible that you could have very
realistic appearances relatively simply
which also is relevant for the
simulation argument and in terms of
thinking about how realistic would the
virtual reality model have to be in
order for the
creature not to notice that anything was
awry well just think of our own humble
brains during the wee hours of the night
when we are dreaming many times well
dreams are very immersive but often you
also don't realize that you're in a
dream and that's produced by simple
primitive three-pound lumps of neural
matter effortlessly so if a simple brain
like this can create a virtual reality
that seems pretty real to us then how
much easier would it be for a super
intelligent civilization with planetary
sized computers optimized over the eons
to create a realistic environment
you to interact with yeah and by the way
behind that intuition is that our brain
is not that impressive relative to the
possibilities of what technology could
bring it's also possible that the brain
is the ceiling — meaning this is the smartest
possible thing that the universe could
create so that seems unlikely
to me yeah I mean for some of these
reasons we alluded to earlier in terms
of designs we already have for
computers that would be faster by many
orders of magnitude than the human brain
yeah but it could be that the
constraints the cognitive constraints in
themselves is what enables the
intelligence so the more the more
powerful you make the computer the less
likely is to become super intelligent
this is where I say dumb things to push
back and uh yeah I'm not sure I follow we might you know I mean so there are different dimensions of intelligence yeah
a simple one is just speed like if you
could solve the same challenge faster in
some sense yes you're like smarter so
there I think we have very strong
evidence for thinking that you could
have a computer in this universe that
would be much faster than the human
brain and therefore have speed superintelligence, like be completely superior, maybe a
million times faster then maybe there
are other ways in which you could be
smarter as well maybe more qualitative
ways right and there
the concepts are a little bit less
clear-cut so it's harder to make a very
crisp neat firmly logical argument for
why that could be qualitative
superintelligence as opposed to just
things that are faster although I
still think it's very plausible and for
various reasons that that are less than
watertight arguments but you can sort of, for example, look at
humans like there seems to be like
Einstein versus random person like it's
not just that Einstein was a little bit
faster but like how long would it take a
normal person to invent general
relativity it's like it's not twenty
percent longer than it took Einstein or
something like that it's like I don't
know whether they would do it at all, or it would take millions of years, or something totally bizarre. so your intuition is that increasing the size of
the computer and the speed of the
computer might create some much more
powerful levels of intelligence that
would enable some of the things we've been talking about, like the simulation — being able to simulate an ultra realistic environment, an ultra realistic perception of reality. yeah I mean, strictly speaking, it would not be necessary to have superintelligence in order to have, let's say, the
technology to make these simulations
ancestor simulations or other kinds of
simulations and as a matter of fact I think if we are in a
simulation it would most likely be one
built by a civilization that had super
intelligence it certainly would help a
lot I mean it could build more efficient
large-scale structures if you had super
intelligence I also think that if you
had the technology to build these
simulations that's like a very advanced
technology it seems kind of easier to
get technology to super intelligence
yeah so I'd expect that by the time they could make these fully realistic simulations of human history with human brains in there, before they got to that stage, they would have figured out how to create machine superintelligence
or maybe biological enhancements of
their own brains if there were
biological creatures to start with so we
talked about the the three parts of the
simulation argument one we destroy
ourselves before we ever create the
simulation two we somehow everybody
somehow loses interest in creating
simulation three we're living in a
simulation so you've kind of I don't
know if your thinking has evolved on
this point but you kind of said that we
know so little that these three cases
might as well be equally probable
so probabilistically speaking where do
you stand on this yeah I know I mean I
don't think equal necessarily would be
the most supported probability
assignment so how would you, without assigning actual numbers, say what's more or less likely in your view well I mean I've historically tended to punt on the question of assigning probabilities between these three so maybe another way to ask is which kind of things would make each of these more or less likely right certainly in general terms
if you think anything that say increases
or reduces the probability of one of
these we tend to slosh probability around on the others so if one becomes less probable the others would have to become more probable because they've got to add up to one yes
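a minimal Python sketch of that probability "sloshing" under the sum-to-one constraint (the 0.1 figure below is just illustrative, not a number from the conversation):

```python
def renormalize(probs, idx, new_value):
    """Set probs[idx] to new_value and rescale the remaining
    alternatives proportionally, so the total still sums to 1."""
    others = sum(p for i, p in enumerate(probs) if i != idx)
    scale = (1.0 - new_value) / others
    return [new_value if i == idx else p * scale
            for i, p in enumerate(probs)]

# start with equal credence in the three alternatives
p = renormalize([1/3, 1/3, 1/3], 0, 0.1)
# alternative one became less probable, so the probability
# "sloshes" onto the other two: p == [0.1, 0.45, 0.45]
```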
so if we consider the first hypothesis
the first alternative that there's this
filter that makes it so that virtually
no civilization reaches technological
maturity in particular our own
civilization if that's true then it's
like very unlikely that we would reach technological maturity just because if
almost no civilization at our stage does
it then it's unlikely that we do it so
hang on, sorry, linger on that for a second. well, if it's the case that almost all civilizations at our current stage of technological development failed to reach maturity, that would give us very strong reason for thinking we will fail to reach technological maturity and also so
the flipside of that is the fact that
we've reached it means that many other
civilizations yeah so that means if we
get closer and closer to actually
reaching technological maturity there's
less and less distance left where we
could go extinct before we are there and
therefore the probability that we will
reach increases as we get closer and
that would make it less likely to be
true that almost all civilizations at
our current stage failed to get there
like the one case we know of, ourselves, would be very close to getting there, and that would be strong evidence that it's not so hard to get to technological maturity so to the extent
that we you know feel we are moving
nearer to technological maturity, that
would tend to reduce the probability of
the first alternative and increase the
probability of the other two it doesn't
need to be a monotonic change like if
every once in a while some new threat
comes into view, some bad new thing you
could do with some novel technology for
example you know that that could change
our probabilities in the other direction
but that technology, again, you have to think of as something that has to affect every civilization out there equally, in an even way yeah pretty much I mean strictly speaking that's not right — there could be two different existential risks, and every civilization succumbs to one or the other, but none of them kills more than 50% yeah. but incidentally, in some of my work on machine superintelligence, I pointed to some existential risks from, sort of, superintelligent AI and how we must make sure, you know, to handle that wisely and carefully — it's not the right kind of existential catastrophe to make the
first alternative true though like it
might be bad for us if the future lost a
lot of value as a result of it being
shaped by some process that optimized
for some completely non human value but
even if we got killed by machine superintelligence, that machine superintelligence might still attain technological maturity. so I see, so you're not human-exclusive — this
could be any intelligent species that
achieves like it's all about the
technological maturity it's not that the
humans have to attain it right — if superintelligence replaced us, that's just as good. fascinating. yeah
yeah I mean it could interact with the
second hypothesis, the second alternative — like if
the thing that replaced us was either
more likely or less likely than we would
be to have an interest in creating
ancestor simulations you know that that
could affect probabilities but yeah to a
first-order like if we all just die then
yeah we won't produce any simulations
because we are dead but if we all die
and get replaced by some other
intelligent thing that then gets to technological maturity, the question remains of course whether that thing would use some of its resources to do this stuff so can you reason about this
stuff — given how little we know about the universe, is it reasonable to reason about these probabilities, given how little we know?
well maybe you can disagree but to me
it's not trivial to figure out how
difficult it is to build a simulation we
kind of talked about it a little bit we
also don't know like as we tried to
start building it like start creating
virtual worlds and so on how that
changes the fabric of society like
there's all these things along the way
that can fundamentally change just so
many aspects of our society about our
existence that we don't know anything
about like the kind of things we might
discover when we understand to a greater
degree the fundamental physics — like if we have a breakthrough and get a theory of everything, how that changes stuff, how
that changes deep space exploration and
so on so like is it still possible to
reason about probabilities given how
little we know yes I think though there
will be a large residual of uncertainty
that we'll just have to acknowledge and
I think that's true for most of these
big-picture questions that we might
wonder about it's just we are small
short-lived small brained cognitively
very limited humans with little evidence
and it's amazing we can figure out as
much as we can really about the cosmos
but it okay so there's this cognitive
trick that seems to happen where I look
at the simulation argument, and for me cases one and two feel unlikely — I want to say feel unlikely, as opposed to — like, it's not like I have much scientific evidence to say that either one or two are not true
it just seems unlikely that every single
civilization destroys itself and it
seems like feels unlikely that the
civilizations lose interest so naturally, without necessarily explicitly doing the math, by elimination the simulation argument basically says it's very likely we're living in a simulation to me my mind naturally goes
there I think the mind goes there for a
lot of people is that the incorrect
place for it to go well not necessarily I think the second
alternative which has to do with the
motivations and interest of
technologically mature civilizations I
think there is much we don't understand
about that can you talk about that a
little bit what do you think I mean this
question pops up when you build an AGI system or a general intelligence: how does that change our motivations? do you think it would
well it doesn't seem that implausible
that once you
take this leap to the technological
maturity I mean I think like it involves
creating machine superintelligence
possibly that would be sort of on the
path for basically all civilizations
maybe before they are able to create
large numbers of ancestor simulations
that possibly could be one of
these things that quite radically
changes the orientation of what a
civilization is in fact optimizing for
there are other things as well so at the
moment we don't have perfect control over
our own being our own mental states our
own experiences are not under our direct
control so for example if you want to experience pleasure and happiness you
might have to do a whole host of things
in the external world to try to get into
the mental state where you experience pleasure — like, some people get pleasure from eating great food, well they can't just turn that on, they have to kind of
actually go to a nice restaurant and
then they have to make money too so
there's like all this kind of activity
that maybe arises from the fact that we
are trying to ultimately produce mental
states but the only way to do that is by
a whole host of complicated activities
in the external world now at some level
of technological development I think we will become autopotent, in the sense of gaining direct ability to choose our own internal configuration, and enough
knowledge and insight to be able to
actually do that in a meaningful way so
then it could turn out that there are a
lot of instrumental goals that would
drop out of the picture and be replaced
by other instrumental goals because we
could now serve some of these final
goals in more direct ways and who knows
how all of that shakes out after
civilizations reflect on that and
converge on different attractors and so
on and so forth
and there could be new instrumental considerations that come into view as well that we are just
oblivious to that would maybe have a
strong shaping effect on actions like
very strong reasons to do something or
not to do something and we just don't
realize they're there because we are so
dumb tumbling through the universe but
if almost inevitably, en route to attaining the ability to create many such simulations, you get this cognitive enhancement, or advice from superintelligences, or become one yourself, then maybe there's this additional set of considerations coming into view, and maybe then it's obvious that the thing that makes sense is to do X, whereas right now it seems you could do X, Y, or Z
and different people will do different
things and we're kind of random in that
sense yeah because at this time with our
limited technology the impact of our
decisions is minor I mean that's
starting to change some in some ways but
well I'm not sure it follows that the impact of our decisions is minor
well it's starting to change I mean I
suppose 100 years ago was minor it's
starting to — well it depends on how you view it so what people did 100 years ago still has effects on the world today oh I see, as a civilization all together yeah so it might be that
the greatest impact of individuals is
not at technological maturity or very far
down it might be earlier on when there
are different tracks civilization could
go down I mean maybe the population is
smaller things still haven't settled out
if you count indirect effects, those could be bigger than the
direct effects that people have later on
so part 3 of the argument says that that leads us to a place where eventually somebody creates a simulation — I think you had a conversation with Joe Rogan, and there's some aspect here where you got stuck a little bit — how does that lead to we're likely living
in a simulation so this kind of
probability argument if somebody
eventually creates a simulation why does
that mean that we're now in a simulation
well, what you get to if you accept alternative three first is that there would be more simulated people with our kinds of experiences than non-simulated ones — like if you look at the world as a whole, by the end of time as it were, and just count it up, there would be more simulated ones than non-simulated ones
then there is an extra step to get from that: suppose, for the sake of the argument, that that's true — how do you get from that to the statement "we are probably in a simulation"? here you are introducing an indexical statement — it's that this person right now is in a simulation. there are all these other people that are in simulations, and some that are not in simulations, but what probability should you have that you yourself are one of the simulated ones in that setup? so yeah, I call it the
bland principle of indifference which is
that in cases like this, when you have two sets of observers, one of which is much larger than the other, and you can't from any internal evidence you have tell which set you belong to, you should assign a probability that's
proportional to the size of these sets
so that if there are ten times more
simulated people with your kinds of
experiences you would be ten times more
likely to be one of those is that as
intuitive as it sounds — that seems kind of, if you don't have enough information you should rationally just assign probability proportional to the size of the sets yeah it seems pretty plausible to me where are the holes in this? is it at the very
beginning the assumption that everything
stretches sort of you have infinite time
essentially you don't need infinite time, you just need however long it takes, I guess, for a universe to produce an intelligent civilization that attains the technology to run some ancestor simulations gotcha
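the bland principle of indifference from a few exchanges back reduces to a one-line proportion; a minimal sketch in Python, using the ten-to-one ratio from the example:

```python
# bland principle of indifference: with no internal evidence to tell
# the sets apart, your credence of belonging to each is proportional
# to its size
n_simulated = 10    # simulated people with your kind of experiences
n_unsimulated = 1   # unsimulated ones (the 10:1 ratio in the example)
p_simulated = n_simulated / (n_simulated + n_unsimulated)
# p_simulated == 10/11: ten times more likely to be one of the
# simulated ones than one of the unsimulated ones
```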
at some point, when the first simulation is created, and then just a stretch of time longer, they all start creating simulations, kind of? yeah, that might happen at different times —
if you think of there being a lot of
different planets and some subset of
them have life and then some subset of
those get to intelligent life and some
of those maybe eventually start creating
simulations they might get started at
quite different times like maybe on some
planet it takes a billion years longer
before you get like monkeys or before
you get even bacteria then on another
planet so that like this might happen
kind of at different cosmological epochs
is there a connection here to the
Doomsday argument and the sampling there? there is a connection in that they both involve an application of anthropic reasoning, that is, reasoning about these kinds of indexical propositions, but the assumption you need in the case of the simulation argument is much weaker than the assumption you need to make the Doomsday argument go
through what is the Doomsday argument
and maybe you can speak to the anthropic
reasoning in general? yeah, that's a big and interesting topic in its own right, anthropics, but the Doomsday argument was first discovered by Brandon Carter, who was a theoretical physicist, and then developed by the philosopher John Leslie. I think it might have been discovered initially in the 70s or 80s, and Leslie wrote this book I think in 96. there are some other versions as well — Gott is a physicist — but let's focus on the Carter-Leslie version, where it's
argument that we have systematically
underestimated the probability that
humanity will go extinct soon now I
should say most people probably think at
the end of the day there is something
wrong with this doomsday argument that
it doesn't really hold it's like there's
something wrong with it but it's proved
hard to say exactly what is wrong with
it
and different people have different
accounts my own view is it seems
inconclusive but and I can say what the
argument is yeah yeah so maybe it's
easiest to explain via an analogy to
sampling from urns. so imagine you have two urns in front of you, and they have balls in them that have numbers. the two urns look the same, but inside one there are ten balls, ball number 1, 2, 3, up to ball
number 10 and then in the other urn you
have a million balls numbered one to a
million and somebody puts one of these urns in front of you and asks you to guess what's the chance it's the ten-ball urn, and you say 50/50 — you know, I can't tell which urn it is. but then you're allowed to reach in and pick a ball at random from the urn, and suppose you find that it's ball number 7. that's strong evidence for the ten-ball hypothesis — it's a lot more likely that you would get such a low-numbered ball if there are only 10 balls in the urn; it's in fact 10 percent, right? whereas if there are a million balls, it would be very unlikely you would get number 7
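this likelihood comparison is the whole Bayesian update; a minimal sketch in Python:

```python
# two urns, equally likely a priori; we draw ball number 7
prior_ten, prior_million = 0.5, 0.5
like_ten = 1 / 10              # P(ball 7 | 10-ball urn)
like_million = 1 / 1_000_000   # P(ball 7 | million-ball urn)

# Bayes' rule: posterior is proportional to prior * likelihood
evidence = prior_ten * like_ten + prior_million * like_million
posterior_ten = prior_ten * like_ten / evidence
# posterior_ten is about 0.99999 -- "virtually certain" it's
# the ten-ball urn
```

note how insensitive the result is to the exact 50/50 prior: the likelihood ratio of 100,000 to 1 dominates.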
so you perform a Bayesian update and if
your prior was 50/50 that it was the ten-ball urn, you become virtually certain, after finding the random sample was 7, that it only has 10 balls in it. so in the case of the urns this is uncontroversial, just elementary probability theory. the Doomsday argument says that you should reason in a similar way with respect to different hypotheses about how many balls there will be in the urn of humanity — that is, how many humans there will ever have been by the time we go extinct. so to simplify
let's suppose we only consider two
hypotheses either maybe 200 billion
humans in total or 200 trillion humans
in total you could fill in more
hypotheses but it doesn't change the
principle here so it's easiest to see if
we just consider these two so you start
with some prior based on ordinary
empirical ideas about threats to
civilization and so forth and maybe you
say it's a 5% chance that we will go
extinct by the time there will have been
200 billion only — you're kind of optimistic, let's say you think we will probably make it through and colonize the universe. but then according to this Doomsday argument you should think of
your own birth rank as a random sample
so your birth rank is your position in the sequence of all humans that have ever existed, and it turns out you're about human number 100 billion, you know, give or take — that's roughly how many people have been born before you
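with these numbers the Bayesian update the Doomsday argument asks for can be sketched in Python — the 5% prior and the 200 billion / 200 trillion hypotheses are the ones from the conversation, and treating your birth rank as a uniform random draw is the self-sampling step:

```python
# hypotheses about the total number of humans who will ever live,
# with the prior credences from the conversation
prior = {200e9: 0.05, 200e12: 0.95}   # "doom soon" vs "doom late"
rank = 100e9                          # your birth rank, ~100 billion

# self-sampling: given n humans in total, every rank 1..n is equally
# likely, so P(observing this rank | n) = 1/n (rank <= n for both)
likelihood = {n: 1 / n for n in prior}
evidence = sum(prior[n] * likelihood[n] for n in prior)
posterior_doom_soon = prior[200e9] * likelihood[200e9] / evidence
# posterior_doom_soon comes out around 0.98 -- the startling shift
# from a 5% prior toward "doom soon"
```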
that's fascinating — so we each have a number? wait, we would each have a number in this? I mean, obviously the exact number will depend on where you start counting — like which ancestor counts as the first human — but those details are not really important, there are relatively few of those. so yeah, so
you're roughly a hundred billion now if
they're only gonna be 200 billion in
total that's a perfectly unremarkable
number you're somewhere in the middle
right run-of-the-mill human completely
unsurprising yes now if they're gonna be
200 trillion you would be remarkably
early — it's like, what are the chances, out of these 200 trillion humans, that you should be human number one hundred billion? that would have a much lower conditional probability. and so, analogously to how in the urn case you
thought after finding this low numbered
random sample you updated in favor of
having few balls similarly in this case
you should update in favor of the human
species having a lower total number of
members — that is, doom soon. you said doom soon? yeah, that would be the hypothesis in this case, that it will end after only 200 billion. I just like that term for the hypothesis. and it kind of crucially relies on this assumption, the idea that you should reason as if you were a random sample from the set of all humans that will ever have existed. if you have that assumption, then I think the rest kind of follows. the question is why you should make that assumption — in fact you know you're number 100 billion — so where do you get this prior? and there is a literature on that, with different ways of supporting the assumption. and that's just one example of anthropic reasoning, right? yeah
that seems to be kind of convenient when you think about humanity, even, like, existential threats and so on — it seems quite natural to assume that you're just an average case. yeah, that you're kind of a typical or randomly sampled member. now in the
case of the Doomsday argument it seems
to lead to what intuitively we think is
the wrong conclusion or at least many
people have this reaction that there's
got to be something fishy about this
argument because from very very weak
premises it gets this very striking
implication that we have almost no
chance of reaching size 200 trillion
humans in the future and how can we
possibly get there just by reflecting on
when we were born it seems you would
need sophisticated arguments about the
impossibility of space colonization blah
blah so one might be tempted to reject
this key assumption I call it the self
sampling assumption the idea that you
should reason as if you were a random
sample from all observers or observers in
some reference class however it turns
out that in other domains it looks like
we need something like this self
sampling assumption to make sense of
bona fide scientific inferences in
contemporary cosmology for example you
have
these multiverse theories and according
to a lot of those all possible human
observations are made so I mean if you
have a sufficiently large universe you
will have a lot of people observing all
kinds of different things so if you have
two competing theories say about
the value of some constant it could be
true according to both of these theories
that there will be some observers
observing the value that corresponds to
the other theory because there will be
some observers in a local fluctuation or
with a statistically anomalous measurement
these things will happen and so there
will be some observers who just by
chance make these different observations
and so what we would want
to say is well a larger proportion of
the observers will
observe as it were the true value and a
few will observe the wrong value if we
think of ourselves as a random sample we
should expect with very high probability
to observe the true value and
that would then allow us to conclude that
the evidence we actually have is evidence
for the theories we think are supported
it kind of then is a way of making sense
of these inferences that clearly seem
correct that we can you know make
various observations and infer what the
temperature of the cosmic background is
and the fine-structure constant
and all of this but it seems that
without rolling in some assumption
similar to the self sampling assumption
this inference just doesn't go through
and there are other examples so there
are these scientific contexts so it looks
like this kind of anthropic reasoning is
needed and makes perfect sense and yet
in the case of the Doomsday argument it has
this weird consequence and people might
think there is something wrong with it
there so there's been this project of
trying to figure out
what are the legitimate ways of
reasoning about these indexical facts
when observer selection effects are in
play in other words
developing a theory of anthropics and
there are different views of looking at
that and it's a difficult methodological area
but to tie it back to the simulation
argument the the key assumption there
this bland principle of indifference it's
much weaker than the self sampling
assumption so if you think about in the
case of the Doomsday argument it says
you should reason as if you're a random
sample from all humans that will ever
live even though in fact you know that
you are about human number one hundred
billion and you're alive in the year 2020
whereas in the case of the simulation
argument it says that well if you
actually have no way of telling which
one you are then you should assign this
kind of uniform probability yeah yeah
the role of the observer in the
simulation argument is different it
seems like who is the observer I mean I
keep assigning it to the individual
consciousness yeah I mean well there are
a lot of observers in the
context of the
simulation argument but the relevant
observers would be a the
people in original histories and b the
people in simulations so this would be
the class of observers that we need I
mean there are also maybe the simulators
but we can set those aside for this so
the question is given that class of
observers a small set of original
history observers and a large class of
simulated observers which one should you
think you are where are you amongst this
whole set of observers I'm maybe having a little
bit of trouble wrapping my head around
the intricacies of what it means to be
an observer in the
different instantiations of the
anthropic reasoning cases that we
mentioned right now I mean maybe an
easier way of putting it is just like
are you simulated or are you not
simulated given this assumption
that these two groups of people exist
yeah in the simulation case it seems
pretty straightforward yeah so
that's right I think the key point is
the methodological assumption you need
to make to get the simulation argument
to where it wants to go is much weaker
and less problematic then the
methodological assumption you make to
get the Doomsday argument to its
conclusion
maybe the Doomsday argument is sound
or unsound but you need to make a much
stronger and more controversial
assumption to make it go through in the
case of the Doomsday argument
I guess maybe one
way to pump intuition in support of this
bland principle of indifference is to
consider a sequence of different cases
where the fraction of people who are
simulated versus non-simulated approaches
one so in the limiting case where
everybody is simulated you obviously can
deduce with certainty that you are
simulated right if everybody with your
experience is simulated and you know
you've got to be one of those you don't
need the probability at all you just
kind of logically conclude it right
right so then as we move from a case
where say 90% of everybody is simulated
to 99% to 99.9% it's plausible that the
probability assigned should sort of
approach one certainty as the fraction
approaches the case where everybody is
in a simulation yes exactly like you
wouldn't expect that to be
discrete well if there's one
non-simulated person then it's 50/50 but
if we move to a hundred percent
it should kind of converge right there
are other arguments as well one can use
to support this bland principle of
indifference but that might be enough
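The limiting-case sequence above can be written as a one-line credence function. This is a sketch of the bland principle of indifference as Bostrom states it; the observer counts are made up for illustration:

```python
# Bland principle of indifference (illustrative sketch): if you cannot
# tell whether you are simulated, your credence that you are should
# equal the fraction of observers with your kind of experience who are.
def credence_simulated(n_simulated: int, n_original: int) -> float:
    return n_simulated / (n_simulated + n_original)

# Moving through the 50/50, 90%, and 99.9% cases toward the limit,
# the credence approaches certainty smoothly, with no discrete jump:
print(credence_simulated(1, 1))       # one of each: 0.5
print(credence_simulated(9, 1))       # 90% simulated: 0.9
print(credence_simulated(999, 1))     # 99.9% simulated: 0.999
print(credence_simulated(10**9, 1))   # limiting case: approaches 1.0
```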
but in general when you start from time
equals zero and go into the future
if it's possible to
create simulated worlds the fraction of
simulated worlds will go to one well I
mean it wouldn't necessarily go all
the way to one in reality there would
be some ratio although maybe a technologically
mature civilization could run a lot of
simulations using a small portion of its
resources it probably wouldn't be able
to run infinitely many yeah I mean if we
take say the physics in the
observed universe if we assume that
that's also the physics at the level of
the simulators there would be limits to
the amount of information processing
that any one civilization could perform
in its future trajectory right and
there's like well first of all there's
limited amount of matter you can get
your hands on because with the positive
cosmological constant the universe is
accelerating there's like a finite
sphere of stuff even if you traveled
at the speed of light that you could
ever reach you have a finite amount of
stuff and then if you think there is
like a lower limit to the amount of loss
you get when you perform an erasure of a
computation or if you think for example
matter just gradually decays over
cosmological timescales
you know maybe protons decay other
things and they radiate out
gravitational waves like there's all
kinds of seemingly unavoidable losses
that occur so eventually we'll have
something like a heat death of the
universe or a cold death or
whatever but it's finite but of course
we don't know which if there are many
ancestor simulations we don't
know which level we are at so
couldn't there be like an arbitrary
number of simulations that spawned ours
and those had more resources in terms of
physical universe to work with sorry I
mean that could be sort of okay so
if simulations spawn other simulations
it seems like each new spawn has
fewer resources to work with yeah but we
don't know at which step
along the way we are right any one
observer doesn't know whether we're in
level 42 or 100 or one or does that not
matter for the resources
I mean it's true that there would
be uncertainty you
could have stacked simulations yes and
there could be uncertainty as to which
level we are at but as you remarked also
all the computations performed in a
simulation within the simulation also
have to be expended at the level of the
simulation so the computer in
basement reality where all these
simulations within the simulations within
the simulations are taking place that
computer ultimately its
CPU or whatever it is has
to power this whole tower right so if
there is finite compute power in basement
reality that would impose a limit to how
tall this tower can be and if each
level kind of imposes a large extra
overhead you might think maybe the tower
would not be very tall and that most people
would be lower down in the tower I love
the term basement reality let me ask about
one of the popularizers you said there's
been many through the years when you look
at sort of the last few years of the simulation
hypothesis just like you said it comes
up every once in a while some new
community discovers it and so on but I
would say one of the biggest popularizers
of this idea is Elon Musk do you
have any kind of intuition about what
Elon thinks about when you think about
simulation why is this of such interest
is it all the things we've talked about
or is there some special kind of
intuition about simulation that he has I
mean you might have a better sense than I
but I mean why it's of interest I think it
seems fairly obvious why to
the extent that one thinks the argument
is credible it would be of interest
if it's correct it would tell us
something very important about the world
you know one way or the other whichever
of the three alternatives of the
simulation argument holds that seems like
arguably one of the most fundamental discoveries
right now interestingly in the case of
someone like Elon so there is like the
standard arguments for why you might
want to take the simulation hypothesis
seriously the simulation argument right
in the case that you are actually
Elon Musk let us say there's a kind of
an additional reason
in that what are the chances you would
be Elon Musk right it seems like maybe
the simulators would be more interested in
simulating the lives of very unusual and
remarkable people so if you consider not
just simulations where all of human
history or the whole of human
civilization is simulated but also
other kinds of simulations which only
include some subset of people in
those simulations that
only include a subset it might be more
likely that they would include subsets
of people with unusually interesting or
consequential lives like if you're Elon Musk
you've got to wonder right more like yeah or
if you're Donald Trump or
if you are Bill Gates or
some particularly distinctive
character you might think that I
mean if you just put yourself into
those shoes right it's got to be like an
extra reason to think that's kind of so
interesting so on a scale of like farmer
in Peru to Elon Musk the more you get
towards Elon Musk the higher the
probability you're in a simulation there
would be some extra boost from that there's an
extra boost so he also asked the
question of what he would ask an AGI
saying the question being what's outside
the simulation do you think about the
answer to this question if we are living
a simulation what is outside the
simulation so the programmer of the
simulation yeah I mean I think it
connects to the question of what's
inside the simulation in that if you had
views about the creators of the
simulation it might help you make
predictions about what kind of
simulation it is what what might what
might happen what you know happens after
the simulation if there is some after
but also like the kind of setup so these
these two questions would be quite
closely intertwined but do you think
would it be very surprising to us like is
it possible for the stuff inside the
simulation to be fundamentally
different than the stuff outside yeah
or another way to put it can the
creatures inside the simulation
be smart enough to even understand
or have the cognitive capabilities or
any kind of information processing
capabilities enough to understand the
mechanism that created them they might
understand some aspects of it I mean
there are
levels of explanation like degrees to
which you can understand so does your
dog understand what it is to be human
well it's got some idea like humans are
these physical objects that move around
and do things and I a normal human would
have a deeper understanding of what it
is to be a human and maybe some very
experienced psychologist or
great novelist might understand a little
bit more about what it is to be human
and maybe super intelligence could see
right through your soul so similarly I
do think that we are quite limited
in our ability to understand all of the
relevant aspects of the larger context
that we exist in but there might be hope
I think we can understand some aspects
of it but you know how much good is that
if there's like one key aspect that
changes the significance of all the
other aspects so we understand maybe
seven out of ten key insights that
you need but the answer actually
varies completely depending on what
the number eight nine and ten insights
are it's like suppose
that the big task were to guess whether
a certain number was odd or even like a
ten digit number and if it's
even the best thing for you to do in
life is to go north and if it's odd the
best thing is for you to go south now we
are in a situation where maybe through
our science and philosophy we figured
out what the first seven digits are so
we have a lot of information right most
of it we figured out but we are clueless
about what the last three digits are so
we are still completely clueless about
whether the number is odd or even and
therefore whether we should go north or
go south that's an analogy
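The ten-digit-number analogy can be made concrete: parity depends only on the final digit, so knowing the first seven digits, most of the information, tells you nothing about whether the number is odd or even. A toy sketch with an arbitrary made-up prefix:

```python
# Toy version of the analogy: the "answer" (odd or even) depends only
# on the last digit, so the seven digits science has figured out carry
# zero information about parity.
known_prefix = "1234567"   # arbitrary: the seven digits we "know"

# Enumerate every possible completion of the three unknown digits
odd = sum(1 for tail in range(1000)
          if int(known_prefix + f"{tail:03d}") % 2 == 1)

print(odd / 1000)   # 0.5: still a coin flip whether to go north or south
```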
but I feel we're somewhat in that
predicament we know a lot about the
universe we've come maybe more than half
of the way there to kind of fully
understanding it but the parts we're
missing are plausibly ones that could
completely change the overall upshot of
the thing including change our
overall view about what the scheme of
priorities should be or which strategic
direction would make sense to pursue
yeah I think your analogy of us being
the dog trying to understand human
beings is an entertaining one and
probably correct
as the understanding moves from
the dog's viewpoint to the human
psychologist's viewpoint the steps along
the way will have completely
transformative ideas of what it means to
be human so the dog has a very shallow
understanding it's interesting to think
and analogize that a dog's
understanding of a human being is the
same as our current understanding of the
fundamental laws of physics in the
universe man okay we spent an hour 40
minutes talking about the simulation I
like it let's talk about super
intelligence at least for a little bit
and let's start at the basics what to you
is intelligence yeah I tend not to get
too stuck with the definitional question
I mean I the common sense understanding
like the ability to solve complex
problems to learn from experience to
plan to reason some combination of
things like that is consciousness mixed
up into that as well I don't think so I
think you could be fairly intelligent at
least without being conscious probably
and so then what is super intelligence
so yeah that would be like something
that was much more had much more general
cognitive capacity than we humans have
so if we talk about general super
intelligence it would be a much faster
learner be able to reason much better
make plans that are more effective at
achieving its goals say in a wide range
of complex challenging environments in
terms of as we turn our eye to the idea
of sort of existential threats from
super intelligence do you think super
intelligence has to exist in the
physical world or can it be digital only
sort of we think of our general
intelligence as us humans as an
intelligence that's associated with the
body that's able to interact with the
world that's able to affect the world
directly physically I mean digital
only is perfectly fine I think I mean
it's physical in the sense
that obviously the computers and the
memories are physical but its capability
to affect the world could be
very strong even if it has a limited set
of actuators if it can type text on the
screen or something like that that would
be I think ample so in terms of the
concerns of existential threat of AI how
can any AI system that's in the digital
world have existential risk sort of what
what are the attack vectors for a
digital system well I mean I guess maybe
to take one step back so I should
emphasize that I also think there's this
huge positive potential from machine
intelligence including super
intelligence and I want to stress that
because some of my writing
has focused on what can go wrong and
when I wrote the book Superintelligence
at that point I felt that there was a kind
of neglect of what would happen if AI
succeeds and in particular a need to get
a more granular understanding of where
the pitfalls are so we can avoid them I
think that since the book came out
in 2014 there has been a much wider
recognition of that and a number of
research groups are now actually working
on developing say AI alignment
techniques and so on and so forth so
yeah I think now
it's important to make sure we bring
back onto the table the upside as well
and there's a little bit of a neglect
now on the upside which is I mean if you
look at
talking to a friend if you look at the
amount of information that's available
or people talking and people being
excited about the positive possibilities
of general intelligence it's
far outnumbered by the negative
possibilities in terms of our public
discourse possibly yeah it's hard to
measure but let me linger on that
a little bit what are some to you
possible big positive impacts of general
intelligence superintelligence well I
also want to distinguish
these two different contexts of thinking
about AI and high impacts they're kind
of near term and long term if you want
both of which I think are legitimate
things to think about and people should
you know discuss both of them but but
they are different and they often get
mixed up and then you get
confusion I think there has
maybe simultaneously been
overhyping of the near term and
underhyping of the long term and so I
think as long as we keep them apart we
can have like two good conversations
or we can mix them together and have one
bad conversation can you clarify just
the two things we were talking about the
near term and long term yeah and what
is the distinction well it's a blurry
distinction but say the things I wrote
about in this book Superintelligence are
long term things people are worrying
about today with I don't know
algorithmic discrimination or even
things like self-driving cars and drones
and stuff are more near term and then of
course you could imagine some medium
term where they kind of overlap and the
one evolves into the other but
I think both the issues look
somewhat different depending on
which of these contexts so I think I
think it'd be nice if we can talk about
the long term mm-hm and think about a
positive impact or a better world
because of the existence of
long term superintelligence do you have
a vision of such a world yeah I mean I
guess it's a little hard
to articulate because it seems obvious that
the world has a lot of problems as it
currently stands and it's hard to think
of any one of those which it wouldn't be
useful to have like a friendly aligned
super intelligence working on so from
health you know to the economic system
to be able to sort of improve the
investment and trade and foreign policy
decisions all that kind of stuff all
that kind of stuff and a lot more I mean
what's the killer app well I don't think
there is one I think AI I especially
artificial general intelligence is
really the ultimate general purpose
technology so it's not that there is
this one problem this one area where it
will have a big impact but if and when
it succeeds it will really apply across
the board in all fields where human
creativity and intelligence and
problem-solving is useful which is
pretty much all fields right there the
thing that it would do is give us a lot
more control over nature it wouldn't
automatically solve the problems that
arise from conflict between humans
fundamentally political problems some
subset of those might go away if you
just had more resources and cooler tech
but some subset would require
coordination that is not automatically
achieved just by having more technical
capability but but anything that's not
of that sort I think you just get like
an enormous boost with this kind of
cognitive technology once it goes
all the way now again
that doesn't mean I'm thinking
people don't recognize what's possible
with current technology and like
sometimes things get overhyped but I
mean those are perfectly consistent
views to hold the ultimate potential
being enormous and then it's a very
different question of how far are we
from that or what can we do with
near-term technology so what's your
intuition about the idea of intelligence
explosion so there's this
you know when you start to think about
that leap from the near term to the long
term the natural inclination like for me
sort of building machine learning
systems today it seems like it's a lot
of work to get the general intelligence
but there's some intuition of
exponential growth of exponential
improvement of intelligence explosion
can you maybe try to elucidate to try to
talk about what's your intuition about
the possibility of an intelligence
explosion that it won't be this gradual
slow process there might be a phase
shift yeah I think we don't know
how explosive it will be for
what it's worth it seems fairly likely
to me that at some point there will be
some intelligence explosion like some
period of time where progress in AI becomes
extremely rapid roughly in the
area where you might say it's kind of
human equivalent in core cognitive
faculties the concept of human
equivalent starts to break
down when you look too closely at it
and just how explosive does something
have to be for it to be called an
intelligence explosion like does it have
to be overnight literally or a few
years or so but overall I guess
if you plotted the
opinions of different people in the
world I guess I would put somewhat more
probability towards the intelligence
explosion scenario than probably the
average you know AI researcher I guess
so and then the other part of the
intelligence explosion or just forget
explosion just progress is once you
achieve that gray area of human level
intelligence is it obvious to you that
we should be able to proceed beyond it
to get the super intelligence yeah that
seems I mean as much as any of these
things can be obvious given we've never
had one people have different views
smart people have different views
there's like some degree of
uncertainty that always remains for any
big futuristic philosophical grand
question just because we realize humans
are fallible especially about these
things but as far as I'm judging
things based on my own impressions
it seems very unlikely that there would
be a ceiling at or near human cognitive
capacity but and this is I don't know
a special moment and it's
both terrifying and exciting to create a
system that's beyond our intelligence so
maybe you can step back and and say like
how does that possibly make you feel
that we can create something it feels
like there's a line beyond which once it
steps it'll be able to outsmart you and
therefore it feels like a step where we
lose control well I don't think that
necessarily follows that is you could
imagine and in fact this is what a number
of people are working towards making sure
that we could ultimately create
higher levels of problem-solving ability
while still making sure that they are
aligned like they're in the service of
human values so losing
control I think it's not a given that
that would happen now you asked how it makes me
feel I mean to some extent I've lived
with this for so long for as
long as I can remember being an
adult or even a teenager it seemed to me
obvious that at some point AI will
succeed and so I actually misspoke I
didn't mean control I meant because the
control problem is an interesting thing
and I think the hope is at least that we
should be able to maintain control over
systems that are smarter than us but
we do lose our specialness
we'll sort of lose our place as the
smartest coolest thing on earth and
there's an ego involved there that humans
aren't very good at dealing with I mean I
value my intelligence as a
human being it seems like a big
transformative step to realize
there's something out there that's more
intelligent I mean you don't see that
today I think yes a lot
I mean I think there are
already a lot of things out there that
are I mean certainly if you think the
universe is big there's going to be
other civilizations that already have
super intelligences or that just
naturally have brains the size of beach
balls and they're like completely
leaving us in the dust and we haven't
come face to face with them we haven't
come face to face but I mean that's not my question
what would happen in a kind of
posthuman world like how much day-to-day
would these super intelligences be
involved in the lives of ordinary humans
I mean you could imagine some scenario where
it would be more like a background thing
that would help protect against some
things but you wouldn't like that there
wouldn't be this intrusive kind of
presence making you feel bad by like making
clever jokes at your expense like there's
all sorts of things that maybe in the
human context would feel awkward
you don't want to be the dumbest
kid in your class that everybody picks on
like a lot of those things maybe you
need to abstract away from if you're
thinking about this context where we
have infrastructure that is in some
sense beyond any or all humans I mean
it's a little bit like say the
scientific community as a whole if you
think of that as a mind it's a little
bit of a metaphor but I mean obviously
it's going to be way more capacious
than any individual so in some sense
there is this mind-like thing already
out there that's just vastly more
intelligent than any individual is
and we think okay you just accept
that as a fact that's the basic fabric
of our existence intelligent yeah you
get used to a lot of I mean there's
already Google and Twitter and Facebook
these recommender systems that
are the basic fabric of our lives and I
see them becoming
I mean do you think of the collective
intelligence of these systems as already
perhaps reaching super intelligence
level well I mean here it comes back to
the concept of intelligence and the
scale and what human level means the
kind of vagueness and indeterminacy of
those concepts starts to dominate how you
would answer that question so like
say the Google search engine has a very
high capacity of a certain kind like
remembering and
retrieving information particularly like
text or images if you have a kind
of search key a word string it's
obviously superhuman at that but there's
a vast set of other things it can't even
do at all not just not do well so you
have these current AI systems that are
superhuman in some limited domain and
then like radically subhuman in all
other domains in the same way that a
chess engine or just a simple computer
that can multiply really large numbers
right is gonna have this like one
spike of super intelligence and then a
kind of a zero level of capability
across all other cognitive fields and
yeah I don't necessarily think about
generality I mean I'm not so attached
to it it's
a gray area and it's a feeling but to me
sort of AlphaZero is somehow much
more intelligent much much more
intelligent than Deep Blue hmm and to
say in which way you could say well
these are both just board game players
they're both just able to play board
games who cares if they're a bit
better or not but there's something
about the learning the self play
learning yeah that makes it cross over
into that realm of intelligence that
doesn't necessarily need to be general
in the same way Google is much closer to
Deep Blue currently in terms of its
search engine than it is to sort of
AlphaZero and the moment
these recommender systems really
become more like AlphaZero being
able to learn
a lot without being
heavily constrained by human interaction
that seems like a special moment in time
certainly learning ability seems to be
an important facet of general
intelligence that you can take some new
domain that you haven't seen before and
you weren't specifically pre-programmed
for and then figure out what's going on
there and eventually become really good
at it so that's something AlphaZero has
much more of than Deep Blue had and
in fact I mean systems like AlphaZero
can learn not just Go but other games in
fact would probably beat Deep Blue in chess
and so forth right so that you'd say is
more general and it matches the intuition we
feel it's more intelligent and it also
has more of this general purpose
learning ability and if we get systems
that have been more general-purpose
learning ability it might also trigger
an even stronger intuition that they are
actually starting to get smart so if you
were to pick a future what would
a utopia look like with AGI systems
sort of is it the Neuralink brain
computer interface world where we're
kind of really closely interlinked with
AI systems is it possibly where AGI
systems replace us completely while
maintaining the values and the
consciousness is it something like
a completely invisible fabric like you
mentioned a society where AI just aids
a lot of the stuff that we do like curing
diseases and so on what is utopia if
you get to pick yeah I mean it's a good
question and a deep and difficult one
I'm quite interested in it I don't have
all the answers yet but or might never
have but I think there are some
different observations one could make
one is if the scenario
actually did come to pass it would open
up this vast space of possible modes of
being on one hand material and resource
constraints would just be expanded
dramatically so there would be
a big pie let's say
right also it would enable us to do
things including to ourselves
it would just open
up this much larger design space and
option space than we have ever had access
to in human history so I think two
things follow from that one is that
we probably would need to make a fairly
fundamental rethink of what ultimately
we value like think things through more
from first principles in the context
would be so different from the familiar
that we could have just take what we've
always been doing and then like oh well
we have this cleaning robot that like
cleans the dishes in the sink and a few
other small things and like I think we
would have to go back to first
principles and so from even from the
individual level go back to the first
principles of what what is the meaning
of life what is happiness how it is
fulfillment yeah and then also connected
to this large space of resources, is that it would be possible, and I think something we should aim for, to do well by the lights of more than one value system. That is, we wouldn't have to choose only one value criterion and say, we're going to do something that scores really high on the metric of, say, hedonism, and then is a zero by other criteria: kind of wire-headed brains in a vat, where there's a lot of pleasure, and that's good, but then no beauty, no achievement, nothing else. And I think in some significant sense, not unlimited but significant, it would be possible to do very well by many criteria. Maybe you could get 98% of the best according to several criteria at the same time, given this great expansion of the option space.

So, having
competing value systems and criteria, sort of like our Democrat versus Republican divide: there always seem to be multiple parties, and that's useful for our progress as a society, even though it might seem dysfunctional in the moment. Having multiple value systems seems to be beneficial for, I guess, a balance of power.

So that's, yeah, not
exactly what I have in mind, although maybe in an indirect way it is. It's that if you had the chance to do something that scored well on several different metrics, our first instinct should be to do that, rather than immediately leaping to the question of which of these value systems we are going to screw over. Let's first try to do very well by all of them. Then it might be that you can't get a hundred percent of all of them, and you would have to have the hard conversation about which one will only get ninety-seven percent.
There's my cynicism, that all of existence is always a trade-off, but you say that maybe it's not such a bad trade-off, and we should at least try first.

Right. Well, this would be a distinctive context in which at least some of the constraints would be removed. There will probably still be trade-offs in the end; it's just that we should first make sure we at least take advantage of this abundance. So in terms of thinking about this, one should think in this kind of frame of mind of generosity and inclusiveness toward different value systems, and see how far one can get there first.
And I think one could do something that would be very good according to many different criteria.

We kind of talked about AGI fundamentally transforming the value system of our existence, the meaning of life. But today, what do you think is the meaning of life? What, to you, is the most serious, or perhaps the biggest, question: what's the meaning of life, what's the meaning of existence, what gives your life fulfillment, purpose, happiness, meaning?

Yeah, I think there are a bunch of different but related questions in there that one can ask. Happiness and meaning, for instance, can come apart: you could imagine somebody getting a lot of happiness from something that they didn't think was meaningful,
like mindlessly watching reruns of some television series, or eating junk food. For some people that gives pleasure, but they wouldn't think it had a lot of meaning. Whereas, conversely, something that might be quite loaded with meaning might not always be very fun, like some difficult achievement that really helps a lot of people but requires self-sacrifice and hard work. So these things can, I think, come apart, which is something to bear in mind when you're thinking about these utopia questions. If you want to do some constructive thinking about that, you might have to isolate and distinguish these different kinds of things that might be valuable in different ways, make sure you can clearly perceive each one of them, and then think about how you can combine them, and, just as you said, hopefully come up with a way to maximize all of them together.

Yeah, maximize, or
get a very high score on a wide range of them, even if not literally all. You can always come up with values that are exactly opposed to one another, right? But I think many values are only opposed if you place them in a certain dimensionality of your space. There are shapes that you can't untangle in a given dimensionality, but if you start adding dimensions, then in many cases it might just be that they are easy to pull apart. So we'll see how much space there is for that, but I think there could be a lot in this context of radical abundance, if ever we get to that.
I don't think there's a better way to end it. Nick, you've influenced a huge number of people to work on what could very well be the most important problems of our time. It's a huge honor. Thank you so much for talking, for coming by.

Thanks, that was fun. Thank you.
Thanks for listening to this conversation with Nick Bostrom, and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at lexfridman. And now, let me leave you with some words from Nick Bostrom:
Our approach to existential risks cannot be one of trial and error. There is no opportunity to learn from errors. The reactive approach, to see what happens, limit damages, and learn from experience, is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs, moral and economic, of such actions.

Thank you for listening, and hope to see you next time.