Transcript
vx7DLImJ1Mw • Nick Bostrom on the Joe Rogan Podcast Conversation About the Simulation | AI Podcast Clips
Kind: captions Language: en

So part three of the argument says that... so that leads us to a place where eventually somebody creates a simulation. I think you had a conversation with Joe Rogan, and I think there's some aspect here where you got stuck a little bit. How does that lead to "we're likely living in a simulation"? So this kind of probability argument: if somebody eventually creates a simulation, why does that mean that we're now in a simulation?

What you get to, if you accept alternative three first, is that there would be more simulated people with our kinds of experiences than non-simulated ones. If you look at the world as a whole, by the end of time, as it were, and just count it up, there would be more simulated ones than non-simulated ones. Then there is an extra step to get from that. Suppose, for the sake of the argument, that that's true: how do you get from that to the statement "we are probably in a simulation"? Here you are introducing an indexical statement, the statement that this person, right now, is in a simulation. There are all these other people that are in simulations, and some that are not in the simulation. What probability should you have that you yourself are one of the simulated ones?

So it's a setup. Yeah, so I call it the bland principle of indifference, which is that in cases like this, when you have two sets of observers, one of which is much larger than the other, and you can't from any internal evidence tell which set you belong to, you should assign a probability that's proportional to the size of these sets. So if there are ten times more simulated people with your kinds of experiences, you would be ten times more likely to be one of those.

Is that as intuitive as it sounds? I mean, it seems kind of... if you don't have enough information, you should rationally just assign probability according to the size of the set. It seems pretty plausible to me.
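The bland principle of indifference described here is simple enough to state as arithmetic. A minimal sketch (the function name and the observer counts are illustrative assumptions, not anything from the conversation):

```python
# Bland principle of indifference (toy sketch): if n_sim simulated observers
# and n_real non-simulated observers are internally indistinguishable, your
# credence of being simulated is proportional to the size of each set.

def credence_simulated(n_sim: int, n_real: int) -> float:
    """Probability of being one of the simulated observers."""
    return n_sim / (n_sim + n_real)

# Ten times more simulated observers -> ten times more likely to be one.
print(credence_simulated(10, 1))     # 10/11, about 0.909
print(credence_simulated(1_000, 1))  # approaches 1 as the simulated fraction grows
```

This also captures the limiting case discussed later in the conversation: as the simulated fraction goes to one, the credence goes to certainty with no discontinuous jump.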
Where are the holes in this? Is it at the very beginning, the assumption that everything stretches... that you have infinite time, essentially?

You don't need infinite time. You just need... however long it takes, I guess, for a universe to produce an intelligent civilization that invents the technology to run some ancestor simulations.

Gotcha. So at some point, once the first simulation is created, and that stretch of time goes just a little longer, they all start creating simulations, kind of like...

Yeah, well, I mean, it might be different. If you think of there being a lot of different planets, and some subset of them have life, and then some subset of those get to intelligent life, and some of those maybe eventually start creating simulations, they might get started at quite different times. Maybe on some planet it takes a billion years longer before you get, like, monkeys, or before you get even bacteria, than on another planet. So this might happen at different cosmological epochs.

Is there a connection here to the Doomsday argument and that sort of sampling?

There is a connection in that they both involve an application of anthropic reasoning, that is, reasoning about these kinds of indexical propositions. But the assumption you need in the case of the simulation argument is much weaker than the assumption you need to make the Doomsday argument go through.

What is the Doomsday argument? And maybe you can speak to anthropic reasoning more generally.

Yeah, anthropics is a big and interesting topic in its own right. But the Doomsday argument was really first discovered by Brandon Carter, who was a theoretical physicist, and then developed by the philosopher John Leslie. I think it might have been discovered initially in the 70s or 80s, and Leslie wrote this book, I think, in '96. There are some other versions as well, by Richard Gott, who is a physicist, but let's focus on the Carter-Leslie version, where it's an argument that we have
systematically underestimated the probability that humanity will go extinct soon. Now, I should say, most people probably think, at the end of the day, that there is something wrong with this Doomsday argument, that it doesn't really hold. But it has proved hard to say exactly what is wrong with it, and different people have different accounts. My own view is that it seems inconclusive. But I can say what the argument is.

Yeah, yeah.

Maybe it's easiest to explain via an analogy to sampling from urns. Imagine you have two urns in front of you, and they have balls in them that have numbers. The two urns look the same, but inside one there are ten balls: ball number 1, 2, 3, up to ball number 10. In the other urn, you have a million balls, numbered 1 to a million. Now somebody puts one of these urns in front of you and asks you to guess: what's the chance it's the ten-ball urn? And you say, well, 50/50, I can't tell which one it is. But then you're allowed to reach in and pick a ball at random from the urn, and suppose you find that it's ball number 7. That's strong evidence for the ten-ball hypothesis: it's a lot more likely that you would get such a low-numbered ball if there are only ten balls in the urn. It's in fact 10 percent, right? Whereas if there are a million balls, it would be very unlikely you would get number 7. So you perform a Bayesian update, and if your prior was 50/50 that it was the ten-ball urn, you become virtually certain, after finding the random sample was 7, that it only has ten balls in it.

In the case of the urns, this is uncontroversial, just elementary probability theory. The Doomsday argument says that you should reason in a similar way with respect to different hypotheses about how many balls there will be in the urn of humanity, as it were: how many humans there will have been by the time we go extinct. To simplify, let's suppose we only consider two hypotheses: either 200 billion humans in total, or 200 trillion humans in total. You could fill in more hypotheses, but it doesn't change the principle here, so it's easiest to see if we just consider these two.

So you start with some prior, based on ordinary empirical ideas about threats to civilization and so forth, and maybe you say there's a 5 percent chance that we will go extinct by the time there will have been only 200 billion. You're kind of optimistic, let's say: you think we'll probably make it through and colonize the universe. But then, according to this Doomsday argument, you should think of your own birth rank as a random sample. Your birth rank is your position in the sequence of all humans that have ever existed. It turns out you're about human number 100 billion, give or take; that's roughly how many people have been born before you.

That's fascinating, because... yeah, we would each have a number in this.

I mean, obviously, the exact number will depend on where you started counting, like which ancestors count as human, but those details are not really important; there are relatively few of those. So, yeah, you're roughly number 100 billion. Now, if there are only going to be 200 billion in total, that's a perfectly unremarkable number: you're somewhere in the middle, right? A run-of-the-mill human, completely unsurprising. But if there are going to be 200 trillion, you would be remarkably early. What are the chances, out of these 200 trillion humans, that you should be human number 100 billion? That would seem to have a much lower conditional probability. So, analogously to how in the urn case, after finding this low-numbered random sample, you updated in favor of the urn having few balls, similarly in this case you should update in favor of the human species having a lower total number of members, that is, doom soon.
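The urn update and the birth-rank update just described are the same Bayesian step, so one small helper covers both. A sketch using the numbers from the conversation (50/50 urn prior, 5 percent extinction prior, birth rank around 100 billion); the exact posteriors are my arithmetic, not figures stated in the conversation:

```python
# One Bayesian update, two applications: posterior(H) is proportional to
# prior(H) * likelihood(observation | H), normalized over the hypotheses.

def posterior(priors, likelihoods):
    """Normalize prior * likelihood over a list of hypotheses."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Urn case: 50/50 prior over a 10-ball urn vs a 1,000,000-ball urn.
# Drawing ball number 7 has likelihood 1/10 vs 1/1,000,000.
p_ten, p_million = posterior([0.5, 0.5], [1 / 10, 1 / 1_000_000])
print(p_ten)  # about 0.99999: virtually certain it is the ten-ball urn

# Doomsday case: 5% prior on 200 billion humans total, 95% on 200 trillion.
# Treating a birth rank of ~100 billion as a random sample gives likelihood 1/N.
p_doom, p_flourish = posterior([0.05, 0.95], [1 / 200e9, 1 / 200e12])
print(p_doom)  # about 0.98: the optimistic 5% prior flips to near-certain "doom soon"
```

The striking part is visible in the numbers: a 1000-fold likelihood ratio overwhelms even a 19-to-1 optimistic prior.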
You said "doom soon"? Yeah, well, that would be the hypothesis in this case: that it will end after just another hundred billion.

I just like that term for the hypothesis.

And crucially, the Doomsday argument relies on the idea that you should reason as if you were a random sample from the set of all humans that will ever have existed. If you have that assumption, then I think the rest kind of follows. The question is why you should make that assumption. In fact, you know you're number 100 billion. So where do you get this prior? And there is a literature on that, with different ways of supporting the assumption.

And that's just one example of anthropic reasoning, right? It seems kind of convenient: when you think about humanity, when you think about even existential threats and so on, it seems quite natural that you should assume you're just an average case.

Yeah, that you're a kind of typical, randomly sampled... Now, in the case of the Doomsday argument, it seems to lead to what intuitively we think is the wrong conclusion, or at least many people have this reaction, that there's got to be something fishy about this argument. Because from very, very weak premises, it gets this very striking implication: that we have almost no chance of reaching a size of 200 trillion humans in the future. How can we possibly get there just by reflecting on when we were born? It seems you would need sophisticated arguments about the impossibility of space colonization, blah blah. So one might be tempted to reject this key assumption. I call it the self-sampling assumption: the idea that you should reason as if you were a random sample from all observers, or from some reference class.

However, it turns out that in other domains, it looks like we need something like this self-sampling assumption to make sense of bona fide scientific inference. In contemporary cosmology, for example, you have these multiverse theories, and according to a lot of those, all possible human observations are made. I mean, if you have a sufficiently large universe, you will have a lot of people observing all kinds of different things. So if you have two competing theories, say, about the value of some constant, it could be true according to both of these theories that there will be some observers observing the value that corresponds to the other theory, because there will be some observers that have hallucinations, or there's a local fluctuation, or a statistically anomalous measurement. These things will happen, and if enough observers make enough different observations, there will be some that, sort of by chance, make these different ones.

What we would want to say is: well, many more observers, a larger proportion of the observers, will observe, as it were, the true value, and a few will observe the wrong value. If we think of ourselves as a random sample, we should expect, with very high probability, to observe the true value, and that will then allow us to conclude that the evidence we actually have is evidence for the theories we think are supported. It is a way of making sense of these inferences that clearly seem correct: that we can make various observations and infer what the temperature of the cosmic microwave background is, and the fine-structure constant, and all of this. But it seems that without rolling in some assumption similar to the self-sampling assumption, this inference just doesn't go through. And there are other examples. So there are these scientific contexts where it looks like this kind of anthropic reasoning is needed and makes perfect sense, and yet in the case of the Doomsday argument it has this weird consequence, and people might think there's something wrong with it there.

So there has been this project of trying to figure out, consistently, what are the legitimate ways of reasoning about these indexical facts when observer selection effects are in play; in other words, developing a theory of anthropics. There are different views of looking at that, and it's a difficult methodological area.
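The cosmological inference sketched above can be made concrete with toy numbers (the 99.9%/0.1% observer fractions below are illustrative assumptions, not figures from the conversation): treating yourself as a random observer turns "most observers see the true value" into a strong Bayesian update.

```python
# Toy version of self-sampling in cosmology: under either theory of some
# constant, a large universe contains observers measuring either value, but
# the fraction seeing a statistically anomalous value is tiny. Treating
# yourself as a random observer makes your measurement count as evidence.

def posterior_theory_a(prior_a, frac_a_under_a, frac_a_under_b):
    """P(theory A | I observe value a), as a random observer."""
    num = prior_a * frac_a_under_a
    den = num + (1 - prior_a) * frac_a_under_b
    return num / den

# Assume 99.9% of observers measure the value their universe's true theory
# predicts, and 0.1% see an anomalous fluctuation.
print(posterior_theory_a(0.5, 0.999, 0.001))  # 0.999
```

Without the self-sampling step there is no principled likelihood to feed into the update, which is the point being made in the conversation.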
But to tie it back to the simulation argument: the key assumption there, this bland principle of indifference, is much weaker than the self-sampling assumption. If you think about the case of the Doomsday argument, it says you should reason as if you're a random sample from all humans that will ever have lived, even though in fact you know that you are about the 100 billionth human and you're alive in the year 2020. Whereas in the case of the simulation argument, all it says is: well, if you actually have no way of telling which one you are, then you should assign this kind of uniform probability.

Yeah, yeah. Your role as the observer in the simulation argument is different, it seems like. Who is the observer? I mean, I keep assigning it to the individual consciousness.

Yeah, there are a lot of observers in the context of the simulation argument, but the relevant observers would be, (a), the people in original histories, and, (b), the people in simulations. So this would be the class of observers that we need. There are also maybe the simulators, but we can set those aside for this. So the question is: given that class of observers, a small set of original-history observers and a large class of simulated observers, which one should you think is you? Where are you amongst this whole set of observers?

I'm maybe having a little bit of trouble wrapping my head around the intricacies of what it means to be an observer in the different instantiations of the anthropic reasoning cases that we mentioned.

I mean, maybe an easier way of putting it is just: are you simulated or are you not simulated, given the assumption that these two groups of people exist?

Yeah, in the simulation case it seems pretty straightforward.

Yeah. So I think the key point is that the methodological assumption you need to make to get the simulation argument to where it wants to go is much weaker and less problematic than the methodological assumption you need to make to get the Doomsday argument to its conclusion. Maybe the Doomsday argument is sound or unsound, but you need to make a much stronger and more controversial assumption to make it go through, in the case of the Doomsday argument... sorry, the simulation argument.

I guess one way, maybe an intuition pump, to support this bland principle of indifference is to consider a sequence of different cases where the fraction of people who are simulated, relative to the non-simulated, approaches one. In the limiting case, where everybody is simulated, obviously you can deduce with certainty that you are simulated, right? If everybody with your experiences is simulated, and you know you've got to be one of those, you don't need probability at all, you just logically conclude it, right?

Right.

So then, as we move from a case where, say, 90 percent of everybody is simulated, to 99 percent, to 99.9 percent, it seems plausible that the probability you assign should sort of approach one, approach certainty, as the fraction approaches the case where everybody is in a simulation.

Yeah, you wouldn't expect that to be discrete, like: if there's one non-simulated person, then it's 50/50, but as we move toward a hundred percent, it should kind of converge.

Right. There are other arguments as well that one can use to support this bland principle of indifference, but that might not be needed here.

But in general, when you start from time equals zero and go into the future, if it's possible to create simulated worlds, the fraction of simulated worlds will go to one?

Well, I mean, it wouldn't necessarily go all the way to one. In reality, there would be some ratio, although maybe a technologically mature civilization could run a lot of simulations using a small portion of its resources. It probably wouldn't be able to run infinitely many. I mean, if we take, say, the physics in the observed universe, if we assume that that's also the physics at the level of the
simulators, then there would be limits to the amount of information processing that any one civilization could perform in its future trajectory. Right. First of all, there's a limited amount of matter you can get your hands on, because with a positive cosmological constant, the universe is accelerating: there's a finite sphere of stuff that you could ever reach, even if you traveled at the speed of light, so you have a finite amount of stuff. And then if you think there is a lower limit to the amount of loss you incur when you perform an erasure in a computation, or if you think, for example, that matter gradually decays over cosmological timescales, you know, maybe protons decay, and other things, and they radiate out gravitational waves: there are all kinds of seemingly unavoidable losses that occur. So eventually we'll have something like a heat death of the universe, or a cold death, or whatever. But it's finite.

But of course, we don't know which... if there are many ancestor simulations, we don't know at which level we are. So couldn't there be an arbitrary number of simulations that spawned ours, and those had more resources, in terms of physical universe, to work with?

Sorry, what do you mean, that that could be...?

Okay, so if simulations spawn other simulations, it seems like each new spawn has fewer resources to work with.

Yeah.

But we don't know at which step along the way we are. Any one observer doesn't know whether we're in level 42, or 100, or 1. Does that not matter for the resources?

I mean, it's true that there would be uncertainty: you could have stacked simulations, and there could be uncertainty as to which level we are at. But as you remarked also, all the computations performed in a simulation within a simulation also have to be expended at the level of the simulation below it. So the computer in basement reality, where all these simulations, and the simulations within the simulations, and the simulations within those, are taking place: that computer, ultimately, its CPU or whatever it is, has to power this whole tower, right? So if there is finite compute power in basement reality, that would impose a limit on how tall this tower can be. And if each level imposes a large extra overhead, you might think maybe the tower would not be very tall, and that most people would be lower down in the tower.

I love the term "basement reality".
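The tower picture lends itself to a toy model (all numbers and the overhead factor below are illustrative assumptions, not figures from the conversation): if each nested level costs an overhead factor in the level above it, compute per level shrinks geometrically, so the tower has finite height and most of the compute sits near the basement.

```python
# Toy model of a simulation "tower": basement reality has compute c_basement,
# and each nested level costs an overhead factor f_overhead in the level above
# it, so level k can draw on at most c_basement / f_overhead**k. Any level
# needing at least c_min compute to run observers caps the tower's height.

def levels_supported(c_basement: float, f_overhead: float, c_min: float) -> int:
    """Count how many nested levels still receive at least c_min compute."""
    levels = 0
    c = c_basement
    while c >= c_min:
        levels += 1
        c /= f_overhead
    return levels

# With a doubling of cost per level, a 1024-unit basement supports 11 levels
# before the compute floor is hit (1024, 512, ..., 1).
print(levels_supported(1024, 2.0, 1.0))  # 11
```

This is the intuition in the last exchange: with any overhead factor greater than one, the per-level budget is a geometric series, so the tower is necessarily finite and bottom-heavy.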