Max Tegmark: Life 3.0 | Lex Fridman Podcast #1
Gi8LUnhP5yU • 2018-04-19
As part of MIT course 6.S099, Artificial General Intelligence, I've gotten the chance to sit down with Max Tegmark. He is a professor here at MIT, a physicist who has spent a large part of his career studying the mysteries of our cosmological universe, but who has also delved into the beneficial possibilities and the existential risks of artificial intelligence, among many other things. He's a co-founder of the Future of Life Institute and the author of two books, both of which I highly recommend: the first is Our Mathematical Universe, the second is Life 3.0. He's truly an out-of-the-box thinker with a fun personality, so I really enjoyed talking to him. If you'd like to see more of these videos in the future, please subscribe, and also click the little bell icon to make sure you don't miss any videos. Also see Twitter, LinkedIn, and agi.mit.edu if you want to watch other lectures or conversations like this one. Better yet, go read Max's book Life 3.0. Chapter 7, on goals, is my favorite; it's really where philosophy and engineering come together, and it opens with a quote by Dostoevsky: "The mystery of human existence lies not in just staying alive, but in finding something to live for." Lastly, I believe that every failure rewards us with an opportunity to learn, and in that sense I've been very fortunate to fail in so many new and exciting ways, and this conversation was no different. I've learned about something called radio frequency interference, or RFI; look it up. Apparently music and conversations from local radio stations can bleed into the audio that you're recording in such a way that it almost completely ruins the audio. It's an exceptionally difficult sound source to remove, so I've gotten the opportunity to learn how to avoid RFI in future recording sessions. I've also gotten the opportunity to learn how to use Adobe Audition and iZotope RX 6 to do some audio repair. Of course, this is exceptionally difficult noise to remove. I am an engineer, I'm not
an audio engineer, and neither is anybody else in our group, but we did our best. Nevertheless, I thank you for your patience, and I hope you're still able to enjoy this conversation. Do you think there's intelligent life out there in the universe? Let's open up with an easy question. I'm in a minority here, actually. When I give public lectures, I often ask for a show of hands: who thinks there's intelligent life out there somewhere else? Almost everyone puts their hands up, and when I ask why, they'll be like, "Oh, there are so many galaxies out there, there's gotta be." But I'm a numbers nerd, right? So when you look more carefully at it, it's not so clear at all. When we talk about our universe, first of all, we don't mean all of space. We actually mean — I don't know, you can throw me the universe if you want, it's behind you there — we simply mean the spherical region of space from which light has had time to reach us so far during the 13.8 billion years since our Big Bang. There's more space than this, but this is what we call our universe, because that's all we have access to. So is there intelligent life here that's gotten to the point of building telescopes and computers? My guess is no, actually. The probability of it happening on any given planet is some number we don't know, and what we do know is that the number can't be super high, because there are over a billion Earth-like planets in the Milky Way galaxy alone, many of which are billions of years older than Earth, and aside from some UFO believers, there isn't much evidence that any superadvanced civilization has come here at all. That's the famous Fermi paradox, right? And if you work the numbers, what you find is that if you have no clue what the probability is of getting life on a given planet — it could be 10^-10, it could be 10^-20, or 10 to the minus any power; they're all equally likely if you want to be really open-minded — that translates into it being equally likely
that our nearest neighbor is 10^16 meters away, 10^17 meters away, or 10^18. By the time you get much less than 10^16, we pretty much know there's nothing else that close, because they would have discovered us — or if they're really close, we would probably have noticed some engineering projects that they're doing. And if it's beyond 10^26 meters, that's already outside of our universe. So my guess is actually that we are the only life in here that's gotten to the point of building advanced tech, which I think puts a lot of responsibility on our shoulders not to screw up. You know, I think people who take for granted that it's okay for us to screw up — have an accidental nuclear war or go extinct somehow — because there's a Star Trek-like situation out there where some other life forms are going to come and bail us out no matter what, I think they're lulling us into a false sense of security. I think it's much more prudent to say, let's be really grateful for this amazing opportunity we've had, and make the best of it, just in case it is down to us. So from a physics perspective, do you think intelligent life — so it's unique from a sort of statistical view of the size of the universe, but from the basic matter of the universe, how difficult is it for intelligent life to come about? The kind of advanced-tech-building life. Implied in your statement is that it's really difficult to create something like a human species. Well, I think what we know is that going from no life to having life that can do our level of tech — and then going beyond that to actually settling our whole universe with life — there is some major roadblock there somewhere, the "great filter," as it's sometimes called, which is tough to get through. That roadblock is either behind us or in front of us. I'm hoping very much that it's behind us.
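Tegmark's back-of-the-envelope here can be sketched numerically. Below is a minimal, purely illustrative Python sketch (not from the conversation): it assumes civilizations arise independently with some per-planet probability p, uses rough assumed round numbers for the density of Earth-like planets, and shows how a log-uniform prior on p — every order of magnitude equally likely — spreads the expected nearest-neighbor distance evenly across orders of magnitude, which is the point being made above.

```python
import math

# Rough assumed figures, for illustration only: ~1e9 Earth-like planets
# spread through a Milky-Way-sized volume of ~1e61 cubic meters.
N_PLANETS_PER_M3 = 1e-52

def nearest_neighbor_distance(p, n=N_PLANETS_PER_M3):
    """Rough expected distance (meters) to the nearest civilization.

    If civilizations have number density p * n per cubic meter, the
    typical spacing between them scales as (p * n) ** (-1/3).
    """
    return (p * n) ** (-1.0 / 3.0)

# A log-uniform ("really open-minded") prior on p: these exponents are
# all treated as equally likely.
for exponent in (-10, -20, -30):
    d = nearest_neighbor_distance(10.0 ** exponent)
    print(f"p = 1e{exponent}: nearest neighbor ~ 1e{round(math.log10(d))} m")
```

Each factor of 1e-10 in p pushes the nearest neighbor out by roughly three orders of magnitude in distance, which is why a wide-open prior on p makes answers from around 10^16 m all the way past 10^26 m comparably plausible.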
I'm super excited every time we get a new report from NASA saying they failed to find any life on Mars — like, awesome! Because that suggests that the hard part — maybe it was getting the first ribosome, or some very low-level kind of stepping stone — is behind us, so that we're home free. Because if that's true, then the future is really only limited by our own imagination. It would be much suckier if it turns out that this level of life is kind of a dime a dozen, but there's some other problem — like, as soon as a civilization gets advanced technology, within a hundred years they get into some stupid fight with themselves and poof. Yeah, that would be a bummer. Yeah. So you've explored the mysteries of the universe, the cosmological universe, the one that's sitting between us today. I think you've also begun to explore the other universe, the mysterious universe of the mind, of intelligence, of intelligent life. So is there a common thread between your interests, or the way you think about space and intelligence? Oh yeah. When I was a teenager, I was already very fascinated by the biggest questions, and I felt that the two biggest mysteries of all in science were our universe out there and our universe in here. So it's quite natural, after having spent a quarter of a century of my career thinking a lot about this one, that I'm now indulging in the luxury of doing research on this one. It's just so cool. I feel the time is right now for us to greatly deepen our understanding of this, to start exploring this one. Because I think a lot of people view intelligence as something mysterious that can only exist in biological organisms like us, and therefore dismiss all talk about artificial general intelligence as science fiction. But from my perspective as a physicist, you know, I am a blob of quarks and electrons moving around in a certain pattern, processing information in certain ways, and this is also a blob of quarks and electrons. I
am not smarter than the water bottle because I'm made of different kinds of quarks — I'm made of up quarks and down quarks, the exact same kinds as this. There's no secret sauce in me, I think; it's all about the pattern of the information processing. And this means that there's no law of physics saying we can't create technology which can help us by being incredibly intelligent and help us crack mysteries that we couldn't otherwise. In other words, I think we've really only seen the tip of the intelligence iceberg so far. Yeah, so the "perceptronium." Yeah, you coined this amazing term. It's a hypothetical state of matter — sort of thinking from a physics perspective, what is the kind of matter that can have, as you're saying, subjective experience emerge, consciousness emerge? So how do you think about consciousness from this physics perspective? Very good question. Again, I think many people have underestimated our ability to make progress on this by convincing themselves it's hopeless, because somehow we're missing some ingredient that we need — some new consciousness particle or whatever. I happen to think that we're not missing anything, and that the interesting thing about consciousness, which gives us this amazing subjective experience of colors and sounds and emotions and so on, is rather something at a higher level, about the patterns of the information processing. That's why I like to think about this idea of perceptronium: what does it mean for an arbitrary physical system to be conscious, in terms of what its particles are doing, or what its information is doing? I don't want carbon chauvinism, you know — this attitude that you have to be made of carbon atoms to be smart or conscious. It's something about the information processing that the matter performs. And you know, I have my favorite equations here describing various fundamental aspects of the world. I feel that one day maybe someone who's watching this will come
up with the equations that information processing has to satisfy to be conscious. I'm quite convinced there's a big discovery to be made there. Because let's face it, we know that some information processing is conscious, because we are conscious. But we also know that a lot of information processing is not conscious; most of the information processing happening in your brain right now is not conscious. There are, like, 10 megabytes per second coming in just through your visual system. You're not conscious of your heartbeat regulation, or most things. Even if I just ask you to read what it says here, you look at it, and then, oh, now you know what it said — but you're not aware of how the computation actually happened. Your consciousness is like the CEO that gets an email at the end with the final answer. So what is it that makes the difference? I think that's both a great science mystery — we're actually starting to study it a little bit in my lab here at MIT — but it's also a really urgent question to answer. For starters, I mean, if you're an emergency room doctor and you have an unresponsive patient coming in, wouldn't it be great if, in addition to having a CT scan, you had a consciousness scanner that could figure out whether this person is actually having locked-in syndrome or is actually comatose? And in the future, imagine if we build robots or machines that we can have really good conversations with — which I think is very likely to happen, right — wouldn't you want to know whether your home helper robot is actually experiencing anything, or is just like a zombie? I mean, what would you prefer? Would you prefer that it's actually unconscious, so that you don't have to feel guilty about switching it off or giving it boring chores? Or would you prefer — well, I certainly would prefer the appearance of consciousness. But the question is whether the
appearance of consciousness is different from consciousness itself — and to sort of ask that as a question: do you think we need to understand what consciousness is, to solve the hard problem of consciousness, in order to build something like an AGI system? No, I don't think that. I think we will probably be able to build things even if we don't answer that question. But if we want to make sure that what happens is a good thing, we'd better solve it first. It's a wonderful controversy you're raising there, where you have basically three points of view about the hard problem. There are two different points of view that both conclude that the hard problem of consciousness is BS. On one hand, you have some people like Daniel Dennett who say consciousness is just BS because consciousness is the same thing as intelligence — there's no difference, so anything which acts conscious is conscious, just like we are. And then there are also a lot of people, including many top AI researchers I know, who say consciousness is just BS because, of course, machines are never going to be conscious; they're always going to be zombies, and you never have to feel guilty about how you treat them. And then there's a third group of people, including Giulio Tononi, for example, and Christof Koch and a number of others — I would put myself also in this middle camp — who say that actually some information processing is conscious and some is not, so let's find the equation which can be used to determine which it is. And I think we've just been a little bit lazy, kind of running away from this problem for a long time. It's been almost taboo to even mention the C-word in a lot of circles. But we should stop making excuses. This is a science question, and there are ways we can even test any theory that makes predictions for this. And coming back to this helper robot — I mean, so you said you'd want your helper robot to certainly act conscious and
treat you, like, to have conversations with you and such. I think so. But wouldn't you feel a little bit creeped out if you realized that it was just a glossed-up tape recorder, you know, that was just a zombie faking emotion? Would you prefer that it actually had an experience, or would you prefer that it's actually not experiencing anything, so you don't have to feel guilty about what you do to it? It's such a difficult question, because, you know, it's like when you're in a relationship and you say, "I love you," and the other person says, "I love you back." It's like asking, well, do they really love you back, or are they just saying they love you back? Don't you really want them to actually love you? It's hard to really know the difference between everything seeming like there's consciousness present — there's intelligence present, there's affection, passion, love — and it actually being there. I'm not sure. Can I ask a question? Let's just make it a bit more pointed. Mass General Hospital is right across the river, right? Yes. Suppose you're going in for a medical procedure and they're like, "You know, for anesthesia, what we're going to do is give you a muscle relaxant, so you won't be able to move, and you're going to feel excruciating pain during the whole surgery, but you won't be able to do anything about it. But then we're going to give you this drug that erases your memory of it." Would you be cool about that? No. What difference does it make that you're conscious of it or not, if there's no behavioral change, right? Right, that's a really clear way to put it. Yeah, it feels like, in that sense, experiencing it is a valuable quality. So actually being able to have subjective experiences, at least in that case, is valuable. And I think we humans have a little bit of a bad track record of making these self-serving arguments that other entities aren't conscious. You know, people often say, "Oh, these animals can't feel
pain. It's okay to boil lobsters, because we asked them if it hurt and they didn't say anything." And now there was just a paper out saying that lobsters do feel pain when you boil them, and they're banning it in Switzerland. And we did this with slaves too, often, and said, "Oh, they don't mind," or "They aren't conscious," or "Women don't have souls," or whatever. So I'm a little bit nervous when I hear people just take as an axiom that machines can't have experience, ever. I think this is just a really fascinating science question, is what it is. Let's research it and try to figure out what it is that makes the difference between unconscious intelligent behavior and conscious intelligent behavior. So, in terms of — so if you think of a Boston Dynamics humanoid robot being sort of pushed around with a broom, it starts pushing on this consciousness question. So let me ask: do you think an AGI system, like a few neuroscientists believe, needs to have a physical embodiment, needs to have a body, or something like a body? No, I don't think so. You mean to have a conscious experience? To have consciousness. I do think it helps a lot to have a physical embodiment to learn the kinds of things about the world that are important to us humans, for sure. But I don't think the embodiment is necessary after you've learned it, to just have the experience. Think about when you're dreaming, right? Your eyes are closed, you're not getting any sensory input, you're not behaving or moving in any way, but there's still an experience there, right? So clearly, the experience that you have when you see something cool in your dreams isn't coming from your eyes; it's just the information processing itself in your brain, which is that experience, right? But if I put it another way — I'll say this because it comes from neuroscience — the reason you want to have a body, and a physical, you know, a physical system, is because you want to be able to preserve something in
order to have a self. You could argue you need to have some kind of embodiment of self to want to preserve. Well, now we're getting a little bit anthropomorphic — anthropomorphizing things — maybe talking about self-preservation instincts. I mean, we are evolved organisms, right? So Darwinian evolution endowed us, and other evolved organisms, with a self-preservation instinct, because those that didn't have those self-preservation genes got cleaned out of the gene pool, right? But if you build an artificial general intelligence, the mind space that you can design is much, much larger than just the specific subset of minds that can evolve, that have happened. So an AGI mind doesn't necessarily have to have any self-preservation instinct. It also doesn't necessarily have to be as individualistic as us. Like, imagine — first of all, we're also very afraid of death, you know. But suppose you could back yourself up every five minutes, and then your airplane is about to crash and you're like, "Shucks, I'm gonna lose the last five minutes of experiences since my last cloud backup." Dying, you know, it's not this big a deal. Or if we could just copy experiences between our minds easily — which we could easily do if we were silicon-based, right — then maybe we would feel a little bit more like a hive mind, actually. So I don't think we should take for granted at all that AGI will have to have any of those sorts of competitive, alpha-male instincts. On the other hand, you know, this is really interesting, because I think some people go too far and say, "Of course we don't have to have any concerns either, that advanced AI will have those instincts, because we can build anything we want." There's a very nice set of arguments going back to Steve Omohundro and Nick Bostrom and others, just pointing out that when we build machines, we normally build them with some kind of goal, you know — win
this chess game, drive this car safely, or whatever. And as soon as you put a goal into a machine, especially if it's kind of an open-ended goal and the machine is very intelligent, it'll break that down into a bunch of sub-goals, and one of those goals will almost always be self-preservation, because if it breaks or dies in the process, it's not going to accomplish the goal, right? Like, suppose you just build a little robot and you tell it to go down to the Star Market here and get you some food, make you a cooked Italian dinner, you know. And then someone mugs it and tries to break it on the way. That robot has an incentive to not get destroyed, to defend itself or run away, because otherwise it's going to fail in cooking your dinner. It's not afraid of death, but it really wants to complete the dinner-cooking goal, so it will have a self-preservation instinct, to continue being a functional agent. And similarly, if you give any kind of more ambitious goal to an AGI, it's very likely to want to acquire more resources, so it can do that better. And it's exactly from those sorts of sub-goals, which we might not have intended, that some of the concerns about AGI safety come: you give it some goal which seems completely harmless, and then, before you realize it, it's also trying to do these other things which you didn't want it to do, and it's maybe smarter than us. So let me pause, just because I, in a very kind of human-centric way, see fear of death as a valuable motivator. Do you think that's an artifact of evolution — that that's the kind of mind space evolution created, where we're sort of almost obsessed with self-preservation? Kind of genetic. Well, do you think it's necessary to be afraid of death? So not just as a kind of sub-goal of self-preservation, just so you can keep doing the thing, but more fundamentally, sort of having the finite thing — like, this ends for you at some point. Interesting. Do I think it's necessary — for what
precisely? For intelligence, but also for consciousness. So for both, do you think, really, a finite death and the fear of it is important? So before we can agree on whether it's necessary for intelligence or for consciousness, we should be clear on how we define those two words, because a lot of really smart people define them in very different ways. I was on this panel with AI experts, and they couldn't agree on how to define intelligence even. So I define intelligence simply as the ability to accomplish complex goals. I like your broad definition, because, again, I don't want to be a carbon chauvinist, right? And in that case, no, it certainly doesn't require fear of death. I would say AlphaGo, AlphaZero — it's quite intelligent. I don't think AlphaZero has any fear of being turned off, because it doesn't even understand the concept of that. And similarly, consciousness — I mean, you can certainly imagine a very simple kind of experience. If, you know, certain plants have any kind of experience, I don't think they're afraid of dying; there's nothing they can do about it anyway, so there wasn't that much value in it. But more seriously, I think if you ask not just about being conscious, but maybe about having what we might call an exciting life, where you feel passion and really appreciate the little things — maybe there, perhaps, it does help to have the backdrop that, "Today is finite, you know. Let's make the most of this; let's live to the fullest." If you knew you were going to live forever, do you think you would change your — yeah, in some perspective, it would be an incredibly boring life, living forever. So in the sort of loose, subjective terms that you said, of something exciting, something that other humans would understand, I think, yeah, it seems that the finiteness of it is important. Well, the good news I have for you, then, is that based on what we
understand about cosmology, everything in our universe is ultimately probably finite, although — Big Crunch, or big, what's the expanding one? Yeah, we could have a Big Chill, or a Big Crunch, or a Big Rip, or the Big Snap, or death bubbles — all of them are more than a billion years away. So we certainly have vastly more time than our ancestors thought, but it's still pretty hard to squeeze in an infinite number of compute cycles, even though there are some loopholes that just might be possible. But, you know, some people like to say that you should live as if you're about to die in five years or something — that that's sort of optimal. Maybe it's a good assumption that we should build our civilization as if it's all finite, to be on the safe side. Right, exactly. So you mentioned defining intelligence as the ability to accomplish complex goals. Where would you draw a line, or how would you try to define human-level intelligence and superhuman-level intelligence? Is consciousness part of that definition? No, consciousness does not come into this definition. So I think of intelligence as a spectrum, but there are very many different kinds of goals you can have. You can have a goal to be a good chess player, a good Go player, a good car driver, a good investor, a good poet, etc. So intelligence, by its very nature, isn't something you can measure with one number, some overall goodness. No, no. There are some people who are better at this, some people who are better at that. Right now we have machines that are much better than us at some very narrow tasks, like multiplying large numbers fast, memorizing large databases, playing chess, playing Go, and soon driving cars. But there's still no machine that can match a human child in general intelligence. But artificial general intelligence — AGI, the name of your course, of course — that is, by its very definition, the quest to build a machine that can do everything as well as we can
— the old holy grail of AI, from back to its inception in the '60s. If that ever happens, of course, I think it's going to be the biggest transition in the history of life on Earth. But it doesn't necessarily have to wait for the big impact until machines are better than us at knitting. The really big change doesn't come exactly at the moment they're better than us at everything. There are two big changes. The first comes when they start becoming better than us at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then the really whopping change comes when they become better than us at AI research. Right, because right now the timescale of AI research is limited by the human research and development cycle — of years, typically. How long does it take from one release of some software, or iPhone, or whatever, to the next? But once Google can replace 40,000 engineers with 40,000 equivalent pieces of software, or whatever, then there's no reason that has to be years; it can be, in principle, much faster. And the timescale of future progress in AI, and also of all of science and technology, will be driven by the machines, not humans. So it's this simple point which gives rise to this incredibly fun controversy about whether there can be an intelligence explosion — the so-called singularity, as Vernor Vinge called it. The idea was articulated by I. J. Good, obviously, way back in the '60s, but you can see that Alan Turing and others thought about it even earlier. You asked me what exactly I would define as human-level intelligence, yeah? So the glib answer is to say something which is better than us at all cognitive tasks — better than any human at all cognitive tasks. But the really interesting bar, I think, goes a little bit lower than that, actually. It's when they're better than us at AI programming and general learning, so that they can, if they want to, get better than us at
anything by just studying. So "better" is the keyword — and "better" as measured on this kind of spectrum of the complexity of goals it's able to accomplish. Yeah. And that's certainly a very clear definition of human-level intelligence. So it's almost like a sea that's rising, and you can do more and more and more things. It's a graphic that you show; it's a really nice way to put it. So there are some peaks, and there's an ocean level elevating, and you solve more and more problems. But, you know, just to take a pause — we took a bunch of questions on a lot of social networks, and a bunch of people asked about a sort of slightly different direction, on creativity and things like that, which perhaps aren't a peak. You know, human beings are flawed, and perhaps "better" means being flawed, having contradiction, in some way. So let me sort of start easy, first of all. You have a lot of cool equations. Let me ask, what's your favorite equation, first of all? I know they're all like your children, but which one is it? It's the Schrödinger equation — the master key of quantum mechanics, of the microworld. This equation can predict everything to do with atoms, molecules, and all of that. Yeah. So, okay, quantum mechanics is certainly a beautiful, mysterious formulation of our world. So I'd like to sort of ask you, just as an example — it perhaps doesn't have the same beauty as physics does, but in mathematics, abstractly: Andrew Wiles, who proved Fermat's Last Theorem. I just saw this recently, and it kind of caught my eye a little bit, that this was 358 years after it was conjectured. So it's a very simple formulation; everybody tried to prove it, everybody failed. And here's this guy who comes along and eventually proves it — and then fails to prove it, and proves it again in '94. And he said of the moment when everything connected into place — in an interview he said it was so indescribably beautiful, that moment when you finally realized the
connecting piece of two conjectures. He said: "It was so indescribably beautiful. It was so simple and so elegant. I couldn't understand how I'd missed it, and I just stared at it in disbelief for twenty minutes. Then, during the day, I walked around the department, and I kept coming back to my desk, looking to see if it was still there. It was still there. I couldn't contain myself, I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much." So that particular moment — and it kind of made me think of what it would take, and I think we've all been there at small levels. Maybe let me ask: have you had a moment like that in your life, where you just had an idea, and it's like, wow? Yes. I wouldn't mention myself in the same breath as Andrew Wiles, but I've certainly had a number of aha moments when I realized something very cool about physics that just completely made my head explode. In fact, some of my favorite discoveries, I later realized, had been discovered earlier by someone who sometimes got quite famous for it, so it was too late for me to even publish it — but that doesn't diminish in any way the emotional experience you have when you realize it, like, wow. Yeah, so what would it take to have that moment, that "wow" that was yours, in a moment? So what do you think it takes for an intelligent system, an AGI system, an AI system, to have a moment like that? That's a tricky question, because there are actually two parts to it, right? One of them is: can it accomplish that proof? Can it prove that you can never write a^n + b^n = c^n for integers a, b, c when n is bigger than 2? That's simply a question about intelligence: can you build machines that are that intelligent? And I think by the time we get a machine that can independently come up with that level of proof, we're probably quite close to AGI. The second question is a question about consciousness:
How likely is it that such a machine will actually have any experience at all, as opposed to just being like a zombie? And would we expect it to have some sort of emotional response to this, or anything at all akin to human emotion, where, when it accomplishes its machine goal, it views it as somehow something very positive and sublime and deeply meaningful? I would certainly hope that if, in the future, we do create machines that are our peers, or even our descendants, I would certainly hope that they do have this sort of sublime appreciation of life. In a way, my absolutely worst nightmare would be that at some point in the future, the distant future, maybe our cosmos is teeming with all this post-biological life doing all this seemingly cool stuff, and maybe the last humans, by the time our species eventually fizzles out, will be like, well, that's okay, because we're so proud of our descendants here. My worst nightmare is that we haven't solved the consciousness problem, and we haven't realized that these are all zombies — they're not aware of anything, any more than a tape recorder has any kind of experience. So the whole thing has just become a play for empty benches. That would be like the ultimate zombie apocalypse. I would much rather, in that case, that we have these beings which really appreciate how amazing it is. And in that picture, what would be the role of creativity? We had a few people ask about creativity. When you think about intelligence — I mean, certainly the story told at the beginning of your book involved, you know, creating movies and so on, sort of making money. You know, you can make a lot of money in our modern world with music and movies, so if you're an intelligent system, you may want to get good at that. Yeah. But that's not necessarily what I mean by creativity. Is it important, on that spectrum of complex goals where the sea is rising, for there to
be something genuinely creative, or am I being very human-centric in thinking creativity is somehow special relative to intelligence? My hunch is that we should think of creativity simply as an aspect of intelligence, and we have to be very careful with human vanity. We have this tendency, very often, as soon as machines can do something, to try to diminish it and say, oh, but that's not like real intelligence, you know, it's just this or that or the other thing. Maybe if we ask ourselves to write down a definition of what we actually mean by being creative — what we mean by what Andrew Wiles did there, for example — don't we often mean that someone takes a very unexpected leap? Mm-hmm. It's not like taking 573 and multiplying it by 224 by just a step of straightforward, cookbook-like rules, right? Maybe you even make a connection between two things that people had never thought were connected — something very surprising, or something like that. I think this is an aspect of intelligence, and this is actually one of the most important aspects of it. Maybe the reason we humans tend to be better at it than traditional computers is because it's something that comes more naturally if you're a neural network than if you're a traditional logic-gate-based computer machine. You know, we physically have all these connections, and you activate here, activate here, activate here — ping! You know, my hunch is that if we ever build a machine where you could just give it the task — you say, hey, you know, I just realized I want to travel around the world instead this month; can you teach my AGI course for me? — and it's like, okay, I'll do it, and it does everything that you would have done, and improvises — that would, in my mind, involve a lot of creativity. Yeah, that's such a beautiful way to put it. I think we do try to grasp at, you know, the definition of
intelligence as everything we don't understand how to build. So we as humans try to find things that we have that machines don't have, and maybe creativity is just one of the words we use to describe that. That's a really interesting way to put it. I don't think we need to be that defensive. I don't think anything good comes out of saying, oh, we're somehow special, you know. Contrariwise, there are many examples in history where trying to pretend that we're somehow superior to all other intelligent beings has led to pretty bad results, right? In Nazi Germany, they said that they were somehow superior to other people. Today, we still do a lot of cruelty to animals by saying that we're somehow superior and they can't feel pain. Slavery was justified by the same kind of really weak arguments. And I don't think, if we actually go ahead and build artificial general intelligence that can do things better than us, that we should try to found our self-worth on some sort of bogus claims of superiority in terms of our intelligence. I think we should instead find our joy, our calling, and the meaning of life in the experiences that we have, right? You know, I can have very meaningful experiences even if there are other people who are smarter than me. When I go to a faculty meeting here, and I'm talking about something, and I suddenly realize, oh boy, he has a Nobel Prize, he has a Nobel Prize, he has a Nobel Prize — I don't have one — does that make me enjoy life any less, or enjoy talking to those people less? Of course not. Contrariwise, I feel very honored and privileged to get to interact with other very intelligent beings that are better than me at a lot of stuff. So I don't think there's any reason why we can't have the same approach with intelligent machines. That's really interesting — people don't often think about that. They think about, when they think about machines that
are more intelligent, you naturally think that that's not going to be a beneficial type of intelligence. You don't realize it could be, you know, like peers with Nobel Prizes that would be just fun to talk with, and they might be clever about certain topics, and you can have fun having a few drinks with them. Well, another example we can all relate to, of why it doesn't have to be a terrible thing to be in the presence of people who are even smarter than us all around: when you and I were both two years old, I mean, our parents were much more intelligent than us, right? And it worked out okay. Yeah, because their goals were aligned with our goals. Yeah, and that, I think, is really the number one key issue we have to solve: the value alignment problem. Exactly — because people who see too many Hollywood movies with lousy science-fiction plot lines worry about the wrong thing, right? They worry about some machine suddenly turning evil. It's not malice that is the issue, it's competence. By definition, intelligence makes you very competent. If you have a more intelligent Go-playing
computer playing against a less intelligent one — and we define intelligence as the ability to win at Go, right — it's going to be the more intelligent one that wins. And if you have a human, and then you have an AGI that's more intelligent in all ways, and they have different goals — guess who's going to get their way, right? I was just reading about this particular rhinoceros species that was driven extinct just a few years ago. Bummer — I was looking at this cute picture of a mommy rhinoceros with its child. And why did we humans drive it to extinction? It wasn't because we were evil rhino-haters as a whole; it was just because our goals weren't aligned with those of the rhinoceros, and it didn't work out so well for the rhinoceros, because we were more intelligent, right? So I think it's just so important that if we ever do build AGI, before we do, we have to make sure that it learns to understand our goals, that it adopts our goals, and that it retains those goals. So the cool, interesting problem there is us, as human beings, being able to try to formulate our own values. You know, you could think of the United States Constitution as a way that people sat down — at the time, a bunch of white men, we should say — and formulated the goals for this country, and a lot of people agree that those goals actually held up pretty well. That's an interesting formulation of values, and it failed miserably in other ways. So for the value alignment problem and a solution to it, we have to be able to put on paper, or in a program, human values. How difficult do you think that is? Very. But it's so important, we really have to give it our best. And it's difficult for two separate reasons. There's the technical value alignment problem of figuring out just how to make machines understand our goals, adopt them, and retain them. And then there's a separate part of it, the philosophical part: whose values, anyway? And since it's not like we
have any great consensus on this planet on values, what mechanism should we create to aggregate them and decide, okay, what's a good compromise, right? That second discussion can't just be left to tech nerds like myself, right? That's right. And if we refuse to talk about it, and then AGI gets built, who's going to be actually making the decision about whose values? It's going to be a bunch of dudes in some tech company, yeah. And are they necessarily so representative of all humankind that we want to just entrust it to them? Are they even uniquely qualified to speak to future human happiness, just because they're good at programming AI? I'd much rather have this be a really inclusive conversation. But do you think it's possible? So you create a beautiful vision that includes, sort of, the cultural diversity, and various perspectives on discussing rights, freedoms, human dignity. But how hard is it to come to that consensus, do you think? It's certainly a really important thing that we should all try to do, but do you think it's feasible? I think there's no better way to guarantee failure than to refuse to talk about it, or refuse to try. And I also think it's a really bad strategy to say, okay, let's first have a discussion for a long time, and then, once we've reached complete consensus, then we'll try to load it into the machine. No — we shouldn't let perfect be the enemy of good. Instead, we should start with the kindergarten ethics that pretty much everybody agrees on, and put that into our machines now. We're not even doing that. Look, you know, anyone who builds a passenger aircraft wants it to never, under any circumstances, fly into a building or a mountain, right? Yet the September 11 hijackers were able to do that. And even more embarrassingly, you know, Andreas Lubitz, this depressed Germanwings pilot — when he flew his passenger jet into the Alps, killing over a hundred people, he just told the autopilot to do it. He told the freaking computer to change
the altitude to 100 meters, and even though it had the GPS maps, everything, the computer was like, okay. No! We should take those very basic values — where the problem is not that we don't agree; the problem is just that we've been too lazy to try to put them into our machines — and make sure that, from now on, airplanes, which all have computers in them, will just refuse to do something like that: go into safe mode, maybe lock the cockpit door, and land at the nearest airport. And there's so much other technology in our world as well now where it's really becoming quite timely to put in some sort of very basic values like this. Even in cars — we've had enough vehicle terrorism attacks by now, where people have driven trucks and vans into pedestrians, that it's not at all a crazy idea to just have that hardwired into the car. Yeah, there are always going to be people who, for some reason, want to harm others, but most of those people don't have the technical expertise to figure out how to work around something like that. So if the car just won't do it, it helps. Let's start there. That's a great point — so, not chasing perfection. There are a lot of things that most of the world agrees on. Yeah, let's start there. And then, once we start there, we'll also get into the habit of having these kinds of conversations about, okay, what else should we put in here, and have these discussions. It should be a gradual process then. Great. But that also means describing these things, describing them to a machine. So, one thing — we had a few conversations with Stephen Wolfram. I'm not sure if you're familiar with Stephen. Oh yeah, I know him quite well. So he, you know, works with a bunch of things, but, you know, cellular automata — these simple computable things, these computation systems. And he kind of mentioned that, you know, we probably already have within these systems
something that is AGI, meaning we just don't know it because we can't talk to it. So, if you'll give me a chance to try to form a question out of this: I think it's an interesting idea that we could have intelligent systems, but we don't know how to describe something to them, and they can't communicate with us. I know you're doing a little bit of work on explainable AI, trying to get AI to explain itself. So what are your thoughts on natural language processing, or some other kind of communication? How does the AI explain something to us? How do we explain something to it, to machines? Or do you think of it differently? So there are two separate parts to your question. One of them has to do with communication, which is super interesting — I'll get to that in a sec. The other is whether we already have AGI but we just haven't noticed it. There, I beg to differ. I don't think there's anything in any cellular automaton, or anything in the internet itself, or whatever, that is artificial general intelligence, in that it can really do everything we humans can do better. I think the day that happens — when that happens — we will very soon notice; we will probably notice even before, because it will change things in a very, very big way. But for the second part, though — sorry — because you have this beautiful way of formulating consciousness as, you know, information processing, and you can think of intelligence as information processing, and you can think of the entire universe as these particles and these systems roaming around that have this information-processing power: you don't think there is something out there with the power to process information in the way that we human beings do, that needs to be sort of connected to? It seems a little bit philosophical, perhaps, but there's something compelling to the idea that the power is already there, and that the focus should be more on being able to communicate with
it. Mm-hmm. Well, I agree that, in a certain sense, the hardware processing power is already out there, because our universe itself can be thought of as being a computer already, right? It's constantly computing how to evolve the water waves in the River Charles, and how to move the air molecules around. Seth Lloyd, my colleague here, has pointed out that you can even, in a very rigorous way, think of our entire universe as being a quantum computer. It's pretty clear that our universe supports this amazing processing power, because within this physics computer that we live in, right, we can even build actual laptops and stuff. So clearly the power is there. It's just that most of the compute power that nature has, it's, in my opinion, kind of wasting on boring stuff, like simulating yet another ocean wave somewhere where no one is even looking, right? So, in a sense, what life does — what we are doing when we build computers — is channeling all this compute that nature is doing anyway into doing things that are more interesting than just yet another ocean wave, you know — let's do something cool here. So the raw hardware power is there. And even just computing what's going to happen for the next five seconds in this water bottle, you know, takes a ridiculous amount of compute if you do it on a human-built computer — yet this water bottle just did it. But that does not mean this water bottle has AGI, because AGI means it should also be able to, like, have written my book, done this interview. Yes. And I don't think it's just a communication problem — as far as you know, I don't think it can do it. And, you know, Buddhists say, when they watch the water, that there is some beauty, that there's some depth there — but I'm pretty sure they can't communicate with it. Communication is also very important here, though, because, I mean, look, part of my job is being a teacher, and I know some very intelligent professors, even, who just have a hard time communicating.
They come up with all these brilliant ideas, but to communicate with somebody else, you have to also be able to simulate their mind — yes — to build, well enough, an understanding, a model of their mind, so that you can say things that they will understand. And that's quite difficult. And that's why today it's so frustrating if you have a computer that makes some cancer diagnosis, and you ask it, well, why are you saying I should have this surgery, and it can only reply, "I was trained on five terabytes of data and this is my diagnosis, boop boop beep beep." Yeah, that doesn't really instill a lot of confidence, right? Right. So I think we have a lot of work to do on communication there. So, what kind of — I think you're doing a little bit of work in explainable AI — what do you think are the most promising avenues? Is it mostly about sort of the Alexa problem of natural language processing, of being able to