Welcome to MIT course 6.S099, Artificial General Intelligence. Today we have Ray Kurzweil. He is one of the world's leading inventors, thinkers, and futurists, with a 30-year track record of accurate predictions. Called "the restless genius" by The Wall Street Journal and "the ultimate thinking machine" by Forbes magazine, he was selected as one of the top entrepreneurs by Inc. magazine, which described him as the "rightful heir to Thomas Edison," and PBS selected him as one of sixteen "revolutionaries who made America." Ray was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition. Among his many honors, he received a Grammy Award for outstanding achievements in music technology; he is the recipient of the National Medal of Technology; he was inducted into the National Inventors Hall of Fame; and he holds 21 honorary doctorates and honors from three U.S.
presidents. Ray has written five national best-selling books, including the New York Times bestsellers The Singularity Is Near (2005) and How to Create a Mind (2012). He is co-founder and chancellor of Singularity University and a director of engineering at Google, heading up a team developing machine intelligence and natural language understanding. Please give Ray a warm welcome.

[Applause]

It's good to be back. I've been in this lecture hall many times and walked the Infinite Corridor. I came here as an undergraduate in 1965. Within a year of my being here they started a new major called computer science; it did not get its own course number, it's under Course 6, and even biotechnology recently got its own course number. But how many of you are CS majors? OK. How many of you do work in deep learning? How many of you have heard of deep learning?

I came here first in 1962, when I was 14. I had become excited about artificial intelligence, which had only gotten its name six years earlier, at the 1956 Dartmouth conference, from Marvin Minsky and John McCarthy. So I wrote Minsky a letter (there was no email back then) and he invited me up. He spent all day with me as if he had nothing else to do; he was a consummate educator. By then the AI field had already bifurcated into two warring camps: the symbolic school, which Minsky was associated with, and the connectionist school, which was not widely known. In fact, I think it's still not widely known that Minsky actually invented the neural net, in 1953, but he had become negative about it, largely because there was a lot of hype that these giant brains could solve any problem. The first popular neural net, the perceptron, was being promulgated by Frank Rosenblatt at Cornell. Minsky asked what I was doing next, and I said I was going to see Rosenblatt at Cornell. He said, "Don't bother doing that." I went anyway, and Rosenblatt was touting the perceptron, saying it would ultimately be able to solve any problem. So I brought some printed letters for it to look at with its camera, and
it did a perfect job of recognizing them, as long as they were Courier 10. Different type styles didn't work at all. He said, "But don't worry, we can take the output of the perceptron and feed it as the input to another perceptron, take the output of that and feed it to a third layer, and as we add more layers it'll get smarter and smarter and generalize." I said, "That's interesting. Have you tried that?" "Well, no, but it's high on our research agenda." Things did not move quite as quickly back then as they do now. He died nine years later, never having tried that idea, which turns out to be remarkably prescient. He never tried multi-layer neural nets, and all the excitement we see now about deep learning comes from a combination of two things: many-layer neural nets, and the law of accelerating returns, which I'll get to a little bit later, and which is basically the exponential growth of computing, so that we can run these massive nets and handle massive amounts of data.

It would be decades before that idea was tried. Several decades later, three-level neural nets were tried. They were a little bit better; they could deal with multiple type styles, but they still weren't very flexible. Now, it's not hard to add other layers; it's a very straightforward concept. But there was a math problem, the vanishing gradient or the exploding gradient, which I'm sure many of you are familiar with: basically, you need to take maximum advantage of the range of values in the gradients and not let them explode or disappear and lose their resolution. That's a fairly straightforward mathematical transformation. With that insight we could go to 100-layer neural nets, and that's behind essentially all of the fantastic gains we've seen recently. AlphaGo trained on every online game and became a fair Go player. It then trained itself by playing itself and soared past the best humans. AlphaGo Zero started with no human input at all; within hours of iteration it soared past AlphaGo, and it also soared past the best chess programs.
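The gradient fix he alludes to, keeping the gradients' range of values from exploding or disappearing, can be sketched as global-norm gradient clipping. This is a minimal NumPy sketch of one standard technique, not necessarily the exact transformation he has in mind:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their combined L2 norm
    never exceeds max_norm, a common guard against exploding
    gradients in very deep (e.g. 100-layer) networks."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > max_norm:
        grads = [g * (max_norm / global_norm) for g in grads]
    return grads

# An "exploded" gradient of norm 5 is rescaled to norm 1;
# a small gradient would pass through untouched.
big = clip_by_global_norm([np.array([3.0, 4.0])], max_norm=1.0)
print(round(float(np.linalg.norm(big[0])), 6))  # -> 1.0
```

Frameworks provide this built in (for example, clipping by global norm in PyTorch or TensorFlow); the sketch just shows why the rescaling preserves the gradient's direction while bounding its magnitude.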
They had another innovation: basically, you need to evaluate the quality of the board at each point, and they used another 100-layer neural net to do that evaluation. But there's still a problem in the field, captured in a motto: life begins at a billion examples. One of the reasons I'm at Google is that we have a billion examples, for example pictures of dogs and cats that are labeled; you've got a picture of a cat and it says "cat," and then you can learn from it, and you need a lot of them. AlphaGo trained on a million online moves; that's how many we had of master games, and that only created a fair Go player, one a good amateur could defeat. They worked around that in the case of Go by basically generating an infinite amount of data by having the system play itself. I had a chat with Demis Hassabis about what kinds of situations you can do that with. You have to have some way of simulating the world. Go and chess, even though Go is considered a difficult game, have definitions that can exist on one page, so you can simulate them. That applies to math: the axioms of math can be contained on a page or two; it's not very complicated. It gets more difficult when you have real-life situations like biology. We have biological simulators, but the simulators are imperfect, so learning from the simulators will only be as good as the simulators; that's actually the key to being able to do deep learning on biology. For autonomous vehicles you need real-life data. The Waymo systems have gone three and a half million miles, and that's enough data to then create a very good simulator; the simulator is really quite realistic because they had a lot of real-world experience, and they've got a billion miles in the simulator. But we don't always have that opportunity to either create the data or have the data around. Humans can learn from a small number of examples: your significant other, your professor, your boss, your investor can tell you something once or twice and you might actually learn from that.
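The self-play trick he describes, generating unlimited labeled data whenever the world's rules fit on a page, can be sketched with a deliberately tiny game. Nim here is a hypothetical stand-in for Go, purely illustrative:

```python
import random

def play_nim(n_stones=7):
    """Self-play one game of Nim (take 1 or 2 stones; taking the last
    stone wins) with a random policy, then label every recorded
    position by whether the player to move went on to win."""
    history, player = [], 0
    while n_stones > 0:
        history.append((n_stones, player))
        n_stones -= random.choice([1, 2]) if n_stones > 1 else 1
        player = 1 - player
    winner = 1 - player  # whoever just took the last stone won
    return [(pos, 1 if mover == winner else -1) for pos, mover in history]

# The rules fit in a few lines, so training data is effectively free:
random.seed(0)
data = [example for _ in range(1000) for example in play_nim()]
print(len(data) >= 4000)  # -> True (thousands of labeled positions)
```

A real system would replace the random policy with the network being trained and iterate, which is the essence of the self-play loop he describes.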
Some humans have been reported to do that. And that's kind of the remaining advantage of humans. Now, there's actually no backpropagation in the human brain; it doesn't use deep learning, it uses a different architecture. That same year, in 1962, I wrote a paper on how I thought the human brain worked. There was actually very little neuroscience to go on. There was one neuroscientist, Vernon Mountcastle, who had something relevant to say. The common wisdom at the time (and there's still a lot of neuroscience that says this) was that we have all these different regions of the brain that do different things, so they must be different. There's V1, in the back of the head, where the optic nerve spills in; it can tell that that's a curved line and that's a straight line, doing simple feature extractions on visual images, and it's actually a large part of the neocortex. There's the fusiform gyrus, up here, which can recognize faces; we know that because if it gets knocked out through injury or stroke, people can't recognize faces, though they can learn it again with a different region of the neocortex. There's the famous frontal cortex, which does language and poetry and music. So these must all work on different principles. But Mountcastle did autopsies on the neocortex, on all these different regions, and found they all looked the same: the same repeating pattern, the same interconnections. He said, "Neocortex is neocortex." So I had that hint. Otherwise, I could actually observe human brains in action, which I did from time to time, and there are a lot of hints you can get that way. For example, if I ask you to recite the alphabet, you actually don't do it from A to Z; you do it as a sequence of sequences: ABCD, EFG, HIJK. We learn things as sequences of sequences, forward, because if I ask you to recite the alphabet backwards, you can't do it unless you learn it as a new sequence. So these are all interesting hints. I wrote a paper arguing that the neocortex
is organized as a hierarchy of modules, and each module can learn a simple pattern. That's how I got to meet President Johnson, and it initiated a half-century of thinking about this issue. I came to MIT to study with Marvin Minsky; actually, I came for two reasons. One was that Minsky became my mentor, a mentorship that lasted for over 50 years. The other was the fact that MIT was so advanced it actually had a computer, which the other colleges I considered didn't have. It was an IBM 7094, with 32K of 36-bit words, so about 150K of core storage, a two-microsecond cycle time, and two cycles per instruction, so a quarter of a MIPS. And thousands of students and professors shared that one machine.

In 2012 I wrote a book about this thesis, and there's now actually an explosion of neuroscience evidence to support it. The European brain reverse-engineering project has identified a repeating module of about a hundred neurons, repeated three hundred million times; that's about 30 billion neurons in the neocortex. The neocortex is the outer layer of the brain; it's the part where we do our thinking. They can see, in each module, axons coming in from another module, and the single output axon of that module going as the input to another module. So we can see it organized as a hierarchy. It's not a physical hierarchy; the hierarchy comes from these connections. The neocortex is a very thin structure, actually one module thick; there are six layers of neurons, but they constitute one module. And we can see that it learns a simple pattern. For various reasons I cite in the book, the pattern-recognition model it's using is basically a hidden Markov model. How many of you have worked with Markov models? OK, usually no hands go up when I ask that question. A Markov model is learned, but not through backpropagation, and it can learn local features, so it's very good for speech recognition. The speech recognition work I did in the '80s used these Markov models, which became the standard approach.
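The core computation of such a hidden Markov model, the probability that the model generated an observed sequence, is the forward algorithm. The two states and all the probabilities below are toy values, purely illustrative:

```python
import numpy as np

def hmm_likelihood(obs, trans, emit, init):
    """Forward algorithm: total probability of an observation
    sequence under a hidden Markov model."""
    alpha = init * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return float(alpha.sum())

# Two states: a consonant (mostly emits symbol 0) and a vowel
# (mostly emits symbol 1) whose self-loop lets it absorb repeats.
trans = np.array([[0.5, 0.5],
                  [0.1, 0.9]])   # vowel self-loop: vowels may stretch
emit = np.array([[0.9, 0.1],
                 [0.1, 0.9]])
init = np.array([1.0, 0.0])

short = hmm_likelihood([0, 1], trans, emit, init)
stretched = hmm_likelihood([0, 1, 1, 1], trans, emit, init)
print(stretched > 0)  # -> True: the stretched vowel still matches the model
```

That self-loop is what lets one model accept a vowel held for two frames or ten, the kind of local variation in speech he goes on to mention.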
It became the standard because it can deal with local variations: the fact that a vowel is stretched, you can learn that in a Markov model. It doesn't learn long-distance relationships; those are handled by the hierarchy. Something we don't fully understand yet is exactly how the neocortex creates that hierarchy, but we have figured out how it can connect this module to that module. There's no virtual or wireless communication; it's an actual connection. So does it grow an axon from one place to another, which could be inches apart? Actually, all these connections are there from birth, like the streets and avenues of Manhattan: there are vertical and horizontal connections. If it decides (and how it makes that decision is still not fully understood) that it wants to connect this module to that module, there's already a horizontal and a vertical connection, and it just activates them. We can actually see that happening now, in real time, on non-invasive brain scans. So there's a growing amount of evidence that the neocortex is in fact a hierarchy of modules that can learn. Each module learns a simple sequential pattern, and even though the patterns we perceive don't seem like sequences (they may seem three-dimensional or even more complicated), they are in fact represented as sequences; the complexity comes in with the hierarchy.

The neocortex emerged 200 million years ago with mammals; all mammals have a neocortex, and it's one of the distinguishing features of mammals. These first mammals were small, they were rodents, but they were capable of a new type of thinking. Other, non-mammalian animals had fixed behaviors, and those fixed behaviors were very well adapted to their ecological niches, but these new mammals could invent a new behavior. So creativity and innovation were one feature of the neocortex. A mouse escaping a predator whose usual escape path is blocked will invent a new behavior to deal with it. It probably
wouldn't work, but if it did work, the mouse would remember it and would have a new behavior, and that behavior could spread virally through the community. Another mouse watching this would say to itself, "That was really clever, going around that rock; I'm going to remember to do that," and it would have a new behavior. This didn't help these early mammals that much because, as I say, the non-mammalian animals were very well adapted to their niches, and nothing much happened for a hundred and thirty-five million years. But then, 65 million years ago, something did happen: there was a sudden, violent change to the environment, which we now call the Cretaceous extinction event. There's been debate as to whether it was a meteor or asteroid impact or a volcanic eruption (the asteroid or meteor hypothesis is in the ascendancy), but if you dig down to a layer of rock reflecting 65 million years ago, the geologists will explain that it shows a very violent, sudden change to the environment. We see it all around the globe, so it was a worldwide phenomenon. The reason we call it an extinction event is that that's when the dinosaurs went extinct, that's when 75% of all the animal and plant species went extinct, and that's when mammals overtook their ecological niche. So, to anthropomorphize, biological evolution said to itself, "This neocortex is pretty good stuff," and it began to grow it. Now mammals got bigger, their brains got bigger at an even faster pace, taking up a larger fraction of their bodies, and the neocortex got bigger even faster than that, developing the curvatures that are distinctive of a primate brain, basically to increase its surface area. But if you stretched it out, the human neocortex is still a flat structure, about the size of a table napkin and just as thin. That basically created primates, which became dominant in their ecological niche. Then something else happened two million years ago: biological evolution decided to increase the neocortex further and increase the size of the enclosure, and
basically filled up our big skulls with more neocortex, in the frontal cortex. Up until recently it was felt, as I said, that the frontal cortex was different because it does these qualitatively different things, but we now realize that it's really just additional neocortex. So remember what we did with it: we were already doing a very good job of being primates, so we put it at the top of the neocortical hierarchy and increased the size of the hierarchy. It was maybe 20% more neocortex, but it doubled or tripled the number of levels, because as you go up the hierarchy it's kind of like a pyramid, with fewer and fewer modules. And that was the enabling factor for us to invent language, art, and music. Every human culture we've ever discovered has music; no primate culture really has music (there's debate about that, but it's really true). And invention, technology. Technology required another evolutionary adaptation, this humble appendage here, the thumb; no other animal has that. If you look at a chimp, it looks like they have a similar hand, but the thumb is actually down here and doesn't work very well; just watch them trying to grab a stick. So we could imagine creative solutions ("I could take that branch and strip off the leaves and put a point on it") and we could actually carry out these ideas and create tools, and then use tools to create new tools, and that started a whole other evolutionary process of tool-making. And that all came with the neocortex.

So Larry Page read my book in 2012 and liked it, and I met with him and asked for an investment in a company I'd started a couple of weeks earlier to develop these ideas commercially, because that's how I went about things as a serial entrepreneur. He said, "Well, we'll invest, but let me give you a better idea: do it here at Google. We have a billion pictures of dogs and cats, and we've got a lot of other data, and lots of computers, and lots of talent," all of which is true. And I said, "Well, I don't know, I
just started this company to develop this." He said, "Well, we'll buy your company." "And how do you value a company that hasn't done anything and just started a couple of weeks ago?" He said, "We can value anything." So I took my first job, five years ago, and I've been basically applying this hierarchical model to understanding language, which I think really is the holy grail of AI. I think Turing was correct in designating text communication as what we now call an AI-complete problem: there are no simple NLP tricks you can apply to pass a valid Turing test, with an emphasis on the word "valid." Mitch Kapor and I had a six-month debate on what the rules should be, because if you read Turing's 1950 paper, he describes this in a few paragraphs and doesn't really describe how to go about it. But if it's a valid Turing test, meaning it's really convincing you, through interrogation and dialogue, that it's a human, that requires the full range of human intelligence, and I think that test has stood the test of time. We're making very good progress on that. Just last week you may have read that two systems passed a paragraph-comprehension test; it's really very impressive. When I came to Google we were trying to pass these paragraph-comprehension tests. We aced the first-grade test; on second-grade tests we got kind of average performance; and the third-grade test had too much inference. You already had to know some common-sense knowledge, as it's called, and draw implications from things that were in different parts of the paragraph; there was too much inference, and it really didn't work. This new result is at adult level; it has just slightly surpassed average human performance. And we've seen that once an AI does something at average human levels, it doesn't take long for it to soar past average human levels. I think it'll take longer in language than it did in some simple games like Go, but it's actually very impressive that it now surpasses average human performance.
It used an LSTM, long short-term memory. But if you look at the adult test, in order to answer these questions it has to put together inferences and implications from several different things in the paragraph, with some common-sense knowledge that is not explicitly stated. So that's, I think, a pretty impressive milestone.

I've got a team of about 45 people, and we've been developing this hierarchical model. We don't use Markov models, because we can use deep learning for each module. So we create an embedding for each word, and we create an embedding for each sentence. I can talk about this because we have a published paper on it. It can take context into consideration. If you use Smart Reply in Gmail on your phone, you'll see it gives you three suggestions for responses; that's called Smart Reply. They are simple suggestions, but it has to actually understand a perhaps complicated email, and the quality of the suggestions is really quite good, quite on point. That's from my team, using this kind of hierarchical model; instead of Markov models it uses embeddings, because we can use backpropagation, so we might as well use it. But I think what's missing from deep learning is this hierarchical aspect of understanding, because the world is hierarchical; that's why evolution developed a hierarchical brain structure, to understand the natural hierarchy in the world. And there are several problems with big deep neural nets. One is the fact that you really do need a billion examples, and we don't always have them. Sometimes we can generate them, as in the case of Go, or we have a really good simulator, as in the case of autonomous vehicles; that's not quite the case yet in biology. Very often you don't have a billion examples. We do have billions of examples of language, but they're not annotated, and how would you annotate them anyway, with more language that we can't understand in the first place? So that's kind of a chicken-and-egg problem. So I believe this hierarchical structure is needed.
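The word and sentence embeddings he describes can be illustrated with a toy sketch. The hand-made 3-d vectors and the simple averaging below are illustrative assumptions, not his team's published method; real embeddings are learned by backpropagation in high dimensions:

```python
import numpy as np

# Toy word vectors, hand-made for illustration only.
word_vecs = {
    "good":  np.array([0.9, 0.1, 0.0]),
    "great": np.array([0.8, 0.2, 0.0]),
    "bad":   np.array([-0.9, 0.1, 0.0]),
    "movie": np.array([0.0, 0.0, 1.0]),
}

def sentence_embedding(sentence):
    """Crudest possible sentence embedding: average the word vectors."""
    return np.mean([word_vecs[w] for w in sentence.lower().split()], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

same = cosine(sentence_embedding("good movie"), sentence_embedding("great movie"))
diff = cosine(sentence_embedding("good movie"), sentence_embedding("bad movie"))
print(same > diff)  # -> True: similar sentences land closer in embedding space
```

The point the sketch makes is the one he relies on: once words and sentences become vectors, similarity of meaning becomes geometry, which a hierarchy of modules can operate on.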
Another criticism of deep neural nets is that they don't explain themselves very well. It's a big black box that gives you pretty remarkable answers. In the case of these games, Demis described its play, in both Go and chess, as almost an alien intelligence, because it does things that were shocking to experts, like sacrificing a queen and a bishop at the same time, or in close succession, which shocked everybody, but it then went on to win; or, early in a Go game, putting a piece at the corner of the board, which is kind of crazy to most experts, because you really want to start controlling territory, and yet on reflection that was the brilliant move that enabled it to win that game. But it doesn't really explain how it does these things. If you have a hierarchy, it's much better at explaining itself, because you can look at the content of the modules in the hierarchy and they'll explain what they're doing.

And once we apply this to health and medicine, this will get into high gear, and we're really going to see a breakout from the linear extension to longevity that we've experienced. I believe we're only about a decade away from longevity escape velocity, where we're adding more time than is going by, not just to infant life expectancy but to your remaining life expectancy. I think if someone is diligent they can be there already; I think I'm at longevity escape velocity now. Now, a word on what life expectancy means. It used to be assumed that not much would happen, so whatever your life expectancy was, with or without scientific progress, it really didn't matter. Now it matters a lot. Life expectancy really means how long you would live, as a statistical likelihood, if there were not continued scientific progress. But that's a very inaccurate assumption; scientific progress is extremely rapid. Just in AI and biotech, there are advances now every week; it's quite stunning.
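The escape-velocity arithmetic he is describing can be made concrete with a toy model; the rates and horizons here are illustrative assumptions, not medical claims:

```python
def years_remaining(expectancy_now, gain_per_year, max_years=500):
    """Toy model of longevity escape velocity: each calendar year
    consumes one year of remaining life expectancy, while research
    adds gain_per_year back. At gain_per_year >= 1 the horizon
    never shrinks (capped here at max_years for the simulation)."""
    remaining, years = expectancy_now, 0
    while remaining > 0 and years < max_years:
        remaining += gain_per_year - 1.0  # one year passes, research adds some back
        years += 1
    return years

print(years_remaining(30, 0.5))  # -> 60: below escape velocity, the horizon runs out
print(years_remaining(30, 1.0))  # -> 500: at escape velocity, it never does (cap hit)
```

The threshold behavior is the whole point: the qualitative change he calls escape velocity happens exactly when the yearly gain in remaining life expectancy reaches one year per year.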
Now, you can have a computed life expectancy of, let's say, 30 years, 50 years, or 70 years from now, and you can still be hit by the proverbial bus tomorrow (we're working on that with self-driving vehicles). But we'll get to a point, and I think if you're diligent you can be there now, of basically advancing your own statistical life expectancy at least to keep pace with the passage of time. I think it will be there for most of the population, at least if they're diligent, within about a decade. So if we can hang in there, we may get to see the remarkable century ahead. Thank you very much.

Now, questions; please raise your hand and we'll get you a mic.

Hi. You mentioned both neural network models and symbolic models, and I was wondering how much you have been thinking about combining these two approaches, creating a symbiosis between neural models and symbolic ones.

I don't think we want to use symbolic models as they've been used. How many of you are familiar with the Cyc project? That was a very diligent effort, in Texas, to define all of common-sense reasoning, and it kind of collapsed on itself and became impossible to debug, because you would fix one thing and it would break three other things. That complexity ceiling has become typical of trying to define things through logical rules. Now, it does seem that humans can understand logical rules; we have logical rules written down for things like law and game playing and so on. But you can actually train a connectionist system to have such high reliability on a certain type of action that it looks like a symbolic rule, even though it's represented in a connectionist way. And connectionist systems can both capture the soft edges, because many things in life are not sharply defined, and also handle exceptions: you don't want to sacrifice your queen in chess, except in certain situations where it might be a good idea, and you can capture that kind of complexity. So we do want to be able to learn from accumulated human wisdom that looks like it's symbolic, but I
think we'll do it with a connectionist system, though again, I think connectionist systems should develop a sense of hierarchy and not just be one big massive neural net.

So I understand how you want to use the neocortex, to extract useful stuff and commercialize it, but I'm wondering how our midbrain and the organs below the neocortex will be useful for what you want to do.

Well, the cerebellum is an interesting case in point. It actually has more neurons than the neocortex, and it used to govern most of our behavior. Some things, like writing your signature, are still controlled by the cerebellum: a simple sequence is stored there, but there's not much reasoning to it, it's basically a script. Most of our movement has actually migrated from the cerebellum to the neocortex. The cerebellum is still there; in some people the entire cerebellum is destroyed through disease, and they still function fairly normally. Their movement might be a little erratic: movement is now largely controlled by the neocortex, but some of the subtlety is a kind of pre-programmed script, so they'll look a little clumsy, but they actually function OK. A lot of other areas of the brain control autonomic functions, like breathing, but our thinking really is controlled by the neocortex. In terms of mastering intelligence, I think the neocortex is the brain region we want to study.

I'm curious what you think might happen after the singularity is reached, in terms of this exponential growth of information. Do you think it will continue, or will there be a whole paradigm shift? What do you predict?

Well, in The Singularity Is Near I talked about the atomic limits based on molecular computing as we understand it, and it can actually go well past 2045, to trillions of trillions of times greater computational capacity than we have today. So I don't see that stopping anytime soon, and we'll go way beyond what
we can imagine, and it becomes an interesting discussion what the impact on human civilization will be. So take a maybe slightly more mundane issue that comes up: AI eliminates most jobs. A point I make is that it's not the first time in human history we've done that. How many jobs circa 1900 exist today? That was the fear of the Luddites, an actual society that formed in the early 1800s around the automation of the textile industry in England. They looked at all these jobs going away and felt that employment was going to be limited to an elite. Indeed those jobs did go away, but new jobs were created. If I were a prescient futurist in 1900, I would say, "Well, 38% of you work on farms and 25% work in factories; that's two-thirds of the workforce. But I predict that by 2015, 115 years from now, it's going to be 2% on farms and 9% in factories." And everybody would go, "Oh my god, we're going to be out of work." And I would say, "Well, don't worry, for all these jobs we eliminate through automation, we're going to invent new jobs." "Oh really? What new jobs?" "Well, I don't know, we haven't invented them yet." That's the political problem: we can see jobs very clearly going away fairly soon, like driving a car or truck, and the new jobs haven't been invented. Just look at the last five or six years: a lot of the increase in employment has been through mobile-app-related ways of making money that just weren't contemplated even six years ago. If I were really prescient I would say, "Well, you're going to get jobs creating mobile apps and websites, doing data analytics, and working on self-driving cars," and nobody would have any idea what I was talking about. Now, about the new jobs, some people say, "Yeah, we created new jobs, but it's not as many." Actually, we've gone from 24 million jobs in 1900 to 142 million jobs today, from 30 percent of the population to 45 percent of the population, and the new jobs pay eleven times as much in constant dollars, and they're more interesting.
And as I talk to people starting out their careers now, they really want a career that gives them some life definition and purpose and gratification. We're moving up Maslow's hierarchy; a hundred years ago you were happy if you had a back-breaking job that put food on your family's table. And we couldn't do these new jobs without enhancing our intelligence. We've been doing that for most of the last 100 years through education: we've expanded K-through-12 spending in constant dollars tenfold, and we've gone from 38,000 college students in 1870 to 15 million today. More recently we have brain extenders. They're not yet connected directly to our brains, but they're very close at hand. When I was here, I had to take my bicycle across campus to get to the computer and show an ID to get in the building; now we carry them in our pockets and on our belts. They're going to go inside our bodies and brains, but I don't think that's a really important distinction. We're basically going to continue to enhance our capability through merging with AI, and that, I think, is the ultimate answer to the kind of dystopian view we see in futurist movies, where it's the AI versus a brave band of humans for control of humanity. We don't have one or two AIs in the world today; we have several billion. Three billion smartphones at last count, and it will be six billion in just a couple of years, according to the projections. So we're already deeply integrated with this, and I think that's going to continue, and it's going to continue to do things that you can't even imagine today, just as we are doing today things we couldn't imagine even twenty years ago.

You showed many graphs that go through exponential growth, but I haven't seen one that isn't, so I would be very interested in hearing what is not exponential. Tell me about regions that you've investigated that have not seen exponential growth, and why you think that's the case.

Well, the price performance and
capacity of information technology invariably follows an exponential. When it impacts human society, the effect can be linear. For example, the growth of democracy has been linear, but still pretty steady. You could count the number of democracies on the fingers of one hand a century ago; two centuries ago you could count the number of democracies in the world on the fingers of one finger; now there are dozens of them, and it's become kind of a consensus that that's how we should be governed. I attribute all this to the growth of information technology, communication in particular, driving the progression of social and cultural institutions. But information technology, because it ultimately depends on a vanishingly small energy and material requirement, grows exponentially, and will for a long time. There was recently a criticism that, well, chess scores have followed a remarkably straight linear progression: the best humans are around 2800, computers passed that in 1997 with Deep Blue, and it has kept going, remarkably straight. People say, "Well, this is linear, not exponential." But the chess score is a logarithmic measurement, so it really is an exponential progression.

Philosophers like to think a lot about the meaning of things, especially in the 20th century. For instance, Martin Heidegger gave a couple of speeches and lectures on the relationship of human society to technology, and he particularly distinguished between a mode of thinking which is calculating thinking and a mode of thinking which is reflective or meditative thinking. He posed the question, what is the meaning and purpose of technological development, and he couldn't find an answer; he recommended remaining open to what he called an openness to the mystery. I wonder whether you have any thoughts on this: is there a meaning or purpose to technological development, and is there a way for humans to access that meaning?

Well, we started using
technology to shore up weaknesses in our own capabilities so physically I mean who here could build this building so we've leveraged the power of our muscles with machines and we're in fact very bad at doing things that the simplest computers can do like factor numbers or even just multiply two eight digit numbers computers can do that trivially we can't so we originally started using computers to make up for that weakness I think the essence of what I've been writing about is to master the unique strengths of humanity creating loving expressions in poetry and music and the kinds of things we associate with the better qualities of humanity with machines that's the true promise of AI we're not there yet but we're making pretty stunning progress just in the last year there's been so many milestones that are really significant including in language but I think of technology as an expression of humanity it's part of who we are the human species is already a biological technological civilization and AI is part of humans so AI is human and it's part of the technological expression of humanity and we use technology to extend our reach I couldn't reach that fruit at that higher branch a thousand years ago so we invented a tool to extend our physical reach we now extend our mental reach we can access all of human knowledge with a few keystrokes and we're going to make ourselves literally smarter by merging with AI hi first of all it's an honor to hear you speak here so I first read The Singularity Is Near nine years ago or so and it changed the way I thought entirely but something I think it caused me to too steeply discount was tail risk in geopolitics in systems that span the entire globe and my concern is that there is obviously the possibility of tail risk existential level events swamping all of these trends that are otherwise war proof climate proof you name it so my question for you is what steps
do you think we can take in designing engineered systems in designing social and economic institutions to kind of minimize our exposure to these tail risks and survive to make it to you know a beautiful mind-filled future yeah well the world was first introduced to a human-made existential risk when I was in elementary school we would have these civil defense drills where we'd get under our desks and put our hands behind our heads to protect us from a thermonuclear war and it worked we made it through but that was really the first introduction to an existential risk and those weapons are still there by the way and they're still on a hair trigger and they don't get that much attention there's been a lot of discussion much of which I've been in the forefront of initiating about the existential risks of what's sometimes referred to as GNR G for genetics which is biotechnology N for nanotechnology and gray goo and R for robotics which is AI and I've been accused of being an optimist I think you have to be an optimist to be an entrepreneur if you knew all the problems you were going to encounter you'd never start any project but I've written a lot about the downsides I remain optimistic there are specific paradigms not foolproof that we can follow to keep these technologies safe so for example over 40 years ago some visionaries recognized the revolutionary potential both for promise and peril of biotechnology neither the promise nor the peril was feasible 40 years ago but they had a conference at the Asilomar conference center in California to develop both professional ethics and strategies to keep biotechnology safe and they've become known as the Asilomar guidelines they've been refined through successive Asilomar conferences much of that's baked into law and in my opinion it's worked quite well we're now as I mentioned getting profound benefit it's a trickle today it'll be a flood over the next decade and the number of people who have been harmed either through intentional or
accidental abuse of biotechnology so far is zero actually I take that back there was one boy who died in gene therapy trials about 12 years ago and there were congressional hearings and they cancelled all research on gene therapy for a number of years you could do an interesting master's thesis and demonstrate that 300,000 people died as a result of that delay but you can't name them they can't go on CNN so we don't know who they are but it has to do with the balancing of risk and in large measure virtually no one has been hurt by biotechnology now that doesn't mean we can cross it off our worry list okay we took care of that one because the technology keeps getting more sophisticated and CRISPR is a great opportunity there's hundreds of trials of CRISPR technologies to overcome disease but it could be abused you can describe scenarios so we have to keep reinventing the guidelines in January we had our first Asilomar conference on AI ethics and so I think this is a good paradigm it's not foolproof I think the best way we can assure a democratic future that includes our ideas of liberty is to practice that in the world today because the future world of the singularity which is a merger of biological and non-biological intelligence is not going to come from Mars it's going to emerge from our society today so if we practice these ideals today there's a higher chance of us practicing them as we get more enhanced with technology if that doesn't sound like a foolproof solution it isn't but I think that's the best approach in terms of technological solutions AI is the most daunting you can imagine there are technical solutions for biotechnology and nanotechnology but there's really no subroutine you can put in your AI software that will assure that it remains safe intelligence is inherently not controllable if there's some AI that's much smarter than you that's out for your destruction the best way to deal with that is not to get in that situation in the first
place if you are in that situation find some AI that will be on your side but basically I believe we have been headed through technology toward a better reality look around the world and people really think things are getting worse and I think that's because our information about what's wrong with the world is getting exponentially better I say this is the most peaceful time in human history and people say what are you crazy didn't you hear about the event yesterday and last week well a hundred years ago there could be a battle that wiped out the next village and you wouldn't even hear about it for months I have all these graphs on education and literacy which has gone from like 10% to 90% over a century and on health and wealth poverty has declined 95% in Asia over the last 25 years documented by the World Bank all these trends are very smoothly getting better and everybody thinks things are getting worse but you're right like on violence that curve could be quite disrupted if there's an existential event as I say I'm optimistic but that is something we need to deal with and a lot of it is not technological it's dealing with our social and cultural institutions so you mentioned also the exponential growth of software and of ideas I guess related to software so one of the reasons you said that information technology's cost follows this exponential is because of fundamental properties of matter and energy but in the case of ideas why would it have to be exponential well a lot of ideas produce exponential gains they don't increase performance linearly there was actually a study during the Obama administration by his scientific advisory board assessing this question how much gain on 23 classical engineering problems came through hardware improvements over the last decade versus software improvements there was about a thousand to one improvement about doubling every year from hardware and there was an average of like twenty-six thousand to one through software improvements algorithmic improvements so we do see both and apparently if you come up with an advance it doubles the performance or multiplies it by ten we see basically exponential growth from each innovation and we certainly see that in deep learning the architectures are getting better while we also have more data and more computation and more memory to throw at these algorithms thank you for being
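The compounding described in that last answer is easy to check with a little arithmetic; a minimal sketch follows, where the ~2.77x yearly software gain is a hypothetical figure back-solved from the roughly 26,000-to-1 decade total quoted above, not a number taken from the study itself:

```python
# Sketch of how steady annual gains compound over a decade
# (illustrative figures only, not data from the PCAST study).

def compound(gain_per_year: float, years: int) -> float:
    """Total improvement factor after `years` of steady annual gains."""
    return gain_per_year ** years

# Hardware price-performance doubling every year for ten years:
hardware = compound(2.0, 10)   # 2^10 = 1024, i.e. "about a thousand to one"

# A hypothetical ~2.77x average yearly algorithmic gain reproduces the
# quoted ~26,000x decade total for software improvements:
software = compound(2.77, 10)

# Independent streams of improvement multiply, so the combined gain
# over the decade is the product of the two factors:
combined = hardware * software

print(f"hardware ~{hardware:,.0f}x  software ~{software:,.0f}x  combined ~{combined:.2e}x")
```

The point of the multiplication at the end is the one Kurzweil makes: hardware and algorithmic progress are separate exponentials, so their combined effect is the product of the two, not the sum.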