Transcript
maAJXNDjIcQ • Yann LeCun: Human-Level Artificial Intelligence | AI Podcast Clips
What do you think it takes to build a system with human-level intelligence? You talked about the AI system in the movie Her being way out of our current reach. This might be outdated as well, but it's still way out of reach. What would it take to build her, do you think?

So I can tell you the first two obstacles that we have to clear, but I don't know how many obstacles there are after this. The image I usually use is that there is a bunch of mountains that we have to climb, and we can see the first one, but we don't know whether there are fifty mountains behind it or not. And this might be a good metaphor for why AI researchers in the past have been overly optimistic about the results of AI. For example, Newell and Simon wrote the General Problem Solver, and they called it the General Problem Solver, okay? And of course the first thing you realize is that all the problems you want to solve are exponential, and so you can't actually use it for anything useful.

Yes, so all you see is the first peak. So in general, what are the first couple of peaks for Her?

The first peak, which is precisely what I'm working on, is self-supervised learning: how do we get machines to learn models of the world by observation, kind of like babies and like young animals? So we've been working with cognitive scientists. Emmanuel Dupoux, who is half-time at FAIR in Paris and is also a researcher at a French university, has this chart that shows at how many months of life baby humans learn different concepts, and you can measure this in various ways. Things like distinguishing animate objects from inanimate objects: you can tell the difference at age two or three months. Whether an object is going to stay stable or is going to fall: around four months you can tell. Things like this. And then things like gravity, the fact that objects are not supposed to float in the air but are supposed to fall: you learn this around the age of eight or nine months. If you look at eight-month-old babies, you give them a bunch of toys on their high chair, the first thing they do is throw them on the ground and look at them. It's because they're actively learning about gravity.

Gravity, yeah.

Okay, so they're not trying to annoy you, but they need to do the experiment, right? So how do we get machines to learn like babies? Mostly by observation, with a little bit of interaction, and learning those models of the world, because I think that's really a crucial piece of an intelligent autonomous system. If you think about the architecture of an intelligent autonomous system, it needs to have a predictive model of the world: something that says, here is the state of the world at time T, and here is the state of the world at time T plus one if I take this action. And it's not a single answer; it can be...

A distribution, yeah.

Yeah, well, we don't know how to represent distributions in high-dimensional continuous spaces, so it's got to be something weaker than that, but with some representation of uncertainty. If you have that, then you can do what optimal control theorists call model predictive control, which means that you can run your model with a hypothesis for a sequence of actions and then see the result. The other thing you need is some sort of objective that you want to optimize: am I reaching the goal of grabbing this object? Am I minimizing energy? Whatever it is, there is some sort of objective that you have to minimize.
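A minimal sketch of the planning loop described here, in Python. Every name in it (world_model, objective, plan, the latent input, the random-shooting planner) is a hypothetical stand-in chosen to illustrate the idea, not LeCun's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 4, 2
HORIZON, N_CANDIDATES = 10, 256

def world_model(state, action, latent):
    # Stand-in for a learned predictor: state at time T plus an action
    # gives a predicted state at time T+1. The `latent` input is one crude
    # way to carry uncertainty: different latent samples yield different
    # plausible futures, since we cannot represent a full distribution
    # over high-dimensional continuous states.
    return state + 0.1 * np.tanh(action).sum() + 0.01 * latent

def objective(state, goal):
    # Predicted cost to minimize ("am I reaching the goal of grabbing
    # this object? am I minimizing energy?").
    return np.sum((state - goal) ** 2)

def plan(state, goal):
    # Random-shooting model predictive control: hypothesize candidate
    # action sequences, run the model forward on each, score the imagined
    # trajectories against the objective, and return the first action of
    # the best sequence.
    best_cost, best_first_action = np.inf, None
    for _ in range(N_CANDIDATES):
        actions = rng.normal(size=(HORIZON, ACTION_DIM))
        s, cost = state.copy(), 0.0
        for a in actions:
            s = world_model(s, a, latent=rng.normal(size=STATE_DIM))
            cost += objective(s, goal)
        if cost < best_cost:
            best_cost, best_first_action = cost, actions[0]
    return best_first_action

state, goal = np.zeros(STATE_DIM), np.ones(STATE_DIM)
print("first action to execute:", plan(state, goal))
```

The design point is simply that planning happens inside the model: candidate action sequences are imagined forward and scored against the objective before anything is executed in the world.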
And so in your head, if you have this model, you can figure out the sequence of actions that will optimize your objective. That objective is something that ultimately is rooted in your basal ganglia, at least in the human brain. That's what the basal ganglia computes: your level of contentment or miscontentment.

I'm not sure that's a word. Unhappiness?

Okay, yeah, discontentment. And so your entire behavior is driven towards minimizing that objective, which is maximizing your contentment, computed by your basal ganglia. And what you have is an objective function, which is basically a predictor of what your basal ganglia is going to tell you. You're not going to put your hand in the fire, because you know it's going to burn and you're going to get hurt, and you're predicting this because of your model of the world and your predictor of this objective. So you have four components: you have the hardwired contentment objective computer, if you want, calculator; and then you have three components. One is the objective predictor, which basically predicts your level of contentment; one is the model of the world; and there's a third module I didn't mention, which is the module that will figure out the best course of action to optimize an objective given your model.

Okay, yeah, cool. A policy, a policy network or something like that, right?

Now, you need those three components to act autonomously and intelligently, and you can be stupid in three different ways. You can be stupid because your model of the world is wrong. You can be stupid because your objective is not aligned with what you actually want to achieve; in humans, that would be a psychopath. And the third way you can be stupid is that you have the right model and you have the right objective, but you're unable to figure out a course of action to optimize your objective given your model.
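To make the four-component picture and the three failure modes concrete, here is a toy sketch in Python. All of the module names and their bodies are invented for illustration; they show the structure of the architecture being described, not any real implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def intrinsic_cost(state):
    # Hardwired contentment calculator (the basal-ganglia analogue):
    # the ground-truth discontentment, only available once you are
    # actually in the state.
    return float(np.sum(state ** 2))

def critic(state):
    # Objective predictor: trained to anticipate intrinsic_cost, so the
    # agent can predict "this will burn" without touching the fire.
    # Here it is simply a perfect copy.
    return float(np.sum(state ** 2))

def world_model(state, action):
    # Predictive model of the world: (state at time T, action) maps to
    # a predicted state at time T+1.
    return state + action

def policy(state, n_samples=64):
    # Action-selection module: picks the sampled action whose imagined
    # outcome, under the world model, the critic scores best.
    candidates = rng.normal(size=(n_samples, state.shape[0]))
    scores = [critic(world_model(state, a)) for a in candidates]
    return candidates[int(np.argmin(scores))]

# The three ways to be "stupid", in these terms:
#   1. world_model is wrong: plans rest on false predictions.
#   2. critic disagrees with intrinsic_cost: the objective is misaligned
#      with what you actually want to achieve (the "psychopath" case).
#   3. world_model and critic are both right, but policy cannot find
#      actions that optimize the objective given the model.
print("chosen action:", policy(np.ones(3)))
```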