Transcript
3t06ajvBtl0 • Matt Botvinick: Neuroscience, Psychology, and AI at DeepMind | Lex Fridman Podcast #106
Kind: captions
Language: en
The following is a conversation with Matt Botvinick, director of neuroscience research at DeepMind. He's a brilliant cross-disciplinary mind, navigating effortlessly between cognitive psychology, computational neuroscience, and artificial intelligence.

Quick summary of the ads: two sponsors, the Jordan Harbinger Show and Magic Spoon cereal. Please consider supporting the podcast by going to jordanharbinger.com/lex and also going to magicspoon.com/lex and using code LEX at checkout after you buy all of their cereal. Click the links, buy the stuff. It's the best way to support this podcast and the journey I'm on.

If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or connect with me on Twitter at Lex Fridman, spelled surprisingly without the E, just F-R-I-D-M-A-N.

As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation.

This episode is supported by the Jordan Harbinger Show. Go to jordanharbinger.com/lex. It's how he knows I sent you. On that page, subscribe to his podcast on Apple Podcasts, Spotify, and you know where to look. I've been binging on his podcast. Jordan is a great interviewer and an even better human being. I recently listened to his conversation with Jack Barsky, former sleeper agent for the KGB in the 80s and author of Deep Undercover, a memoir that paints yet another interesting perspective on the Cold War era. I've been reading a lot about the Stalin and then Gorbachev and Putin eras of Russia, but this conversation made me realize that I need to do a deep dive into the Cold War era to get a complete picture of Russia's recent history. Again, go to jordanharbinger.com/lex and subscribe to his podcast. That's how he knows I sent you. It's awesome. You won't regret it.

This episode is also supported by Magic Spoon: low-carb, keto-friendly, super amazingly delicious cereal. I've been on a keto or very low-carb diet for a long time now. It helps with my mental performance, it helps with my physical performance, even during this crazy push-up and pull-up challenge I'm doing, including the running. It just feels great. I used to love cereal. Obviously, I can't have it now, because most cereals have a crazy amount of sugar, which is terrible for you, so I quit eight years ago. But Magic Spoon, amazingly, somehow, is a totally different thing: zero sugar, 11 grams of protein, and only three net grams of carbs. It tastes delicious. It has a lot of flavors, too, new ones including peanut butter, but if you know what's good for you, you'll go with cocoa, my favorite flavor and the flavor of champions. Click the magicspoon.com/lex link in the description and use code LEX at checkout for free shipping and to let them know I sent you. They've agreed to sponsor this podcast for a long time. They're an amazing sponsor and an even better cereal. I highly recommend it. It's delicious. It's good for you.
You won't regret it. And now, here's my conversation with Matt Botvinick.

How much of the human brain do you think we understand?

I think we're at a weird moment in the history of neuroscience, in the sense that I feel like we understand a lot about the brain at a very high level, but a very, very coarse level.

When you say high level, what are you thinking? Are you thinking functional?

Yeah, structurally. In other words: what is the brain for? What kinds of computation does the brain do? What kinds of behaviors would we have to explain if we were going to look down at the mechanistic level? At that level, I feel like we understand much, much more about the brain than we did when I was in high school. But it's almost like we're seeing it through a fog. It's only at a very coarse level. We don't really understand what the neuronal mechanisms are that underlie these computations. We've gotten better at saying what the functions are that the brain is computing, that we would have to understand if we were going to get down to the neuronal level. And at the other end of the spectrum, in the last few years, incredible progress has been made in terms of technologies that allow us to see, actually literally see, in some cases, what's going on at the single-unit level, even the dendritic level. And then there's this yawning gap in between.

Oh, that's interesting. So at the high level, there's almost a cognitive science, and then at the neuronal level, that's neurobiology and neuroscience,
yeah, just studying single neurons, the synaptic connections, the dopamine, all the kinds of neurotransmitters.

One blanket statement I should probably make is that, as I've gotten older, I have become more and more reluctant to make a distinction between psychology and neuroscience. To me, the point of neuroscience is to study what the brain is for. If you're a nephrologist and you want to learn about the kidney, you start by saying: what is this thing for? Well, it seems to be for taking blood on one side that has metabolites in it that shouldn't be there, sucking them out of the blood while leaving the good stuff behind, and then excreting that in the form of urine. That's what the kidney is for. It's obvious. So the rest of the work is deciding how it does that. And this, it seems to me, is the right approach to take to the brain. You say: well, what is the brain for? The brain, as far as I can tell, is for producing behavior. It's for going from perceptual inputs to behavioral outputs, and the behavioral outputs should be adaptive. That's what psychology is about: understanding the structure of that function. And then the rest of neuroscience is about figuring out how those operations are actually carried
out at a mechanistic level.

It's really interesting, but unlike the kidney, with the brain there's this gap between the electrical signal and behavior. So you truly see neuroscience as the science that touches behavior, how the brain generates behavior, or how the brain converts raw visual information into understanding. You basically see cognitive science, psychology, and neuroscience as all one science.

Yeah.

Is that a personal statement? Is that a hopeful or a realistic statement? Certainly you will be correct in your feeling in some number of years, but that number of years could be two hundred, three hundred years from now. Is that aspirational, or is that a pragmatic engineering feeling that you have?

It's both, in the sense that this is what I hope and expect will bear fruit over the coming decades, but it's also pragmatic in the sense that I'm not sure what we're doing in either psychology or neuroscience if that's not the framing. I don't know what it means to understand the brain if part of the enterprise is not about understanding the behavior that's being produced.

I mean, yeah, but I would
have compared it to maybe astronomers looking at the movement of the planets and the stars, without any interest in the underlying physics. And I would argue that, at least in the early days, there was some value in just tracing the movement of the planets and the stars without thinking about the physics too much, because it's such a... to start thinking about the physics before you even understand the basic structural elements of it...

Oh, I agree with that. I agree with what you're saying. In the end, the goal should be to deeply understand.

Well, right. And I thought about this a lot when I was in grad school, because a lot of what I studied in grad school was psychology, and I found myself a little bit confused about what it meant. It seemed like what we were talking about a lot of the time were virtual causal mechanisms: oh, well, attentional selection selects some object in the environment, and information about that is then passed on to the motor system. But these are virtual mechanisms; they're metaphors. There's no reduction going on in that conversation to some physical mechanism, which is really what it would take to fully understand how behavior is arising. But the causal mechanisms are definitely neurons interacting. I'm willing to say that at this point in history.

So in psychology, at least for me personally, there was this strange insecurity about trafficking in these metaphors, which were supposed to explain the function of the mind. If you can't ground them in physical mechanisms, then what is the explanatory validity of these explanations? I managed to soothe my own nerves by thinking about the history of genetics research. I'm very far from being an expert on the history of this field, but I know enough to say that Mendelian genetics preceded Watson and Crick, and so there was a significant period of time during which people continued productively investigating the structure of inheritance using what was essentially a metaphor
of the gene. Genes do this, and genes do that. But where are the genes? They're sort of an explanatory thing that we made up, and we ascribed to them these causal properties: there's a dominant, there's a recessive, and then they recombine. And then, later, there was a kind of blank there that was filled in with a physical mechanism, and that connection was made. But it was worth having that metaphor, because it gave us a good sense of what kind of causal mechanism we were looking for.

And the fundamental metaphor of cognition, you said, is the interaction of neurons. Is that the metaphor?

No, no. The metaphors we use in cognitive psychology are things like attention, the way that memory works. I retrieve something from memory, right? A memory retrieval occurs. What is that? That's not a physical mechanism that I can examine in its own right, but it's still worth having that metaphorical level.

Yeah. So I
misunderstood, actually. So the higher-level abstractions are the metaphor that's most useful.

Yes.

But then how does that connect to the idea that that arises from the interaction of neurons? Is the interaction of neurons also a metaphor to you? Or is it literally, like, that's no longer a metaphor, that's already the lowest level of abstraction that could actually be directly studied?

Well, I'm hesitating, because I think what I want to say could end up being controversial. What I want to say is: yes, the interactions of neurons, that's not metaphorical. That's a physical fact. That's where the causal interactions actually occur. Now, I suppose you could say, well, even that is metaphorical relative to the quantum events that underlie it, but I don't want to go down that rabbit hole.

Turtles all the way down.

Turtles on top of turtles, yes. But there's a reduction that you can do. You can say these psychological phenomena can be explained through a very different kind of causal mechanism, which has to do with neurotransmitter release. And so what we're really trying to do in neuroscience writ large, which for me, as I say, includes psychology, is to take these psychological phenomena and map them onto neural events.

I think remaining forever at the level of description that is natural for psychology would, for me personally, be disappointing. I want to understand how mental activity arises from neural activity. But the converse is also true: studying neural activity without any sense of what you're trying to explain, to me, feels like, at best, groping around at random.

Now, you've kind of talked about this bridging of the gap between psychology and neuroscience, but do you think it's possible... Like, my love, I fell in love with psychology and psychiatry in general, with Freud, when I was really young, and I hoped to understand the mind, and for me, understanding the mind, at least at a young age, before I discovered AI and even neuroscience, was psychology. Do you think it's possible to understand the mind without getting into all the messy details of neuroscience? Like you kind of mentioned, to you it's appealing to try to understand the mechanisms at the lowest level, but do you think that's needed, that's required, to understand how the mind works?

That's an important part of the whole picture, but I would be the last person on earth to suggest that that reality renders psychology in its own right unproductive. I trained as a psychologist. I am fond of saying that I have learned much more from psychology than I have from neuroscience. To me, psychology is a hugely important discipline, and one thing that warms my heart is that ways of investigating behavior that have been native to cognitive psychology since its dawn in the 60s are starting to become interesting to AI researchers, for a variety of reasons, and that's been exciting for me to see.

Can you maybe talk a little bit about what you see as beautiful aspects of psychology, and maybe limiting aspects of psychology?

I mean, maybe just to start off: psychology as a science, as a field... to me, when I understood how psychology is actually carried out, it was really disappointing to see, in two respects. One is how small the N is, how small the number of subjects in the studies is. And two, it was disappointing to see how controlled it all was, how much of it was in the lab, how it wasn't studying humans in the wild; there's no mechanism for studying humans in the wild. So that's where I became a little bit disillusioned with psychology. And then the modern world of the Internet is so exciting to me. The Twitter data, or YouTube data, data of human behavior on the Internet, becomes exciting, because then the N grows and the "in the wild" grows. But that's just my narrow sense, my optimistic or pessimistic, cynical view of psychology. How do you see the field broadly?

When I was in graduate school, it was early enough that there was still a thrill in seeing that there were ways of doing experimental science that provided insight into the structure of the mind. One thing that impressed me most when I was at that stage in my education was neuropsychology: analyzing the behavior of populations who had brain damage of different kinds, and trying to understand what the specific deficits were that arose from a lesion in a particular part of the brain. The kind of experimentation that was done, and that's still being done, to get answers in that context was so creative and so deliberate. It was good science. An experiment answered one question but raised another, and somebody would do an experiment that answered that question, and you really felt like you were narrowing in on some kind of approximate understanding of what this part of the brain was for.

Do you have an example from memory of what kind of aspects of the mind could
be studied in this kind of way?

Oh, sure. I mean, the very detailed neuropsychological studies of language function: looking at production and reception, and the relationship between visual function, reading, and auditory and semantic function. There were, and still are, these beautiful models that came out of that kind of research that really made you feel like you understood something that you hadn't understood before about how language processing is organized in the brain. But, having said all that, I agree with you that the cost of doing highly controlled experiments is that you, by construction, miss out on the richness and complexity of the real world. One
thing is, I was drawn into science by what in those days was called connectionism, which is of course what we now call deep learning, and at that point in history, neural networks were primarily being used to model human cognition. They weren't yet really useful for industrial applications.

So you found neural networks in biological form beautiful.

Oh, neural networks were very concretely the thing that drew me into science. I was handed... are you familiar with the PDP books from the 80s? When I was in... I went to medical school before I went into science.

Really? Wow.

Yeah. I also did a graduate degree in art history, so I kind of explored.

Well, art history I understand; that's just a curious, creative mind. But medical school... with the dream of what? If we take that slight tangent, what did you want to be? A surgeon?

I actually was quite interested in surgery. I was interested in surgery and psychiatry, and I thought I must be the only person on the planet who was torn between those two fields. And I said exactly that to my advisor in medical school, who turned out, I found out later, to be a famous psychoanalyst. And he said to me: no, no, it's actually not so uncommon to be interested in surgery and psychiatry. And he conjectured that the reason that people develop these two interests is that both fields are about going beneath the surface and kind of getting into the secret... I mean, maybe you understand this as someone who was interested in psychoanalysis at some stage. There's a cliché phrase that people use now, like on NPR: the secret life of bees, or whatever. And that was part of the thrill of surgery: seeing the secret activity that's inside everybody's abdomen and thorax.

It's a very poetic way to connect two disciplines that are, practically speaking, very different from each other.

That's for sure. That's for
sure, yes.

So, how did we get onto medical school? So, I was in medical school, and I was doing a psychiatry rotation, and my advisor in that rotation asked me what I was interested in, and I said, well, maybe psychiatry. He said, why? And I said, well, I've always been interested in how the brain works, and I'm pretty sure that nobody's doing scientific research that addresses my interests, which are... I didn't have a word for it then, but I would have said it was about cognition. And he said, well, I'm not sure that's true. You might be interested in these books. And he pulled down the PDP books from his shelf. They were still shrink-wrapped; he hadn't read them. But he handed them to me and said, feel free to borrow these. I went back to my dorm room and just read them cover to cover.

And what's PDP?

Parallel distributed processing, which was one of the original names for deep learning.

And so, I apologize for the romanticized question, but what idea in the space of neuroscience, in the space of the human brain, is to you the most beautiful, mysterious, surprising?

What had always fascinated me, even when I was a pretty young kid, I think, was the paradox that lies in the fact that the brain is so mysterious and seems so distant, but at the same time it's responsible for the full transparency of everyday life. The brain is literally what makes everything obvious and familiar, and there's always one in the room with you. When I taught at Princeton, I used to teach a cognitive neuroscience course, and the very last thing I would say to the students was this: when people think of scientific inspiration, the metaphor is often, look to the stars. The stars will inspire you to wonder at the universe and think about your place in it and how things work. And I'm all for looking at the stars, but I've always been much more inspired, and my sense of wonder comes, not from the distant, mysterious stars, but from the extremely intimately close brain.

Yeah, there's something just endlessly fascinating to me about that. Like you said, the one is close and yet distant, in terms of our understanding
of it. Are you also captivated by the fact that this very conversation is happening because two brains are communicating? I guess what I mean is the subjective nature of the experience. If we can take a small tangent into the mystical of it, the consciousness: when you're saying you're captivated by the idea of the brain, are you talking specifically about the mechanism of cognition, or are you also... at least for me, it's almost paralyzing, the beauty and the mystery of the fact that it creates the entirety of the experience, not just the reasoning capability, but the experience?

Well, I
definitely resonate with that latter thought, and I often find discussions of artificial intelligence to be disappointingly narrow, speaking as someone who has always had an interest in art.

Great, I was just going to go there, because it sounds like something somebody with an interest in art would say.

Yeah. I mean, there are many layers to full-bore human experience, and in some ways it's not enough to say, oh, well, don't worry, we're talking about cognition, but we'll add emotion. There's an incredible scope to what humans go through in every moment. And yes, that's part of what fascinates me: that our brains are producing that, but at the same time it's so mysterious to us. Our brains are literally in our heads, producing all of this, and yet it's so mysterious to us. And the scientific challenge of getting at the actual explanation for that is so overwhelming. I don't know... certain people have fixations on particular questions, and that's just always been mine.

Yeah, I
would say the poetry of that is fascinating. I'm really interested in natural language as well, and when you look at the artificial intelligence community, it always saddens me how much of the magic of language is lost when you try to create a benchmark for the community to gather around. There's something, when we talk about experience, the music of the language, the wit, the something that makes a rich experience, something that would be required to pass the spirit of the Turing test, that is lost in these benchmarks. And I wonder how to get it back in, because it's very difficult. The moment you try to do real, good, rigorous science, you lose some of that magic. When you try to study cognition in a rigorous scientific way, it feels like you're losing some of the magic, seeing cognition in a mechanistic way, which AI does at this stage in our history.

Well, okay, I
agree with you, but at the same time, one thing that I found really exciting about that first wave of deep learning models in cognition was the fact that the people who were building these models were focused on the richness and complexity of human cognition. So, an early debate in cognitive science, which I sort of witnessed as a grad student, was about something that sounds very dry: the formation of the past tense. There were these two camps. One said: well, the mind encodes certain rules, and it also has a list of exceptions, because of course the rule is add "ed," but that's not always what you do, so you have to have a list of exceptions. And then there were the connectionists, who evolved into the deep learning people, who said: well, if you look carefully at the data, if you actually look at corpora, like language corpora, it turns out to be very rich. Because yes, there are most verbs where you just tack on "ed," and then there are exceptions, but the exceptions aren't just random. There are certain clues to which verbs should be exceptional, and then there are some exceptions to the exceptions. There was a word that was deployed in order to capture this, which was "quasi-regular." In other words, there are rules, but it's messy, and there's structure even among the exceptions. You could try to write down that structure in some sort of closed form, but really the right way to understand how the brain is handling all of this, and by the way producing all of this, is to build a deep neural network and train it on this data and see how it ends up representing all of this richness. That was the spirit in which deep learning was deployed in cognitive psychology. It was about that richness, and that's something that I always found very compelling. Still do.
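
The modeling move described here, training a network on verb data and letting it absorb the quasi-regular structure, can be sketched in a toy way. To be clear, this is not the actual past-tense model from the connectionist literature; the verb list, the last-two-letter features, and the tiny one-unit network below are all illustrative assumptions, just a minimal pure-Python sketch of learning regular versus irregular behavior from data rather than hand-coding a rule plus an exception list.

```python
import math

# (verb, 1 if it takes the regular "-ed" past tense, 0 if irregular)
# Hypothetical toy training set, not a real corpus.
VERBS = [
    ("walk", 1), ("talk", 1), ("jump", 1), ("need", 1), ("play", 1),
    ("call", 1), ("open", 1),
    ("sing", 0), ("ring", 0), ("bring", 0), ("swim", 0), ("go", 0),
    ("eat", 0), ("sleep", 0), ("keep", 0),
]

def features(verb):
    # Crude orthographic cues: one-hot code for each of the last two letters.
    f = [0.0] * 52
    for i, ch in enumerate(verb[-2:]):
        f[26 * i + (ord(ch) - ord("a"))] = 1.0
    return f

def predict(w, b, x):
    # Single logistic unit: probability that the verb is regular.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=500, lr=0.5):
    # Stochastic gradient descent on log loss.
    w, b = [0.0] * 52, 0.0
    for _ in range(epochs):
        for verb, label in data:
            x = features(verb)
            err = predict(w, b, x) - label  # gradient of the log loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(VERBS)
print(predict(w, b, features("walk")))  # high: looks regular
print(predict(w, b, features("sing")))  # low: "-ng" ending cues irregularity
```

Even this crude sketch ends up sensitive to graded cues, such as "-ng" endings hinting at irregularity (sing, ring, bring), without any explicit rule/exception split; that graded, statistical sensitivity is the kind of structure the word "quasi-regular" points at.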

Is there something especially interesting and profound to you, in terms of our current deep learning, artificial neural network approaches, versus whatever we do understand about the biological neural networks in our brain? There are quite a few differences. Are some of them, to you, either interesting or perhaps profound, in terms of the gap we might want to try to close in trying to create a human-level intelligence?

What I would say here is something that a lot of people are saying, which is that one seeming limitation of the systems that we're building now is that they lack the kind of flexibility, the readiness to turn on a dime when the context calls for it, that is so characteristic of human behavior.

So which aspect of the neural networks in our brain is that connected to, for you? Is that closer to the cognitive science level? Now, again, see, my natural inclination is to separate this into three disciplines, neuroscience, cognitive science, and psychology, and you've already kind of shut that down by saying you don't really see them as separate. But just to look at those layers: is there something about the lowest layer, the way the neurons interact, that is profound to you in terms of this difference from the artificial neural networks, or is the key difference at a higher level of abstraction?

One thing I often think about is that if you take an introductory computer science course and they are introducing you to the notion of Turing machines, one way of articulating what the significance of a Turing machine is, is that it's a machine emulator: it can emulate any other machine. That way of looking at a Turing machine really sticks with me. I think of humans as maybe sharing in some of that character. We're capacity-limited, we're not Turing machines, obviously, but we have the ability to adapt behaviors that are very much unlike anything we've done before, because there's some basic mechanism that's implemented in our brain that allows us to run software.

But just on
that point, you mentioned the Turing machine. But nevertheless, in your view, our brains are fundamentally just computational devices? Is that what you're getting at? I was a little bit unclear on where you drew this line. Is there any magic in there, or is it just basic computation?

I'm happy to think of it as just basic computation, but, mind you, I won't be satisfied until somebody explains to me what the basic computations are that are leading to the full richness of human cognition. It's not going to be enough for me to understand what the computations are that allow people to do arithmetic or play chess. I want the whole thing.

On a small tangent,
because you kind of mentioned coronavirus and group behavior: is there something interesting, in your search for understanding the human mind, in the behavior of large groups, just the behavior of groups? Seeing that as a collective mind, as a collective intelligence, perhaps seeing groups of people as a single intelligent organism, especially looking at the reinforcement learning work you've done recently?

Well, I have the honor of working with a lot of incredibly smart people, and I wouldn't want to take any credit for leading the way on the multi-agent work that's come out of my group or DeepMind lately, but I do find it fascinating. And I think it can't be debated: human behavior arises within communities. That just seems to me self-evident.

But to me, so it
is self-evident, but that seems to be a profound aspect of something that created us. Like, if you look at 2001: A Space Odyssey, when the monkeys touch the monolith, that's the magical moment. I think Yuval Noah Harari argues that the ability of large numbers of humans to hold an idea, to converge towards an idea together, like you said, shaking hands and bumping elbows, to somehow converge without even being in a room all together, just kind of this distributed convergence towards an idea over a particular period of time, seems to be fundamental to just every aspect of our cognition, of our intelligence. Because, humans... we'll talk about reward, but it seems like we don't really have a clear objective function under which we operate, and yet we all kind of converge towards one somehow. And that, to me, has always been a mystery that I think is somehow productive for also understanding AI systems. But I guess that's the next step; the first step is trying to understand the
mind.

Well, I don't know. I mean, I think there's something to the argument that that kind of strictly bottom-up approach is wrongheaded. In other words, there are basic phenomena, basic aspects of human intelligence, that can only be understood in the context of groups. I'm perfectly open to that. I've never been particularly convinced by the notion that we should consider intelligence to inhere at the level of communities. I don't know why; I'm just sort of stuck on the notion that the basic unit that we want to understand is individual humans. And if we have to understand that in the context of other humans, fine. But for me, intelligence is... I stubbornly define it as something that is an aspect of an individual human. That's just my take.
that could be the reduction is dream of
a scientist because you can understand a
single human it also is very possible
that intelligence can only arise when
there's multiple intelligences when
there's multiple sort of it's a sad
thing if that's true because it's very
difficult to study but if it's just one
human that one human will not be Homo
Sapien would not become that intelligent
that's a real that's a possibility I I'm
with you well one thing I will say along
these lines is that I think I think a
serious effort to understand human
intelligence and maybe to build a
human-like intelligence needs to pay
just as much attention to the structure
of the environment as to the structure of the cognizing system, whether it's a brain or an AI system. That's one thing I took away from my early studies with the pioneers of neural network research, people like Jay McClelland and John Cohen.
The structure of cognition is only partly a function of the architecture of the brain and the learning algorithms that it implements; what really shapes it is the interaction of those things with the structure of the world in which those things are embedded. Right, and that's made most clear in reinforcement learning, where you simulate an environment: you can only learn as much as you can simulate, and that's what DeepMind made very clear. Well, the other
aspect of the environment is the self-play mechanism, the competitive behavior in which the other agent becomes the environment,
essentially. Yeah. And one of the most exciting ideas in AI is the self-play mechanism that's able to learn successfully. So there you go, there's a thing where competition is essential for learning, at least in that context. So if we can step back
into another beautiful world which is
the actual mechanics the dirty mess of
it of the human brain is is there
something for people who might not know
is there something in common or describe
the key parts of the brain that are
important for intelligence or just in
general what are the different parts of
the brain that you're curious about that
you've studied and that are just good to
know about when you're thinking about
cognition? Well, my area of expertise, if I have one, is prefrontal cortex. So what's that? It depends on who you ask. The technical definition is anatomical: there are parts of your brain that are responsible for motor behavior, and they're very easy to identify, and the region of your cerebral cortex, the sort of outer crust of your brain, that lies in front of those is defined as the prefrontal cortex. And when you
say anatomical sorry to interrupt
so that's referring to sort of the
geographic region yeah as opposed to
some kind of functional definition
Exactly. This is kind of the coward's way out; I'm telling you what the prefrontal cortex is just in terms of what part of the real estate it occupies. The thing in the front? Yeah, exactly. And in
fact the early history of
the neuroscientific investigation of what this front part of the brain does is sort of funny to read, because it was really World War One that started people down this road of trying to figure out what different parts of the human brain do, in the sense
that there were a lot of people who came back from the war with brain damage, and, as tragic as that was, it provided an opportunity for scientists to try to identify the functions of different brain regions, and it was actually incredibly productive. But one of the frustrations that neuropsychologists faced was that they couldn't really identify exactly what the deficit was that arose from damage to these most frontal parts of the brain; it was just a very difficult thing to pin down. There were
a couple of neuropsychologists who, through a large amount of clinical experience and close observation, started to put their finger on a syndrome that was associated with frontal damage. Actually, one of them
was a Russian neuropsychologist named Luria, whom students of cognitive psychology still read, and what he started to figure out was that
the frontal cortex was somehow involved
in flexibility, in guiding behaviors that required someone to override a habit, or to do something unusual, or to change what they were doing in a very flexible way from one moment to another. So, focused on new experiences, on the way your brain processes and acts in new experiences? Yeah. What later helped bring
this function into better focus was a
distinction between controlled and
automatic behavior, or, in other literatures, habitual behavior versus goal-directed behavior. It's very clear that the human brain has pathways that are dedicated to habits, to things that you do all the time, so they can be automatic and don't require you to concentrate too much, which leaves your cognitive capacity free to do other things. Just think about the
difference between driving when you're
learning to drive versus driving after
you're fairly expert there are brain
pathways that slowly absorb those
frequently performed behaviors so that
they can be habits so that they can be
automatic. That's kind of like the purest form of learning that's happening there, which, and this is kind of jumping ahead, is perhaps why it's most useful for us to focus on it in trying to see how artificially intelligent systems can learn. It's interesting; I do think about this distinction between controlled and automatic, or goal-directed and habitual, behavior a lot in thinking about where we are in AI research. But just to finish the dissertation here: the role of the prefrontal cortex is
generally understood these days sort of
in contradistinction to that habitual domain. In other words, the prefrontal cortex is what helps you override those habits; it's what allows you to say, well, what I usually do in this situation is X, but given the context, I probably should do Y. I mean,
the elbow bump is a great example right
reaching out and shaking hands is probably a habitual behavior, and it's the prefrontal cortex that allows us to bear in mind that there's something unusual going on right now, and in this situation I need to not do the usual thing. The kinds of behaviors
that Luria reported and he built tests
for, to detect these kinds of things, were exactly like this: when I stick out my hand, I want you instead to present your elbow. A patient with frontal damage would have a great deal of trouble with that; somebody proffering their hand would elicit a handshake.
the prefrontal cortex is what allows us
to say, oh no, hold on, that's the usual thing, but I have the ability to bear in mind even very unusual contexts and to reason about what behavior is appropriate there. Just to get a sense: are we humans special in the presence of the prefrontal cortex? Do mice have a prefrontal cortex? Do other mammals that we can study? If not, then how do they integrate new experiences? Yeah,
that's a really tricky question, and a very timely question, because we have revolutionary new technologies for monitoring, measuring, and also causally influencing neural behavior in mice and fruit flies, and these techniques are not fully available even for studying brain function in monkeys, let alone humans,
and so it's, for me at least, a very urgent question whether the kinds of things that we want to understand about human intelligence can be pursued in these other organisms. To put it briefly, there's disagreement. People who study fruit flies will often tell you, hey, fruit flies are smarter than you think,
and they'll point to experiments where fruit flies were able to learn new behaviors, were able to generalize from one stimulus to another in a way that suggests that they have abstractions
that guide their generalization I've had
many conversations in which
I will start by recounting some observation about mouse behavior where it seemed like mice
were taking an awfully long time to
learn a task that for a human would be
profoundly trivial and I will conclude
from that that mice really don't have
the cognitive flexibility that we want
to explain. And then a mouse researcher will say to me, well, hold on,
that experiment may not have worked
because you asked a mouse to deal with
stimuli and behaviors that were very
unnatural for the mouse; if instead you kept the logic of the experiment the same but presented the information in a way that aligns with what mice are used to dealing with in their natural habitats, you might find that a mouse actually has more intelligence than you think. And then they'll go on to show you
videos of mice doing things in their
natural habitat which seem strikingly intelligent, dealing with physical problems: I have to drag this piece of food back to my lair, but there's something in my way, how do I get rid of that thing? So, to sum that up, I think these are open questions. Then, taking a small step back, related to that: you kind of mentioned we're taking a little shortcut by saying it's geographic, that the prefrontal cortex is a region of the brain. But what's
your sense in a bigger philosophical
view, of the prefrontal cortex and the brain in general? Do you have a sense that it's a set of subsystems, in the way we've kind of implied, that are pretty distinct? Or to what degree is it a giant interconnected mess where everything kind of does everything, and it's impossible to disentangle them? I think
there's overwhelming evidence that
there's functional differentiation that
it's clearly not the case that all parts
of the brain are doing the same thing
this follows immediately from the kinds
of studies of brain damage that we were
chatting about before it's obvious from
what you see if you stick an electrode
in the brain and measure what's going on
at the level of neural activity.
having said that there are two other
things to add which kind of I don't know
maybe tug in the other direction
One is that when you look carefully at functional differentiation in the brain, what you usually end up concluding, at least this is my observation of the literature, is that the differences between regions are graded rather than discrete. It doesn't seem easy to divide the brain up into true modules that have clear boundaries and clear channels of communication between them. Does that apply to the prefrontal cortex? Oh,
yeah yeah the prefrontal cortex is made
up of a bunch of different sub regions
the functions of which are not clearly defined and the borders of which seem to be quite vague. And then there's another thing that's popping up in very recent research involving the application of these new techniques: there are a number of studies that
suggest that parts of the brain that we
would have previously thought were quite
focused in their function are actually
carrying signals that we wouldn't have
thought would be there for example
looking in the primary visual cortex
which is classically thought of as
basically the first cortical way station
for processing visual information
basically what it should care about is
you know where are the edges in this
scene that I'm viewing
it turns out that if you have enough
data you can recover information from
primary visual cortex about all sorts of
things like what behavior the animal is engaged in right now and how much reward is on offer in the task that it's pursuing. So it's clear that even regions whose function is pretty well defined at a coarse grain are nonetheless carrying information from very different domains. So the
history of neuroscience is sort of this
oscillation between the two views that
you articulated you know the kind of
modular view and then the big you know
mush view, and I guess we're gonna end up somewhere in the middle, which is unfortunate for our understanding, because there's something about our conceptual system that finds it easy to think about a modular system and easy to think about a completely undifferentiated system, but something that lies in between is confusing. We're gonna have to get used to it, I think. Unless we can understand deeply the lower-level mechanisms of neural communication. Yeah. So on that topic, you
kind of mentioned information. Just to get a sense, and I imagine this is something there's still mystery and disagreement on: how does the brain carry information and signal? What, in your sense, is the basic mechanism of communication in the
brain? Well, I guess I'm old-fashioned, in that I consider the networks that we use in deep learning research to be a reasonable approximation to the mechanisms that carry information in the brain. The usual way of articulating that is to say that what really matters is a rate code: what matters is how quickly an individual neuron is spiking, what the frequency at which it's spiking is, whether it's firing fast or slow; let's put a number on that, and that number is enough to capture what neurons are doing. There's
still uncertainty about whether that's an adequate description of how information is transmitted within the brain. There are studies that suggest that the precise timing of spikes matters; there are studies that suggest that there are computations that go on within the dendritic tree, within a neuron, that are quite rich and structured and that really don't equate to anything that we're doing in our artificial neural
networks. Having said that, I feel like we're getting somewhere by sticking to this high level of abstraction, just the rate. And by the way,
we're talking about the electrical
signal. I remember reading some vague paper somewhere recently arguing that the mechanical signal, like the vibrations or something of the neurons, also communicates. I haven't seen that. Somebody was arguing, this was in a Nature paper or something like that, that the electrical signal is actually a side effect of the mechanical signal. I don't think that changes the story, but it's an interesting idea that there could be a deeper story; it's like in physics with quantum mechanics, there's always a deeper story that could be underlying the whole thing. But you think it's basically the rate of spiking, that that's the lowest-hanging fruit that can get us really far? This is a classical
view. I mean, this is not the only way in which this stance could be controversial, in the sense that there are members of the neuroscience community who are interested in alternatives, but this is really a very mainstream view. The way that neurons communicate is that neurotransmitters arrive, they wash up on a neuron; the neuron has receptors for those transmitters; the meeting of the transmitter with these receptors changes the voltage of the neuron,
and if enough voltage change occurs then
a spike occurs, one of these discrete events, and it's that spike that is conducted down the axon and leads to neurotransmitter release. This is just neuroscience 101; this is the way the brain is supposed to work. Now, what we do when we build artificial neural networks of the kind that are now popular in the AI community is that we don't worry about those individual spikes; we just worry about the frequency at which those spikes are being generated. People talk about that as the activity of a neuron, so the activity of units in a deep learning system is broadly analogous to the spike rate of a neuron.
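A minimal sketch of the rate-code abstraction just described, with invented spike times: a spike train is reduced to one number, the firing rate, and that number plays the role of a unit's activation in an artificial network.

```python
# Toy illustration of the rate-code abstraction: a neuron's spike
# train is reduced to a single number (its firing rate), which is
# treated like the scalar activation of a unit in a neural network.
# The spike times and window here are invented for illustration.

def firing_rate(spike_times, window):
    """Spikes per second over [0, window) -- the 'rate code'."""
    spikes_in_window = [t for t in spike_times if 0.0 <= t < window]
    return len(spikes_in_window) / window

# A neuron that spiked 5 times in 100 ms fires at 50 Hz.
spikes = [0.011, 0.025, 0.042, 0.068, 0.093]  # seconds
rate = firing_rate(spikes, window=0.1)
print(rate)
```

The precise spike timing within the window is deliberately thrown away, which is exactly the simplification being debated above.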
There are people who believe that there are other forms of communication in the brain; in fact, I've been involved in some research recently that suggests that voltage fluctuations that occur in populations of neurons, fluctuations below the level of spike production, may be important for communication. But I'm still pretty old-school, in the sense that I think the things that we're building in AI research constitute reasonable models of how a brain would work. Let me ask, just for fun, a crazy
question, because I can: do you think it's possible we're completely wrong about this basic mechanism of neuronal communication, that information, that thought, is carried in some very different kind of way in the brain? Oh, heck yes. I wouldn't be a scientist if I didn't think there was a chance we were wrong.
But if you look at the history of deep learning research as it's been applied to neuroscience, of course the vast majority of deep learning research these days isn't about neuroscience, but if you go back to the 1980s, there's sort of an unbroken chain of research in which a particular strategy is taken, which is: hey, let's train a deep learning system, a multi-layer neural network, on this task that we trained our monkey on, or this human being on, and then let's look at what the units deep in the system are doing, and let's ask whether what they're doing resembles what we know about what neurons deep in the brain are doing. And over and over and over, that strategy works, in the sense that the learning algorithms that we have access to, which typically center on backpropagation, give rise to patterns of activity, patterns of response, patterns of neuronal behavior in these artificial models that look hauntingly similar to what you see in the brain. And is that a coincidence? At a certain point, it starts looking like such coincidences are unlikely to not be deeply meaningful. Yeah, the circumstantial evidence is overwhelming.
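One common way to make that "does the model resemble the brain" comparison concrete, not necessarily the method of the studies described here, is to correlate the pairwise dissimilarity structure of the two systems' responses, the idea behind representational similarity analysis. A minimal sketch with invented numbers:

```python
# Illustrative sketch (not from any particular paper): comparing a
# model's internal representations to hypothetical neural recordings
# by correlating their pairwise dissimilarity structure. All numbers
# below are invented.

from itertools import combinations
from math import dist, sqrt

def dissimilarity_vector(responses):
    """Euclidean distance between every pair of stimulus responses."""
    return [dist(a, b) for a, b in combinations(responses, 2)]

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

# Responses of 4 "units" to 3 stimuli, in a model and in a brain area.
model = [(1.0, 0.2, 0.1, 0.0), (0.9, 0.3, 0.2, 0.1), (0.0, 0.1, 1.0, 0.9)]
brain = [(2.0, 0.5, 0.3, 0.1), (1.8, 0.6, 0.4, 0.2), (0.1, 0.2, 2.1, 1.9)]

# High correlation between the dissimilarity structures suggests the
# model organizes the stimuli similarly to the brain area.
similarity = pearson(dissimilarity_vector(model), dissimilarity_vector(brain))
print(round(similarity, 3))
```

Note the comparison is indirect: the units don't have to match one-to-one; only the geometry of the responses has to agree.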
But we should always stay open to the table being completely flipped. Yeah, of course. So
you have co-authored several recent
papers that weave beautifully between the worlds of neuroscience and artificial intelligence. Can we just try to dance around and talk about some of them, maybe try to pick out the interesting ideas as they jump to your mind from memory? So, since we're talking about the prefrontal cortex: the 2018, I believe, paper called 'Prefrontal Cortex as a Meta-Reinforcement Learning System'. Is there a key idea that you can speak to from that paper? Yeah, I mean,
the key idea is about meta learning so
What is meta-learning? Meta-learning is, by definition, a situation in which you have a learning algorithm, and the learning algorithm operates in such a way that it gives rise to another learning algorithm. In the earliest applications of this idea, you had one learning algorithm sort of adjusting the parameters of another learning algorithm, but the case that we're interested in in this paper is one where you start with just one learning algorithm, and then another learning algorithm kind of emerges out of thin air. I can say more about what I mean by that, but that's the idea of meta-learning. It relates
to the old idea in psychology of learning to learn: situations where you have experiences that make you better at learning something new. A familiar example would be learning a foreign language. The first time you learn a foreign language, it may be quite laborious and disorienting and novel, but let's say you've learned two foreign languages; the third foreign language obviously is going to be much easier to pick up. Why? Because you've learned how to learn. You know how this goes: okay, I'm gonna have to learn how to conjugate, and so on. That's a simple form of meta-learning, in the sense that there's some slow learning mechanism that's helping you update your fast learning mechanism.
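The earliest framing mentioned above, one learning algorithm adjusting the parameters of another, can be sketched as an outer loop that tunes the learning rate of an inner gradient-descent loop. The quadratic objective and candidate rates below are invented purely for illustration:

```python
# Toy meta-learning sketch: an outer learning algorithm tunes a
# parameter (the learning rate) of an inner learning algorithm.
# Objective and candidate rates are made up for this example.

def inner_learn(lr, steps=20):
    """Inner loop: gradient descent on f(w) = (w - 3)^2 from w = 0."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)   # gradient of (w - 3)^2 is 2(w - 3)
    return (w - 3) ** 2          # final loss: lower is better

def outer_learn(candidate_lrs):
    """Outer loop: pick the learning rate that makes inner learning best."""
    return min(candidate_lrs, key=inner_learn)

best_lr = outer_learn([0.001, 0.01, 0.1, 0.45])
print(best_lr)
```

The contrast drawn in the paper is with the case where no one writes the outer loop explicitly; the second learning algorithm emerges on its own.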
So, from our understanding in the psychology world and in neuroscience of how meta-learning might work in the human brain, what lessons can we draw that we can bring into the artificial intelligence world? Well, the
intelligence world well yeah so we the
origin of that paper was in AI work that
that we were doing in my group we were
we were looking at what happens when you
train a recurrent neural network using
standard reinforcement learning
algorithms but but you train that
network not just in one task but you
train it in a bunch of interrelated
tasks and then you ask what happens when
you give it yet another task
in that sort of line of interrelated
tasks and and what we started to realize
is that a form of meta learning
spontaneously happens in in recurrent
neural networks and and the simplest way
to explain it is to say that a recurrent neural network has a kind of memory in its activation patterns. It's recurrent by definition, in the sense that you have units that connect to other units that connect back again, so you have loops of connectivity, which allows activity to stick around and be updated over time. In psychology and neuroscience we call this working memory, actively holding something in mind. That memory gives the recurrent neural network dynamics: the way the activity pattern evolves over time is inherent to the connectivity of the recurrent neural network. Okay, so that's idea
number one now the dynamics of that
network are shaped by the connectivity
by the synaptic weights and those
synaptic weights are being shaped by
this reinforcement learning algorithm
that you're training the network with. So the punchline is: if you
train a recurrent neural network with a
reinforcement learning algorithm that's
adjusting its weights and you do that
for long enough the activation dynamics
will become very interesting right so
Imagine I give you a task where you have to press one button or another,
left button or right button, and there's some probability that I'm going to give you an M&M if you press the left button, and some probability I'll give you an M&M if you press the other button, and you have to figure out what those probabilities are
just by trying things out but as I said
before instead of just giving you one of
these tasks, I give you a whole sequence: I give you two buttons, you figure out which one's best, and I go, good job, here's a new box, two new buttons, you have to figure out which one's best; good job, here's a new box. Every box has its own probabilities, and you have to figure them out. So you train a recurrent neural network on that kind of sequence of tasks.
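The sequence of "boxes" described here amounts to a distribution of two-armed bandit tasks. The sketch below illustrates that task distribution with a deliberately simple stand-in agent (running reward estimates plus a short forced-exploration phase), not the recurrent meta-RL network from the paper:

```python
# Sketch of the task distribution described above: a sequence of
# two-armed bandit "boxes", each with its own reward probabilities.
# The agent is a simple stand-in (running-mean estimates with initial
# forced exploration), NOT the recurrent meta-RL network.

import random

def run_box(probs, trials=100, explore=10, rng=random):
    """Play one box; return how often the truly better button was pressed."""
    counts, totals = [0, 0], [0.0, 0.0]
    best_arm = max(range(2), key=lambda a: probs[a])
    best_presses = 0
    for t in range(trials):
        if t < explore:
            arm = t % 2                      # explore: alternate buttons
        else:                                # exploit: press current favorite
            arm = 0 if totals[0] / counts[0] >= totals[1] / counts[1] else 1
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
        best_presses += arm == best_arm
    return best_presses / trials

rng = random.Random(0)
# A "task distribution": every box draws fresh reward probabilities.
boxes = [(rng.random(), rng.random()) for _ in range(20)]
scores = [run_box(p, rng=rng) for p in boxes]
avg = sum(scores) / len(scores)
print(avg)  # mostly pressing the better button, well above chance
```

The point made next in the conversation is that the trained recurrent network does this explore-then-exploit transition using only its activity dynamics, with the synaptic weights frozen.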
What happens seemed almost magical to us when we first started realizing what was going on: the slow learning algorithm that's adjusting the synaptic weights, those slow synaptic changes, gives rise to network dynamics that themselves turn into a learning algorithm. In other words, you can tell this is happening by just freezing the synaptic weights, saying, okay, no more learning, you're done, here's a new box, figure out which button is best. And the recurrent neural network will do this just fine: it figures out which button is best, and it transitions from exploring the two buttons to just pressing the one it likes best, in a very rational way. How is that happening? It's happening because the activity dynamics of the network have been shaped by this slow learning process over many, many boxes. So what's happened is that this slow learning algorithm, slowly adjusting the weights, has turned the activity dynamics of the network into their own learning algorithm. And as we were kind
of realizing that this was the thing, it just so happened that the group working on this included a bunch of
neuroscientists, and it started ringing a bell for us, which is to say we thought: this sounds a lot like the distinction between synaptic memory and activity-based memory in the brain, and it also reminded us of the recurrent connectivity that's very characteristic of prefrontal function. So,
this is kind of why it's good to have
people working on AI that know a little
bit about neuroscience and vice-versa
because we started thinking about
whether we could apply this principle to
neuroscience and that's where the paper
came from. So, from the kind of principle of the recurrence that you can see in the prefrontal cortex, you start to realize that it's possible for something like learning-to-learn to emerge from this learning process, as long as you keep varying the environment
sufficiently? Exactly. So the kind of metaphorical transition we made to neuroscience was to think: okay, well, we know that the prefrontal cortex is highly recurrent, we know that it's an important locus for working memory, for activation-based memory, so maybe the prefrontal cortex supports
reinforcement learning. In other words, what is reinforcement learning? You take an action, you see how much reward you got, you update your policy of behavior. Maybe the prefrontal cortex is doing that sort of thing strictly in its activation patterns: keeping around a memory, in its activity patterns, of what you did and how much reward you got, and using that activity-based memory as a basis for updating behavior. But then the question
is: well, how did the prefrontal cortex get so smart? In other words, where did these activity dynamics come from? How did the program that's implemented in the recurrent dynamics of the prefrontal cortex arise? And one answer that became evident in this work was: well, maybe the mechanisms that operate at the synaptic level, which we believe are mediated by dopamine, are responsible for shaping those dynamics.
This may be a silly question, but because several temporal classes of learning are happening here, and learning-to-learn emerges, can you keep building stacks of learning to learn to learn to learn? Basically, abstractions of ever more powerful abilities to generalize, to learn complex rules? Or is that overstretching this kind of mechanism? Well,
one of the people in AI who started
thinking about meta-learning very early on, Jürgen Schmidhuber, sort of cheekily suggested, I think it may have been in his PhD thesis, that we should think about meta-meta-meta-meta-meta-meta-learning, that that's really what's going to get us to true intelligence. Certainly
there's a poetic aspect to it, and it seems interesting and correct that that kind of level of abstraction would be powerful. But is that something you see in the brain? Is it useful to think of learning in this meta-meta-meta way, or is it just meta-learning?
Well, one thing that really fascinated me about this mechanism that we were starting to look at, and other groups started talking about very similar things at the same time, and then a kind of explosion of interest in meta-learning happened in the AI community shortly after that; I don't know if we had anything to do with that, but I was gratified to see that a lot of people started talking about meta-learning. One of the things that I like about the flavor of meta-learning that we were studying was that it didn't require anything special: if you took a system that had some form of memory, the function of which could be shaped by an RL algorithm, then this would just happen.
Yes, right. I mean, there are a lot of meta-learning algorithms that have been proposed since then that are fascinating and effective in their domains of application, but they're engineered; they're things where you had to say, well, if we wanted meta-learning to happen, how would we do that? Here's an algorithm that would do it. But there's something about the kind of meta-learning that we were studying that seemed to me special, in the sense that it wasn't an algorithm; it was just something that automatically happened if you had a system that had memory and it was trained with a reinforcement learning algorithm. In that sense, it can be as meta as it wants to be: there's no limit on how abstract the meta-learning can get, because it's not reliant on a human engineering a particular meta-learning algorithm to get there. And I guess I hope that that's relevant in the brain. I think
there's a kind of beauty in the emergent aspect of it; it's not engineered. Exactly, it's something that just happens. In a sense, you can't avoid this happening: if you have a system that has memory, and the function of that memory is shaped by reinforcement learning, and this system is trained on a series of interrelated tasks, this is gonna happen; you can't stop it. As long as you have certain properties, maybe like a recurrent structure? You have to have memory, but it
actually doesn't have to be a recurrent
neural network. A paper that I was honored to be involved with even earlier used a kind of slot-based memory. Do you remember the title? I think it was 'Meta-Learning with Memory-Augmented Neural Networks'. And it was the same exact story: if you have a system with memory, here it was a different kind of memory, but the function of that memory is shaped by reinforcement learning, here it was the reads and writes that occurred on this slot-based memory, this will just happen.
And this brings us back to something I was saying earlier about the importance of the environment: this will happen if the system is being trained in a setting where there's a sequence of tasks that all share some abstract structure. People sometimes talk about task distributions, and that's something that's very obviously true of the world that humans inhabit. If you just think about what you do every day,
you never do exactly the same thing that you did the day before, but everything that you do has a family resemblance; it shares structure with something that you did before. So the real world is saturated with this property; it's endless variety with endless redundancy, and that's the setting in which this kind of meta-learning happens. And it does seem like, just as in this emergent phenomenon you describe, we're really good at finding that redundancy, finding those similarities, the family resemblance, or, what is it,
Melanie Mitchell was talking about
analogies. We're able to connect concepts together in this same kind of automated, emergent way. There are so many echoes here of psychology and neuroscience, and obviously now reinforcement learning, with recurrent neural networks at the core. If we could talk a little bit about dopamine: you co-authored a really exciting, very recent paper on dopamine and temporal difference learning. Can you describe the key ideas of that paper?
Sure, yeah. One thing I want to pause to do is acknowledge my co-authors on both of the papers we're talking about.

I'll certainly post all their names.

Wonderful. I'm sort of abashed to be the spokesperson for these papers when I had such amazing collaborators on both, so it's a comfort to me to know that they're acknowledged. It's an incredible team, and it's so much fun. In the case of the dopamine paper, we also collaborated with Nao Uchida at Harvard; the paper simply wouldn't have happened without him. But you were asking for a thumbnail sketch?

Yes, a thumbnail sketch, the key ideas, the insights, continuing our kind of discussion here between neuroscience and AI.
Yeah. This was another case of that. A lot of the work that we've done so far is taking ideas that have bubbled up in AI and asking whether the brain might be doing something related, which on the surface sounds like something that's mainly of use to neuroscience. We see it also as a way of validating what we're doing on the AI side. If we can gain some evidence that the brain is using some technique that we've been trying out in our AI work, that gives us confidence that it may be a good idea, that it'll scale to rich, complex tasks, that it'll interface well with other mechanisms.

So you see it as a two-way road, even though a particular paper may be focused on one direction, from neural networks to neuroscience. Ultimately, the discussion, the thinking, the productive long-term aspect of it is the two-way-road nature of the whole thing.
Yeah. We've talked about the notion of a virtuous circle between AI and neuroscience, and the way I see it, that's always been there since the two fields jointly existed. There have been some phases in that history when AI was ahead, and some phases when neuroscience was ahead. I feel like, given the burst of innovation that's happened recently on the AI side, AI is kind of ahead, in the sense that there are all of these ideas for which it's exciting to consider that there might be neural analogs. Neuroscience, in a sense, has been focusing on approaches to studying behavior that are derived from an earlier era of cognitive psychology, and so in some ways it has failed to connect with some of the issues that we're grappling with in AI, like how do we deal with complex environments. But I think it's inevitable that this circle will keep turning, and there will be a moment in the not-too-distant future when neuroscience is pelting AI researchers with insights that may change the direction of our work.

Just a
quick human question, this is very meta: you have parts of your brain that are able to think about both neuroscience and AI. I don't often meet people like that. Let me ask a metaplasticity question: do you think a human being can be good at both AI and neuroscience? On the team at DeepMind, what kind of human can occupy these two realms? Is that something you see everybody should be doing, can be doing, or is it a very special few who can make that jump? Just like we talked about with art history, I would think it's a special person who can major in art history and also consider being a surgeon.

Otherwise known as a dilettante. Easily distracted.
No, I think it does take a special kind of person to be truly world-class at both AI and neuroscience, and I am not on that list. I happen to be someone whose interest in neuroscience and psychology involved using the kinds of modeling techniques that are now very central in AI, and that, I guess, bought me a ticket to be involved in all of the amazing things that are going on in AI research right now. I do know a few people who I would consider pretty expert on both fronts, and I won't embarrass them by naming them, but there are exceptional people out there who are like this. The one thing that I find is a barrier to being truly world-class on both fronts is just the complexity of the technology that's involved in both disciplines now. The engineering expertise that it takes to do truly front-line, hands-on AI research is really considerable.

The learning curve of the tools, the specifics of programming, the kinds of tools necessary to collect the data, manage the data, distribute the compute, all that kind of stuff?

Yeah. And on the neuroscience side, I guess, there's a whole different set of tools.

Exactly, especially with the recent explosion in neuroscience methods.
But having said all that, I think the best scenario for both neuroscience and AI is to have people interacting who live at every point on this spectrum, from exclusively focused on neuroscience to exclusively focused on the engineering side of AI, but to have those people inhabiting a community where they're talking to people who live elsewhere on the spectrum. I may be someone who's very close to the center, in the sense that I have one foot in the neuroscience world and one foot in the AI world, and that central position, I will admit, prevents me, at least someone with my limited cognitive capacity, from having truly deep technical expertise in either domain. But at the same time, I at least hope that it's worthwhile having people around who can see the connections.

The emergent intelligence of the community, nicely distributed, is useful.

Exactly, yeah.
So hopefully. But I mean, I've seen that work out well at DeepMind. Even if you just focus on the AI work that happens at DeepMind, it's been a good thing to have some people around doing that kind of work whose PhDs are in neuroscience or psychology. Every academic discipline has its blind spots, its unfortunate obsessions, its metaphors and its reference points, and having some intellectual diversity is really healthy. People get each other unstuck. I see it all the time at DeepMind, and I like to think that the people who bring some neuroscience background to the table are helping with that.

One of the deepest passions for me, and we spoke off mic a little bit about it, something that I think is a blind spot for at least robotics and AI folks, is human-robot interaction, human-agent interaction. Do you have ideas or thoughts about how we reduce the size of that blind spot? Do you also share the feeling that not enough folks are studying this aspect of interaction?

Well, I'm actually
pretty intensively interested in this issue now, and there are people in my group who've actually pivoted pretty hard over the last few years from doing more traditional cognitive psychology and cognitive neuroscience to doing experimental work on human-agent interaction. There are a couple of reasons that I'm pretty passionately interested in this. One is that it's the outcome of having thought for a few years now about what we're up to. What are we doing? What is this AI research for? What does it mean to make the world a better place? I'm pretty sure that means making life better for humans.

And so how do you make life better for humans? That's a proposition that, when you look at it carefully and honestly, is rather horrendously complicated, especially when the AI systems that you're building are learning systems. You're not programming something that you then introduce to the world and it just works as programmed, like Google Maps or something. We're building systems that learn from experience. That typically leads to AI safety questions: how do we keep these things from getting out of control, how do we keep them from doing things that harm humans? And I hasten to say, I consider those hugely important issues, and there are large sectors of the research community, at DeepMind and of course elsewhere, who are dedicated to thinking hard all day, every day, about that. But there's, I guess I would say, a positive side to this too, which is to ask: what would it mean to make human life better, and how can we imagine learning systems doing that? In talking to my colleagues about that, we reached the initial conclusion that it's not sufficient to philosophize about it. You actually have to take into account how humans actually work, what humans want, the difficulties of knowing what humans want, and the difficulties that arise when humans want different things. And so human-agent interaction has become quite an intensive focus of my group lately, if for no other reason than that, in order to really address that issue in an adequate way, psychology becomes part of the picture.
Yeah. And there are a few elements there. If you focus on solving, let's say, the robotics problem, AGI without humans in the picture, you're missing, fundamentally, the final step. When you do want to help human civilization, you eventually have to interact with humans. And when you create a learning system, just as you said, that will eventually have to interact with humans, the interaction itself has to become part of the learning process. So you can't just, my sense is, and it sounds like your sense is, you can't just watch humans to learn about humans. You have to also be part of the human world; you have to interact with humans.

Yeah, exactly. And
then questions arise that start, imperceptibly but inevitably, to slip beyond the realm of engineering. Questions like: if you have an agent that can do something that you can't do, under what conditions do you want that agent to do it? If I have a robot that can play Beethoven sonatas better than any human, in the sense that the sensitivity, the expression, is just beyond what any human can do, do I want to listen to that? Do I want to go to a concert and hear a robot play? These aren't engineering questions; these are questions about human preference and human culture.

And psychology, bordering on philosophy.

Yeah. And then you start asking, well, even if we knew the answer to that, is it our place as AI engineers to build that into these agents? Probably the agents should interact with humans beyond the population of AI engineers and figure out what those humans want. And then, I referred to this a moment ago, even that becomes complicated, because what if two humans want different things, and you have only one agent that's able to interact with them and try to satisfy their preferences? Then you're into the realm of economics and social choice theory and even politics. So there's a sense in which, if you follow what we're doing to its logical conclusion, it goes beyond questions of engineering and technology and starts to shade imperceptibly into questions about what kind of society you want.
And once that dawned on me, I actually felt, I don't know what the right word is, quite refreshed in my involvement in AI research. It's almost like building this kind of stuff is going to lead us back to asking really fundamental questions: what is the good life? Who gets to decide? And it means bringing in viewpoints from multiple communities to help us shape the way that we live. There's something about it that started making me feel that doing AI research in a fully responsible way could potentially lead to a kind of cultural renewal.

Yeah, it's a way to understand human beings at the individual and societal level. It may become a way to answer all the silly human questions of the meaning of life, all those kinds of things.

Well, even if it doesn't give us a way of answering those questions, it may force us back to thinking about them, and it might restore a certain, I don't know, a certain depth, or even, dare I say, spirituality, to the world.
I'm with you. I think that could be the philosophy of the twenty-first century, the thing that will open the door. I think a lot of AI researchers are afraid to open that door of exploring the beautiful richness of human-agent interaction, human-AI interaction. I'm really happy that somebody like you has opened that door.

One thing I often think about is that the usual schema for thinking about human-agent interaction is this kind of dystopian "robot overlords" story. And again, I hasten to say, AI safety is hugely important work, and I'm not saying we shouldn't be thinking about those risks; I'm totally on board with that.
But having said that, what often follows for me is the thought that there's another kind of narrative that might be relevant. When we think of humans gaining more and more information about human life, the narrative there is usually that they gain more and more wisdom, that they get closer to enlightenment, that they become more benevolent, like the Buddha. That's a totally different narrative. And why isn't it the case that we imagine the AI systems that we're creating that way, that they're going to figure out more and more about the way the world works and the way that humans interact, and they'll become beneficent? I'm not saying that will happen; I don't honestly expect that to happen without setting things up very carefully. But it's another way things could go, right?
Yeah, and I would even push back on that. I believe that most natural human trajectories will lead us towards progress, and yet for AI there's a kind of sense that most trajectories in AI development will lead us into trouble, and we over-focus on the worst case. It's like in computer science: theoretical computer science has had this focus on worst-case analysis. There's something appealing to the human mind at some low level; we don't want to be eaten by the tiger, I guess, so we want to do the worst-case analysis. But the reality is that that shouldn't stop us from actually building out all the other trajectories, the ones potentially leading to all the positive worlds, all the enlightenment, the book "Enlightenment Now" with Steven Pinker and so on, looking generally at human progress. There are so many ways human progress can happen with AI, and I think you have to do that research, you have to do that work: not just the AI safety work of worst-case analysis, how do we prevent that, but the actual tools and the glue and the mechanisms of human-AI interaction that would lead to all the positive outcomes.

Yes, it's a super exciting area, right?
Yeah. We should be spending a lot of our time saying what can go wrong. I think it's harder to see that there's also work to be done to bring into focus the question of what it would look like for things to go right. That's not obvious. We wouldn't be doing this if we didn't have the sense that there was huge potential. We're not doing this for no reason; we have a sense that AGI would be a major boon to humanity. But I think it's worth starting now, even when our technology is quite primitive, asking: well, what exactly would that mean? We can start now with applications that are already going to make the world a better place, like solving protein folding. DeepMind has gotten heavily into science applications lately, which I think is a wonderful move for us to be making. But when we think about AGI, when we think about building fully intelligent agents that are going to be able to, in a sense, do whatever they want, we should start thinking about what we want them to want. What kind of world do we want to live in? That's not an easy question, and I think we just need to start working on it.

And even on the path,
it doesn't have to be AGI, just intelligent agents that interact with us and help us enrich our own existence, on social networks for example, on recommender systems, various intelligent systems. There's so much interesting interaction that's yet to be understood and studied. How do you create, I mean, Twitter is struggling with this very idea, how do you create AI systems that increase the quality and the health of a conversation?

For sure. That's a beautiful human psychology question. And how do you do that without deception being involved, without manipulation being involved, maximizing human autonomy? How do you make these choices in a democratic way? How do we face the fact, and again I'm speaking for myself here, that it's a small group of people who have the skill set to build these kinds of systems, but what it means to make the world a better place is something that we all have to be talking about? The world that we're trying to make a better place includes a huge variety of different kinds of people. How do we cope with that? This is a problem that has been discussed in gory, extensive detail in social choice theory. One thing I'm
really enjoying about the recent direction the work has taken in some parts of my team is that we're reading the AI literature, we're reading the neuroscience literature, but we've also started reading economics and, as I mentioned, social choice theory, even some political theory, because it turns out that it all becomes relevant. But at the same time, we've been trying not to write philosophy papers, trying not to write position papers. We're trying to figure out ways of doing actual empirical research that takes the first small steps toward thinking about what it really means for humans, with all of their complexity and contradiction and paradox, to be brought into contact with these AI systems in a way that really makes the world
a better place.

Often, reinforcement learning frameworks actually allow you to do that with machine learning. That's the exciting thing about AI: it allows you to reduce an unsolvable philosophical problem into something more concrete that you can get a hold of.

Yeah, it allows you to define the problem in some way that allows for growth in the system. You're not responsible for the details: you say, this is generally what I want you to do, and then learning takes care of the rest.
Of course, the safety issues arise in that context, but I think some of these positive issues arise in that context too. What would it mean for an AI system to really come to understand what humans want, with all of the subtleties of that? Humans want help with certain things, but they don't want everything done for them. Part of the satisfaction that humans get from life is in accomplishing things. If there were devices around that did everything for them, I often think of the movie WALL-E, that's dystopian in a totally different way: the machines are doing everything for us. That's not what we want. Anyway, I find that this opens up a whole landscape of research that feels affirmative to me.

Yeah, to me it's one of the most exciting areas, and it's wide open.
We have to, because it's such a cool paper, talk about dopamine.

Oh yeah, okay. I was going to give you a quick summary.

What's the title of the paper?

I think we called it "A Distributional Code for Value in Dopamine-Based Reinforcement Learning."
Yes. So that's another project that grew out of pure AI research. A number of people at DeepMind and a few other places had started working on a new version of reinforcement learning, which was defined by taking something in traditional reinforcement learning and tweaking it. The thing that they took from traditional reinforcement learning was the value signal. At the center of reinforcement learning, at least of most algorithms, is some representation of how well things are going, your expected cumulative future reward, and that's usually represented as a single number. If you imagine a gambler in a casino, and the gambler is thinking, "well, I have this probability of winning such-and-such an amount of money, and I have this probability of losing such-and-such an amount of money," that situation would be represented as a single number: the weighted average of all those outcomes. This new form of reinforcement learning said, what if we generalize that to a distributional representation? Now we think of the gambler as literally thinking, "there's this probability that I'll win this amount of money, and there's this probability that I'll lose that amount of money," and we don't reduce that to a single number. And it had been observed through experiments, through just trying this out, that that kind of distributional representation really accelerated reinforcement learning and led to better policies.

So we're talking about rewards. What's your intuition about why that is, why it works?
Well, it's a kind of surprising historical note, at least it surprised me when I learned it, that this had been tried out in a heuristic way. People thought, gee, what would happen if we tried this? And then, empirically, it had this striking effect, and it was only then that people started thinking, well, gee, why is this working? That's led to a series of studies just trying to figure out why it works, which is ongoing. But one thing that's already clear from that research is that one reason it helps is that it drives richer representation learning. Imagine two situations that have the same expected value, the same weighted-average value. Standard deep reinforcement learning algorithms are going to take those two situations and, in terms of the way they're represented internally, squeeze them together, because the thing that you're trying to represent, their expected value, is the same. So all the way through the system, things are going to be mushed together. But what if those two situations actually have different value distributions? They have the same average value, but different distributions of value. In that situation, distributional learning will maintain the distinction between these two things. So, to make a long story short, distributional learning can keep things separate in the internal representation that might otherwise be conflated or squished together, and maintaining those distinctions can be useful when the system is later faced with some other task where the distinction is important.
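A toy illustration of that point, my own sketch rather than anything from the paper: two situations with the same expected reward but different reward distributions are indistinguishable to a scalar value estimate, while even a crude quantile summary keeps them apart.

```python
import random
import statistics

def quantile_summary(samples, n=5):
    # A crude "distributional" value estimate: a handful of quantiles
    # of the observed returns instead of a single mean.
    s = sorted(samples)
    return [s[int(i / (n - 1) * (len(s) - 1))] for i in range(n)]

rng = random.Random(0)
safe = [0.5] * 1000                                    # always pays 0.5
risky = [rng.choice([0.0, 1.0]) for _ in range(1000)]  # coin flip, mean 0.5

# A scalar (expected-value) estimate squeezes the two situations together...
scalar_safe, scalar_risky = statistics.mean(safe), statistics.mean(risky)
print(scalar_safe, scalar_risky)  # both close to 0.5

# ...while the distributional summary keeps the distinction.
dist_safe, dist_risky = quantile_summary(safe), quantile_summary(risky)
print(dist_safe, dist_risky)
```

A network trained to predict the quantile summary has to keep the two situations separate internally, which is the richer-representation effect being described.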
Before we look at the optimistic and pessimistic dopamine neurons: first of all, what is dopamine? Why is it useful to think about in the artificial intelligence sense? What do we know about dopamine in the human brain? Why is it useful, why is it interesting, and what does it have to do with the prefrontal cortex and learning in general?

Yeah, well,
this is also a case where there is a huge amount of detail and debate, but one currently prevailing idea is that the function of this neurotransmitter, dopamine, resembles a particular component of standard reinforcement learning algorithms, which is called the reward prediction error. I was talking a moment ago about these value representations. How do you learn them? How do you update them based on experience? Well, if you made some prediction about a future reward, and then you get more reward than you were expecting, then probably, retrospectively, you want to go back and increase the value representation that you attached to the earlier situation. If you got less reward than you were expecting, you should probably decrement that estimate.

And that's the process of temporal difference learning?

Exactly. This is the central mechanism of temporal difference learning, which is sort of the backbone of our armamentarium in RL.
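The update being described, raise the estimate after a positive surprise and lower it after a negative one, is the classic temporal-difference rule. A minimal tabular sketch, my own toy example with made-up states and probabilities:

```python
import random

# Two-state chain: A -> B -> terminal. B pays 1 with probability 0.8.
# The value estimates are learned purely from reward prediction errors.
alpha, gamma = 0.05, 1.0
V = {"A": 0.0, "B": 0.0}
rng = random.Random(0)

for _ in range(10000):
    # A -> B: no immediate reward, so the error is driven by V["B"].
    delta = 0.0 + gamma * V["B"] - V["A"]  # reward prediction error
    V["A"] += alpha * delta                # more than expected -> increase
    # B -> terminal: stochastic reward, no successor value.
    r = 1.0 if rng.random() < 0.8 else 0.0
    delta = r - V["B"]
    V["B"] += alpha * delta                # less than expected -> decrease

print(round(V["A"], 2), round(V["B"], 2))  # both settle near 0.8
```

The `delta` quantity here is the surprise signal that dopamine is hypothesized to carry: positive when things go better than predicted, negative when they go worse.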
And this connection between the reward prediction error and dopamine was made in the 1990s, and there's been a huge amount of research that seems to back it up. Dopamine may be doing other things, but this is clearly, at least roughly, one of the things that it's doing. But the usual idea was that dopamine was representing these reward prediction errors, again, as a single number, representing your surprise with a single number. In distributional reinforcement learning, this new elaboration of the standard approach, it's not only the value function that goes beyond a single number; it's also the reward prediction error. And so what happened
was that Will Dabney, one of my collaborators, who was one of the first people to work on distributional temporal difference learning, talked to a guy in my group, Zeb Kurth-Nelson, who's a computational neuroscientist, and said, gee, is it possible that dopamine might be doing something like this distributional coding thing? They started looking at what was in the literature, and then they brought me in, and we started talking to Nao Uchida. We came up with some specific predictions: if the brain is using this kind of distributional coding, then in the tasks that Nao has studied, you should see this, this, and this. And that's where the paper came from. We enumerated a set of predictions, all of which ended up being fairly clearly confirmed, and all of which leads to at least some initial indication that the brain might be doing something like this distributional coding, that dopamine might be representing surprise signals in a way that is not just collapsing everything to a single number, but instead respecting the variety of future outcomes, if that makes sense.

So you're showing, suggesting possibly, that dopamine has a really interesting representation scheme in the human brain for its reward signal.
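One way to get intuition for that prediction, a hand-rolled sketch of the idea rather than the paper's analysis code: give different value predictors different learning rates for positive versus negative prediction errors, and they settle at optimistic or pessimistic points of the reward distribution instead of all converging on its mean.

```python
import random

def learn_value(alpha_pos, alpha_neg, rewards):
    # Scale positive and negative reward prediction errors differently;
    # the asymmetry pushes the estimate toward an expectile of the
    # reward distribution rather than its mean.
    v = 0.0
    for r in rewards:
        delta = r - v  # reward prediction error
        v += (alpha_pos if delta > 0 else alpha_neg) * delta
    return v

rng = random.Random(0)
rewards = [rng.choice([0.0, 1.0]) for _ in range(20000)]  # bimodal, mean 0.5

optimist = learn_value(0.10, 0.02, rewards)   # amplifies positive surprises
pessimist = learn_value(0.02, 0.10, rewards)  # amplifies negative surprises
balanced = learn_value(0.05, 0.05, rewards)   # classic TD: tracks the mean

print(round(optimist, 2), round(pessimist, 2), round(balanced, 2))
```

A population of such predictors, each with its own asymmetry, collectively encodes the whole reward distribution, which is the kind of signature one would look for across dopamine neurons.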
Exactly.

That's fascinating. That's another beautiful example of AI revealing something about neuroscience, suggesting possibilities.

Well, you never know. The minute you publish a paper like that, the next thing you think is, I hope that replicates; I hope we see the same thing in other datasets. But of course several labs are now doing the follow-up experiments, so we'll know soon. But it has been a lot of fun for us to take these ideas from AI, bring them into neuroscience, and see how far we can get.

So we kind of talked
about it a little bit, but where do you see the fields of neuroscience and artificial intelligence heading broadly? What are the possible exciting areas where you can see breakthroughs in the next, let's get crazy, not just three or five years, but the next 10, 20, 30 years, that would make you excited, and that perhaps you'd be part of?

On the
neuroscience side, there's a great deal of interest now in what's going on in AI. And at the same time, in neuroscience, especially the part of neuroscience that's focused on circuits and systems, the really mechanism-focused part, there's been this explosion in new technology, and up until recently, the experiments that have exploited this technology have not involved a lot of interesting behavior. This is for a variety of reasons, one of which is that in order to employ some of these technologies, if you're studying a mouse, you actually have to head-fix the mouse; in other words, you have to immobilize it. So it's been tricky to come up with ways of eliciting interesting behavior from a mouse that's restrained in this way. But people have begun to create very interesting solutions, like virtual reality environments where the animal can move on a trackball, and as people have begun to explore what you can do with these technologies, I feel like more and more people are asking: let's try to bring behavior into the picture, let's try to reintroduce behavior, which was supposed to be what this whole thing was about.

And I'm hoping that those two trends, the growing interest in behavior and the widespread interest in what's going on in AI, will come together to open a new chapter in neuroscience research, where there's a kind of rebirth of interest in the structure of behavior and its underlying substrates, but where that research is informed by the computational mechanisms that we're coming to understand in AI. If we can do that, then we might be taking a step closer to that utopian future we were talking about earlier, where there's really no distinction between psychology and neuroscience: neuroscience is about studying the mechanisms that underlie whatever it is the brain is for, and what is the brain for? It's for behavior. I feel like we could maybe take a step toward that now, if people are motivated
in the right way.

You also asked about AI, so that was the neuroscience side, and a place like DeepMind especially is interested in both branches. What about the engineering of intelligent systems?
I think one of the key challenges that a lot of people are seeing now in AI is to build systems that have the kind of flexibility that humans have, in two senses. One is that humans can be good at many things; they're not just expert at one thing. And they're also flexible in the sense that they can switch between things very easily, and they can pick up new things very quickly, because they're very able to see what a new task has in common with other things that they've done. And that's something that our AI systems, you know, blatantly do not
have. There are some people who like to argue that deep learning and deep RL are simply wrong for getting that kind of flexibility. I don't share that belief, but the simple fact of the matter is we're not building things yet that do have that kind of flexibility, and I think the attention of a large part of the AI community is starting to pivot to that question: how do we get that? That's going to lead to a focus on abstraction; it's going to lead to a focus on what in psychology we call cognitive control, which is the ability to switch between tasks, the ability to quickly put together a program of behavior that you've never executed before but that, you know, makes sense for a particular set of demands. It's very closely related to what the prefrontal cortex does on the neuroscience side. So I think it's going to be an interesting new chapter. So that's the reasoning side and
cognition side. But let me ask the overly romanticized question: do you think we'll ever engineer an AGI system that we humans would be able to love, and that would love us back, so that it has that level and depth of connection? I love that question, and it relates closely to things that I've been thinking about a lot lately, you know, in the context of this human-AI research. There's social psychology research, in particular by Susan Fiske at Princeton, in the department where I used to work, where she
dissects human attitudes toward other humans into a sort of two-dimensional scheme. One dimension is about ability: you know, how able, how capable is this other person? But the other dimension is warmth. So you can imagine another person who's very skilled and capable but is very cold, right? And you might have some reservations about that other person. But there's also a kind of reservation that we might have about another person who elicits in us, or displays, a lot of human warmth but is, you know, not good at getting things done. We reserve our greatest esteem really for people who are both highly capable and also quite warm, right? That's like the best of the best. I mean, this isn't a normative statement I'm making, this is just an empirical statement: these are the two dimensions along which people seem to size other people up. In AI
research, we really focus on this capability thing. Like, we want our agents to be able to do stuff: this thing can play Go at a superhuman level, that's awesome. But that's only one dimension. What about the other dimension? What would it mean for an AI system to be warm? And, you know, I don't know, maybe there are easy solutions here, like we can put a face on AI systems that's cute, it has big ears. I mean, that's probably part of it, but I think it also has to do with a pattern of behavior. What would it mean for an AI system to display caring, compassionate behavior in a way that actually made us feel like it was for real? That we didn't feel like it was simulated, we didn't feel like we were being duped? To me, you
know people talk about the Turing test
or some descendant of it, and I feel like that's the ultimate Turing test: is there an AI system that can not only convince us that it knows how to reason and knows how to interpret language, but that we're comfortable saying, yeah, that AI system is a good guy, you know, on the warmth scale? Yeah, whatever warmth is, we kind of intuitively understand it, but we don't even understand it explicitly enough yet to be able to engineer it. Exactly, and that's an open scientific question. You've kind of alluded to it several times in the context of human-AI interaction; that's the question that should be studied, and probably one of the most important questions. And in human-to-human interaction, we
humans are so good at it. Yeah, you know, it's not just that we're born warm. I suppose some people are warmer than others, given, you know, whatever genes they managed to inherit, but there are also learned skills involved, right? I mean, there are ways of communicating to other people that you care, that they matter to you, that you're enjoying interacting with them. Yeah, right. And we learn these skills from one another, and it's not out of the question that we could build engineered systems. I think it's hopeless, as you say, that we could somehow hand-design these sorts of behaviors, but it's not out of the question that we could build systems where we instill in them something that sets them out in the right direction, so that they end up learning what it is to interact with humans in a way that's gratifying to humans. I mean, honestly, if that's not
where we're headed, I don't know. I think it's exciting as a scientific problem, just as you described. I honestly don't see a better way to end than talking about warmth and love. And Matt, I don't think I've ever had such a wonderful conversation where my questions were so bad and your answers were so beautiful. So I deeply appreciate it, I really do. Very fun. As you can probably tell, there's something I like about kind of thinking outside the box, so it's been fun to do that. Awesome, thanks so much for doing it.
Thanks for listening to this conversation with Matt Botvinick, and thank you to our sponsors: the Jordan Harbinger Show and Magic Spoon low-carb keto cereal. Please consider supporting this podcast by going to jordanharbinger.com/lex and also going to magicspoon.com/lex and using code LEX at checkout. Click the links, buy all the stuff. It's the best way to support this podcast and the journey I'm on in my research and the startup. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, support on Patreon, follow on Spotify, or connect with me on Twitter at Lex Fridman, again spelled miraculously without the E, just F-R-I-D-M-A-N. And now,
let me leave you with some words from the neurologist V.S. Ramachandran: how can a three-pound mass of jelly that you can hold in your palm imagine angels, contemplate the meaning of infinity, and even question its own place in the cosmos? Especially awe-inspiring is the fact that any single brain, including yours, is made up of atoms that were forged in the hearts of countless far-flung stars billions of years ago. These particles drifted for eons and light-years until gravity and chance brought them together here, now. These atoms now form a conglomerate, your brain, that can not only ponder the very stars that gave it birth, but can also think about its own ability to think and wonder about its own ability to wonder. With the arrival of humans, it has been said, the universe has suddenly become conscious of itself. This, truly, is the greatest mystery of all. Thank you for listening, and hope to see you next time.