Transcript
epQxfSp-rdU • Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3
Kind: captions
Language: en
you've studied the human mind cognition
language vision evolution psychology
from child to adult from the level of
individual to the level of our entire
civilization so I feel like I can start
with a simple multiple-choice question
what is the meaning of life is it A to
attain knowledge as Plato said B to
attain power as Nietzsche said C to
escape death as Ernest Becker said D to
propagate our genes as Darwin and others
have said E there is no meaning as the
nihilists have said F knowing the
meaning of life is beyond our cognitive
capabilities as Steven Pinker said based
on my interpretation twenty years ago
and G none of the above
I'd say A comes closest but I would
amend that to attaining not only
knowledge but fulfillment more generally
that is life health stimulation access
to the living cultural and social world
now this is our meaning of life it's not
the meaning of life if you were to ask
our genes their meaning is to propagate
copies of themselves but that is
distinct from the meaning that the brain
that they lead to sets for itself so to
you knowledge is a small subset or a
large subset it's a large subset but
it's not the entirety of human striving
because we also want to interact with
people we want to experience beauty we
want to experience the richness of
the natural world but understanding
what makes the universe tick is way
up there for some of us more than others
certainly for me that's one of
the top five so is that a fundamental
aspect are you just describing your own
preference or is this a fundamental
aspect of human nature is to seek
knowledge just in your latest book you
talk about the power the
usefulness of rationality and reason and
so on is that a fundamental nature of
human beings or is it something we
should just strive for it's both it is
we're capable of striving for it because
it is one of the things that make us
what we are Homo sapiens wise men we are
unusual among animals in the degree to
which we acquire knowledge and use it to
survive we make tools we strike
agreements via language we extract
poisons we predict the behavior of
animals we try to get at the workings of
plants and when I say we I don't just
mean we in the modern West but we as a
species everywhere which is how we've
managed to occupy every niche on the
planet and how we've managed to drive
other animals to extinction and the
refinement of Reason in pursuit of human
well-being of health happiness social
richness cultural richness is our
main challenge in the present that is
using our intellect using our knowledge
to figure out how the world works how we
work in order to make discoveries and
strike agreements that make us all
better off in the long run right and you
do that almost undeniably and in a
data-driven way in a recent book but I'd
like to focus on the artificial
intelligence aspect of things and not
just artificial intelligence but natural
intelligence too so twenty years ago in
the book you've written on how the mind
works
you conjecture and again I might
misinterpret things you can
correct me if I'm wrong but you
conjecture that human thought in the
brain may be a result of a
massive network of highly interconnected
neurons so from this interconnectivity
emerges thought compared to artificial
neural networks we use for machine
learning today is there something
fundamentally more complex mysterious
even magical about the biological neural
networks versus the ones we've been
starting to use over the past 60 years
and that have become a success in the past 10
there is something
a little bit mysterious about the human
neural networks which is that each one
of us who is a neural network knows that
we ourselves are conscious conscious not
in the sense of registering our
surroundings or even registering our
internal state but in having subjective
first-person present-tense experience
that is when I see red it's not just
different from green but there's
a redness to it that I feel whether
an artificial system would experience
that or not I don't know and I don't
think I can know that's why it's
mysterious if we had a perfectly
lifelike robot that was behaviorally
indistinguishable from a human would we
attribute consciousness to it or ought
we to attribute consciousness to it and
that's something that it's very hard to
know but putting aside
that largely philosophical question
the question is is there some difference
between the human neural network and the
ones that we are building in artificial
intelligence that will mean that on the
current trajectory we're not going to reach
the point where we've got a lifelike
robot indistinguishable from a human
because the way their so-called
neural networks are organized is
different from the way ours are
organized there's overlap but I
think there are some big
differences the current
neural networks the current so-called deep
learning systems are in reality not all
that deep that is they are very good at
extracting high order statistical
regularities but most of the systems
don't have a semantic level a level of
actual understanding of who did what to
whom why where how things work what
causes what do you think that kind
of thing can emerge artificial
networks are you know so much smaller in the
number of connections and so on than the
current human biological networks but do
you think to get to
consciousness or to get to this higher
level semantic reasoning about things do
you think that can emerge with just a
larger network with a more richly
weirdly interconnected network
separating consciousness because is
consciousness even a matter of
complexity that's a really good question
yeah you could have you could sensibly
ask the question of whether shrimp are
conscious for example they're not
terribly complex but maybe they feel
pain so let's just put that
part of it aside but I think sheer
size of a neural network is not enough
to give it structure and knowledge but
if it's suitably engineered then then
why not
that is with our neural networks natural
selection did a kind of equivalent of
engineering of our brains so I don't
know that there's anything mysterious in the
sense that no system made out of
silicon could ever do what a human brain
can do I think it's possible in
principle whether it'll ever happen
depends not only on how clever we are in
engineering these systems but whether
we even want to whether that's even
a sensible goal that is you can ask the
question is there any locomotion system
that is as good as a human well we
kind of want to do better than a human
ultimately in terms of legged locomotion
there's no reason that humans should be
our benchmark there are tools that
might be better in some ways it may
be not that we can't duplicate
a natural system but that at some point
it's so much cheaper to use the natural
system that we're not going to invest
more brainpower and resources so for
example we don't really have
an exact substitute for wood we still
build houses out of wood we still make
furniture out of wood we like the look
we like the feel wood has certain
properties that synthetics don't it's
not that there's anything magical or
mysterious about wood it's just that the
extra steps of duplicating everything
about wood are something we just haven't
bothered with because we have wood likewise
with cotton I mean I'm wearing cotton
clothing now it feels much better than
polyester it's not that cotton has
something magic in it and it's not that
if there were we couldn't ever
synthesize something exactly like cotton
but at some point it's just not
worth it we've got cotton and likewise
in the case of human intelligence the
goal of making an artificial system that
is exactly like the human brain is a
goal that
no one's gonna pursue to the bitter end
I suspect because if you want tools that
do things better than humans you're not
going to care whether it does something
like humans so for example if you're
diagnosing cancer in particular
why set humans as your benchmark
but in general I suspect you also
believe that even if the human should
not be a benchmark and we don't want
to imitate humans in these systems
there's a lot to be learned about how to
create an artificial intelligence system
by studying the human yeah I think
that's right
in the same way that to build
flying machines we want to understand the
laws of aerodynamics including as they
apply to birds but not mimic the birds
right but the same laws you have a view on AI
artificial intelligence and safety that
from my perspective is refreshingly
rational or perhaps more importantly has
elements of positivity to it which I
think can be inspiring and empowering as
opposed to paralyzing for many people
including AI researchers the eventual
existential threat of AI is obvious not
only possible but obvious and for many
others including AI researchers the
threat is not obvious so Elon Musk is
famously in the highly concerned about
AI camp saying things like AI is far
more dangerous than nuclear weapons and
that AI will likely destroy human
civilization so in February you said
that if Elon was really serious about AI
the threat of AI he would stop
building self-driving cars which he's
doing very successfully as part of Tesla
then he said Wow if even Pinker doesn't
understand the difference between narrow
AI like a car and general AI when the
latter literally has a million times
more compute power and an open-ended
utility function humanity is in deep
trouble so first what did you mean by
the statement that Elon Musk should
stop building self-driving cars if he's
deeply concerned
well that's not the last time that Elon Musk has fired
off an intemperate tweet well we live in
a world where Twitter has power yes yeah
I think that there are two kinds
of existential threat that have been
discussed in connection with artificial
intelligence and I think that they're
both incoherent one of them is vague
fear of AI takeover that just as we
subjugated animals and less
technologically advanced peoples so if
we build something that's more advanced
than us it will inevitably turn us into
pets or slaves or domesticated animal
equivalents
I think this confuses intelligence with
a will to power it so happens that
in the intelligent system we are most
familiar with namely Homo sapiens we are
products of natural selection which is a
competitive process and so bundled
together with our problem-solving
capacity are a number of nasty traits
like dominance and exploitation and
maximization of power and glory and
resources and influence there's no
reason to think that sheer
problem-solving capability will set that
as one of its goals its goals will be
whatever we set its goals to be and as
long as someone isn't building a
megalomaniacal artificial intelligence
and there's no reason to think that it
would naturally evolve in that direction
now you might say well what if we gave
it the goal of maximizing its own power
source well that's a pretty stupid goal
to give an autonomous system you don't
give it that goal I mean that's just
self-evidently idiotic so if you look
at the history of the world there's been
a lot of opportunities where engineers
could instill in a system destructive
power and they choose not to because
that's the natural process of
engineering well except weapons I mean if
you're building a weapon its goal is to
destroy people and so I think there are
good reasons not to build certain
kinds of weapons I think building
nuclear weapons was a massive mistake
probably do you think so
maybe pause on that because that is one
of the serious threats do you think that
it was a mistake in the sense that it
should have been stopped
early on or do you think it's just an
unfortunate event of invention
was it possible to stop
I guess is the question it's hard
to rewind the clock because of course it
was invented in the context of World War
two and the fear that the Nazis might
develop one first and once it was
initiated for that reason it
was hard to turn off especially since
winning the war against the Japanese and
the Nazis was such an overwhelming goal
of every responsible person that there's
just nothing that people wouldn't have
done then to ensure victory it's quite
possible if World War two hadn't
happened that nuclear weapons wouldn't
have been invented we can't know but I
don't think it was by any means a
necessity any more than some of the
other weapon systems that were
envisioned but never implemented like
planes that would disperse poison gas
over cities like crop dusters or systems
to try to create earthquakes and
tsunamis in enemy countries to weaponize
the weather weaponize solar flares all
kinds of crazy schemes that we
thought better of I think analogies
between nuclear weapons and artificial
intelligence are fundamentally misguided
because the whole point of nuclear
weapons is to destroy things the point
of artificial intelligence is not to
destroy things so the analogy is
misleading so of the two fears about artificial
intelligence you mentioned the first one
was the intelligence being power hungry
yeah the system that we design ourselves
where we give it the goals goals are
external to the means to attain the
goals if we don't design an artificial
intelligence system to maximize
dominance then it won't maximize
dominance it's just that we're so familiar
with Homo sapiens where these two traits
come bundled together particularly in
men that we are apt to confuse high
intelligence with a will to power but
that's just an error the other fear is
that we'll be collateral damage that
will give artificial intelligence a goal
like make paperclips and it will pursue
that goal so brilliantly that
before we can stop it it turns us into
paperclips we'll give it the goal of
curing cancer and it will turn us into
guinea pigs for lethal experiments or
give it the goal of world peace and its
conception of world peace is no people
therefore no fighting and so it'll kill
us all now I think these are utterly
fanciful in fact I think they're
actually self-defeating they first of
all assume that we're going to be so
brilliant that we can design an
artificial intelligence that can cure
cancer but so stupid that we don't
specify what we mean by curing cancer in
enough detail that it won't kill us in
the process and it assumes that the
system will be so smart that it can cure
cancer but so idiotic that it
can't figure out that what we mean by
curing cancer is not killing everyone so
I think that the collateral damage
scenario the value alignment problem
is also based on a misconception so one
of the challenges of course we don't
know how to build either system
currently nor are we even close to
knowing of course those things can
change overnight but at this time
theorizing about it is very challenging
in either direction so that's
probably at the core of the problem
without the ability to reason about the
real engineering at hand
your imagination runs away with things
exactly but let me sort of ask what do
you think was the motivation the thought
process of Elon Musk so I build autonomous
vehicles I study autonomous vehicles I
studied Tesla autopilot I think it is
one of the greatest currently deployed
large-scale applications of
artificial intelligence in the world it
has potentially a very positive impact
on society so how does a person who's
creating this very good quote/unquote
narrow AI system also seem to be so
concerned about this other general AI
what do you think is the motivation
there what do you think is the thing
really you probably have to ask him
he is notoriously
flamboyant and impulsive as we have
just seen to the detriment of his own
goals and the health of his company so I
don't know what's going on
in his mind you probably have to ask him
but I don't think
the distinction between special-purpose
AI and so-called general AI is relevant
in the same way that special-purpose AI
is not going to do anything conceivable
in order to attain a goal all
engineering systems are designed
to trade off across multiple goals
when we built cars in the first place we
didn't forget to install brakes because
the goal of a car is to go fast it
occurred to people yes you want to go
fast but not always so you build in
brakes too likewise if a car is going to
be autonomous and we program
it to take the shortest route to the
airport it's not going to take the
diagonal and mow down people and trees
and fences because that's the shortest
route that's not what we mean by the
shortest route when we program it and
that's just what an intelligent
system is by definition it takes into
account multiple constraints the same is
true in fact even more true of so-called
general intelligence that is if it's
genuinely intelligent it's not going to
pursue some goal single-mindedly
omitting every other consideration and
collateral effect that's not artificial
general intelligence that's
artificial stupidity I agree with you by
the way on the promise of autonomous
vehicles for improving human welfare
I think it's spectacular and I'm
surprised at how little press coverage
notes that in the United States alone
something like 40,000 people die every
year on the highways vastly more than
are killed by terrorists and we
spent a trillion dollars on a war to
combat deaths by terrorism which number half a
dozen a year whereas year in year
out 40,000 people are massacred on the
highways which could be brought down to
very close to zero so I'm with you on
the humanitarian benefit let me just
mention that as a person who's
building these cars it is a little
bit offensive to me to say that
engineers would be clueless enough not
to engineer safety into systems I often
stay up at night thinking about those
40,000 people that are dying and
everything I try to engineer is to
save those people's lives so every new
invention that I'm super
excited about everything in all
the deep learning literature at CVPR
conferences and NIPS everything I'm
super excited about is all grounded in
making things safe and helping people so I just
don't see how that trajectory can all of a
sudden slip into a situation where
intelligence will be highly negative you
know you and I certainly agree on that
and I think that's only the beginning of
the potential humanitarian benefits of
artificial intelligence there's been
enormous attention to what are we going
to do with the people whose jobs are
made obsolete by artificial intelligence
but very little attention given to the
fact that the jobs that will soon be made
obsolete are horrible jobs the fact that
people aren't going to be picking crops
and making beds and driving trucks and
mining coal these are you know soul
deadening jobs and we have a whole
literature sympathizing with the people
stuck in these menial mind deadening
dangerous jobs if we can eliminate them
this is a fantastic boon to humanity now
granted you solve one problem and
there's another one namely how do we give
these people a decent income but if
we're smart enough to invent machines
that can make beds and put away dishes
and handle hospital patients well I
think we're smart enough to figure out
how to redistribute income to apportion
some of the vast economic savings to the
human beings who will no longer be
needed to make beds okay Sam Harris
says that it's obvious that eventually
AI will be an existential risk he's one
of the people who says it's obvious we don't
know when the claim goes but eventually
it's obvious and because we don't know
when we should worry about it now this
is a very interesting argument in my
eyes so how do we think about
time scale how do we think about
existential threats when we know
so little about the threat unlike
nuclear weapons perhaps about this
particular threat which could happen
tomorrow
right but very likely won't or
is likely to be a hundred years
away so what do we do
do we ignore it how do we talk about
it do we worry about it how do we
think about it it's a threat
that we can imagine it's within the
limits of our imagination but not within
the limits of our understanding sufficient
to accurately predict it but what
what is the existential threat you have
in mind AI enslaving us or turning
us into paperclips I think the most
compelling one from the Sam Harris view
would be the paperclip situation yeah
I mean I just think it's totally
fanciful just don't build such a system
don't give it that goal first of all the
code of engineering is you don't
implement a system with massive control
before testing it now perhaps the
culture of engineering will radically
change then I would worry I don't see
any signs that engineers will suddenly
do idiotic things like put a system
that they haven't tested first in control
of an electrical power plant or all of
these scenarios not only imagine an
almost magically powered intelligence
you know including things like cure
cancer which is probably an incoherent
goal because there's so many different
kinds of cancer or bring about world
peace I mean how do you even specify
that as a goal but the scenarios also
imagine some degree of control of every
molecule in the universe which not only
is itself unlikely but we would not
start to connect these systems to
infrastructure without testing
as we would any kind of engineering
system now maybe some engineers will be
irresponsible and we need
regulatory and legal responsibilities
implemented so that engineers don't do
things that are stupid by their own
standards but I've never seen
enough of a plausible scenario of
existential threat to devote large
amounts of brain power to forestall
it so you believe in the sort of
power of engineering of
reason as you argue in
this book of reason and science to
be the very thing that guides the
development of new technology so it's
safe and also keeps us safe it's the
same and you know granted the same
culture of safety that currently is part
of the engineering mindset for airplanes
for example so yeah I don't think
that should be thrown out the
window and no untested all-powerful
system should be suddenly implemented
but there's no reason to think one will be
and in fact if you look at the progress
of artificial intelligence it's been you
know it's been impressive especially in
the last ten years or so but the idea
that suddenly there'll be a step
function that all of a sudden before we
know it it will be all powerful that
there'll be some kind of recursive
self-improvement some kind of Foom is
also fanciful we certainly by the
technology that we that were now
impresses us such as deep learning when
you train something on hundreds of
thousands or millions of examples
they're not hundreds of thousands of
problems of which curing cancer is a
typical example and so the kind of
techniques that have allowed AI to
improve in the last five years are not
the kind that are going to lead to this
fantasy of exponential sudden
self-improvement so I think
it's kind of magical thinking
it's not based on our understanding of
how AI actually works now give me a
chance here so you said fanciful magical
thinking in his TED talk Sam Harris says
that thinking about AI killing all human
civilization is somehow fun
intellectually now I have to say as a
scientist engineer I don't find it fun
but when I'm having beer with my non-ai
friends there is indeed something fun
and appealing about it like talking
about an episode of black mirror
considering if a large meteor is headed
towards Earth as if we were just told a large
meteor is headed towards Earth something
like this
and can you relate to this sense of fun
and do you understand the psychology of
it yeah that's a good question
I personally don't find it fun
I find it kind of actually a waste of
time because there are genuine threats
that we ought to be thinking about like
like pandemics like like a cyber
security vulnerabilities like the
possibility of nuclear war and certainly
climate change this is enough to fill
many conversations without it and I think
there Sam did put his finger on
something namely that there is a
community sometimes called the
rationality community that delights in
using its brain power to come up with
scenarios that would not occur to mere
mortals to less cerebral people so there
is a kind of intellectual thrill in
finding new things to worry about that
no one has worried about yet
I actually think though that not
only is it a kind of fun that doesn't
give me particular pleasure but I think
there can be a pernicious side
to it namely that you overcome people
with such dread such fatalism that
there's so many ways to die to
annihilate our civilization that we may
as well enjoy life while we can there's
nothing we can do about it if climate
change doesn't do us in then runaway
robots will so let's enjoy ourselves now
we've got to prioritize we have to look
at threats that are close to certainty
such as climate change and distinguish
those from ones that are merely
imaginable but with infinitesimal
probabilities and we have to take into
account people's worry budget you can't
worry about everything and if you sow
dread and fear and terror and
fatalism it can lead to a kind of
numbness well it's just that these
problems are overwhelming and the
engineers are just gonna kill us all so
let's either destroy the entire
infrastructure of science technology or
let's just enjoy life while we can so
there's a certain line of worry which
I'm worried about a lot of things
engineering there's a certain line of
worry when you cross a lot across
that it becomes paralyzing fear as
opposed to productive fear and that's
kind of what they're highlighting there
exactly right and we
know that human effort is not well
calibrated against risk because
a basic tenet of cognitive psychology is
that perception of risk and hence
perception of fear is driven by
imaginability not by data and so we
misallocate vast amounts of resources to
avoiding terrorism which kills on
average about six Americans a year with
the one exception of 9/11 we invade
countries we invent entire new
departments of government with massive
massive expenditure of resources and
lives to defend ourselves against a
trivial risk whereas guaranteed risks
you mentioned one of them
traffic fatalities and even
risks that are not here but are
plausible enough to worry about
like pandemics like nuclear war receive
far too little attention in
presidential debates there's no
discussion of how to minimize the risk
of nuclear war but lots of discussion of
terrorism for example and so I
think it's essential to calibrate our
budget of fear worry concern planning to
the actual probability of harm yep so
let me ask this question then
so speaking of imaginability you said
it's important to think about reason and
one of my favorite people who who likes
to dip into the outskirts of reason
through fascinating exploration of his
imagination is Joe Rogan oh yes so
Joe used to believe a
lot of conspiracies and through reason
has stripped away a lot of those beliefs
so it's fascinating actually
to watch him through rationality kind of
throw away the ideas of Bigfoot and
9/11 conspiracies chemtrails I'm not sure exactly I don't
know what he still believes in yet
but he no longer believes in those
that's right no he's become a
real force for good yeah so you were
on the Joe Rogan podcast in February and
had a fascinating conversation but as
far as I remember didn't talk much about
artificial intelligence I will be on his
podcast in a couple weeks
Joe is very much concerned about the
existential threat of AI I'm not sure if
you're aware of this and this is why I was hoping
that you would get into that topic and
in this way he represents quite a lot of
people who look at the topic of AI from
a 10,000-foot level so as an exercise of
communication you said it's important to
be rational and reason about these
things let me ask if you were to coach
me as an AI researcher about how to speak
to Joe and the general public about AI
what would you advise well the short
answer would be to read the sections
that I wrote in Enlightenment Now
about AI but a longer answer would be I
think to emphasize and I think you're
very well positioned as an engineer to
remind people about the culture of
engineering that it really is safety
oriented in another discussion in
Enlightenment Now I plot rates of
accidental death from various causes
plane crashes car crashes occupational
accidents even death by lightning
strikes and they all plummet because the
culture of engineering is how do you
squeeze out the lethal risks death
by fire death by drowning death by
asphyxiation all of them drastically
declined because of advances in
engineering and I gotta say I did not
appreciate it until I saw those graphs and
it is exactly because of people like you
who stay up at night thinking oh my god
is what I'm
inventing likely to hurt people and who
deploy ingenuity to prevent that from
happening now I'm not an engineer
although I spent 22 years at MIT so I
know something about the culture of
engineering my understanding is that
this is the way this is what you think
if you're an engineer
and it's essential that that culture not
be suddenly switched off when it comes to
artificial intelligence so in fact
that could be a problem but is there any
reason to think it would be switched off
I don't think so and one problem is there's not
enough engineers speaking up for this
way for this excitement for the
positive view of human nature that what
you're trying to create is
positive like everything we try to
invent is trying to do good for the
world but let me ask you about the
psychology of negativity it seems just
objectively not considering the topic it
seems that being negative about the
future makes you sound smarter than being
positive about the future regardless
of topic am I correct in that observation
and if so why do you think that
is yeah I think there is
that phenomenon that as Tom Lehrer
the satirist said always predict the
worst and you'll be hailed as a prophet
it may be part of our overall negativity
bias we are as a species more attuned to
the negative than the positive we
dread losses more than we enjoy gains
and that might open up a space for
prophets to remind us of harms and risks
and losses that we may have overlooked
so I think there is that
asymmetry so you've written some of my
favorite books all over the place so
starting from enlightenment now to the
better angels of our nature
blank slate how the mind works the
one about language the language instinct
Bill Gates a big fan too said of your most
recent book that it's his new favorite
book of all time so for you as an author
what was the book early on in your life
that had a profound impact on the way
you saw the world certainly this book
enlightenment now is influenced by David
Deutsch's the beginning of infinity a
rather deep reflection on knowledge and
the power of knowledge to improve the
human condition with bits of
wisdom such as that problems are
inevitable but problems are solvable
given the
knowledge and that solutions create new
problems that have to be solved in their turn
that's I think a kind of wisdom about
the human condition that influenced the
writing of this book there's some books
that are excellent but obscure some of
which I have on a page of my website I
read a book called the history of force
self-published by a political scientist
named James Payne on the historical
decline of violence and that was one of
the inspirations for the better angels
of our nature
what about early on if we look back
to when you were maybe a teenager I
loved a book called one two three infinity
when I was a young adult I read that book
by George Gamow the physicist very
accessible and humorous explanations of
relativity of number theory of
dimensionality of higher-dimensional
spaces in a way that I think is still
delightful seventy years after it was
published I also liked the time life
science series these were books that
would arrive every month my mother
subscribed to each one on a different
topic one would be on electricity one
would be on forests one would be on
evolution and then one was on the
mind and I was just intrigued that there
could be a science of mind and that
book I would cite as an influence as
well then later on you fell in love with
the idea of studying the mind that's one
thing that grabbed you it was one of the
things I would say I read as a
college student the book reflections on
language by Noam Chomsky who spent most of
his career here at MIT Richard Dawkins
two books the blind watchmaker and The
Selfish Gene were enormously influential
mainly for the content but
also for the writing style the ability
to explain abstract concepts in lively
prose Stephen Jay Gould's first collection
ever since Darwin also an excellent example
of lively writing George Miller a
psychologist that most psychologists are
familiar with came up with the idea that
human memory has a capacity of seven
plus or minus two chunks that's
his biggest claim to fame but he
wrote a couple of books on language and
communication that I read as an
undergraduate again beautifully written
and intellectually deep wonderful Steven
thank you so much for taking the time
today my pleasure thanks a lot Lex