Transcript
HGY1vf5H1z4 • MEGATHREAT: The Dangers Of AI Are WEIRDER Than You Think! | Yoshua Bengio
Kind: captions
Language: en
want to start with a quote from Ilya
Sutskever. For people that don't know,
he's a co-founder of OpenAI. He said it
may be that today's large neural
networks are slightly conscious. So I
want to pose that question to you are
computers becoming conscious right now?
I think it's a question that doesn't
make much sense, because we don't even
have a clear scientific understanding of
what conscious means.
So based on that I would say no there
are lots of properties of our
Consciousness that are missing what it
means to be conscious in other words
what sort of computations are going on
in our brain when we become conscious of
something
and how that is related to notions, for
example, of self, or relations to
others, how our thoughts emerge and how
they're related to each other. All kinds
of clues
we have about Consciousness including
how it's implemented in neural circuits
that are completely missing in large
language models all right as we as we
think about Consciousness from an
evolutionary standpoint we think about
its utility
And for people that haven't heard
consciousness defined before, I think
the easiest way to explain it is: it
feels like something to be a human. And
so the question is, does it feel like
something to be a machine, and
the most important question I think as
we think about the dangers of AI and
what's coming is does it matter
is it additional utility for it to feel
like something to be a human or to be a
machine do you agree that that's going
to matter in terms of goal orientation
in terms of quote unquote wanting to do
something as we think about our AI you
know is it going to take over are we
going to be dealing with Killer Robots
or am I totally off base with that? My
group put out a paper just in the last
couple of months,
and we propose a theory that is
anchored in how brains compute.
so the theory has to do with the
dynamical nature of the brain in other
words, you have 80 billion neurons and
their activity is changing over time.
The trajectory that your brain goes
through, as all these neurons change
their activity,
tends to converge towards some
configuration when you're becoming
conscious that convergence has
mathematical implications that
would suggest that what we store in our
short-term memory are these thoughts
that are discrete but compositional, in
other words think of a short sentence,
and it's also something
ineffable which means
it's very hard to translate into words, and
there are good reasons for that it's
just the
uh it would take a huge number of words
to be able to translate the the
trajectory that state of your brain
which is a very very high dimensional
object into words it's just impossible
essentially
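The convergence he describes, neural activity settling into a discrete configuration, can be illustrated with a classic toy model, a Hopfield-style attractor network. This is my choice of illustration, not the model from his paper: Hebbian weights store one binary pattern, and a noisy state falls back into it.

```python
import random

# Toy attractor dynamics (a Hopfield-style network, purely illustrative):
# Hebbian weights store one binary pattern, and a noisy starting state
# converges back to it, a discrete configuration the dynamics settle on.
random.seed(0)
N = 16
pattern = [random.choice([-1, 1]) for _ in range(N)]

# Hebbian weight matrix: W[i][j] = p_i * p_j / N, zero diagonal.
W = [[(pattern[i] * pattern[j] / N) if i != j else 0.0 for j in range(N)]
     for i in range(N)]

def converge(state, steps=20):
    """Repeatedly update every unit by the sign of its weighted input."""
    for _ in range(steps):
        new = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
               for i in range(N)]
        if new == state:
            break
        state = new
    return state

noisy = pattern[:]
for i in range(3):          # corrupt a few units
    noisy[i] = -noisy[i]
print(converge(noisy) == pattern)   # the state falls back into the attractor
```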
so even though we may communicate with
language we may have a different
interpretation of what this means and
in particular a different subjective
experience, because our life experience
has been different; we've learned
different ways of interpreting the
world. Okay, if
Consciousness is a byproduct of the
feeling I get when my particular brain
is honing in on a thought that there is
a neural pattern that becomes
recognizable
the thing I think that becomes
important, and the reason that I think
this is important as we think about
artificial intelligence potentially
becoming killer robots, is my big thing
with AI has always been: AI has to want
something, it has to want an outcome.
Not necessarily. Interesting, let me
finish that sentence and then we'll pick
that
apart but if I'm right and AI has to
want something and that's certainly how
humans behave then I understand the
utility of this ineffable feeling that
you're talking about that we call
consciousness because
for humans to make a decision and know
what direction to go in we must have
emotion if you selectively damage the
region of the brain that controls
emotion people cannot make decisions
they can tell you all the rational
reasons why they should eat fish instead
of beef or beef instead of fish but they
can't then actually decide and do it so
we need that feeling where this thing
is more desirable than that thing. And
so my thinking has always been, as it
relates to AI, that if AI doesn't want
something, if from an emotional
standpoint it doesn't feel like anything
to be a robot, they will never have the
final decision-making capability to care
enough to take over the world.
and so that's where it's like if it
becomes conscious and it suddenly feels
like something to be a robot then
they're going to be motivated in a
direction that direction could be bad it
could be good whatever but they're going
to be motivated in a direction now if
they are like humans
but if they never become conscious or it
never feels like anything I would think
they would be much like they are now
where it's like well it could be this it
could be that, if you've ever talked to
ChatGPT, which of course you have, but
that feels like it would sort of be a
perpetual state of affairs. What might I
be getting wrong?
my belief is that you're talking about
two things that are actually quite
separate as if they are one. So wanting
something having goals and getting some
kind of internal or external reward for
achieving those goals is something that
we already do in machine learning you
know reinforcement learning is all based
on this and you don't need subjective
experience for that
so these are like really distinct
abilities
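His point that goal-seeking needs no subjective experience is exactly what reinforcement learning demonstrates: a few lines of tabular Q-learning produce behavior that looks like "wanting" to reach a goal. A minimal sketch, with a toy environment of my own invention:

```python
import random

# Tabular Q-learning on a 5-state corridor. The agent ends up reliably
# "wanting" to reach the rightmost state, purely because that transition is
# rewarded; no subjective experience is involved anywhere.
N_STATES, ACTIONS = 5, (-1, +1)          # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(500):                     # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < eps:        # explore sometimes...
            a = random.choice(ACTIONS)
        else:                            # ...otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = 0.0 if s2 == N_STATES - 1 else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(greedy)   # learned policy: always move right
```

The learned values make the agent act goal-directed, which is the distinction being drawn: reward-driven behavior is routine machine learning, separate from any question of experience.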
subjective experience is related to
thoughts that we discussed earlier we
could have machines that have something
like thoughts and potentially if we
implement it similarly to how it is in
our brain they might have subjective
experience it doesn't mean that they
need to have goals I think we can build
machines that that have these
capabilities in other words they can
help us solve problems by telling us
what is the problem, what is a good
scientific understanding of what is
going on, and what might be better
solutions, but they're not trying to
achieve anything except be as truthful
as possible to the data they have
observed. What then is the disaster
scenario of something that can pass the
Turing test, that you're worried enough
that you're saying look we need to treat
this the way that we would treat
anything else dangerous, whether that's,
sorry, climate change or whether that's
nuclear weapons, to put it on that
level. Just at the Turing test level,
give me the disaster scenario. We already
have trolls right that are trying to
influence people on the internet social
media
but those are humans, and you can't
scale the number of trolls very easily;
it would be too expensive, and maybe
people would not want to do it even if
you paid them.
but you can scale AI with just more
compute power
So you could have AI trolls. I mean, I
think there already exist AI trolls, but
they are stupid; it's easy to interact
with them a little bit and see they're
not human, they're repetitive and so on.
And so now we get to the point where
you're going to have AI trolls that
essentially invade our social media, or
even our email. And in fact they could
do better than that: it could be
personalized. So right now
it's a little bit difficult for a human
troll to have a good personal
understanding of every person that they
hit on
that to know their history I mean it
would just take too much time for them
to study you
and multiplied by a billion people
but an AI system that could just have
access to all of the interactions that
you've had the videos where you spoke
the texts that's available on the
internet
they could know you a lot better right
so how could that be used
well
it could be used to hit on the right
buttons for you to change your political
opinion on something
it could be used to even fool you into
thinking you're in a conversation with
someone you know, because they can know
you and they can know your friend, and
they can impersonate your friend, at
least over text.
So I don't think we have these things
yet, but they're just one small step
away from having these capabilities.
as I was thinking through the same
problem
I was thinking here is a terrifying
example dear parents AI is going to
reach out to you mimicking your child
asking for money and so it's not a
Nigerian prince anymore it's Mom uh I
something happened at school whatever
they talk in their language they
reference things that you you don't
think that they could have possibly put
out there but of course if it's if the
AI is good at image recognition and it
knows that you guys were on a beach
seven years ago like it could it could
replicate things in in the form of a
memory that you would never believe that
anybody else could possibly know but we
leak especially kids leak so much data
out into social media that to your point
that AI would be able to have so much
context so at my last company we got
socially engineered and they convinced
us to wire 50 Grand and when we went
back and looked at the emails back and
forth between our finance department
and the CEO, it was so believable. It
was obviously a person writing, but it
was writing like they would write to
each other, and
I was just I was really flabbergasted
And so to think... a human could do
that, but to your point it's very hard
for them to get that amount of context,
it would take so much time. But when AI
is doing it, and it can churn through
everything that those two people had
ever said to each other
ever online uh that gets really scary
really fast. Okay, so if we did this
pause, per the letter that you guys
wrote, and we paused for six months, and
we were going to hold a convention in
that time, and all governments were
there. Yoshua, you're up on stage, and your
job isn't to tell us what to do but it's
to open the conversation in the right
place
where would you open that conversation?
What do you want us focused on? In
terms of... I'm guessing it's like we
need to limit this or something along
those lines. Where do you begin?
I don't know for sure exactly how these
Technologies could be used you and I can
like make up things; maybe some are
going to be easier than we thought, some
could be harder,
but there's so much uncertainty about
how bad it can turn that prudence is
something that we need to bring into our
decision making, individually, as
nations, and at the planet level,
because we're potentially going to be
facing these attacks.
Yeah, that would be my main message:
the technology has reached a point where
it can be very damaging, and there's too
much unknown of how this can happen,
when it will happen, and even the
strongest experts, even the people who
built the latest systems, can't tell
you.
It means that we have to get our act
together, and mostly it's going to come
from governments. So we need those
people to get quickly educated, and we
need to also have scholars, experts, not
just AI experts but social scientists,
legal scholars, psychologists, because
this is about the psychology of how this
could be used, how to exploit people's
weaknesses, in order to do the work, the
research. Also, what sort of precautions
do we need?
There are very simple things that we
can do very quickly, for example
watermarks and content origin display.
Watermarks just means that a company,
say OpenAI, that puts out their
software, could easily put out another
piece of software that anybody could run
that can test with 99.99% confidence
whether a text came from their system or
not. So humans wouldn't see the
difference,
but for a machine that has the right
code it's very easy,
if their system is instrumented
properly. In other words, they kind of
sneak in some bits of information that
you can't notice; statistically there is
no difference, but the chances of having
this particular sequence of words by
accident would be very, very unlikely,
and would go to zero quickly as the
length of the message increases. So
watermarks are easy to put in,
technically speaking,
and they would say: this text comes
from this company, this version,
whatever. Okay, so a piece of software
running on your computer would be able
to say, oh by the way, the text that you
gave me to read is from this company,
blah blah blah.
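The detection scheme he sketches can be made concrete. Here is a toy version of a statistical text watermark, in the spirit of published "green list" schemes; all details here are invented for illustration. A keyed hash of the previous word splits the vocabulary in half, the generator only emits "green" words, and the detector counts how often consecutive pairs satisfy that property.

```python
import hashlib
import random

# Toy statistical watermark (illustrative only): a hash of the previous word
# marks half the vocabulary "green"; a watermarking generator emits only
# green words, and the detector counts how often that property holds.
def is_green(prev_word, word):
    h = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return h[0] % 2 == 0          # ~half the vocabulary is green per context

def green_fraction(words):
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

random.seed(0)
vocab = [f"w{i}" for i in range(200)]

wm = ["w0"]                        # "watermarked" text: green words only
for _ in range(300):
    wm.append(random.choice([w for w in vocab if is_green(wm[-1], w)]))

human = [random.choice(vocab) for _ in range(300)]   # ignores the green list

print(green_fraction(wm), green_fraction(human))     # ~1.0 vs ~0.5
```

This is why the statistical signature vanishes for short messages and becomes overwhelming as the text gets longer, exactly the length effect described above.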
And then we need that information to be
displayed, because of course being able
to detect that it's coming from an AI
system is one thing, but when you have a
user interface it should also be
mandatory. Like if I'm on social media
in particular, and I'm interacting with
some character out there online, I need
to know that that character is not a
human,
and so that must be displayed if I get
uh a picture or a video or a text in an
email
I need my email software to tell me:
warning, this is coming from, you know,
OpenAI GPT 5.6. Okay, so I'm going to
push back with the obvious thing and I
think I won't even have to play devil's
advocate here. Maybe I'm not more
pessimistic than you, but I am in the
"the toothpaste is out of the tube and
there's no getting it back in" camp. So
as a way to move all this forward, let's
you and I actually debate the reality of
all this. So I'm at the governmental
meeting you start saying that my
immediate reaction is Yoshua China is
going to develop this if we don't if we
put the brakes on this they're not going
to and this is a winner take all
scenario we cannot allow ourselves to
get behind
what say you
It's a good concern,
and that's why we have to get China
around the table as well and Russia and
all the countries that may have the
capability to do this. But Russia right
now feels hemmed into a corner; Putin is
literally intimating that he's going to
use nuclear weapons.
There's no universe... like, we've
already tried financial sanctions;
that's caused them to start trading in
non-dollar denominations. They're
grouping up with China, Brazil, South
Africa, India. They don't care; they're
going to use that to their advantage. In
fact, even bluffing would be a way
smarter play for him, to say no, no,
we're going to keep doing it. Even if he
wasn't, even if they're like backwaters,
it would be wise of him to say: NATO, if
you don't immediately back off we're
going to unleash a troll Farm the likes
of which you've never seen we're going
to completely destroy democracy in the
western world
Yeah, so first of all, we can protect
ourselves without necessarily hampering
the research. I think people
misunderstood the letter; it never said
stop AI research.
it's mostly about these very large
systems
that can be deployed in the public and
then used potentially in the various
ways that we have to be careful with
it's a tiny tiny sliver of the whole
thing that we're doing
Second, in the short term we do have to
protect the public in our societies
against things like trolls and cyber
attacks that can exploit AI. Third, I
don't know, this is not my comfort zone
here in terms of diplomacy.
You and me both, but it's fun.
But my guess is that the authoritarian
governments are probably as scared of
this technology, but for different
reasons.
so why are they scared because
the same AI systems that could perturb
our democracies could also challenge
their power
In other words, imagine AI trolls being
able to defeat the protections of the
Chinese firewall, interacting with
people and putting democratic ideas in
their heads in China. Well, that would
not be something that that government
would probably like to see.
um and in fact I think China has been
the fastest moving on regulation
not for the same reasons as we are
so they are afraid of this
So I think they will come to the table.
But again, it's not my specialty; at
least there's a chance that they might
be willing to talk. And remember,
um the nuclear treaties were uh
worked on and signed right in the middle
of the Cold War
so
so long as each party recognizes that
they might have something worse to lose
by not entering those discussions I
think there's a chance we can
have a global coordination and we have
to work even if it's hard we have to
work on it yeah I don't I'm not so
worried about the hard part as I am what
is the natural reaction when you have a
very difficult dangerous thing and
history tells me that we don't come to
the table to sign the non-proliferation
agreement until we have proliferated so
far and we have so many missiles pointed
at each other that we finally go okay
let's not let this go beyond any more
and let's not let it go out to other
countries like we're perfectly fine
being in a stalemate with each other and
I worry that a similar kind of reaction
will be had here but I take your point
that this is not an area where either of
us are an expert as much as I find it
utterly fascinating to pursue that line
of thought but I I want to now go back
to what would we do to actually begin to
limit this stuff so we need to get
people thinking hey this is dangerous
that's clear but then the watermark
thing to me works only for people that
agree that they're going to do it
but is there a way so taking the instead
of trying to get people to not do things
how do we build defensive things that
even when somebody's trying to hack the
system so I doubt you know this about me
but we're building a video game and so
one of the things you have to think
about is this game people will attempt
to hack it like that that is just it
goes without saying so rather than me
trying to ask everybody hey please don't
hack video games like literally it's the
dumbest thing ever for the gamers to
hack the games is stupid you end up
ruining the fun that game will die out
and then people will try to invent a
whole new game far better for everybody
to just let's all agree that we're not
going to hack it but it human nature is
is what it is and that's never going to
work so what they do is they create an
adversarial approach where it's like I
find the best hackers in the world to
come in to try to hack this game and
then I figure out what I would have to
do to defeat that. So what would an
adversarial setup look like in AI, when
someone's trying not to watermark, but I
can still figure out who that came from?
Is there a signature or something like
that that we could
identify
Watermarks are the easy thing, and I
agree they will only be done by the
legitimate actors.
People have already been working on
machine learning trained to detect text
or images that come from other machine
learning systems, but these detectors
are not nearly as good. But yes, this is
already being developed, and presumably
there's going to be a lot more effort in
that direction, and we need that as Plan
B. The Plan A is already to reduce the...
like right now it's just too easy: you
can have an API and just build right on
top of ChatGPT.
So yeah, we should do all these things.
By the way, the kind of adversarial
approach that you're talking about is,
from what I hear and read, also what
OpenAI has been doing, and companies
like Google have been doing: they hire
people to try to break their system as
much as they can, you know, red teams.
And that's good; we need to continue
doing that.
But maybe we need to make sure the
guidelines for doing that are shared
across the board, and we ensure all
companies have that sort of red-teaming
before it's released to the public, for
example.
yeah
Because you asked what we can do in the
short term at the beginning of your
question: Canada has a bill that is
going to pass into law probably in the
spring, that may be the first one around
the world on AI, and it has a nice
feature which hopefully other countries
will imitate, which is that the law
itself is fairly simple. It states a
number of principles,
and then it leaves the details of what
exactly needs to be enforced to
regulation. And the reason this is good
is because it's much easier for
governments to change regulation;
regulation can be changed like this, you
don't need to go back to Parliament. And
so you could have a much more adaptive
legislative system, including the law
and the regulation, and that's going to
be super important because
the nefarious uses that we didn't think
about are going to come up, and we need
to react quickly. If we have to go back
to Parliament, it's going to take two
years; that's not going to work. We need
to have a system that's very adaptive in
terms of legislation.
Yeah, that is inevitable. That brings
me back to: we're in this situation
because I think people are surprised at
how rapidly AI is advancing. How did we
get caught off guard? Someone like you
has been in this for so long, you knew
the rate of change.
What happened? Is it just that we could
not anticipate, as we scaled the data
up, how fast the machine would learn?
What is the X? We were surprised that
the machine did X quickly. What was X?
Passing the Turing test, in other words
manipulating language well enough that
it can fool us.
Sorry, what I'm asking is: what allowed
it to do that in a way that caught us
off guard?
well that's interesting right it didn't
require any new science; it's
essentially scale that did it.
Do you think consciousness is a
function of scale? No, I don't think so.
Some people think so, there are theories
around that, but I think scale is
probably useful, but there are some very
specific qualitative features of how we
become conscious that would work even at
smaller scales.
um
so yeah scale is important simply
because
the job that we're asking these
computers to do when they answer
questions
is computationally
very demanding
And this comes from... I have a blog
post where I talk about large language
models and some of their limitations.
The issue here is that if you take
almost any problem in computer science
that you can write down formally, like
try to optimize this or that, or find
the answer to this or that question, for
almost all of these questions the
optimal solution is intractable, meaning
it would take an exponential amount of
computation compared with how big the
question is.
And so it's like the optimal neural net
that can answer your questions, that can
reason properly and so on, is
exponentially big, which means we can't
have it, but the bigger our neural net,
the better it approximates this.
so there's a sense in which bigger is
better because of that even with
problems that look simple. So as an
example to illustrate what I mean,
consider the problem of playing the game
of Go.
the rules of the game are fairly simple
you can write a few lines of code that
check the rules and tell you how many
points you get and so on
The neural net that can play Go and
really win, in other words follow the
rules and exploit them in order to
figure out what is the optimal move and
so on: the neural nets we have now that
play better than humans are also huge,
okay. And
it's just a property of many computer
science problems: even when the
knowledge needed to describe the problem
is small, the size of the machine that's
necessary to answer questions and take
decisions that are optimal is very big.
so I think that's the reason why we need
big neural Nets that's why we have a big
brain even if the amount of knowledge
that's involved is small now in addition
the amount of knowledge that's necessary
to understand the world around us is
also big
But I think the biggest part of what
our brain does is inference. This is the
technical term meaning: given knowledge,
how do you answer questions properly,
optimize, or take decisions that are
good given that knowledge.
Okay, is inference the ability to apply
a pattern that I saw in the past to a
novel problem? Yes, that's part of
inference. In classical AI, things were
very clearly divided between knowledge
and inference.
So knowledge was people having typed a
bunch of rules and facts, so the
knowledge was not learned, it was
handcrafted.
And inference was: well, you have some
search procedure that looks at how to
combine these pieces of knowledge, these
facts and rules, in order to answer your
question. And we know that's NP-hard,
that's like exponentially hard, and so
we use approximations; it's never
perfect and so on. But people didn't use
neural nets in those days; they used
classical computer science algorithms
that try to approximate this, like A*.
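The A* he mentions is worth seeing concretely: a classical search that approximates exhaustive inference by steering with a heuristic. A minimal sketch on a toy grid of my own making:

```python
import heapq

# Classical heuristic search: A* combines the path cost so far with an
# admissible heuristic (Manhattan distance) to approximate exhaustive
# search while expanding far fewer states. Toy 5x5 grid, illustrative only.
def a_star(start, goal, walls, size=5):
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, position, path)
    seen = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

path = a_star((0, 0), (4, 4), walls={(1, 1), (2, 2), (3, 3)})
print(len(path) - 1)   # number of moves in a shortest path: 8
```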
Now we have neural nets, and neural
nets can do this approximate inference;
they can be trained to do a really good
job at searching for good answers to
questions given that piece of knowledge.
How does it define good? I always
assumed that what AI was doing was
trying to guess, effectively, the next
letter or the next word, so
based on all the patterns that it had
seen. So it's like: I've seen questions
like this before, and here are the
answers that have been rewarded, in that
a human has told me that it likes this
answer better than that answer, and the
pattern recognition of the machine,
combined with the human ranking of those
responses, gives us the way that the AI
gets from that question to this answer.
Am I missing something?
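The "guess the next word" intuition can be boiled down to its simplest possible form, counting which word follows which. Real LLMs use neural nets over subword tokens, and the human-preference ranking mentioned here is a separate fine-tuning stage on top; this sketch only shows the counting idea.

```python
from collections import Counter, defaultdict

# Next-word prediction in its most stripped-down form: count which word
# follows which in a corpus, then predict the most frequent continuation.
corpus = "the cat sat on the mat and the cat ran".split()

bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def predict(word):
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))   # "cat": it follows "the" twice, "mat" only once
```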
Yeah, what you're saying makes sense,
but there's also a lot of knowledge we
have that can be distilled, for example
through education; we do it through
books, encyclopedias. That's not all the
knowledge we have, but let me try to put
it this way: Wikipedia is way smaller
than your brain.
Smaller than my brain? Yeah, smaller in
the number of bits that are needed to
encode it, versus the number of bits
that are needed to encode all the
synaptic weights in your brain. Got it,
yep. Which is huge orders of magnitude
greater.
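That gap can be put in rough numbers. A back-of-envelope sketch; all figures here are loose public estimates of my own choosing, not numbers from the conversation:

```python
# Back-of-envelope comparison (all figures are loose public estimates):
wikipedia_bits = 20e9 * 8            # English Wikipedia text: roughly 20 GB
synapses = 86e9 * 10_000             # ~86B neurons x ~10k synapses per neuron
brain_bits = synapses * 4            # assume a few bits per synaptic weight
print(brain_bits / wikipedia_bits)   # roughly a 20,000-fold gap
```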
So if we were just talking about this
kind of knowledge, which is not
everything obviously (physical
intuitions and so on are another kind
that we can't put in Wikipedia), but if
we just talk about that kind of
knowledge, you would want a very big
brain just to be able to answer
questions that are consistent with that
knowledge. That's what I meant.
Okay. Now, that's not the way we train
our large language models, by the way.
The way we train them is we look at text
that presumably is more or less
consistent with that knowledge. Well,
that's not even the case, because people
are not truthful and they say all kinds
of things, but even if it were, then by
imitating that text, like predicting the
next word and so on, we implicitly
encapsulate the underlying knowledge,
which let's say is Wikipedia.
So again, the argument is: scale is
important because many problems require
doing computation that is intractable if
you want to really get the right answer,
and so we need these really large neural
nets to do a good job of approximating
how to compute the answer.
okay so now I'm gonna have to get into
the nitty-gritty a little bit this will
be really 101 for you but might be
certainly will be instructive for me and
hopefully many others
To say that a neural network is large,
what do we mean? Are we just
daisy-chaining GPUs, CPUs?
When I think about the brain, the brain
is broken into these hyper-specialized
regions. So for instance vision: this
part of vision tracks motion, and I can
selectively damage the motion center of
your brain and now you see everything in
snapshots. There are things that deal
with corners, so you can selectively
damage the part of your brain that
detects corners; same thing for the
parts that detect straight lines, curved
lines. It's all these hyper-specific
little bits and pieces,
And my understanding of a neural
network is that it isn't that
hyper-specialized; it's a lot of the
same thing over and over and over. Help
me understand what it means to be a
large neural network.
Okay, so you're right that the brain
seems to have a very specialized and
modular structure, as in different parts
of cortex. Especially when we look at
what neurons do in different parts, we
see that they're rather specialized.
It's not perfectly easy to identify what
a given neuron does, but we get a sense
of what it's about.
And it's also true of our large neural
nets, but to a lesser extent. So people
have been trying to give a name to what
each particular unit in a large neural
net is doing,
and we can do that by checking when it
turns on, what kind of input was
present. So if we look at a lot of the
things that make this particular unit
turn on, and we ask humans what's the
category that this belongs to, then
we're often able to give it a name. At
least that has been done a lot for image
processing neural nets, because that's
easy: sometimes you could say, well,
it's this part of the image and this
kind of object.
for text I know there's some papers
doing that
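The unit-naming probe he describes, collecting the inputs that most strongly activate one unit and letting a human name the pattern, can be sketched. Here the "unit" is a hand-built horizontal-edge detector standing in for a learned neuron; everything in this snippet is illustrative.

```python
import numpy as np

# Interpretability probe: find which inputs most strongly activate one unit.
# The "unit" is a hand-built horizontal-edge filter on 3x3 patches,
# standing in for a learned neuron (illustrative only).
rng = np.random.default_rng(1)
unit_weights = np.array([[ 1.0,  1.0,  1.0],
                         [ 0.0,  0.0,  0.0],
                         [-1.0, -1.0, -1.0]])

patches = rng.normal(size=(1000, 3, 3))                      # candidate inputs
activations = np.tensordot(patches, unit_weights, axes=([1, 2], [0, 1]))
top = patches[np.argsort(activations)[-5:]]                  # what it "likes"

# The favorite patches are bright on top and dark on the bottom, so a human
# inspecting them could name this unit a "horizontal edge detector".
print(top[:, 0, :].mean() > top[:, 2, :].mean())   # True
```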
Now, I do think that our brain is more
modular, with more specialization, than
what we currently see in these models.
By the way, cortex is a uniform
architecture. The part of your brain
that is cortex, which is thought to be
the part that's more modern in evolution
and really essential for advanced
cognitive abilities,
is all the same texture; it's all the
same kind of units repeated all over the
place. And depending on your experience,
or the kinds of brain accidents that you
may have, a different part of cortex
will latch onto a different job. So
these are more or less replaceable
pieces of hardware, like our neural
nets.
There are other pieces in the brain
that are not cortex that seem to be much
more specialized, like hippocampus and
hypothalamus and so on; I'm at the edge
of my knowledge now. That was certainly
useful information, but I want to push a
little bit farther.
What I'm trying to wrap my head around
is: I have a vague understanding of how
the brain works, very specialized. I do
not understand how we scale a neural
network. Unless you're saying that
each... okay, I was going to say each
node, and then I realized that to me a
node is either a GPU or a CPU, but I
actually don't know if that's true. So
first I would need to understand what is
a node inside of a neural net, and then
how are the different parts of the
neural net programmed to do a
specialized thing. We'll start there.
Okay, all right. I'm going to start with
the end: they're not programmed to do a
specialized thing, that emerges through
learning. Whoa, whoa, whoa.
That's true of the brain and that's
true of neural nets: you don't tell this
part of the neural net, you'll be
responsible for vision, and this part,
you'll be responsible for language. But
that happens.
yes you get specialization that happens
whoa. because they collaborate to solve
the problem, they're different pieces
that's how learning works? like, even a
simple neural net from 1990 does that?
how complex is that underlying code? is
it really basic but somehow has these
incredibly complex emergent properties
or is it incredibly sophisticated
very simple
the complexity emerges because
you have all of these degrees of
freedom and you have a powerful way to
train each of these degrees of
freedom, these synaptic weights, so that
collectively they optimize what you want
which is like predicting the piece of
text that comes next properly
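What Bengio describes, a simple procedure that nudges every synaptic weight so that collectively they optimize an objective, is gradient descent. A minimal sketch under my own assumptions: a toy two-layer net on XOR stands in for next-token prediction, and none of the numbers come from any real system.

```python
import numpy as np

# Hypothetical toy: a tiny two-layer network trained by gradient descent.
# Each weight is a "degree of freedom"; every step gives each one a small
# corrective nudge so that, collectively, they minimize the loss.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
losses = []
for step in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # cross-entropy loss (epsilon avoids log(0))
    losses.append(float(-(y * np.log(p + 1e-12)
                          + (1 - y) * np.log(1 - p + 1e-12)).mean()))
    # backward pass: gradient of the loss with respect to every weight
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # the small nudges
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0] as the loss falls
```

The "simple code" is the forward/backward loop above; nothing in it says which hidden unit handles which case, yet the units divide up the XOR problem among themselves.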
um but let me go back to the hardware
question
the hardware we use currently to train
our artificial neural Nets is very
different from the brain they're very
very very different
um we don't know how to build Hardware
that would be as efficient as the brain
in terms of energy
and all the compute that it can squeeze
into a few watts, and we wish we
could, so lots of people are trying to
figure out how to build circuits that
would be as efficient computationally as
the brain
um
another difference is that
the brain is highly decentralized, like
at the level of neurons, and we've got
like 80 billion of them, decentralized
memory and computation
the traditional CPU
has
memory completely separated from compute
and you have a bus that transfers
information from one to the other to do
the computation in the little
CPU
that's very different from how the brain
is organized where every neuron has a
bit of memory and a bit of compute
now people doing Hardware have been
working to build chips that would have
something that's more decentralized and
more like the brain and there are
several companies doing this sort of
things
they haven't yet
you know reached a point where it can
beat a GPU. so a GPU is a kind of hybrid
thing where
it's really the same CPU pattern but
instead of having one CPU you've got
5,000
and they each have their little memory
but there's also some shared memory and
it was designed initially for graphics
but it turned out that
for many of the kinds of neural nets that
we wanted to do it was a pretty good
computational architecture, but it has
its own limitations
energy-wise it's like a huge waste
compared to the brain as I said earlier
and a large part of that waste is
because you have all that traffic still
between memory you know places that
contain memory and and places that do
compute
so it's much more parallel than the good
old CPU
but much less parallel than the brain
hmm
you're so deep in this it probably
doesn't freak you out as much as it
freaks me out but this is uh like as I
really start to try to wrap my head
around what is happening this feels
deeply mysterious now I've heard
um people say that one of the things is
freaking them out and this is people
deep deep in AI one of the things that
they find unnerving is that they don't
understand what the neural network is
doing they don't understand how it came
up with a given answer
is
how is that possible
it's just a fundamental property of
systems that learn
and that learn not
like a set of simple recipes, like you
would learn a recipe in your
kitchen, but learn
something very complicated
that cannot be reduced to a few formulas
uh like how to walk or how to speak or
how to translate or how you go from
speech Acoustics to sequence of words
these tasks
cannot be easily
uh
done by traditional programming
but if you put a machine that has that
can like approximate any function to
some degree of precision so big a big
neural net
and you tweak each of the parameters of
that machine
billions of times
it can learn to do what you want
but then
you don't really understand how it does
it. you
understand the code that specifies how
this machine computes, but the actual
computation it does depends on what it
has learned, which is based on lots and
lots of experience
so maybe a good analogy is like our own
intuition these machines are like
intuition machines so what I mean is
this you know
how to act in different contexts like
for example how to climb stairs
but you can't explain it to a machine
you can't write a program, people have
tried, roboticists have tried, you can't
write a program that does that
one reason is
it's all happening in the
unconscious, right, but there's a more
fundamental reason it's all happening in
the unconscious: it's just too big, it's a
very very complicated program that's
running in your brain
and the only way that you can acquire
that skill reasonably is by trial
and error and practice, and maybe some
evolutionary
pressure that
initializes your weights close to
something that's needed to learn to
walk
um
so things that we do intuitively that
need a lot of practice
are exactly like what those machines are
learning
they can't explain it, we can't explain
our own intuition
we just know this is how we should do
it
and it's knowledge that's so complex
that we cannot
put it in a few formulas or a few
sentences
that's a major point: there are
very complicated things
that can't be easily put into
verbalizable form, but they can still be
discovered, acquired through learning
through practice, through repetition
doing the exercise again and again
I have a grandson who's been learning to
walk in the last few months
you know he was stumbling a lot and and
going again and again and again and
after a few months now he's pretty good
he's not like us yet
but it's months and months of practice
and
getting better gradually
through lots and lots of practice that's
how we train those neural Nets and
that's why we can't explain why they
give this particular answer they're just
like well I know this is the answer but
I can't explain to you because it's too
complicated I have like
500 billion weights that really are the
explanation. do you want those 500
billion weights? what are you going to do
with that
okay let's start teasing this apart so
one of the more interesting things in
what you just said is going to highlight
the difference between what humans do
and what machines do and why
um until there is a breakthrough and I
always love saying this stuff in front
of experts so you can strike me down if
you think I'm crazy but I think one of
the reasons that a breakthrough is going
to be required and that we're not just
going to be able to scale our way to
artificial general intelligence and I've
completely heard you that AI passing a
Turing test opens up a Pandora's box
that is utterly terrifying in terms of
its ability to dysregulate
humanity's ability to function well as
a hive
heard but now
the reason I think there's going to need
to be a breakthrough is that the reason
that your grandson is able to get better
over time
isn't just the calculus of balance it's
that by doing it he's building
stabilizing muscles and so his muscles
are getting stronger in areas that they
didn't need to be strong in when he was
crawling so you get this biological
feedback loop of oh I see what I'm going
to have to do part of the repetition
isn't just locking it into my brain part
of the repetition is that I'm going to
need to develop the muscle fibers and
the strength now how much of that is
mediated by the brain in a part of the
brain that's subconscious is a huge
question and certainly gets to the
complexity in your 500 billion parameters
and all that the other part is that his
brain is reconfiguring neuronal
connections and it's making some of
those connections more efficient through
a process called myelination so it's
wrapping the fatty tissue to sheath
different connections just like an
electrician would do and now it's it's
got this incredible biological feedback
loop of I have a desire I'm goal
oriented I want to do this thing this
thing is walk now
how the interplay of I want to walk
because I see my parents walk I see
Grandpa walking I want to do that thing
or I have something in me tells me being
over there is better than being here and
so I actually want a locomote to get
there and I would figure this out even
if I never saw anybody move which is
probably more likely given the baby
start crawling and they don't see people
crawl
they just have a desire to locomote
somewhere
again going back to my initial thing
about I think machines are going to need
to have desire they have a reason that
they want to cross the road if we want
to get to human level intelligence but
let's just let me not fractal too much
here so okay we have this biological
feedback loop
you're not going to get that with a
neural network no matter how much you
scale it up it doesn't have a biological
it doesn't have the ability to change
itself yet now maybe it will and maybe
it could architect a new chip or
something once it has the ability to
manipulate 3D printers or what have you
but for now it's stuck with a physical
configuration of chips unlike a human
which can morph from muscles to brain
matter it's stuck with a configuration
but and this feels like the very
interesting thing that we've gotten
right so far which is I have figured out
the pieces that I need so whether that's
gpus or the code or both but I figured
out the pieces that I need for that
configuration to learn in a very
emergent way so I set up the pieces and
then I give it
a thing I wanted to learn and a quote
unquote reward for doing so and then a
massive amount of emergent Behavior
comes out of that but it's always going
to be limited in a way that human
intelligence is not because of the
biological feedback loop okay now that
I've set that stage do you agree that
machines will need something that
imitates that biological feedback loop
meaning I need efficiency here that I
did not have a moment ago for me to
continue to get good at this thing
and that without that we're sort of
stuck at the
highly potentially destructive ability
to manipulate language and and images
but that's it
so actually current neural Nets already
do what you say I mean they don't have
the biological framework but they they
do learn from practice and mistakes but
can they reconfigure
their architecture to get better at it
you don't need to change the chips, they
just need to change the content of the
memory in those chips
so why is the biological loop
different
why is it different
it's different because, you know, it
has been designed by evolution
whereas we are designing these things
using our means. but
fundamentally, let me step back
here a little bit
to State something important as a kind
of
uh starting point
bodies
are
machines
they are biological machines, cells are
machines, they are biological machines
we don't fully understand them we know
it's full of feedback loops we know a
lot I mean we know a lot of biology but
we don't understand the full thing but
we know it's just matter interacting and
exchanging information
so yeah it's just a different kind of
machine now
the question some people think that uh
in particular when people were
discussing Consciousness because
Consciousness looks mysterious some
people think that well
it's got to be something that's based on
biology, otherwise how could it ever
be in machines? well, I completely
disagree with that
um
because it's just it it it's just
information processing
um now the kind of information
processing going on in our bodies and
our brains and so on uh may have some
particular attributes that we still
don't have in in our current machines
but the
specific hardware just
needs to have enough power. so you know
one of the Great
uh
uh starting points of computer science
by people like Turing and Von Neumann in
in the early days of computing is the
realization
with, for example, the Turing machine, that
you can decouple the hardware
from the software, and the same
outward-facing behavior
can be achieved by just changing the
software, so long as the hardware
is sufficiently complex. and Turing showed
that you need only very simple hardware
and then you can do any computation
that's like computer science 101
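The decoupling Bengio points to can be shown in a few lines: a fixed, trivially simple "hardware" loop whose entire behavior is determined by a swappable "software" table. A hypothetical toy, with machine names and the bit-flipping program invented for illustration:

```python
# Minimal Turing-machine interpreter. The interpreter loop (the "hardware")
# is fixed and very simple; all behavior comes from the transition table
# (the "software"). Toy example, not tied to anything in the conversation.
def run(table, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape, "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# "software": flip every bit, then halt at the first blank
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "1011"))  # -> 0100_
```

Changing `flip` to a different table changes what the machine does without touching `run`, which is the hardware/software separation the passage describes.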
so
that would suggest that there is no
reason why we couldn't in the future
build machines that have the same
capabilities as we do now we are still
the current systems are missing a bunch
of things
you talked, you know, we talked about
walking, and why is it that we don't have
robots that can walk, I mean they can't
walk as well as humans
have you seen Boston
Dynamics? that stuff is freakish, it
can parkour
they're not as good as
humans, by you know a big gap
but yeah, I've seen them
um but but I think
the issue is simply that we have tons
more data available to train language
models than we have for training robots
it's hard to create the training data
for a robot because it's in the physical
world you can't just replicate a million
robots and then but eventually people
will do it
uh or be able to do good enough job with
simulation there's a lot of work going
in that direction
but um
but yeah so
I kind of disagree with your
conclusions. so to go back, the reason
that we don't have robots that can walk
is because it's just not able
to
use some sort of model, to see enough
okay but there's you're saying the point
of that is there's nothing fundamentally
missing from the architecture that the
AI is running on it's just a modeling
problem
yes, the software part, we're
still far off, for example, you know, one of
the clues I mentioned earlier is that
the amount of training data that that a
large language model needs like you know
gptx
uh compared to what a human needs in
terms of amount of text to kind of
understand language
is is hugely different so that tells me
we're missing something important but I
don't think it's because we're missing
something in the low level Hardware of
biology
uh although I you know I'm a big fan of
listening to biology and and
understanding what brains are doing and
so on so they can serve as inspiration
but I don't think it's a hardware
problem now Hardware is important for
efficiency
so
current gpus are not efficient compared
to our brains, but it doesn't
mean that in the next few years we
will not be able to build
specialized Hardware that will be a
thousand times more efficient than
current ones
um and now there's a much bigger
incentive for companies to actually
invest in this because the these AI
systems are going to be more and more
everywhere and it's going to become much
more profitable to do these Investments
yeah man, the proliferation of AIs is crazy
uh before we derail on that though I
want to ask you so
we're comparing the way that machines
are evolving the way the AI is evolving
to human evolution
um
I've always thought of evolution as uh
to use Richard Dawkins quote the blind
watchmaker
it's not trying to make a watch
but the watch emerges out of
um up what we could probably refer to as
a few simple lines of code it's like uh
replication and the way that it
replicates plus uh a desire to survive
on a long enough time scale
there's not even a need for a desire to
survive it's simply the selection of
those who survive
yeah interesting, is that an
important distinction? because I worry
well actually I don't worry, this
would then
maybe be what you're trying to get me to
understand about why machines don't need
a desire they just there needs to be a
selection criteria for the one that does
the thing better and that will be enough
to Boom to have the the exponential
and that's the way we train those
systems. so the way we train them is that
we, if you want, throw away all the
configurations of parameters that don't
work and we focus more and more on ones
that do. that's how training
proceeds, it changes things
in small steps just like Evolution does
except Evolution does it in parallel
with you know billions of uh individuals
uh uh kind of
searching the space of genetic
configurations that can be useful
whereas we're doing it the learning way
so we have like one individual big
neural net and we're like making one
small change at a time
but both are processes of search
in a very high dimensional space of
computations
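The two searches Bengio contrasts can be put side by side in code. A hedged sketch, assuming a made-up one-dimensional "fitness" landscape: evolution keeps a population and discards the configurations that do poorly; learning keeps one configuration and nudges it in small gradient steps.

```python
import random

# Hypothetical loss landscape: best configuration is w = 3.0.
def loss(w):
    return (w - 3.0) ** 2

# Evolution-style search: many individuals in parallel, select, mutate.
random.seed(0)
population = [random.uniform(-10, 10) for _ in range(50)]
for _ in range(100):
    population.sort(key=loss)
    survivors = population[:10]                      # selection
    population = [w + random.gauss(0, 0.5)           # mutation
                  for w in survivors for _ in range(5)]
best_evolved = min(population, key=loss)

# Learning-style search: one individual, small gradient steps.
w = random.uniform(-10, 10)
for _ in range(100):
    grad = 2 * (w - 3.0)                             # d(loss)/dw
    w -= 0.1 * grad                                  # one small change at a time

print(round(best_evolved, 2), round(w, 2))  # both end near 3.0
```

Both procedures find the same neighborhood, which is the point of the passage: they are different ways of searching the same space.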
okay so let me this was something that I
heard you say in an interview at one
point I wasn't sure if I was going to
ask it but it's now as you were saying
that I realize that the entire universe
is born of a simple set of physical
laws for lack of a better word
and everything that we see from because
I was trying to think what is the origin
of evolution because you said that it
you you don't need it to desire it just
needs to get selected and then I was
like well what's selecting it the laws
of physics just dictate that certain
things will continue to hold their form
and function and others will
disintegrate uh okay so then everything
is born out of these laws of physics
which we don't fully understand yet but
do you think there will be similar laws
of intelligence that we realize oh here
are the very simple subset and all of
the struggle that we have right now is
because much like we don't yet fully
understand the laws of physics but yet
we can still build a nuclear bomb
nuclear power GPS all of that we know
enough to do amazing things but we don't
know everything
do you think we have the same thing
happening in intelligence
that's what drove me into the field
that hope that there may be some
principles that we can understand as
humans verbally like write about them
explain them to each other and so on
maybe write math that formalizes them
that are sufficient to explain our
intelligence now obviously for this to
work it has to be that it explains how
we learn, because the content of what we
learn, the knowledge that has been
acquired by evolution and then in our
individual lives
is too big to be put in a few
lines of math
um
so whether this is true or not obviously
we don't know but everything we have
seen with the progress of neural Nets in
the last few decades suggests that yes
because if you look
inside these systems like what are the
mathematical principles behind those
large language models very few
it's something you can describe
that you can explain, you
know, when I teach we explain
these to students and so on
it's not that complicated, just
like physics is not that complicated
what is complicated is
the consequence so I think there's a
good analogy here to also understand the
story about
intuition and very complicated things
that are difficult to put in formula
um
the laws of physics
um are very simple you can write them
down but what's complicated is well if
you put
a huge number of atoms together that
obey these laws
and you get something very complicated
like
an ice storm
it's very difficult to predict
because we don't have the
computational power to emulate
that
out of very simple things, like simple
laws of physics, you get something
extremely complicated that comes out
that emerges
and it's similar with neural nets: a few
simple
lines of code, a few simple mathematical
equations
plus, you know, basically that at scale
and with enough data in this case, and
you get something that emerges that's
very powerful and very complicated and
not easy to reduce to those initial
principles
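A classic illustration of this "simple rules, complicated consequences" point (my example, not one used in the conversation) is Conway's Game of Life: one local rule, and structures emerge whose behavior is hard to predict from the rule alone.

```python
from collections import Counter

# Conway's Game of Life in one rule: a cell is alive next step iff it has
# exactly 3 live neighbors, or 2 live neighbors and was already alive.
def step(alive):
    counts = Counter((x + dx, y + dy) for x, y in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in alive)}

# A "glider": five cells whose pattern travels across the grid,
# an emergent behavior nowhere stated in the rule above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# after 4 steps the glider reappears, shifted diagonally by one cell
print(cells == {(x + 1, y + 1) for x, y in glider})  # True
```

The rule fits in a few lines, just as Bengio says of the math behind large language models; the moving glider is nowhere in those lines, it emerges from applying them.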
okay so now I wanna I wanna bring back
in uh the idea of alignment
of Desire
um so if if physics runs off the back of
a set of simple rules that does not need
to want any outcome
but humans
manifest desire and so we rapidly become
the most complicated thing that we know
of
do you think about the problem
of alignment? are AI researchers trying
to give
the intelligence a level of desire
because that would make it more profound
or
am I just barking up the
wrong tree? I keep coming back to
AI without desire
mildly potent, AI with desire
dangerous beyond all measure and reason
um
yes and no
so
yes, with desires
and a lot of
computational power, you know
and the right algorithms, it could be very
potent and very dangerous
and potentially very difficult to align
to our needs, our values and so on
and lots of people are working on this
like how do we design the algorithms so
that even though we give goals to the
machines
they will not end up doing things
that are against
what we want
so that's the alignment problem
but where I disagree with you
is that I think we could have ai systems
that have no goals
no wants
but they're just trained
to do good inference to do to learn as
as well as possible about the world from
the data they have
and to report back to us
good
answers to the questions we are asking
so let me explain why this would be very
useful
in science typically we do experiments
and then we try to make sense of that
data
we come up with theories and there could
be multiple theories that are consistent
with the data and so different people
may have different opinions on them or
they recognize that all of these
theories are possible, and at this point
we can't
decide between those
theories. then what scientists do is
based on the fact that we have these
competing theories they will Design
another batch of experiments to try to
figure out which you know to eliminate
some of those theories and then the
cycle goes back, with more experiments
more data, more analysis, more theories
and eventually we hopefully zoom in
on fewer and fewer theories
so this is the experimental process of
science we come up with an understanding
of the world but it's not one
understanding there's always some
ambiguity
uh in some cases we're very sure but
yeah, a scientist who's honest is
never sure, except maybe for math
right
so why am I telling you all this because
that whole process which is at the heart
of all the progress we've seen in
humanity which would be needed to cure
disease to fight climate change even to
understand how Society works and people
interact with each other better so all
of the things that scientists do to make
sense of the world and come up with
proposals of things we could do to
achieve goals
all of that process could be done by
very powerful AI systems
that don't have any goals
their only job is to make sense of the
data
represent all the theories that are
compatible with it and suggest the best
choice of experiments we should do next
in order to get the answers to the
questions we want
and that can all be done without any
wants just by obeying some laws of
probability
that we know, that are known, and
we just need the computational scale to
implement that
and algorithms, you know, that people
will discover, but I think we already
have the basis for that
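The goal-free scientist Bengio describes can be sketched as Bayesian inference plus experiment selection by expected information gain. A hedged toy under my own assumptions: two made-up "theories" about a coin's bias stand in for competing scientific theories, and the numbers are invented for illustration.

```python
import math

# Two hypothetical theories, each predicting P(heads); start undecided.
theories = {"fair": 0.5, "biased": 0.8}
posterior = {"fair": 0.5, "biased": 0.5}

def update(posterior, heads):
    # Bayes rule: P(theory | data) is proportional to P(data | theory) P(theory)
    post = {t: p * (theories[t] if heads else 1 - theories[t])
            for t, p in posterior.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(posterior):
    # how much would one more flip shrink our uncertainty, on average?
    p_heads = sum(posterior[t] * theories[t] for t in theories)
    gain = entropy(posterior)
    gain -= p_heads * entropy(update(posterior, True))
    gain -= (1 - p_heads) * entropy(update(posterior, False))
    return gain

print(round(expected_info_gain(posterior), 3))  # worth of the next experiment
for heads in [True, True, True, True]:          # observe four heads
    posterior = update(posterior, heads)
print({t: round(p, 3) for t, p in posterior.items()})
```

No wants anywhere: the system just maintains all theories compatible with the data and scores which experiment to run next, which is the division of labor described in the passage.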
so what I'm trying to say is we could
have machines that are extremely
powerful more powerful even than a human
brain like we have scientists doing that
job right now
but I'm looking for example at
biology: because of the progress of
biotech we are now generating data sets
that no human brain can visualize
can absorb
we are we have robots that do
experiments again in biotechnology where
the number of experiments is in the
millions the human cannot like specify a
million different things to try
by hand
a machine can
a machine with the right code, and that
machine doesn't need to have any wants
it just needs to do
Bayesian inference, if you want the
technical term
um so
yeah, bottom line is
we can have hugely useful machines that
are incredibly smart that have no wants
whatsoever
okay so it's becoming clearer to
me now what our base
assumptions are. so your base assumption
is that AI as it already is does all
the amazing things you want it to do, is
as dangerous as you could hope it to be
as a tool for humans to use
and the thing that I'm focused on is
in your scenario I can just tell it to
stop and it will stop the paperclip
problem in my estimation isn't a real
problem if I can just tell it stop stop
making paper clips and then it shuts
down where it becomes a problem is when
it's like no I want to make paper clips
and I'm going to keep making paper clips
and there's nothing you can do to stop
me and I'm going to go around you this
way and that way and I'm not nearly as
concerned I get humans have so many
weapons at their disposal I already know
what the world looks like when people
have just unbelievably powerful
weapons at their disposal. it's
manageable but when the weapon gets to
be a million times smarter than I am and
decide what it wants to aim at and
decide when it wants to go off and
nobody gets to tell it otherwise that's
a world that freaks me out
and so when you think about the
alignment problem
do you think it's a problem like because
in in your world where the AI doesn't
have its own wants and desires coming
from an emotional place where one thing
feels better than the other and so it
has the same type of human desire to go
in a given direction that we have and we
know what that's like people kill for
their [ __ ] kids man they will do
crazy things when the thing feels good
enough
so in your world can't we just tell it
to stop
Okay so
there are two kinds of machines we could
build with the current state of our
technology today, there's a choice
one kind of machine is more like us
and has wants and goals, and it could
decide to do something we did not
anticipate
and that could be very dangerous
and
people are trying to see how we could
program them in a way that would be
safer that's the alignment AI alignment
problem
but we have a choice we don't have to go
that route we could build machines that
are not like us we don't try to make
them like humans we don't give them
feelings, we don't give them wants. you see
the thing is, once we understand those
principles of intelligence, we can choose
how we
apply them. if we're wise, we're going to
choose the safe one: it doesn't do anything
it doesn't want anything, its
training objective is
truth
okay so might I suggest, when
we've gathered all the nations
together and you're about to go on stage
what I'm gonna try to then get you to
convince people of is that that becomes the
most important thing: do not give AI
desire, period
inference only, truth only, that's it
that's it, that's it
and actually that I think that's the
safest route
the problem now the problem is we need
to have all these people around the
table and to agree
and honestly I'm not sure it's gonna
work
um there might be some crazy guy
somewhere who says yes but then goes a
different route
because he wants to have fun with those
machines that look like humans and he's
a [ __ ] and doesn't realize how
dangerous it is
people are crazy people have emotions
people
um
are unconscious of the consequences they
think oh it's going to be fine
but I'm going to make a lot more money
than the other guy because I'm going to
use this thing that
is more like humans there's going to be
a temptation to build systems that are
like us
would it be more powerful if it was more
like us
I don't know if it would be. well, they
would be more powerful in the sense of
being able to act in the world
but that's also more dangerous right
act in the world based on their
goals, right, that's the place
which is a slippery slope
or maybe we can make progress, but
even if we make progress
with the AI alignment techniques that are
trying to design rules and
algorithms such that even if they have
goals it's going to be safe
but but even that is not a sufficient
protection because somebody could just
decide to not use those guidelines
so having algorithms that make AI
alignment work is not enough we have a
social problem we have a
a problem of collective wisdom how do we
change our society
so that we avoid
somebody doing a catastrophic thing with
a very powerful tool that can
potentially
destroy us. it's not something for
tomorrow, it's not going to happen
tomorrow, it's not going to happen next
year, it's not gonna happen in five years
but
we are on that path and it's going to
take a lot of time for society to adapt
and probably reinvent itself deeply
for us to find a way to be happy and
and safe
when was the last time we had to
reinvent ourselves like that
ah
we we reinvented ourselves many times
over
um but not like that of course this this
challenge is completely new
but we did so think about major cultural
changes that have happened in the story
in the history of humanity
um I think about like religions
um the invention of nation states
um
you know uh invention of central banks
and money, and I'm almost quoting
Harari here. so we've created all kinds
of fictions, as he calls them
that drive our society and people
in ways that kind of work
um but are not adequate for the next
challenge by the way
dealing with this challenge also helps
us deal with things like climate change
and nuclear dangers and so on because
it's all about how do we coordinate
the billions of people on Earth so that
we all behave in a way that's not
dangerous to the rest
I don't know how to do that but we need
our best Minds to start thinking about
it
you're really starting to to pull
together some very interesting threads
here so
um Yuval Noah Harari's idea of a
collective fiction, I've heard other
people refer to it as a useful fiction
um that's very interesting now my
concern is that that works when people
don't understand so I'll go to the most
recent one Central Banking so
people don't understand it and you know
you've got the whole idea of, what's it
called, The Creature from Jekyll
Island, where they
they go, and to your point it was very
much a decision, a cabal of people
went and decided we're gonna do it like
this and we're gonna present it to the
world like that, and they did it, and hey
it just quote unquote works
um there's very few things though more
unnerving than peeling back that and
realizing what it actually is
um and so I wonder how we present a
useful fiction to the world about AI
that will get us all unified in a way
that will be useful
um but isn't
manipulative
I think that's the essence of what
democracy should be
that we rationally
accept
the collective rules
for our
individual and collective well-being
so that actually has worked quite well
in many countries
um but we need to go like one step
further in that direction it can
absolutely be truthful and not
manipulative
as long as the principles
of, you know, justice and fairness and
equity and so on are respected
people will go with that
but here I think we need to
we need to go beyond even beyond the the
kind of democratic system so in a
democratic system if it works well right
we don't need to lie to people to get
them to accept to go in a particular
direction to vote for you know a
referendum or something for a particular
decision
they should in fact be
as
conscious and
understanding as possible of the decisions
that they're collectively taking
yeah yeah getting them getting everybody
on the same page that that is the tricky
part that's why I when you first said it
in the context of religion
it immediately felt like oh if we could
pull that off if we had a collective
narrative about what this meant it might
work the problem is it's not my
preferred way of solving the problem
obviously I'd much rather go with like
an Uber democracy that really goes to
these principles uh even further
yeah, that's where I think I begin to
I get it, regulation works on the
countries that come to the table
regulation is amazing, we should regulate
this. I think, to your point, you
have to do something, just
because it's hard doesn't mean you
should stand still
but at the same time that's one where
I'm like yeah well all the countries
that regulated does not account for the
person like you were talking about
that's like oh I'm gonna go build this
thing they don't recognize second third
order consequences or more terrifyingly
they do recognize the second and third
order consequences and they do it anyway
or even like because that gets into
the crazy man hypothesis but having
read about Robert Oppenheimer when they
were building the bomb and how you just
become convinced that look the Nazis are
building a bomb we need to do it we have
to do it faster uh we'll sort of worry
about the bigger problems later down the
road and I very much worry that that's
where we are with AI okay I'm going to
set that aside for a second because it's
terrifying and so as I said at the
beginning I worry too yeah and rightly so
how we solve the problem is a completely
different thing let me let me ask you do
you think as AI continues to come on
board and let's say that we're
thoughtful about it we've got good
regulation in place will it be like
dealing with a hyper-intelligent human
or will it feel completely alien to us
it depends how we choose to design it
so
if we
build systems that have a personality
that have emotions
we could because the more we understand
these things from humans the more we'll
be able to do it
um
personally I don't think that's the wise
choice
and so if we go the other route of
systems that are useful to us
not necessarily
acting like humans
um I think it'll be much easier uh
collectively because we won't be
expecting those things to interact with
us like like humans do
um they will be just assistants
basically that help us sort out problems
and find Solutions
yeah the alien idea as you were
answering that question I had a wave of
I don't see how we're gonna avoid it
loneliness unto itself
is going to lead people to play with
making it emotional
even even as I think about the way that
we want to use AI in our in my company
it's to generate very realistic
characters in a video game and I can
just see that to make them more and more
realistic you're gonna want them you can
mimic emotion for sure and if we pass
regulations that's probably where we
stop is you create things that mimic
emotion but don't actually have them
but to create something that is
um
that is more realistic we will so I want
to go back to Go for a second so in Go
they said that it was like
playing an alien it just made moves
that were so different so given that now
already you have people saying that it
comes at something so counter-intuitive
that it feels completely foreign you
don't think that even without wants and
desires that it's gonna feel
just I think it's completely
different the reason it
looks foreign to current
players might be the same as
the way we currently play Go
at the master level
might look foreign to somebody 100 years
ago
because we've made progress in our
understanding of how to play well and
and the strategies we use now maybe very
surprising to somebody 100 years ago
so it's just that these machines have
now trained on so many games
that they're like
you know 100 years into the future if
you want in the evolution of Go if we had
let things
uh go so they discovered basically it's
like it's like science right looks like
magic until you understand it so if if
uh if if somebody from 100 years ago
comes today and looks at our cell phones
it's gonna look like magic it's gonna
look very unintuitive what
how could that possibly be right
and we are just used to that and so I
yeah I I don't think it's uh because
there's something fundamentally
different I mean there are fundamental
differences but but that is just being
uh
systems that are more competent because
they've been trained on more data and
trained longer and focusing on this
particular problem in the case of Go
evolution is one of the big themes that
has come out today if people want to
keep up Yoshua with you and the
ever-evolving science that is AI where
should they go
well I have a website
it's easy to find uh and a Blog where I
I write some of my ideas
and of course I also write a lot of
scientific papers my group does Mila The
Institute that I founded with a couple
of universities here in
Montreal has about a thousand AI
researchers working on many of these
problems but also uh thinking hard about
responsible AI aspects and these
questions and there are many people
around the world who are thinking hard
about this as it should be
this is literally a
direct quote from the book towards the
end
but and I quote we are headed for
collapse civilization is becoming
incoherent around us
I'd love to know what you guys mean by
that and if that to you is a big part of
the thread through the book because it
was for me
well the first thing to say is that you
skipped the warning in the front of the
book that it should only be read while
sitting down so people don't fall over
and injure themselves
um yeah well we are headed for collapse
that's really not even an extraordinary
claim if you just simply extrapolate out
from where we are we are outstripping
the planet's capacity to house us and we
don't appear to have a plan for shifting
gears so it's it's really a factual
statement now the question really is why
and the the bitter pill is that the very
thing that made us so successful as a
species
is Now setting us up for disaster that
is to say our evolutionary capacity to
solve problems has uh outstripped our
capacity to adapt to the new world that
we have created for ourselves and so
we've become psychologically and
socially and physiologically and
politically unhealthy and our
civilization isn't any better
that said if any species could get us
out of this mess it's us like you know
it's exactly as Brett said we are the
most labile the most adaptable the most
generalist species on the planet and
born with the most potential to become
anything else previously unimagined so I
do feel like in the end the message of
the book which is explicitly and
consistently evolutionary in all of its
different instantiations is hopeful and
yes that that quote that you read is
ominous and I think as Brett said you
know a factual statement but we can do
this we we have we have to do it and uh
we need to try and in fact in
evolutionary biology we recognize
something we call adaptive Peaks and
adaptive valleys and it would have to be
true that to shift gears to something
much better something that that uh gave
humans more of what it is that we all
value we would have to go through an
Adaptive Valley and it would look
frightening and in fact they are
dangerous places to be but it's part and
parcel of shifting from one mode of
existence to another
all right I think an idea that's going
to be really important to get across and
this is something as a guy that only
ever thought I would talk about business
and then in trying to explain how to get
good at business I kept having to come
back to mindset and then trying to
explain mindset I keep having to go to
Evolution it's like that we're having a
biological experience that your brain is
an organ it comes uh you guys said that
we are not a blank slate but we are the
blankest of slates which I think is a
phenomenal way to put this idea and I
want to tie that to the title and get
you guys' take so it's a
hunter-gatherer's guide to the 21st
century and so the way that I take that
is that notion you have to understand
that you're a product of evolution that
your brain is a product of evolution and
then once you understand the forces of
evolution and how we got here then maybe
just maybe we can find our way out of it
so what are the key elements to being a
product of evolution that you think
people miss that we must understand if
we're going to navigate our way well out
of this Valley of evolution let me say
first that the the title
hunter-gatherer's Guide to the 21st
century
evokes that sort of romanticized
hunter-gatherer on the African Savannah
of the Paleolithic which of course is a
part of our human history and does have
many lessons in it to teach us about who
we are now and who we can become but as
we say in the book we are all parts of
our history like we are not just
hunter-gatherers we are also right now
post-industrialists and there are
evolutionary implications of that go a
little farther back a lot farther back
depending on your Framing and we are
agriculturalists go farther back we're
hunter-gatherers go farther back we're
primates we're mammals we're fish all of
these moments of our evolutionary
history
have left their Mark in us and have
something to teach us about both what
our capacities are and what our
weaknesses are and what we can do going
forward
and I would add the Lessons From
Evolution uh are both good and bad here
one thing that we realized that our
students over the course of many years
of teaching this material realized was
that everything about our experience as
human beings is shaped by our
evolutionary nature and that has a very
disturbing upshot because we are
fantastic creatures with an utterly
mundane Mission the very same mission
that every other evolved creature has to
Lodge its genes in the future
um and that this actually
explains the nature not only of our
physical beings but of our culture and
our perception of the world so
understanding that all of that marvelous
architecture is built for an utterly
mind-numbing purpose is an important
first step in seeing where to go but the
other thing to realize and you
referenced our our assertion that we are
the blankest Slate that has ever existed
or has ever been produced by evolution
and what this means is that we actually
have an arbitrary map of what we can
change that to the extent that our
genomes have offloaded much of the
evolutionary adaptive work to the
software layer that means we are
actually capable of changing that layer
because that layer is built for change
but not everything exists in that layer
so some things about what we are are
very difficult to change some things are
actually trivial easily changed and
knowing which is which is a matter of
sorting out where the information is
housed but it's all there for the same
reason it's exactly it's all there for
the same reason it's all evolutionary be
it genetic or cultural or anything else
can you guys give us an example of and I
found this very provocative in the book
and it certainly Rings true to me but
that to say that we are in some ways
fish from an evolutionary standpoint
that we are you know in some ways
primates from an evolutionary standpoint
what does that mean exactly again it's a
factual claim one that once you've seen
the picture standing from the right
place is uncontroversial when we say you
know is a platypus warm-blooded we are
not asking a question about its
phylogeny right we're asking about how
it works right when we ask is a whale a
mammal we are asking a question about
phylogeny so when we ask the question
are humans fish if we're asking a
functional question then maybe not but
if we're actually asking a question akin
to is a mouse a mammal right then we are
asking a question about the evolutionary
relatedness of that creature to
everything else and the key thing you
need to understand is that a group a
good evolutionary group like mammal or
primate or ape is a group that if you
imagine the Tree of Life
falls from the Tree of Life with a
single clip right if you clip the Tree
of Life at a particular place all of the
Apes fall together if you clip it lower
down all of the primates fall together
and the claim that we are fish is a
simple matter of if we agree that a
shark is a fish and we agree that a
guppy is a fish if you clip the tree of
life such that you capture those two
species you will inherently capture all
the tetrapods which is to say creatures
like us so we are fish as a factual
matter if the question is one of
evolutionary relatedness so let me
if I may just say that in
slightly different words
there are at least two main ways to be
similar right you can be similar because
you have shared history and you can be
similar because you've converged on some
solution and so dragonflies and swans
both fly not because the most recent
common ancestor of dragonflies and swans
flew but because in each of their uh
environments flight was an Adaptive
response and that means that
flyingness is not a phylogenetic trait it's
not a historical representation of what
those two things are whereas if you say
well both both whales and humans lactate
in order to feed their babies that is a
description of of something that they
both inherited from a shared ancestor
right so the earliest mammal lactated to
feed its young and any organism on the
planet today that is a descendant of that
first mammal is a mammal even if some
future mammal went a different way and
lost the ability to lactate it would
still be a mammal so you know Brett
mentioned tetrapods uh tetrapods came
you know with the fish that came out
onto land with you know four feet and
started moving around and its amphibians
and the reptiles and the birds and the
mammals
but snakes are tetrapods not because
they still have four feet because they
don't but because they're a member of
those that group so it's it's a
historical description of group
membership as opposed to like an
ecological description of what we're
doing so we're not aquatic like most
fish are but we're fish because we
belong to a group that includes all the
fish I'm gonna say why I think that
matters and why I think you guys put
that at the beginning of a book that
sort of has this punch line of like Hey
we're really headed towards disaster and
we have to be very thoughtful and here
are some solutions so the reason why in
business you end up having to talk about
evolution is because I need a business
owner to understand you cannot trust
your impulses because your impulses may
not have the growth of your business in
mind it may not reflect an understanding
of consumer Behavior it may simply be
something from our evolutionary past
that was like
um akin to it's better to jump away from
the garden hose thinking that it might
be a snake than it is to think that it
might be a garden hose and it really is
a snake and once you understand okay my
mind is structured in a certain way it
has these insane biases it tends me
towards certain things like the one that
bothers me the absolute most is that
when people have a feeling it feels so
real it and you never translate it into
logic so you're like that thing makes me
angry therefore it is bad and it must be
attacked assailed whatever and if you
run a business like that if you cannot
Divorce Yourself from I have an Impulse
stop that insert conscious control and
then figure out sort of what the first
principles logical buildup IS you can't
solve a novel problem and until you can
solve a novel problem in an environment
that changes as rapidly as our current
world you guys call it hyper novelty if
I remember correctly
you you get into these crazy making
scenarios and so while
it seems almost absurd to say that in
some way we are fish the the key point
that I take away from your book and that
just seems so powerful to recognize to
me is that you have to understand that
your mind wasn't a perfect construction at
least not towards modern goals does that
make sense to you guys absolutely
absolutely now there are um there are
really two upshots to this claim that
you are a fish right it's very hard for
people to wrap their minds around it the
first time but once you realize that
this is what we mean when we say uh you
know a whale is a mammal that we are
making a claim about the tree of life
then you can actually teach yourself how
adaptive Evolution works just by simply
recognizing that snakes are the most
speciose clade of legless lizards snakes
are lizards right you don't think of it
that way but they are seals are bears
that have returned to the Sea right so
once you understand that all you have to
do is see that it's
unambiguous and that means that adaptive
evolution is the kind of process that
can turn a bear into an aquatic creature
like a seal right
the other thing that you mentioned and
you're right on the money which is that
if you use your intuitive honed Instinct
in order to sort through novel problems
you will constantly upend yourself
because those instincts aren't built
with those problems in mind now the
thing that's special for us humans is
that we have an alternative and the
alternative we argue in the second to
last chapter of the book
is consciousness that the correct tool
for approaching novel problems is to
promote whatever the underlying issue is
to Consciousness to share it between
individuals who likely have different
experience will see different components
of it clearly and to come to an emergent
understanding of what the meaning of the
problem is and what the most likely
useful solution may be so in some sense
really what you're saying is in this
context you're trying to get people to
get into their conscious mind and
process this as a team activity rather
than go with their gut which is very
likely wrong absolutely and you know our
capacity as humans but that includes as
a modern human who is you know trying to
engage in business with people to
oscillate between this conscious State
and a cultural state which is one in
which actually maybe change isn't
happening so rapidly maybe the rules
that we've got are good for the current
situation let's just do this let's do a
set and forget on this set of things
over here and not not constantly
renegotiate whereas in this other part
of the landscape we actually do need to
stay in our conscious minds and yes we
need to Tamp down the emotion and Tamp
down the you know the quick gut response
but engage with one another and
recognize that you know it's not Satan
on the other side of the interaction
it's another human being with all the
same kinds of strengths and weaknesses
as each of us has
yeah there's a really interesting thing
that happens when you have
um a team around you whether they're
employees or otherwise where
um the just literally just the other day
I said something to my team and several
of them misconstrued it and I could see
they were having a big emotional
response and I said okay tell me your
objection in a single sentence with no
commas no run-ons no parentheticals and
what you find is that old Einstein quote
of if you can't explain it simply you
probably don't understand it very well
and so people have this emotional
reaction but they and they then enact
out in the world that emotional reaction
but they don't actually stop to take the
time to be able to say it in a single
sentence and so you end up in what my my
wife and business partner and I call you
end up having to chase them because
you'll solve the they'll say here's my
problem you'll solve and say cool so if
I do something that addresses that and
they'll be like well it's not quite that
it's it's this and then you solve that
and they're like well it's not and it's
like when you force people to say
something really simply it forces them
to interpret that emotion to bring it
into the conscious mind and then to
actually deal with it
um which I find utterly fascinating do
you guys have a method by which you do
that in your own lives or that you've
taught other people to do it
yeah I would say there's a first go-to
move which is let's figure out what we
actually disagree about and very
frequently
um you can cover half the distance or
more just simply by separating an issue
into two different ones so for example
if I talk to a conservative audience I
know we're going to disagree about
climate change but I also know from
experience
that I can get a conservative audience
to agree that if they believed that
human activity was causing substantial
change to the climate and that that was
going to destabilize systems on which we
were dependent that they would be
enthusiastic about doing something about
it and so what we really disagree about
is whether or not we are causing
something sufficient that we need to
take that action right that's half the
distance covered in a matter of just
simply dividing it into two puzzles and
you'd be amazed almost everything that
we have Fierce disagreements about look
like this where you just sort of assume
the other side has every defect rather
than realizing we agree to a point at
which point we we differ yeah no and
this is um this is different from what
we were just talking about right with
regard to you know you're having an
emotional or an analytical response this
is a question of okay we think we're
talking about the same thing but
probably we are using the same words for
different categories yes
and can we can we figure out how many
subcategories there are and you know say
I've got five in my thing and you've got
five but maybe there's only two that
overlap so maybe we focus on those two
but maybe there's also maybe the you
know the devil in the details is in one
of those one of those other six that is
only in one of the people's brains and
when it's revealed to be like actually
you think I believe that thing and I
don't like that's not something we share
between us so yeah having the capability
to go in and like zoom in and out on
problems and say actually the problem
can be smaller than you think and and
also it is larger than you think and
then I think and let's constantly
re-evaluate the framing and the
scale at which we're doing analysis
you guys talk in the book about theory
of mind and Heather I know you've uh
either started writing have written or
have threatened to write a science
fiction novel which you know I
desperately want you to do and publish
um but I've started doing a game when I
find myself in that situation where and
I learned this in my previous company
where both of my partners were really
smart guys but every now and then we'd
get in an argument and
I'd be like I think they're an idiot but
I know they're not an idiot and they
think I'm an idiot but I'm not an idiot
and so I started approaching it as a
writer and saying okay if I were writing
this character in this scene what would
have to be true for them to be acting
this way what would they have to believe
be thinking whatever and in my marriage
this has become an extraordinary tool of
saying for you to be reacting this way
you would have to think that I believe
XYZ is that the issue and then by
getting to that what I call Base
assumptions
you can really begin to facilitate that
you guys must have encountered this a
bazillion times with students how do you
unearth that like what's the process of
of uncovering that especially in fact
it is so weird to me that you two have
become like the most attacked people on
planet Earth I I will never quite
understand how this has happened but how
do you guys
tease out and not just go ah they're
evil how do you find those underlying
issues well first of all I think we're
we're attacked because we we look like
villains sure yes so much so right
exactly um well you you hinted an
important issue here that I think is
actually quite modern so if you lived
any sort of normal existence from an
ancestor you know even just a couple
hundred years before the present you
would find that they pretty much grew up
around the people that they ended up
interacting with as adults they didn't
stray very far from home everything
would be incredibly familiar and the
language that they used to interact with
everybody they were encountering would
have been shared because it would have
been picked up from an immediate group
of ancestors that they both knew right
when we use English to talk to someone
else we have an incredibly blunt tool
because the ancestor from which we
picked up that shared language is quite
distant and what this does you know you
really have two kinds of people in the
world you've got people who more or less
use the tool like English as it was
handed to them and they don't question
it and you have people who are trying to
break new ground and what is true for
everybody who breaks new ground is that
they end up building a personal tool kit
they will redefine words so that they
become sharper and more refined and more
useful and then when you put two such
people together they will talk right
past each other because they don't
remember that they redefined things so
one thing that is essential if you're
going to team up with someone else who's
generative and done their own work and
arrived at some interesting conclusion
you need time it's weeks of talking to
each other before you even understand
how they use language once you do that
you can have an incredible conversation
but if you think you're going to sit
down with them and immediately pool what
you know and get somewhere you got
another thing coming because at first
they're going to sound like they don't
know what they're talking about right
you've got to find those definitions and
figure out what they mean and it's
actually if once you realize that this
is the job it's very pleasurable and
it's it's really an honor when somebody
lets you look through their eyes you say
oh that's how you see the world and now
I get a chance to see it that way and
then let me show you what I'm seeing and
you really can get somewhere but there's
no shortcut about the time necessary to
learn each other's language that's right
and that that really is a parallel for
what we're doing in the classroom as
well you know we didn't
you know if we were teaching 18 year
olds we were teaching freshmen we didn't
assume that they all came in as experts
obviously
um and yet the same logic applies that
everyone has you know I I wouldn't say
actually I don't think I really agree
that you know regardless of what
language you're speaking you either you
know take it on on faith as as you have
received it or you act
um decisively to change it I think
teenagers tend to be modifying language
pretty actively and so especially when
you
um you know find yourself in a room full
of you know relatively young people in a
college classroom you have a lot of
people here using language differently
um than you the professor does and then
you're also in the business of
introducing to them
um you know a set of tools some of which
has specialized language associated with
it
um associated with you know whatever it
is that you're teaching and finding the
Common Ground between these like okay
actually all of us modify language some
and and let's figure out how to use
language that we can all agree on and
understand and
um you know for for the purposes of
communication as opposed to for the
purposes of displaying group membership
yeah I think that's what
jargon often is about group
membership displays and that's what um
you know memes and especially
um well a lot of them a lot of the very
rapidly changing language um that
doesn't happen in technical Space is
really about demonstrating that you're
on the inside of some some joke
well actually this is a perfect case uh
of a personal definition that must be
shared otherwise you can't talk right
because uh I at least distinguish
between terms of Art and jargon most
people will use the term jargon for both
things but the point is terms of art are
a necessary evil right you have to add
some special term because the language
that you're handed the general language
doesn't cover it and so you need a
special term to describe something and
that means that somebody walking into
the conversation isn't necessarily aware
of what's being said until they've
learned that term jargon is the
pathological version of this jargon is
the use of these specially defined terms
to exclude people from a conversation
that they probably could understand and
that they might even realize you didn't
know what you were talking about if they
could understand the words that you were
using so you use those words to protect
yourself and
um until somebody gets that when you say
jargon you're not talking about
specialist language you're talking about
a competitive strategy
they won't know what you're saying so uh
and you know the difference as Heather
points out is in a room full of 18 year
olds especially when you're the
professor at some level you can say look
here are the terms that we need in order
to have this conversation and more or
less people will adopt them because
that's the natural state of things
rather than two peers getting together
where you have to you know my my rule is
I don't care whether the definition ends
up being the one that I came up with or
your set of definitions it doesn't
matter to me what I need is a term for
everything that needs to be
distinguished and we both need to know
what those terms are in order to have
the conversation but whose terms they
are doesn't matter well and yet
um you know as as I think we say in the
book our undergraduate advisor Bob
Trivers an extraordinary evolutionary
biologist when we were leaving college
and applying to grad school he gave us a
piece of advice about what kinds of jobs
we might ultimately uh want if we were
to stay in Academia and he said do not
accept a job in which you are not
exposed to undergraduates because
teaching undergraduates means exposing
yourself and the thinking that you are
presenting to to naive Minds who will
throw curveballs at you and some of
those curveballs are going to be
nuisances and maybe they'll waste your
time but some of them are likely to
reveal to you the Frailty in your own
thinking or in the thinking of the field
and that is the way that progress is
made and so you know whom we call peers
is up for discussion and recognizing
that we can we can all learn from almost
every person that we interact with is a
remarkable Way Forward
yeah and the corollary to that is uh
there's a lot of pressure not to reveal
what you don't know by asking questions
that will establish the the boundaries
of your knowledge and being Courageous
about actually acknowledging what you
don't know often leads to the best
conversations right yeah you guys do
talk about that in the book and I think
that this is such an important idea I'd
love to tie to something else you talk
about which is
what is science like you guys have a
pretty unique take on what science is
that it could be done with a machete and
a pair of boots out in the jungle it can
be done in a Laboratory
um yeah what is science
science is a method for correcting for
bias
and that method is pretty well known it
has had a few updates along the way but
the the basic idea is it is a slightly
cumbersome mechanism for correcting for
human bias and the result is that it
produces a set of models and a a scope
of knowledge that improves over time and
what improves means is
it explains more while assuming less and
and fits with all of the other things
that we think are true maximally right
ultimately uh all true narratives must
reconcile and that includes the
scientific narratives that we tell at
different scales right the nanoscale has
to fit with the macroscopic scale even
if we don't understand how they fit
together yet so ultimately we're sort of
filling in from both sides what we
understand and what we expect is that
they will meet in the middle like a
bridge and um if they don't it means we
got something wrong somewhere yeah so
science is not the methods of science
it's not the glassware and the expensive
instrumentation and it's not the
um indicators uh that you're a
scientist because you're wearing these
things you know it's not the lab coat
and it's not the conclusions of science
it's not the things that we think we
know many of which are actually
true and some of which aren't science is
the process
and all those other things are sort of
Hallmarks that may or may not be
accurate proxies uh when you're trying
to figure out is that person doing
science is this science over here but
what science is is actually the process
and it's worth saying that you don't
need it for Realms that are not
counterintuitive right you don't need to
do science in order to figure out where
the chair is before you sit down
right it is apparent to you where the
chair is because you're built to
perceive it directly now every so often
we all have the experience of looking at
something and not being able to figure
out what we're seeing there's some
optical illusion the way we are sitting
where we are in relation to the object
we're looking at and then you will go
through a scientific process you know if
that is a so and so that also suggests
this and I can see that that's not true
so what could it be right that that
process is scientific but by and large
the direct perception of objects around
you because it is intuitive because it's
built to be intuitive your system is
built to understand it in a way that
makes it intuitive doesn't require this
so we need science where things are
sufficiently difficult to observe or
counter-intuitive so you need a process
to correct for your expectations
what drives all this to me and that gets
missed even though it's sitting in plain
sight is to make progress you must
hunger to know where you are wrong and
if you can derive and again I come at
everything from a business lens in
business if you can derive tremendous
pleasure and quite frankly self-esteem
from your willingness to seek out the
imperfections in your thinking you'll
actually make it if you don't and it's
an ego protective game for you and your
ego is built around being right then
you're you're going under and to your
point about exposing yourself to
undergrads some of the most phenomenal
like
incisive questions challenging my
leadership have come from like interns
who just they've never had a job before
and so they're like oh why are we doing
XYZ and you're like why are we doing
that and if in that moment you're like I
must you know present myself and have a
reason for why we are doing that you
actually talk yourself into something
and because of the market much like
Evolution or reality which is something
I definitely want to talk about how
there's a weirdness that we're living
through now where people feel like if
they can convince you through language
of something that it actually somehow
affects the underlying truth but in
business the market does not care like
you can convince your team that you're
right but if the market doesn't embrace
it you're gonna fail and there's
something wonderful about that
well I want to I want to push back
slightly uh admittedly this is not an
area of expertise but
it seems to me
that business needs to be divided into
two things in order to really understand
what you're getting at the business
where the market is actually uh in a
position to test your understanding of
what is true and what will work and what
people want and things like that is one
thing that's real business and then
there's a kind of rent seeking in which
it may be about
uh you know a company that
does not have a functional product that
is selling the idea that it will have a
product that no one else will have and
its stock price Rises uh as a matter of
speculation that may well be a realm in
which it is uh it is deception and in
fact this this is beyond the scope of
the book but wherever perception is the
mediator of success you have deception
as an important evolutionary Force where
physics dictates whether you've
succeeded or failed you don't have that
problem you can't fool physics so I
don't know what the two words for the
two kinds of business are but the rent
seeking part of business and the actual
production of superior Goods or the same
Goods at a cheaper price that's a
different kind of of business structure
well here's what's interesting uh really
fast on that point I think that they do
fall under the same category so when I
say that the market decides so if your
pitch is hey boys and girls we have to
deceive the market and we have to you
know game it and here's how we game it
and so everything is a function of your
goal so if your goal is to deceive and
to you know create a pump in your stock
price there is a way to do that that
will work and there is a way to do that
that won't work and now getting into
honorable goals versus you know
dishonorable goals that that is really
fascinating
um but I think that they they do fall
into the same category of either the
thing you do moves you towards your
goals or it does not
yep I mean I still think there's room
for a division because there is uh you
know the mythology of the market is that
it pays for value and rent seeking
violates that rent seeking effectively
is a failure of the market and so I
don't know I don't know where the
definitional split needs to be but it
does seem to me that although you're
right the the you know whether whether
what you are doing is uh assessing what
you believe the psychology of the market
to be or whether you are assessing what
might be physically possible in terms of
a product those are both real systems
that you're either correct about or not
um but there there does seem to me to be
a distinction between rent seeking and
and uh the production of actual value
and there's a Perfect Analogy to be made
um to academic science of course and so
in Academia if you are a scientist you
are supposed to be seeking an
understanding of reality
um but the way that modern science is
done uh involves a lot of requesting of
Grants mostly from the federal
government and just as I imagine in
business although definitely not my area
of expertise the bigger you are the
harder it is to change course and in
Academia in part that means the later in
your career you are the harder it is to
change course and therefore the harder
it's going to be to do something like
Embrace that you were wrong and you know
actual honorable good scientists will
always will always fess up and talk
publicly about when they were wrong but
if your entire lab is contingent on a
model of the universe that is turning
out to look ever less likely it's going
to be much more difficult for you to do
that for you to embrace the wrongness of
you know what might be the livelihoods
of not just you but many of the people
who are working under you how would you
handle it
well you have to restructure things so
that uh what actually matters is being
right in the long term what we have is
an epidemic of corruption inside of
science which has more or less been
spotted first with respect to psychology
and psychology is difficult to do
because you're inherently looking into
the mind and you don't have a direct
ability to measure most of what's there
but the P hacking crisis basically the
abuse of Statistics to create the
impression of discovery which then
resulted in the inability to reproduce a
large fraction of the results in
Psychology is actually the tip of a much
larger iceberg that basically science as
a process is
excellent but science as a social
environment is defective and especially
defective where we have plugged it very
directly into Market incentives and
we've put scientists at an unnatural
level of competition for a tiny number
of jobs we produce huge numbers of
applicants which means that the
incentive to cheat
is tremendous and those who stick to the
rules probably don't succeed very well
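the abuse of Statistics mentioned above has a simple mechanical core if you run enough comparisons on pure noise and only report the ones that clear p less than 0.05 discoveries are nearly guaranteed here is a small simulation sketch the sample sizes the number of tests and the normal-approximation z-test are illustrative choices not anything from the conversation

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def found_effect(n_tests, rng, n=30):
    """Compare n_tests pairs of pure-noise samples; True if any p < 0.05."""
    return any(
        two_sample_p([rng.gauss(0, 1) for _ in range(n)],
                     [rng.gauss(0, 1) for _ in range(n)]) < 0.05
        for _ in range(n_tests)
    )

rng = random.Random(0)
trials = 500
honest = sum(found_effect(1, rng) for _ in range(trials)) / trials
hacked = sum(found_effect(20, rng) for _ in range(trials)) / trials
print(f"report the one test you ran:      false-positive rate ~ {honest:.2f}")
print(f"run 20, report any significant:   false-positive rate ~ {hacked:.2f}")
```

every comparison here is noise against noise so any significant result is false yet the twenty-test strategy manufactures them most of the time which is why whole literatures built this way failed to replicate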
so basically what we have is a uh a race
to discover who is best at appearing
scientific and delivering those things
that um that the field wants to believe
rather than those things that the field
needs to know so the short answer to
your question which isn't especially
operationalizable is you need to put a
firewall between Market forces and the
scientific Endeavor because although
science is an incredibly powerful
process it is also a fragile process
that needs insulation from Market forces
or it cannot work so I would say just in
brief again not particularly
operationalizable but um reward public
error correction right um you know no
matter what stage you are at and
what the nature of the error was
um unless there was intentional fraud
which of course does exist
um public error Corrections should be
rewarded uh without shaming uh without
you know loss of of priority in other
things and the ability to do science
because not only do we need people to be
able to see that they've made mistakes
and actually course correct but we
need people to be taking enough risks
early on that they are likely to
sometimes make errors and so in the
current environment where any error
can be considered like the death knell
for a career we have ever more timid
scientists and that is making us less
good at science as a society and in fact
it almost seems implausible that people
would go around acknowledging their
errors but it wasn't so long ago that
this was fairly common in fact I used to
study bats and there's a famous example
of this not so long ago a guy named
Pettigrew had Advanced a radical
hypothesis that suggested that the old
world fruit bats the so-called flying
foxes were in fact not part of the same
evolutionary history as the bats that we
see here in the new world for example
the microbats he argued that they were
in fact flying primates
which was a fascinating argument it was
based on their neurobiology looking more
like uh monkey neurobiology than it does
like bat neurobiology which turned out
to be the result of the fact that they
used their eyes rather than echolocation
um so it was wrong and what he said at
the point that it was revealed by the
genes that he had been wrong was
if it is a wrong hypothesis it has been
a most fruitful wrong hypothesis which
was absolutely right the work that was
done to sort that out was tremendously
valuable and so anyway nobody who has
had to course correct and admit an error
finds it pleasant but we have to restore
the rules of the game where the longer
you wait the worse it is so that the
incentive is as soon as you know you're
wrong owning up to it so that you are on
the right side of the puzzle as quickly
as possible that that has to be the
objective
as you guys look at society and where
we're at now so one problem you've
obviously just very eloquently laid out
you've got incentives around admitting
that you're wrong is uh it could be the
death knell of your career what else is
going on that makes you guys have that
um quote that we started the the episode
with around you know sort of the you
didn't use the word disintegrating but
that there's to put my own words to it
there's a crazy making that's happening
at the societal level what
has led to that like what are three or
four factors that are causing that
breakdown
well
part of it you know the bias that we
have as evolutionary biologists is that we see a
failure to understand what we are as uh
producing short-term reductionist metric
heavy pseudo-quantitative answers to
questions that uh warrant a much more
holistic and emergent approach and so
what are some of the the things that
modern humans have embraced or have been
told to embrace and some of us have and
some of us haven't
um that have helped produce uh problems
for for modern people uh this is not
this is not new with us but
um the ubiquity of screens the change in
parenting styles to protect children
from risk and unstructured play and the
drugging of children legally with
anti-anxiety and anti-depression meds
more likely if they're girls or with
speed if they're boys those three things
in combination all of which were sort of
on the rise in the 90s and hit fever
pitch in the aughts and early teens
helped produce a generation that became in
body adults but with minds that had not
had a chance yet to actually learn what
it is to be human and some of that is
reversible and you know really we just
by by chance we were College professors
we were College professors for basically
the entire period of time during which
Millennials were in college so we taught
Millennials from from beginning to end
and almost to a person our students were
amazing and receptive and creative and
and capable and you know when
we talk about the generation of
Millennials it's those people who were
drugged and screened and helicopter and
snowplow parented right so with
individual attention people can be
pulled out of the tailspin but at a
societal level that's exactly what we're
in a tailspin what is the tailspin
exactly though what is it about those
things that what does it create in
people
I want to address that as part of a a
slight reorientation of the question
so one of the things that is causing the
dysfunction is you know it's not just
the fact of the screens but it's what
they imply that virtually everything
that people know is coming through a
social channel right so it is all open
to manipulation augmentation Distortion
and
what people generally do not pick up in
the normal course of an education even
what we consider to be a high quality
education is interaction with systems
that allow you to check whether or not
that which sounds right actually
comports with logic
so for example
if you interact with an engine you can't
fool an engine into starting you either
figure out why it isn't starting or you
don't and so we advocated for students
that they dedicate some large fraction
of their education to systems that are
not socially mediated in which success
or failure is dictated by a physical
system that tells you whether or not
you've understood or failed to
understand and I mean this can be
mechanics or carpentry but it also can
be you know baking frankly or learning
to play the guitar right or or parkour
anything where success or failure is
non-arbitrary what you don't want is an
education built entirely of I succeeded
when the person at the front of the room
told me I got it because if the person
at the front of the room is a dope which
unfortunately happens too often you may
pick up wrong ideas and feel rewarded
for believing in them and that can
result in tremendous confusion
I would just finally say
that the book really is about what we
have informally called an evolutionary
toolkit and that evolutionary toolkit
the beauty of it what we saw and what
students reported to us in their picking
it up that toolkit allows with a very
small set of assumptions the
understanding of a large fraction of the
phenomena that we care about almost
everything we care about as humans is
evolutionarily impacted and the ability
to go through what you are told about
your psychology or your teeth or
anything like that and say does that
make sense given the highest quality
Darwinism that we've got does it make
sense to be told that our genomes
suddenly went haywire and that's why an
ever increasing fraction of young people
need orthodontia nope not for a moment
does it make sense that we have a piece
of our intestine called the appendix
that is no longer of any value and yet a
huge number of people have uh this thing
become inflamed and burst so that their
lives are placed in Jeopardy nope it
does not the ability to check what
you're being told against a set of laws
a toolkit for logic that is so robust
that you can instantly spot nonsense is
a very powerful enhancement and it does
not involve knowing more it involves
knowing less and having that little bit
that you know be really robust that's
terrific I would just say it doesn't
necessarily involve you knowing less but
being certain of less
it requires that you rest what you know
on less the foundation is more robust
and uh less elaborate we're just about
to ask what it means to know less so
thank you for that
um yeah that is
very interesting when I think about uh I
forget the exact quote but as the island
of your knowledge grows the shore of
your ignorance grows too you know
whatever the the famous quote but it's a
really interesting dichotomy so all
right we've got this generation that's
growing up they're looking at screens
you guys make a pretty interesting
assertion in the book about what screens
do in terms of you're getting emotional
cues from a non-human entity and that it
may play a part in the rise in autism I
found that incredibly interesting
um what I want to better understand is
what's going on
in our brains that so helicopter
parenting or snowplow parenting for
instance like why does that trap us in a
Perpetual childhood you guys talk about
Rites of Passage in the book I'd be very
curious to to hear like how do we begin
to deal with some of these things
whether it's screens whether it's
snowplow parenting you know if I find myself
a 19 year old and I realize I've been
done dirty I've been on drugs for ages I
was raised essentially by a screen I'm
you know having trouble connecting
having trouble relating and my parents
have taken care of everything for me
what are the symptoms I need to look out
for and then how do I push forward
well uh in terms of symptoms this is
more or less a
self-diagnosing problem
none of us feel perfectly at home
in modernity because in fact we are not
at home we can't be even you know the
the world that we live in is not the
world of our grandparents it's not even
the world that we were born into we live
as adults in a world that uh just
literally didn't exist when we were born
and it's not even the world that our
children are born into unless they were
literally born yesterday right exactly
it's changing so fast it can't be but
that said you either are feeling
constantly confused about what you're
seeing and hearing and you don't know
what to think or you've found something
that allows you to move forward and even
if you can't fully manage what it is
you're confronting it should surprise
you less and less and so we provide a
couple of tools in the book we talk
about the precautionary principle and we
talk about chesterton's fence which are
really two sides of the same coin and if
your life has been built around
the idea that whatever the newest thing
is the you know the latest wisdom is
what you uh were brought up on then in
all likelihood you are you know taking
various drugs to correct for various
things which may very well be the
symptoms of the last drugs you you took
uh you you know you may be engaging in
all kinds of behaviors uh to fix
mysterious problems maybe you can't
sleep and you know so you're you're
taking some aggressive mechanism to deal
with that the basic point is back away
from that which is novel and untested
and in the direction of that which is
time tested and it will result in a
decrease in anxiety and increase in your
control over your own life and the way
you'll tell is that you will feel less
confused more of the time can you guys
Define chesterton's fence I thought that
was a really great part of the book
yeah
um so G.K. Chesterton was a 20th century
political philosopher maybe I'm not sure
exactly how he would have defined
himself but um of the many
contributions that he made
um to you know I think he was a
conservative
um but one of the many contributions
that he made was imagining two people on
a walk together and coming across a
fence that appeared to be in their way
and person a says let's get rid of the
fence and person B says well what's it
here for person a says I don't care it
doesn't matter I just want it gone
and person b or Chesterton I suppose uh
in My Telling here says there's no way
that I should let you get rid of the
fence until and unless you can tell me
what its function is if you can tell me
what its function is or was originally
here for then maybe we can talk about
whether or not it's time for it to go
but until you can explain to me what the
function is or was then there's no way
that I should allow you to get rid of it
simply because you see it as an
inconvenience
so um you know the appendix that Brett
already mentioned
um is is a perfect example of this and
we talk about in the book things like
you know chesterton's breast milk you
know should we be abandoning
breastfeeding
um you know we are abandoning
breastfeeding to the degree that we're
doing so at our Peril uh chesterton's
play not letting children have long
periods of unstructured play in which
adults are not monitoring them and are
not telling them not to bully each other
even though bullying is bad yes but
allowing children to figure out for
themselves in mixed age groups how it is
to navigate risk themselves that is how
those children will grow into competent
young people and you know if you do
arrive at 19 having been drugged into
submission and having had your parents
clear all of the hazards out of the way
for you the thing you can do is start
exposing yourself to risk and risk is
risky you know um this is you know this
is both a tautology and also shocking to
people because you know wait you're
telling me I need to to expose my
children to risk well if you want to
guarantee that your child will make it
to their 18th birthday alive then sure
put them in a cocoon right that's the
way to make sure that their body will
get to 18 is to reduce all risk from
their lives and protect them from
everything but will they have the mind
of an 18 year old at that point no they
will not so you trade a little bit of
security that your child will survive
and you know every time I say anything
like that I get chills you know we have
children that are teenagers now and the
idea that one of them would die and that
they would die taking a risk that we had
implicitly or explicitly encouraged I
don't know how you go on right and you
know parents do but I don't know how you
go on but the bigger risk is that they
get to 18 and they're incompetent they
can't think and they don't know how to
navigate the world especially now where
the world and the future will look
nothing like it did in the past they
need to be able to problem solve and the
way to do that is to be exposed to as
many situations in which they are
navigating on their own as early as
possible selection has really given
parents the job of both managing risk
and not fully managing risk in other
words it's not that you don't protect
your children but you want to protect
them at a level where they do make
mistakes and those mistakes do come back
to haunt them and it causes them to be
wise adults who are capable of managing
risk when the stakes are
much higher and that's really the
question it's not do you want your child
to be safe of course you do but you want
them to be safe across their entire life
and if you protect them too much when
they are young they will not be able to
do it when they are older and the risks
are frankly much larger
yeah one of the things that I find most
intoxicating about you guys your book
your podcast is Nuance complexity like
recognizing that by being reductionist
by boiling things down to you know make
them as simple as possible but no simpler whatever
the quote is that there is a point at
which you can reduce something so far
that you lose what's really going on
um
and finding our way through all of this
complexity though is incredibly
difficult
so as it comes to your own parenting
style how have you guys
employed this the idea that I'm most
interested of yours is is the idea that
the magic happens in the friction so
whether it's male female whether it's
right left it's understanding or safety
and risk it's understanding that
either side is problematic how have you
guys navigated that complexity
well we
gambled neither of us knew
particularly much about rearing children
at the point where we ended up with them
and we more or less gambled on them too
much of a surprise to me at the point
that we ended up with them no no
certainly we knew they were coming for
many months but
um but from the point of view of what
one does to raise children well we
hadn't had a lot of experience with
young kids they just hadn't been in our
lives and we gambled on an idea that I
still think it's not entirely obvious
why it works at all
but if you treat your children
more or less at least cognitively you
just shoot way over their heads right
you talk to them like like adults from
very early on and they cannot respond in
kind but they get much more than you
would think based on what they can say
in response and so we have been
extremely open with our children about
the hazards in the world that they face
and the hazards in our family have been
frankly greater than than most children
would be confronted with at least in the
WEIRD world yeah
um we have been honest with them we you
know we have an explicit rule in our
household and the children could recite
it uh without thought right you are
allowed to break your arm or your leg
you are not allowed to damage your eyes
you're not allowed to damage your skull
you're not allowed to damage your neck
or your back right now when you say that
to a kid and they realize actually it's
not that I'm being told no no no no no
no I'm being told I am actually allowed
to break my arm and nobody is going to
necessarily you know be concerned you
know yes we'll take care of you no
matter what and if you damage your eyes
we'll take care of you then too but the
basic point is there's just a
fundamental distinction between damaging
things which repair pretty well and
damaging things which don't and that
ought to exist in your mind you know
every time you leave the house
understanding that there are certain
things you know that it's not that you
want to avoid bad things and uh go
towards good things and that there's a
whole spectrum of bad and uh you may
need in an instant to navigate you know
if you're driving down the highway
yes the first job is don't crash right
don't crash is a good rule but you can't
always not crash and sometimes you've
got a choice about what you crash into
or how you crash and you know if you've
just got everything filed as a binary
then you're you're in much more danger
so being clear with kids about the
subtleties and the nuance and frankly
about the bind that you're in our
children know that we have made a
conscious decision that in order that
they can manage risk as adults they have
to face risks as children that could
potentially cost them their lives you
know we took our kids into the Amazon
for example that's not a safe place to
be but they're also the kind of kids who
can handle it now so one of the things
that was very important to us was that
our children literally learn how to fall
that when they were climbing up on
things on trees or in jungle gyms that
they would launch themselves
intentionally so they would learn how to
fall safely but metaphorically learning
how to fall is the other thing that you
learn once you are engaged in literally
learning how to fall and maybe maybe
that is the kind of risk that we are in
fact trying to prepare our children for
and that we are arguing that parents
everywhere should be preparing their
children for how to fall safely so that
you get up and can live to maybe not
fall again but if you do Fall Again live
to get up again another day yeah
actually it occurs to me right now the
engineers know this backwards and
forwards right
fail safe that's what you want a system
that fails safely and building that into
your kids is is an essential an
essential skill
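the engineering notion of fail safe invoked here is just that when a component fails the system should default to its least dangerous state rather than fail silently into a dangerous one a toy sketch the sensor interface and the heater example are hypothetical illustrations not anything from the conversation

```python
def read_temperature(sensor):
    """Try a reading; report failure as None rather than crashing the controller."""
    try:
        return sensor()
    except Exception:
        return None

def heater_command(sensor, setpoint=20.0):
    """Fail-safe rule: any uncertainty resolves to the safe state (heater off)."""
    reading = read_temperature(sensor)
    if reading is None:
        return "off"  # sensor fault: fail safe, never stay stuck on
    return "on" if reading < setpoint else "off"

def broken_sensor():
    raise IOError("sensor disconnected")

print(heater_command(lambda: 15.0))   # "on"  - cold room, working sensor
print(heater_command(lambda: 25.0))   # "off" - warm room
print(heater_command(broken_sensor))  # "off" - failure lands in the safe state
```

the design choice is that the unknown case is routed to the outcome whose worst consequence is mildest which is the same asymmetry being urged for kids a fall that breaks an arm is recoverable one that damages eyes or spine is not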
so one of the things you talked about in
the book that I was like whoa was when
your son broke his arm I think he was
older because I know when the ones broke
it going down the stairs that one
required immediate medical attention uh
but there was a time where he broke his
arm and it was like a couple days I
think before you actually went and had
it looked at
um
and there's like an actual principle
behind strengthening the bone that you
guys go through and I was very impressed
um talk about that including the notion
that as this is like you guys have
overcome one of the reasons that I
didn't become a father was it seemed so
self-evident to me that you had to do
things like let your kids take risks you
know within confines that you had to
make things hard for them within
confines uh and I wasn't sure that I
would enjoy that process so it was
obvious I would have to do it and not
obvious that I would enjoy it and when
you guys talked about like how we sort
of over coddle things which I could
immediately empathize with I get why
people do it why you want to wrap a
broken arm in the thickest cast you can
possibly find but that even that isn't
always the right answer yeah no it's
really not our brains are
anti-fragile and our bones are
anti-fragile and they they become
stronger with stressors and Society
seems to be imagining that what we all
are is fragile and by creating
conditions that
imagine that we're fragile that becomes
the reality we become more and more
fragile and less anti-fragile so uh in
the case of our older son or it was
our younger son but when he was older uh
who broke his arm in the last day of
camp we did get him to
um to an emergency room that day it was
several hours but it was that day and
they told us that at the point that we
got back home to Portland which was a
several hour drive and it was
going to be many days before we got
there that we should go see an
orthopedist and to have a cast put on
so he spent several
days splinted and with some
pain medication
um before we ended up going home to
Portland where we did not get a cast
but the important thing is this is not
an experiment we ran on our child before
I had learned it on myself right so
um evolutionarily speaking uh there is a
logic to what one does with broken bones
and it's a very different logic lots of
creatures don't heal so well horses
famously don't heal very well and the
reason for this is fairly obvious a
horse a wild ancestral horse that had a
broken limb wasn't going to recover that
is to say once an animal was hobbled by
a broken limb it was going to be picked
off by a predator so the selection that
creates the capacity to repair wasn't
there on the other hand
sloths which fall out of trees fairly
regularly but don't depend on their
ability to get away from predators
through speed actually survive very
frequently and when we look at sloth
carcasses they very regularly have
breaks that have healed so creatures
that can heal have that capacity our
arms and humans are such creatures we
are such creatures so one wants to be
very careful right if a bone is
misaligned you then want to utilize
medicine in order to get the healing
process to work correctly so it doesn't
heal in a misaligned way but if you've
got a fracture and you haven't
misaligned something there's a whole
other logic that takes over immobilizing
the arm isn't what we're built for in
fact what you're built for is to have
pain and inflammation do the job and the
result when I broke my arm and I just
said you know what I've thought for my
whole life why is it that we rush to get
a doctor to
um to immobilize this and then we
atrophy and have it removed and we have
to rebuild our strength maybe that's not
how it's supposed to be logically
Evolution has prepared us for this let's
see what happens when I broke my arm and
I was certain that it was not misaligned
and I let it go
what I found out was that one has to be
very careful for the first day or two
until you learn what it is that you're
capable of doing but your capacity
begins to return very very quickly and
the degree to which I was better off the
time that I
fractured my arm and did nothing medical
than the time that I fractured my arm
and did the standard medical thing and
had the cast was night and day different
and the fact is we talked to Toby our
younger son when he broke his arm and we
told him what we were thinking and he
had watched me go through the experiment
and he elected to go through it himself
and lo and behold the same logic applied
in his case
yeah that's really interesting and feels
like there's a lot that we can
extrapolate from that in terms of our
real lives one idea that I find really
enticing and I'm sad that I didn't go
through it when I was a kid are Rites of
Passage do you guys think about that
all the time uh we have dispensed so
this is a classic chesterton's fence
issue where it used to be that there
were these you know Hallmarks of having
passed through a certain developmental
State and at some point I think people
started to feel that these things were
primitive and they dispensed with all of
them and much to our Peril because it
you know what you are is a creature
that starts out utterly helpless and
ends up incredibly capable but there are
moments at which you take on new
responsibility right now it's arbitrary
is an 18 year old really an adult in
many ways yes in some ways no it's not
really a moment at which you become an
adult but you do need a moment at which
we say actually at this point these
responsibilities are ones we believe you
can handle and going forward that's what
they are and the ceremony itself
instantiates it the ceremony helps make
it real and maybe it's at 18 maybe it's
at 15 maybe it's at 13 depending on the
tradition maybe it's counting in a
different way in cases
um where perhaps adherence to the
calendar is not the thing but you know
the moment now you are a man now you are
a woman
has got to be an empowering one and it's
one of the things that is almost
universally lost for us WEIRD people
and you know it may well be the case
just as is the case with something like
follow through in sport what you do
after you hit the ball actually does not
matter but it is very important that you
intend to follow through and in that
same way going through your life knowing
that at this moment I'm going to be
expected to do this thing whether it's a
Vision Quest or or whatever it may be
knowing that that moment is coming and
that on the far side of it you will be a
different person is a developmental
process in and of itself so it's very
likely the thing that happens as you
anticipate this uh rite of passage that
is really the important developmental
thing but we've just dispensed with them
all
so we've talked a lot about uh Evolution
and all the different things and ways
that it manifests in our life and I want
to bring now people back to where we
started in the book that
you know where we're at if instead of
a nuclear clock coming towards 12 you
guys would say that from just a societal
uh standpoint we're edging up somewhere
in there
um talk to us about the fourth Frontier
but to understand the fourth Frontier I
think we have to understand the first
three Frontiers uh so if you can walk us
through that it was a really interesting
idea it was the part of your book that I
had to read twice because I was like
whoa there's really something
fascinating here and it it hints at a
very complex answer to a very complex
problem uh it was entirely novel for me
I've never heard this idea explored
before and I think that it'll be really
helpful for people to see that you've
thought not just through the problem but
through potential Solutions
well the first thing to realize is that
all evolved creatures are effectively in
a search for opportunity and that
opportunity looks like for an average
creature under average circumstances if
it's a sexually reproducing creature the
average number of offspring that it will
produce that reaches reproductive
maturity themselves will be two doesn't
matter if they produce a hundred babies
or three the average that will reach
that number is two and the reason is
because the population isn't growing or
Contracting so two parents will end up
replacing themselves and no better at
least on average
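[Editor's note: the replacement arithmetic described here can be sketched as a toy simulation, with all numbers hypothetical. Whether a pair produces three offspring or a hundred, in a stable population the survival probability balances fecundity so that on average two offspring reach maturity.]

```python
import random

def surviving_offspring(babies, survival_prob, rng):
    """Count offspring from one pair that reach reproductive maturity."""
    return sum(rng.random() < survival_prob for _ in range(babies))

def mean_survivors(babies, pairs=100_000, seed=0):
    # In a non-growing, non-contracting population, survival probability
    # balances fecundity so each pair merely replaces itself: babies * p = 2.
    rng = random.Random(seed)
    p = 2 / babies
    total = sum(surviving_offspring(babies, p, rng) for _ in range(pairs))
    return total / pairs

# Whether a pair has 3 babies or 100, survivors average out to ~2.
print(round(mean_survivors(3), 1), round(mean_survivors(100), 1))  # prints 2.0 2.0
```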
when you have succeeded evolutionarily
you find some opportunity that allows
that rule to be broken right a creature
that passes over a mountain pass and
ends up in a valley in which it has no
competitors may leave a hundred times as
many offspring as it would have if it
had remained in its initial habitat and
so these places where creatures discover
an unexploited or underexploited
opportunity and their population can
grow are Frontiers and the feeling of
growth is the feeling of evolutionary
success the problem is all of these
things are limited right no matter what
opportunity you've found
the population will grow until that
opportunity is no longer under exploited
at which point the zero-sum Dynamics
will be restored but let's just lay out
the the first three types of Frontier
before perhaps you um you expand on what
the fourth Frontier is so
um the the first type of Frontier being
the one that most people think of when
you hear the word when they hear the
word frontier which is a geographic
Frontier so we begin the book by talking
about the beringians the first Americans
who came over
um through Beringia across what is
now the Bering Strait from Asia into the
new world something between 10,000 and
25,000 years ago they were coming into two
continents that had never before been
inhabited by humans and that was a vast
Geographic Frontier
the second type of Frontier might be
called a technological Frontier in which
you innovate something that allows you
to make use of resource that you
heretofore had not had access to so for
instance the terracing of hillsides to
allow water to be held and agricultural
systems to be done where
previously all the water would have run
off taking the nutrients in the water
with it that would be an example of a
technological Frontier and then the
third type of Frontier which is
ubiquitous throughout human history is a
transfer of resource Frontier and this
is really not a frontier it's it's just
it's theft right and so the beringians
coming into the new world for the first
time again 10,000 to 25,000 years ago were
experiencing a geographic Frontier
thousands of years later when Europeans
came into the new world from the other
direction from from the East they landed
in a space that already had tens of
millions of people in it and basically
took over and that was a transfer of
resource moment a transfer of resource
Frontier basically theft so Geographic
Frontiers and technological Frontiers
are not inherently theft transfer
resource is and so we are proposing a
fourth Frontier so I'll just say a
transfer of resource is the explanation
for almost all wars and genocide from
the point of view of some population the
resources of some other population that
cannot be defended are as if a frontier
but the idea of the overarching idea is
that all creatures are seeking these
non-zero-sum opportunities that they are
experienced as growth that they are
inherently
self-destabilizing that they
cause the growth of populations that
then restores the zero-sum Dynamics
restores the austerity which doesn't
feel so good and the population is then
in the search for the next non-zero-sum
growth Frontier
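[Editor's note: the frontier dynamic described here, rapid growth that feels like success until the opportunity is fully exploited, is essentially logistic growth. A minimal sketch with invented numbers:]

```python
def logistic_step(pop, r=0.3, capacity=10_000):
    """Discrete logistic growth: the population expands quickly while the
    new valley is underexploited, then saturates as it fills up."""
    return pop + r * pop * (1 - pop / capacity)

pop = 100.0                      # founders crossing the mountain pass
rates = []
for _ in range(60):
    new_pop = logistic_step(pop)
    rates.append((new_pop - pop) / pop)
    pop = new_pop

print(round(pop))                          # settles at the carrying capacity: 10000
print(rates[0] > 0.25, rates[-1] < 0.01)   # boom early, zero-sum restored late
```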
the problem is we can't keep doing that
right that process made us what we are
and we've been tremendously successful
at it but there are no more
Geographic Frontiers on Earth we've
found it all technologically we've done
an excellent job of figuring out how to
exploit the world in fact overexploit
the world transfer of resource is world
destabilizing not only is it a
Despicable process but it is a lethal
process from the point of view of the
danger it puts us at we simply have
Weaponry that is too powerful we are too
interconnected and so in a sense our
Fates are all now linked and we have to
agree to put that uh competition aside
and then the question is well what do we
do do we face
do we accept the zero-sum Dynamics and
live with austerity that doesn't sound
like a a very good sales pitch even even
if it was what we had to do so what we
propose in the book is that there's
actually an alternative to this
that one can produce a steady state that
feels like growth to the people who
experience it without having to discover
new resource and that may sound
Preposterous it may sound utopian we are
not utopians we regard Utopia as the
worst idea human beings ever had or at
least very close to the top of that list
but there's nothing uh undoable about a
system that feels like Perpetual growth
in the same way there is nothing utopian
about the idea that it's always
Springtime inside your house right it's
always Pleasant inside your house that's
not a violation of any physical law it's
just a simple matter of the fact that we
can use energy to modulate the
temperature with the negative feedback
system and we can keep it very pleasant
in your house all the time and the point
is can that be done in our larger
environment such that human beings are
liberated to do the things that we are
uniquely positioned to do to generate
right
Beauty to experience love to feel
compassion to enhance our understanding
of the world all of those things are the
kinds of things that are worthy of us as
an objective
and what we need in order for more
people to spend their time pursuing
those things is a system in which we are
freed from competing one lineage against
the others uh for a limited amount of
resources a competition in which we are
otherwise condemned to violence
against each other in order to pursue
these things so in essence the fourth
Frontier is a steady state designed to
liberate people we should say it is not
something we believe we can blueprint
from here we know enough to navigate our
way in that direction but we cannot
blueprint it is something we will have
to prototype and navigate to but the
good news is
although we here probably would not live
to see the final product
things would start getting better
immediately Upon Our recognition that
pursuing the fourth Frontier was the
right thing to do suddenly there would
be a tremendous amount of useful work to
be done in discovering what the various
mechanisms of that new way of being are
all right you guys are gonna have to
give me a little more than that in the
book you talk about you give an example
and it was the thing that really allowed
me to begin to understand how we could
achieve a steady state that gave us
those things
um
I don't know if you remember the example
that you gave in the book I do so if you
don't let me know and I will refresh
your memory but I'm talking about the
Mayans is that right in the book you
specifically talk about craftsmanship
but if you've got something for me on
the Maya I'll take it yeah well I mean
we I think we do we do both right
um and
uh let's see well maybe maybe remind us
of exactly what we say about
craftsmanship I remember that we talk
about it but I'm not sure exactly what
the context is here the idea was
basically that so we have this inherent
desire for growth but it isn't
necessarily growth itself it's sort of
and now I'm using my own words the
neurochemical state of feeling this deep
sense of satisfaction at having done
something of import is probably the
easiest way to think of it and that gave
me something to grasp onto because I so
I often get asked the question I've had
financial success in my life and the
irony of my life is that I'm constantly
going around trying to convince people
that money is not going to do for you
what you think it will it's very
powerful but it isn't what most people
think they think it will make them feel
better about themselves and money is
just absolutely incapable of doing that
and so when you realize the only thing
that matters is how you feel about
yourself you start playing a game of
neurochemistry and so this idea of
craftsmanship felt like that to me yeah
so you know recognizing
the long-term hormonal glow that you get
from producing something of lasting
value and beauty and meaning in the
world as opposed to only being exposed
to short-term stuff you know the
difference between buying something at
Ikea and putting it together with Allen
wrenches on your floor and of either
making yourself or coming to know a
Craftsman who really builds things with
care and knowledge with the intention
that you will be able to pass this on to
your children or your friends or you
know whomever later on this is a piece
with with lasting Beauty lasting
function that was built with someone who
knew something about the wood or the
metals or whatever the materials are
this is a way into finding the kinds of
meaning that a fourth Frontier mentality
can provide yeah I think the distinction
is one between
um the satisfaction of Life coming from
consuming which is inherently empty
versus uh producing and producing
doesn't necessarily have to mean stuff
it can be meaning or insight or any one
of a number of other things but what we
say about the the Maya in the book what
we argue is that they very conspicuously
this is an extremely long-lived
civilization
um thousands of years of remarkable
success and they had as one of the
things that they produced in all of
their city-states they produced these
incredible monuments which are actually
not what they appear to be we have spent
a lot of time in uh in Mayan territory
and these things look like pyramids in
the sense that the Egyptians produced
them but they are not they are in fact
growing structures so these things got
bigger and bigger over time the longer a
city-state existed in the same place and
then there's the hidden version of this
which are an incredible network of Roads
Stone roads that exist between the
city-states called sacbeob
in any case the point is the Maya were
producing things that stood in for
population growth they were taking some
fraction of their productivity and they
were dedicating it to these massive
Public Works projects and the thing
about a massive Public Works project is
that it brings a kind of reality and
cohesion to the people involved I mean
imagine yourself
living in one of these amazing cities
and the public monuments made of stone
that speak to the power and the
durability of your people are you know
part of this uh this public space
these things allow the following process
the pyramid is not just a line item on a
budget where you build the pyramid once and
it's done
but in fact what you do is you augment
it well then in good years
you will have that to augment and you
will take some fraction of the
productivity that might be turned into
more people which would then result in
more austerity you can invest it in
these Public Works projects and then in
a lean year instead of having not enough
to go and feed all of the mouths that
have been created you can just simply
not augment the Public Works project
that is a natural damper for the kind of
ebb and flow the boom and bust that we
have suffered so mightily under under
our modern economic systems so
the production of meaning the production
of uh
shared space that actually augments the
ability of people to interact with each
other these things are models of what we
should probably be seeking as a society
A system that tamps down the
fluctuations that provides Liberty to
people that's really the key thing right
we want realized Liberty for individuals
so that they can pursue what is
Meaningful rather than satisfying
themselves with consumption that's sort
of a rough outline of what a fourth
Frontier would look like
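[Editor's note: the damper logic described here can be sketched as a toy model with invented numbers. If good-year surplus becomes population, lean years bring famine; if it becomes stonework, lean years simply pause construction:]

```python
import random

def famine_years(policy, years=200, seed=1):
    """Toy damper model: harvests fluctuate from year to year. Under
    'growth', good-year surplus becomes more mouths to feed; under
    'monument', it becomes public works that lean years simply pause."""
    rng = random.Random(seed)
    population = 100.0
    famines = 0
    for _ in range(years):
        harvest = rng.uniform(100, 140)       # each person needs 1 unit
        surplus = harvest - population
        if surplus >= 0:
            if policy == "growth":
                population += 0.5 * surplus   # surplus -> more people
            # 'monument': surplus -> stonework, population unchanged
        else:
            famines += 1
            population += surplus             # shortfall costs lives
    return famines

print(famine_years("growth") > 0, famine_years("monument"))  # prints True 0
```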
nice I love it
um in the book I can't remember if it
was Liberty that you were talking about
specifically but you talk about it's an
emergent property I assume you mean the
same with Liberty
how do we create the bed from which
Liberty will emerge
well what we argue in the book is that
Liberty is a special value and the
reason that is a special value is
they're really two ways to to delineate
it you can be technically free but not
really free right if you're concerned
about being wiped out by a health care
crisis or you're concerned that you you
may lose your job and have to find
another in a different industry you're
not really free even if technically you
could go out and start an oil company
it's not going to happen so what we
argue Is that real Liberty realized
Liberty is Liberty you can act on and in
order for a person to be liberated
their more mundane concerns their
safety uh their sustenance all those
things have to be taken care of and
therefore we can know that we have
succeeded when somebody has real Liberty
that they are capable of acting on it's
a proxy and what we argue is that the
objective ought to be to provide real
Liberty for as many people as possible
hopefully ultimately everyone would be
liberated to do something truly
remarkable rather than only Elites
having that freedom
I guess I would just say um
as as high a fraction of the population
as possible uh if we say as many people
as possible it might sound like we're
also interested in maximizing population
growth and of course you know of course
we're not you know I think we will we
will Peak hopefully at some point soon
and then population may start going down
through attrition but then at every
moment in uh in human history going
forward
um the greatest
number of people possible who have
maximum Liberty uh will be a success
and let me just uh
refine that slightly
the objective is the maximum number of
liberated people but not all living
simultaneously ultimately the way to
Grant The Marvelous liberated life to
the maximum number of people is to get
sustainable at the level that humans can
live indefinitely on the planet rather
than having a clock ticking where we
just simply don't have the resources to
continue doing what we're doing with so
much disruption in Tech and finances you
have to get educated check out this
episode with Raoul Pal to learn about how
to protect yourself financially do you
think that AI presents a mega threat to
our economy it's very exciting
technology