Transcript
RL4j4KPwNGM • Max Tegmark: AI and Physics | Lex Fridman Podcast #155
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/lexfridman/.shards/text-0001.zst#text/0486_RL4j4KPwNGM.txt
Kind: captions
Language: en
the following is a conversation with max tegmark, his second time on the podcast
in fact the previous conversation was
episode number one
of this very podcast he is a physicist
and artificial intelligence researcher
at mit co-founder of the future of life
institute
and author of life 3.0 being human
in the age of artificial intelligence
he's also the head of a bunch of
other huge fascinating projects and has
written
a lot of different things that you
should definitely check out he has been
one of the key humans
who has been outspoken about long-term
existential risks of ai
and also its exciting possibilities and
solutions to real-world problems
most recently at the intersection of ai
and physics
and also in re-engineering the
algorithms
that divide us by controlling the
information we see
and thereby creating bubbles and all
other kinds of
complex social phenomena that we see
today in general he's one of the most
passionate and brilliant people
i have the fortune of knowing i hope to
talk to him many more times
on this podcast in the future quick
mention of our sponsors
the jordan harbinger show, four sigmatic mushroom coffee, betterhelp online therapy, and expressvpn
so the choice is wisdom caffeine
sanity or privacy choose wisely my
friends
and if you wish click the sponsor links
below to get a discount
to support this podcast as a side note
let me say that
many of the researchers in the machine
learning and
artificial intelligence communities do
not spend much time
thinking deeply about existential risks
of ai
because our current algorithms are seen
as useful but dumb
it's difficult to imagine how they may
become destructive
to the fabric of human civilization in
the foreseeable future
i understand this mindset but it's very
troublesome
to me this is both a dangerous and
uninspiring perspective
reminiscent of the lobster sitting in a
pot of lukewarm water
that a minute ago was cold i feel a
kinship with this lobster
i believe that already the algorithms
that drive our interaction on social
media
have an intelligence and power that far
outstrip the intelligence and power of
any one human being
now really is the time to think about
this to define the trajectory
of the interplay of technology and human
beings in our society
i think that the future of human
civilization very well may be at stake
over this very question
of the role of artificial intelligence
in our society
if you enjoy this thing subscribe on
youtube review it on apple podcast
follow on spotify
support on patreon or connect with me on
twitter
lex fridman and now here's my
conversation
with max tegmark so people might not
know this but you were actually episode
number one
of this podcast just a couple of years
ago and
now we're back and it so happens that a
lot of exciting things happened in
both physics and artificial intelligence
both fields that you're super passionate
about
can we try to catch up to some of the
exciting things happening in
artificial intelligence especially in
the context of
the way it's cracking open the different
problems of the sciences
yeah i'd love to especially now as we
start 2021 here
it's a really fun time to think about
what were the biggest
breakthroughs in ai not the ones
necessarily that media wrote about but
that
really matter, and what does that
mean for our ability to do better
science
what does it mean for our ability to
help people around the world and what
does it mean for new problems that they could cause if we're
not smart enough to avoid them so you
know
what do we learn basically from this yes
absolutely so
one of the amazing things you're part of
is the ai institute for artificial
intelligence and
fundamental interactions what's up with
this institute
what are you working on what are you
thinking about the idea
is something i'm very on fire with which
is basically ai meets physics
and you know it's been almost five years
now since i shifted my own
mit research from physics to machine
learning and in the beginning i noticed
a lot of my colleagues, even though they were polite about it, were kind of wondering, what is max doing, what is this weird stuff, has he lost his mind? but then gradually i, together with some colleagues, were able to persuade more and more
of the other professors in our
physics department to get interested in
this
and now we got this amazing nsf center: 20 million bucks for the next five years, for mit and a bunch of neighboring universities here.
and i've noticed that those colleagues who were looking at me funny have stopped asking what the point of this is, because it's becoming more clear.
and i really believe that, of course, ai can help physics a lot to do better physics, but physics can also help ai a lot, both by
building better hardware my colleague
marin soljacic for example
is working on an optical chip
for much faster machine learning where
the computation is done
not by moving electrons around but by moving photons around: dramatically less energy use, faster, better. we can also help ai a lot, i think, by
having a
different set of tools and a different
maybe more audacious attitude
you know, ai has to a significant extent been an engineering discipline, where you're just trying to make things that work, and you're maybe more interested in selling them than in figuring out exactly how they work and proving theorems that they will always work.
right. contrast that with physics. you know, when elon musk sends a rocket to the international space station, they didn't just train it with machine learning: let's fire it a little more to the left, a bit more to the right, oh, that also missed, let's try here. no, we figured out newton's laws of gravitation and other things, and got a really deep fundamental understanding,
and that's what gives us such confidence
in
in rockets and my vision is that
in the future all machine learning
systems that actually have impact on
people's lives will be understood at a
really really deep level
right so we trust them not because some
sales rep told us to
but because they've earned our trust, and for really safety-critical things we can even prove that they will always do what we expect them to do.
that's very much the physics mindset so
it's interesting if you look at big
breakthroughs that have happened in
machine learning
this year you know from dancing robots
you know it's pretty fantastic not just
because it's cool but
if you just think about not that many
years ago
this youtube video at this darpa
challenge where the mit robot
comes out of the car and face plants
yeah
how far we've come in just a few years. similarly, alphafold 2 crushing the protein folding problem.
we can talk more about implications for
medical research and stuff but
hey you know that's huge progress
you can look at gpt-3, which can spout off english text which sometimes really, really blows you away.
you can look at google deepmind's muzero, which doesn't just kick your butt in go and chess and shogi, but also in all these atari games, and you don't even have to teach it the rules now.
what all of those have in common, besides being powerful, is that we don't fully understand how they work.
and that's fine if it's just some
dancing robots and the worst thing that
can happen is the face plant right
or if they're playing go and the worst
thing that can happen is that they make
a bad move and lose the game right
it's less fine if that's what's
controlling your self-driving car
or your nuclear power plant and uh
we've seen already that even though
hollywood
had all these movies where they try to
make us worry about the wrong things
like machines turning evil the actual
bad things that have happened with
automation have not been machines
turning evil; they've been caused by overtrust in things we didn't understand as well
as we thought we did right even
very simple automated systems like
what boeing put into the 737 max, which killed a lot of people. was that little simple system evil? of course not,
but we didn't understand it as well as
we should have right
and we trusted without understanding
exactly, we overtrusted; we didn't even understand that we didn't understand,
right
the humility is really at the core of
being a scientist
i think step one if you want to be a
scientist is don't ever fool yourself
into thinking you understand things when
you actually don't
yes right that's probably good advice
for humans in general
i think humility in general is good advice, but in science it's so spectacular. why did we have the wrong theory of gravity all the way from aristotle onward until galileo's time? why would we believe something
so dumb
as that if i throw this water bottle
it's going to go up with constant speed
until it realizes that its natural
motion is down it changes its mind you
know
because people just kind of assumed aristotle was right, he's an authority, we didn't question it.
why did we believe things like that the
sun is going around the earth
why did we believe that time flows at
the same rate for everyone until
einstein
same exact mistake over and over again
we just
weren't humble enough to acknowledge
that we actually didn't know for sure
we assumed we knew so we didn't discover
the truth because we assumed
there was nothing there to be discovered
right there was something to be
discovered about the
737 max, and if we had been a bit more suspicious and tested it better, we would have found it.
and it's the same thing with most harm
that's been done
by automation so far, i would say. i don't know, did you ever hear of a company called knight capital?
no
so good that means you didn't invest in
them earlier
they deployed this automated trading
system yes
all nice and shiny they didn't
understand it as well as they thought
and it went about losing 10 million
bucks per minute
yeah, for 44 minutes straight, until someone presumably was like, oh, shut this thing off. was it evil? no,
it was again misplaced trust something
they didn't fully understand right and
um
there have been so many um even when
people have been killed by
robots, it's quite rare still, but in factory accidents it's in every single
case been not
malice just that the robot didn't
understand that a human is
different from an auto part or whatever
so this is where i think there's so much opportunity for a physics approach, where you just aim for a higher level of understanding.
and if you look at all these systems that we talked about, from reinforcement learning systems and dancing robots to all these neural networks that power gpt-3 and go-playing software, they're all basically black boxes,
not so different from when you
teach a human something you have no idea
how their brain works right
except the human brain at least has been
error corrected
during many many centuries of evolution
in a way that some of these systems have not. and my mit research is entirely focused on
demystifying this black box
intelligible intelligence is my slogan
that's a good line intelligible
intelligence. yeah, we shouldn't settle for something that just seems intelligent; it should also be intelligible, so that we actually trust it because we understand it.
like again elon trusts his rockets
because he understands newton's laws and
thrust and how everything works. and can i tell you why i'm optimistic about this? yes.
i think we've made a bit of a mistake, where some people still think that somehow we're never going to understand neural networks, and we're just going to have to learn to live with this very powerful black box.
basically, for those who haven't spent time building their own, it's super simple what happens inside: you send in a long list of numbers, and then you do a bunch of operations on them, multiply by matrices, et cetera, and some other numbers come out, that's the output of it.
and then there are a bunch of knobs you
can tune
and when you change them you know it
affects the computation
the input output relation and then you
just give the computer some definition
of good
and it keeps optimizing these knobs
until it performs as
good as possible and often you go like
wow that's really good
this robot can dance or this machine is
beating me at chess now
and in the end you have something which
even though you can look inside it you
have
very little idea of how it works you
know you can print out
tables of all the millions of parameters
in there
is it crystal clear now how it's working
you know of course not right
so many of my colleagues seem willing to
settle for that and i'm like
no that's like the halfway point
some have even gone as far as guessing that the inscrutability of this is where some of the power comes from, some sort of mysticism.
i think that's total nonsense. i think the real power of
neural networks comes not from
inscrutability but from
differentiability and what i mean by
that
is simply that
the output changes only smoothly
if you
tweak your knobs and then you can use
all these powerful methods we have for
optimization in science we can just
tweak them a little bit and see did that
get better or worse
that's the fundamental idea of machine
learning that the machine itself
can keep optimizing until it gets better
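a minimal sketch of that knob-tuning loop, in numpy (illustrative code, not from the conversation; the toy data, two-knob model, and learning rate are invented for the example):

```python
# two "knobs" (w, b) tuned by nudging them downhill on a definition of bad (the loss)
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)          # inputs: a long list of numbers
y = 3.0 * x + 0.5                    # targets the model should learn to reproduce

w, b = 0.0, 0.0                      # the knobs
lr = 0.1                             # how far to turn them each step

for step in range(500):
    pred = w * x + b                 # numbers in, numbers out
    err = pred - y
    loss = np.mean(err ** 2)         # the "definition of good" (lower = better)
    # because the output changes smoothly with the knobs, the gradient
    # tells us which way to turn each knob to improve
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(f"learned w={w:.3f}, b={b:.3f}, loss={loss:.6f}")  # ~3.0, ~0.5
```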
suppose you instead wrote an algorithm in python or some other programming language, and what the knobs did was just change random letters in your code. that would just epically fail, right? you change one thing, and instead of print it says some misspelled word: syntax error. you don't even know whether that was for the better or for the worse, right?
this to me is what i believe is the fundamental power of
neural networks and just to clarify the
changing of the different letters in a
program
would not be a differentiable process it
would make it an invalid program
typically and then you wouldn't even
know if you changed more letters if it
would make it work again right
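to make that contrast concrete, a toy sketch (my own example, with an invented misspelling, not code from the episode): perturbing a continuous parameter degrades the output smoothly, while "perturbing" source code by one character typically just breaks it, with no signal about better or worse:

```python
import math

def smooth_model(w, x):
    return math.tanh(w * x)

x = 0.7
for w in (1.00, 1.01, 1.02):        # tiny knob tweaks -> tiny output changes
    print(w, smooth_model(w, x))

good_src = "print('hello')"
bad_src = good_src.replace("print", "srint")   # mutate one letter
try:
    exec(compile(bad_src, "<mutated>", "exec"))
except (SyntaxError, NameError) as e:          # the program just fails outright
    print("mutated program failed:", e)
```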
so that's the magic of neural networks: the differentiability, that every setting of the parameters is a program, and you can tell whether it's better or worse. right. and
so you don't like the poetry of the
mystery of neural networks as the source
of its power
i generally like poetry but not in this
case
it's so misleading, and above all it short-changes us; it makes us underestimate the good things we can accomplish.
so what we've been doing in my group is
basically step one
train the mysterious neural network to
do something well
and then step two, use some additional
transform this
black box into something equally
intelligent
that you can actually understand. so i'll give you one example: this ai feynman project that we just published. we took the 100 most famous or complicated equations from one of my favorite physics textbooks, in fact the one that got me into physics in the first place, the feynman lectures on physics.
so you have a formula, and maybe what goes into the formula is six different variables, and what comes out is one. so you can make like a giant excel spreadsheet with seven columns: you put in just random numbers for those six input variables, and then you calculate with the formula the seventh column, the output. so maybe in the last column it's like the force equals some function of the others.
and now the task is, okay, if i don't tell you what the formula was, can you figure it out from looking at the spreadsheet i gave you? yes,
this problem is called symbolic
regression
if i tell you that the formula is what we call a linear formula, so it's just that the output is some sum of all the inputs times some constants, that's the famous easy problem we can solve; we do it all the time in science and engineering.
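a minimal sketch of that easy linear case, assuming nothing beyond numpy (the coefficients and spreadsheet here are invented for illustration): generate the seven-column table from a hidden linear formula, then recover the constants with least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
true_c = np.array([2.0, -1.0, 0.5, 3.0, 0.0, 1.5])   # the hidden constants
X = rng.uniform(-1, 1, size=(1000, 6))               # six columns of random inputs
y = X @ true_c                                       # seventh column: the output

c_hat, *_ = np.linalg.lstsq(X, y, rcond=None)        # solve the linear problem
print(np.round(c_hat, 3))                            # recovers true_c
```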
but the general one if it's more
complicated functions with logarithms or
cosines or other math
it's a very very hard one and probably
impossible to do
fast in general, just because the number of formulas with n symbols grows exponentially, just like the number of possible passwords grows exponentially with their length.
but we had this idea: if you first have a neural network that can actually approximate the formula, because you just trained it, even if you don't understand how it works, that can be a first step towards actually understanding how it works.
so that's what we do first
and then we study that neural network
now and
put in all sorts of other data that
wasn't in the original training data and
use that to discover simplifying
properties of the formula
and that lets us break it apart often
into many simpler pieces
in a kind of divide-and-conquer approach. so we were able to solve all of those 100 formulas, discover them automatically,
plus a whole bunch of other ones and
it's actually kind of humbling to see that this code, which anyone listening to this can run by typing pip install aifeynman on their computer, can actually do what johannes kepler spent four years doing when he stared at mars data until he finally said, eureka, this is an ellipse. this will do it automatically for you in one hour. or max planck:
he was looking at how much radiation comes out at different wavelengths from a hot object, and discovered the famous blackbody formula. this discovers it automatically.
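for anyone who wants to try it, a hedged sketch of preparing the kind of data table the aifeynman package takes: a whitespace-separated file whose columns are the variables, with the output last. the commented-out call reflects the package's readme as i recall it and may differ between versions, so treat the exact signature as an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2, r = rng.uniform(1, 10, (3, 500))
F = 6.674e-11 * m1 * m2 / r**2                  # hidden "truth": newtonian gravity
np.savetxt("gravity.txt", np.column_stack([m1, m2, r, F]))

# import aifeynman   # hypothetical call, check the package docs for your version:
# aifeynman.run_aifeynman("./", "gravity.txt", 30, "14ops.txt",
#                         polyfit_deg=3, NN_epochs=400)
```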
i'm actually excited about
seeing if we can discover not just old
formulas again
but new formulas that no one has seen
before
and do you like this process of using
kind of a neural network to
find some basic insights and then
dissecting the neural network to then
gain the final insight? so in that way you're forcing the explainability issue, really trying to analyze the neural network for the things it knows, in order to come up with the final beautiful simple theory underlying the initial system you were looking at. i love that,
and and the reason i'm so optimistic
that it can be generalized
so much more is because that's exactly
what we
do as human scientists. think of galileo,
whom we mentioned right
i bet when he was a little kid if his
dad threw him an apple
he would catch it why because
he had a neural network in his brain
that he had trained to predict
the parabolic orbit of apples that are
thrown under gravity
if you throw a tennis ball to a dog it
also has this same ability
of deep learning to figure out how the
ball is going to move and catch it
but galileo went one step further when
he got older
he went back and was like wait a minute
i can write down a formula yes y equals
x squared a parabola
you know and he helped revolutionize
physics as we know it right so there was
a basic neural network
in there from childhood that captured
like the base
the experiences of observing different
kinds of trajectories
and then he was able to go back in with
another extra little
neural network and analyze all those
experiences
and be like wait a minute there's a
deeper rule here
exactly, he was able to distill out in symbolic form what that complicated black box neural network was doing. not only did the formula he got ultimately become more accurate,
and similarly, this is how newton got newton's laws, which is
why elon can send rockets to
the space station now right so it's not
only more accurate
but it's also simpler much simpler and
it's so simple that we can actually
describe it to our friends and each
other right
we've talked about it just in the
context of physics now but
hey you know isn't this what we're doing
when we're talking to each other also
we go around with our neural networks
just like dogs and cats and
chipmunks and bluejays and we experience
things in the world
but then we humans do this additional
step on top of that where we then
distill out certain high-level
knowledge that we've extracted from this
in a way that we can communicate it
to each other in a symbolic form in
english
in this case right so if we can do it
and we believe that we are information
processing entities
then we should be able to make machine
learning that does it also
well, do you think the entire thing could be learning? because this dissection process, like for ai feynman, the secondary stage feels like something like reasoning, and the initial step feels more like the basic kind of differentiable learning. do you think the whole thing could be differentiable learning? could the whole thing be basically neural networks on top of each other, like turtles all the way down, neural networks all the way down? i mean, that's a really interesting
question. we know that in your case it is neural networks all the way down, because you have in your skull a bunch of neurons doing their thing. but if you ask the question more generally, what algorithms are being used in your brain, i think it's super interesting to compare.
i think we've gone a little bit
backwards historically because
we humans first discovered good
old-fashioned ai the logic based ai that
we often call gofai, for good old-fashioned ai,
and then more recently we did machine
learning
because it required bigger computers so
we had to discover it later
so we think of machine learning with
neural networks
as the modern thing and the logic based
ai as the old-fashioned thing
but if you look at evolution on earth
right it's actually been the other way
around
i would say that, for example, an eagle has a better vision system than i have, and dogs are just as good at catching tennis balls as i am. all this stuff is done by training a neural network, and not interpreting it in words,
it's something so many of our animal
friends can do at least as well as us
right what is it that we humans can do
that the chipmunks
and the eagles cannot it's more to do
with this logic based stuff right where
we can
extract out information in symbols
in language and now even with equations
if you're a scientist
right so basically what happened was
first we built these computers that
could
multiply numbers real fast and
manipulate symbols and we felt they were
pretty dumb
and then we made neural networks that can see as well as a cat can, and do a lot of this inscrutable black-box stuff. what we humans can also do is
put the two together in a useful way yes
artificially in our own brain
yes in our own brain so if we ever want
to get
artificial general intelligence that can
do all
jobs as well as humans can right then
that's what's going to be required
to be able to combine the neural networks with the symbolic, to combine the old ai with the new ai in a good way.
we do it in our brains and there seems
to be basically two strategies i see in
industry now
one scares the heebie jeebies out of me
and the other one i find much more
encouraging
okay, can we break them apart? which are the two? the one that scares the heebie-jeebies out of me is this attitude that we're just gonna make ever bigger systems that we still don't understand until they can be as smart as humans.
what could possibly go wrong, right? i think it's just such a reckless thing to do. and unfortunately, if we actually succeed as a species in building artificial general intelligence while we still have no clue how it works, i think there's at least a 50 percent chance
we're going to be extinct before too
long. it's just going to be an utter epic own goal. it's that 44-minute losing-money problem, or like the paperclip problem, where we don't understand how it works, and in a matter of seconds it runs away in some direction that's going to be very problematic. even long before you have to worry about the machines themselves somehow deciding to do things that are bad for us, we have to worry about people using machines that are short of agi in power
to do bad things. i mean, if anyone is not particularly worried about advanced ai, just take 10 seconds and think about your least favorite leader on the planet right now. don't tell me who it is, i want to keep this apolitical. just see that person's face in front of you for 10 seconds.
yes now imagine that that person has
this incredibly powerful ai
under their control and can use it to
impose their will on the whole planet
how does that make you feel
yeah. so can we break that apart just briefly? for the 50 percent chance that we'll run into trouble with this approach, do you see the bigger worry in that leader, in humans using the system to do damage? or
are you more worried, and i think i'm in this camp, about accidental, unintentional destruction of everything? humans trying to do good, in a way where everyone agrees it's kind of good, and they're just trying to do good without understanding.
because i think every evil leader in history, to some degree, thought they were trying to do good. oh yeah, i'm sure hitler thought he was doing good. i've been reading a lot about stalin; i'm sure stalin legitimately thought that communism was good for the world and that he was doing good. i think mao zedong thought what he was doing with the great leap forward was good too.
yeah i'm actually concerned about both
of those
i promised to answer this in detail, but before we do that, let me finish answering the first question, because i told you that there were two different routes we could take to artificial general intelligence, and one scares the heebie-jeebies out of me, which is this one where we just build ever bigger neural networks with ever more hardware and train the heck out of them with more data, and poof, now it's very powerful. that, i think, is the most unsafe and reckless approach.
the alternative to that is the intelligible intelligence approach instead, where we say the neural network is just a tool for the first step, to get the intuition, but then we're also going to spend serious resources on other ai techniques for demystifying this black box and figuring out what it's actually doing,
so we can convert it into something
that's equally intelligent but that we
actually understand
what it's doing maybe we can even prove
theorems about it that
this car here will never be hacked
when it's driving because here's the
proof. there is a whole science of this. it doesn't work for neural networks that are big black boxes, but it works well for certain other kinds of code, right?
that approach i think is much more
promising that's exactly why i'm working
on it frankly not just because i think
it's cool for science
but because i think the more we understand these systems, the better the
chance is that
we can make them do the things that are
good for us that are actually intended
not unintended so do you think it's
possible to prove
things about something as complicated as
a neural network
that's the hope well ideally there's no
reason there has to be a neural network
in the end
either, right? like, we discovered newton's laws of gravity with the neural network in newton's head,
yes but that's not the way it's
programmed into the
navigation system of elon musk's rocket
anymore, right? it's written in c++ or i don't know what language they use exactly.
yeah. and there are software tools called symbolic verification; darpa and the us military have done a lot of really great research on this, because they really want to make sure that when they build weapon systems, they don't just fire at random or malfunction, right?
and
there's even a whole operating system kernel called sel4 that's been developed with darpa funding, where you can actually mathematically prove that this thing can never be hacked.
well one day i hope that will be
something you can say about the os
that's running on our laptops too
as you know we're not there but i think
we should be ambitious
frankly yeah and and
if we can use machine learning to help
do the proofs and so on as well
right then it's much easier to verify
that a proof is correct
than to come up with a proof in the
first place that's really the core idea
here
if someone comes on your podcast and says they proved the riemann hypothesis or some sensational new theorem, it's much easier for someone else, some smart math grad student, to check, oh, there's an error here on equation five, or, this really checks out, than it was to discover the proof.
yeah although some of those proofs are
pretty complicated but yes it's still
nevertheless much easier
to verify the proof. i love the
optimism
you know, even with the security of systems, there's a kind of cynicism that pervades people who think about this, which is like, oh, it's hopeless. in the same sense, exactly like you're saying that we must understand what's happening with neural networks, with security people are just like, well, there's always going to be attack vectors, ways to attack the system. but you're right, we're just very new with these computational systems, we're new with these intelligent systems, and it's
not out of the realm of possibility just
like people didn't understand the
movement of the stars and the planets
and so on
yeah, it's entirely possible that, hopefully soon, but it could be within a hundred years, we start to have something like obvious laws of gravity about intelligence, and, god forbid, about consciousness too. agreed. you know, i think of course if you're
selling computers that get hacked a lot, it's in your interest as a company that people think it's impossible to make them safe, so nobody gets the idea of suing you. but
i want to really inject optimism here. it's absolutely possible to do much better than we're doing now. you know, your laptop does so much stuff; you don't need the music player to be super safe in your future self-driving car. if someone hacks it and starts playing music you don't like, it's not the end of the world. but what you can do is
you can break things out and say, the drive computer that controls your safety must be completely physically decoupled from the entertainment system, and it must physically be such that it can't take over-the-air updates while you're driving, and it can ultimately have some operating system on it which is symbolically verified and proven to always do what it's supposed to do.
companies should take that to heart too: they should look at everything they do and ask, what are the few systems in our company that threaten the whole life of the company if they get hacked, and have the highest standards for those, and then they can save money by going for the el cheapo, poorly understood stuff for the rest.
this is very feasible, i think. and coming back to the bigger question you worried about, that there will be unintentional failures: i think there are two quite separate risks here. we talked a lot about one of them, which is when the goals of the human are noble; the human says, i want this airplane to not crash, because this is not mohamed atta now flying the airplane. and now there's this technical challenge of making sure that the autopilot is actually going to behave as the pilot wants.
if you set that aside there's also the
separate question how do you make sure
that the goals of the pilot are actually
aligned with the goals of the passenger
how do you make sure, much more broadly, that if we can all agree as a species that we would like things to kind of go well for humanity as a whole, the goals are aligned? this is the alignment problem.
and there's been a lot of progress, in the sense that there's suddenly huge amounts of research going on about it. i'm very grateful to elon musk for giving us that money five years ago, so we could launch the first research program on technical ai safety and alignment;
there's a lot of stuff happening
but i think we need to do more than just make sure machines always do what their owners want. that wouldn't have prevented september 11, if mohamed atta had said, okay, autopilot, please fly into the world trade center, and it was like, okay. that even happened in a different situation: there was this depressed pilot named andreas lubitz who told his germanwings passenger jet to fly into the alps.
he just told the computer to change the altitude to 100 meters or something like that, and you know what the computer said? okay. and it had the freaking topographical map of the alps in there, it had gps, everything. no one had bothered teaching it even the basic kindergarten ethics of, no, we never want airplanes to fly into mountains under any circumstances.
and so we have to think beyond just the technical issues, and think about how we align, in general, incentives on this planet for the greater good, starting with simple stuff like that. every airplane that has a computer in it should be taught whatever kindergarten ethics it's smart enough to understand, like, no, don't fly into fixed objects if the pilot tells you to do so; instead, go on autopilot mode, send an email to the cops, and land at the nearest airport. any car
with a forward-facing camera should just be programmed by the manufacturer so that it will never accelerate into a human, ever. that would have avoided things like the nice attack and many horrible terrorist vehicle attacks where they deliberately did that.
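a hypothetical sketch of that kind of kindergarten-ethics check (all names, types, and thresholds here are invented for illustration, not any real avionics interface): an autopilot that refuses altitude setpoints below the terrain ahead and alerts the ground instead:

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    elevation_m: float                      # terrain height from the onboard map

MIN_CLEARANCE_M = 300.0                     # invented safety margin

def alert_authorities(reason: str) -> None:
    print("ALERT:", reason)                 # stand-in for "email the cops"

def command_altitude(route, requested_altitude_m):
    floor = max(w.elevation_m for w in route) + MIN_CLEARANCE_M
    if requested_altitude_m < floor:
        # never accept a fly-into-terrain setpoint, whoever issues it
        alert_authorities("unsafe altitude command refused; holding safe altitude")
        return floor
    return requested_altitude_m

alps = [Waypoint(2500.0), Waypoint(3800.0), Waypoint(3100.0)]
print(command_altitude(alps, 100.0))        # the 100 m command: refused -> 4100.0
print(command_altitude(alps, 11000.0))      # a sane cruise altitude: accepted
```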
this was not some sort of, oh, the us and china have different views on this. no, there was not a single car manufacturer in the world who wanted the cars to do this; they just hadn't thought to do the alignment. and if you look more broadly at problems that happen on this planet, the vast majority have to do with poor alignment.
i mean, think about it, let's go back really big, long ago in evolution. we had these genes, and they wanted to make copies of themselves; that's really all they cared about. so some genes said, hey, i'm going to build a brain on this body i'm in, so that i can get better at making copies of myself.
and then they decided, for their benefit of getting copied more, to align your brain's incentives with their incentives. so it didn't want you to starve to death, so it gave you an incentive to eat, and it wanted you to make copies of the genes, so it gave you an incentive to fall in love and do all sorts of naughty things to make copies
of itself. so that was successful value alignment done by the genes: they created something more intelligent than themselves, but they made sure to try to align the values. but then something went a little bit wrong relative to what the genes wanted, because a lot of humans discovered, hey, we really like this business about sex that the genes have made us enjoy,
but we don't want to have babies right
now yeah so we're going to hack the
genes
and use birth control and i really feel
like drinking a coca-cola right now but
i don't want to get a potbelly so i'm
going to drink diet coke you know
we have all these things we've figured out, because we're smarter than the genes, for how we can actually subvert their intentions. so
it's not surprising that we humans, now that we're in the role of the genes, creating other non-human entities with a lot of power, have to face the same exact challenge: how do we make other powerful entities have incentives that are aligned with ours, so they won't hack those incentives?
corporations, for example. we humans decided to create corporations because they can benefit us greatly. now, all of a sudden, there's a supermarket; i can go buy food there, i don't have to hunt. awesome. and then, to
make sure that this corporation would do
things that were good for us and not bad
for us we created institutions to keep
them in check
like, if the local supermarket sells poisonous food, then the owners of the supermarket have to spend some years reflecting behind bars. so we created incentives to align them. but of course, just like we were able to see through this thing
and develop birth control, if you're a powerful corporation, you also have an incentive to try to hack the institutions that are supposed to govern you, because you ultimately, as a corporation, have an incentive to maximize your profit, just like you have an incentive to maximize the enjoyment your brain has, not what's good for your genes. so if corporations can figure out a way of bribing regulators, they're going to do that.
in the u.s we kind of caught on to that
and made laws against corruption and
bribery
then in the late 1800s
teddy roosevelt realized that no we were
still being kind of hacked because the
massachusetts railroad companies had
like a bigger budget than
the state of massachusetts and they were
doing a lot of very
corrupt stuff so he did the whole trust
busting thing to try to align
these other non-human entities the
companies again more with
the incentives of americans as a whole
um
it's not surprising though that you know
this is a battle you have to keep
fighting
now we have even larger companies than
we ever had before
and of course they're going to try to again subvert the institutions that are supposed to keep them in check. now, i think people make a mistake of getting all too black-and-white, thinking about things in terms of good and evil, like arguing about whether corporations are good or evil, or whether robots are good or evil. a robot isn't good or evil;
it's a tool, and you can use it for great things like robotic surgery, or for bad things. and a corporation also is a tool, of course: if you give good incentives to a corporation, it'll do great things, like start a hospital or a grocery store; if you give it bad incentives, then it's going to start maybe marketing addictive drugs to people, and you'll have an opioid epidemic.
it's all about incentives. we should not make the mistake of getting into some sort of fairy-tale good-versus-evil thing about corporations or robots; we should focus on putting the right incentives in place.
my optimistic vision is that if we can
do that you know
then we can really get good things we're
not doing so great with that right now
either on ai i think or on other
intelligent non-human entities like big
companies. like, we just got a new secretary of defense who's starting now in the biden administration, who was an active member of the board of raytheon, for example. so, you know,
i have nothing against raytheon, i'm not a pacifist, but there's an obvious conflict of interest if someone is in the job where they decide who they're going to contract with.
and i think maybe we need another teddy roosevelt to come along again and say, hey, we want what's good for all americans, and we need to do some serious realigning again of the incentives that we're giving to these big companies, and then we're going to be better off. it
seems that naturally with human beings, just like you beautifully described in the history of this whole thing, it all started with the genes, and they're probably pretty upset by all the unintended consequences that have happened since. but it seems that it kind of works out: the collective intelligence that emerges at the different levels seems to find, sometimes at the last minute, a way to realign the values, or keep the values aligned. it finds a way; different leaders, different humans, pop up all over the place and reset the system. do you have an explanation for why that is, or is that just survivorship bias?
and also, is that somehow fundamentally different with ai systems, where you're no longer dealing with something that was a direct byproduct of the evolutionary process? though maybe companies are the same.
i think there is one thing which has changed. that's why i'm not entirely optimistic, that's why i think there's about a 50 percent chance, if we take the dumb route with artificial intelligence, that humanity will be extinct in this century.
first, just the big picture: companies need to have the right incentives. even governments, right? we used to have governments where usually there was just some king, who was the king because his dad was the king, and there were some benefits of having this powerful kingdom, or empire of any sort, because it could prevent a lot of local squabbles, so at least everybody in that region would stop warring against each other, and the incentives of different cities in the kingdom became more aligned.
that was the whole selling point. yuval noah harari has a beautiful piece on how empires were collaboration enablers, and harari says we also invented money for that reason, so we could have better alignment and could do trade even with people we didn't know.
so this sort of stuff has been playing
out since time immemorial right
what's changed is that it happens on
ever larger scales right technology
keeps getting better because science
gets better so now we can communicate
over larger distances
transport things faster over larger
distances and so
the entities get ever bigger but our
planet is not getting bigger anymore
so in the past, you could have one experiment that just totally screwed up, like easter island, where they actually managed to have such poor alignment that when their society went extinct, there was no one there to come back and replace them. if elon musk doesn't get us to mars and we then go extinct on a global scale, we're not coming back. that's the fundamental difference, and that's a mistake i would rather we don't make, for that reason.
in the past, of course, history is full of fiascos, but it was never the whole planet; afterwards, okay, now there's this nice uninhabited land here, some other people could move in and organize things better. this is different.
the second thing which is also different is that technology gives us so much more empowerment, both to do good things and also to screw up. in the stone age, even if you had someone whose goals were really poorly aligned, like maybe he was really pissed off because his stone age girlfriend dumped him, and he wanted to kill as many people as he could, how many could he really take out with a rock and a stick before he was overpowered? just a handful, right? now,
with today's technology, if we have an accidental nuclear war between russia and the us, which we've almost had about a dozen times, and then we have a nuclear winter, it could take out seven billion people, or six billion, we don't know.
so the scale of the damage we can do is bigger. and there's obviously no law of physics that says technology will never get powerful enough that we could wipe out our species entirely; it would just be fantasy to think that science is somehow doomed to never get more powerful than that. it's not at all unfeasible in our lifetime that someone could design a designer pandemic which spreads as easily as covid but just basically kills everybody. we already had smallpox, which killed one-third of everybody who got it.
what do you think of this intuition? maybe it's completely naive, this optimistic intuition i have, and maybe it's a biased experience, but it seems like the most brilliant people i've met in my life are all really fundamentally good human beings. and not naive good; they really want to do good for the world, in a way that maybe is aligned with my sense of what good means.
and so i have a sense that among the people who will be defining the very cutting edge of technology, there will be many more who are doing good than who are doing evil. so i'm optimistic about us, in that race, always coming up with a solution at the last minute. if there's an engineered pandemic that has the capability to destroy most of human civilization, it feels to me like, either leading up to it or as it's going on, we'll be able to rally the collective genius of the human species.
i can tell by your smile that you're at least some percentage doubtful. but could that be a fundamental law of human nature, that evolution only creates... like, karma is beneficial, good is beneficial, and therefore we'll be all right?
i hope you're right. i would really love it if you're right, if there's some sort of law of nature that says we always get lucky at the last second because of karma. but, you know, i prefer not playing it so close and gambling on that.
and in fact, i think it can be dangerous to have too strong a faith in that, because it makes us complacent. like, if someone
tells you you never have to worry about
your house burning down then you're not
going to put in a smoke detector because
why would you need to right
right. even when there are sometimes very simple precautions, we don't take them. if you're like, oh, the government is going to take care of everything for us, i can always trust my politicians, we abdicate our own responsibility. i think it's a healthier attitude to say, yeah, maybe things will work out, but maybe i'm actually going to have to step up myself and take responsibility.
uh and the stakes are so huge i mean
if we do this right we can develop all
this ever more powerful technology and
cure all diseases
and create a future where humanity is
healthy and wealthy for not just the
next election cycle but like
billions of years throughout our
universe that's really worth working
hard for
and not just you know sitting and hoping
for some sort of fairy tale karma
well, i just mean you're absolutely right from the perspective of the individual. like, for me, the primary thing should be to take responsibility and to build the solutions that your skill set allows.
yeah, which is a lot. i think we often very much underestimate how much good we can do. if you, or anyone listening to this, is completely confident that our government would do a perfect job handling any future crisis with engineered pandemics or future ai, just look at what actually happened in 2020: do you feel that governments by and large around the world have handled this flawlessly? that's a really sad and disappointing reality that hopefully is a wake-up call for everybody: for the scientists, for the engineers, for the researchers in ai especially. it was disappointing to see how inefficient we were at collecting the right amount of data in a privacy-preserving way, spreading that data, and utilizing that data to make decisions, all that kind of stuff. yeah. i think, when
something bad happens to me
well, i made myself a promise many years ago that i would not be a whiner. so when something bad happens to me, of course i process the disappointment, but then i try to focus on what did i learn from this that can make me a better person in the future, and there's usually something to be learned when i fail.
and i think we should all ask ourselves
what
can we learn from the pandemic about how
we can do better in the future
and you mentioned there's a really good
lesson you know we were not as resilient
as we thought we were and we were not as
prepared
maybe as we wish we were you can even
see very stark contrast around the
planet
south korea, right? they have over 50 million people. do you know how many deaths they have from covid, last time i checked? no. about 500.
why is that well
the short answer is that they had
prepared
they were incredibly quick incredibly
quick to get on it
with very rapid testing and contact
tracing
and so on, which is why they never had more cases than they could contact trace effectively. they never even
had to have the kind of big lockdowns we
had in the west
but the deeper answer is, it's not just that the koreans are somehow better people. the reason i think they were better prepared was that they had already had a pretty bad hit from the sars outbreak, which never became a full pandemic, something like 17 years ago. so it was kind of a fresh memory that we need to be prepared for pandemics, so they were.
and so maybe there's a lesson here for all of us to draw
from covid that rather than just wait
for the next pandemic or the next
problem with ai getting out of control
or anything else
maybe we should just actually set aside
a tiny fraction of our gdp
to have people very systematically do
some horizon scanning and say okay
what are the things that could go wrong
and let's duke it out and see which are
the more likely ones and which are the
ones that are actually actionable
and then be prepared so
one of the observations as one little
ant
slash human that i am of disappointment
is the political division
over information
that has been observed that i observed
this year
that it seemed uh the discussion was
less about
um sort of uh
what happened and understanding what
happened deeply
and more about there's different
truths out there and it's like a
argument my truth is better than your
truth
and it's it's like red versus blue or
different like it was like this
ridiculous
discourse that doesn't seem to get at
any kind of notion of the truth
it's not like some kind of scientific process; even science got politicized in ways that are very heartbreaking to me. you have an exciting project on the ai front of trying to rethink this. you mentioned corporations; one of the other collective intelligence systems that have emerged is social networks, and the spread of information on the internet, our ability to share that information, all the different kinds of news sources and so on. and so you said, from first principles, let's rethink how we think about the news, how we think about information. can you talk about this amazing effort you're undertaking?
well, i'd love to. this has been my big covid project, nights and weekends, ever since the lockdown. to segue into this, actually, let me come back to what you said earlier, that you
had this hope that, in your experience, people who you felt were very talented were often idealistic and wanted to do good. frankly, i feel the same about all people by and large: there are always exceptions, but i think the vast majority of everybody, regardless of education and whatnot, really are fundamentally good. so how can it be that people still do so much nasty stuff?
i think it has everything to do with the information that we're given. you know, if you went to sweden 500 years ago and started telling all the farmers that those danes in denmark are terrible people and we have to invade them, because they've done all these terrible things that you can't fact-check yourself, a lot of swedes did exactly that, right?
and we're seeing so much of this today in the world, both geopolitically, where we are told that china is bad and russia is bad and venezuela is bad, and people in those countries are often told that we are bad, and also at a micro level, where people are told that, oh, those who voted for the other party are bad people. it's not just an intellectual disagreement; they're bad people. and we're getting ever more divided.
so how do you reconcile this with the intrinsic goodness in people? i think it's pretty obvious that it has, again, to do with the information they were fed. we evolved to live in small groups, where you might know 30 people in total, so you had a system that was quite good for assessing who you could trust and who you could not. if someone told you that joe there is a jerk, but you had interacted with him yourself and seen him in action, you would quickly realize that maybe that's not quite accurate. but now most of the people on the planet are people we've never met, so it's very important that we have a way of trusting the information we're given.
so where does the news project come in? well, throughout history you can go read machiavelli, from the 1400s, and you'll see how already then they were busy manipulating people with propaganda. propaganda is not new at all, and the incentive to manipulate people is not new at all. what is it that's new?
new
what's new is machine learning meets
propaganda
that's what's new that's why this has
gotten so much worse you know some
people like to blame
certain individuals like in my liberal
university bubble many people blame
donald trump and say it was his fault
i see it differently. i think donald trump just had this extreme skill at playing this game in the machine learning algorithm age, a game he couldn't have played 10 years ago. so what changed?
well, facebook and google and other companies, and i'm not bad-mouthing them, i have a lot of friends who work for these companies, good people. they deployed machine learning algorithms just to increase their profit a little bit, to just maximize the time people spent watching ads. and they had totally underestimated how effective they were going to be. this was again the black box, non-intelligible intelligence: they just noticed, oh, we're getting more ad revenue, great. it took a long time until they even realized why, and how damaging this was for society.
because of course, what the machine learning figured out was that by far the most effective way of gluing you to your little rectangle was to show you things that triggered strong emotions: anger, resentment, et cetera. and whether it was true or not didn't really matter; it was often easier to find stories that weren't true, since being limited to the truth is a very limiting constraint. and before long, we got these amazing filter bubbles on a scale we had never seen before.
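a toy model of that dynamic (entirely illustrative; the content labels and watch-time numbers are invented): a simple epsilon-greedy learner that optimizes only watch time drifts toward the outrage content without ever being told anything about truth or anger:

```python
import random

random.seed(0)
# hypothetical mean watch-times; the learner never sees these labels
content = {"calm_accurate": 1.0, "outrage_bait": 3.0}
estimates = {k: 0.0 for k in content}
counts = {k: 0 for k in content}

for step in range(10000):
    if random.random() < 0.1:                       # explore occasionally
        choice = random.choice(list(content))
    else:                                           # otherwise exploit the best guess
        choice = max(estimates, key=estimates.get)
    reward = random.expovariate(1.0 / content[choice])  # observed watch-time
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(counts)   # outrage bait dominates, purely as a side effect of the objective
```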
couple this to the fact that the online news media were so effective that they killed a lot of print journalism; there are less than half as many journalists now in america, i believe, as there were a generation ago, because they just couldn't compete with the online advertising. so all of a sudden, most people are not even reading newspapers; they get their news from social media, and most people only get news in their little bubble.
so along come some people like donald trump, among the first successful politicians to figure out how to really play this new game and become very, very influential. but donald trump, well, he took advantage of it; he didn't create it. the fundamental conditions were created by machine learning taking over the news media.
so this is what motivated my little covid project here. i said before, machine learning and tech in general are not evil, but they're also not good; they're just tools that you can use for good things or bad things. and as it happens, machine learning and news were mainly used by the big players, big tech, to manipulate people into watching as many ads as possible, which had this unintended consequence of really screwing up our democracy and fragmenting it into filter bubbles.
so i thought well
machine learning algorithms are
basically free they can run on your
smartphone for free also if someone
gives them away to you right
there's no reason why they only have to
help the big guy
to manipulate the little guy they can
just as well help the little guy
to see through all the manipulation
attempts from the big guy so
this project is called improve the news; you can go to improvethenews.org. the first thing we've built is a little news aggregator. it looks a bit like google news, except it has these sliders on it to help you break out of your filter bubble.
so if you're reading you can click click
and go to your favorite topic
and then um if you just slide the left
right slider away all the way over to
the left
there's two sliders right yeah there's
the one the most obvious one is the one
that has left right labeled on it
you go to the left you get one set of
articles you go to the right you see a
very different truth appearing oh
that's literally
left and right on the political spectrum
on the political spectrum yes
so if you're reading about
immigration for example it's very
very
noticeable and i think step one
always if you want to
not get manipulated is just to be able
to recognize
the techniques people use so it's very
helpful to just see
how they spin things on the two sides
i think many people are under the
misconception that the main problem is
fake news it's not
i had an amazing team of mit students
and over the summer we did an academic project
to use machine learning to detect the
main kinds of bias and
yes of course sometimes there's fake
news where someone just claims something
that's
false right like oh hillary clinton just
got divorced or something
yes but what we see much more of is
actually just omissions
there are some stories
which just won't be mentioned by the
left or the right because it doesn't
suit their agenda
and then they'll instead mention other
ones very very much
so for example we've had
a number of stories about
the trump family's financial dealings
and then there's been a bunch of
stories about the biden family's
hunter biden's financial dealings right
surprise surprise they don't get equal
coverage on the left and the right
right one side loves to cover the biden
hunter biden stuff and one side loves to
cover the trump stuff you'll
never guess which is which right yeah
but the great news is if you're a
normal american citizen and you dislike
corruption in all its forms
then with a slide you can just look
at both of the
sides and you'll see all of
those political corruption stories
it's really liberating to just take in
both sides the spin on both sides
it somehow unlocks your mind to
think on your own
to realize i don't know it's
the same thing that was useful
in the soviet union times
when everybody was much more aware
that they were
surrounded by propaganda right it's so
interesting what you're saying actually
so noam chomsky
you know used to be our mit colleague
once said that
propaganda is to democracy what violence
is
to totalitarianism and
what he means by that is if you have a
really totalitarian government you don't
need propaganda
right people will do what you want them to do
anyway out of fear right yes but
otherwise you need propaganda so i would
say actually that the propaganda is much
higher quality
in democracies much more believable and
it's
brilliant it's really striking when i
talk to colleagues
science colleagues like from russia and
china and so on
i noticed they are actually much more
aware of the propaganda in their own
media than many of my american
colleagues are about the
propaganda in western media that's
brilliant that means the propaganda in
the western media is just better
yes it's so much better in the west
oh man
that's good but once you
realize that you realize there's also
something very optimistic there that you
can do about it right
because first of all omissions
as long as there's no outright
censorship you can just look at both
sides
and pretty quickly piece together a much
more accurate idea of what's actually
going on right and develop a natural
skepticism too
yeah just an analytical scientific
mindset about how you're taking in the
information yeah and i think
i have to say sometimes i feel that some
of us in the academic bubble are
too arrogant about this and somehow
think oh it's just people who aren't as
educated when we are often just as
gullible also you know
we read only our own media and don't see
through things
anyone who looks at both sides like this
in comparison will immediately start
noticing
the shenanigans being pulled and you
know i think
what i'm trying to counter with this app
is that
big tech has to some extent tried to
blame the individual
for being manipulated much like big
tobacco
tried to blame the individuals entirely
for smoking and then
later on you know our government stepped
up and said actually you know
you can't just blame little kids for
starting to smoke we have to have more
responsible advertising and this and
that
i think it's a bit the same here it's
very convenient for big tech to say
it's just people who are so dumb they get fooled yeah
the blame usually comes in saying oh
it's just human psychology
people just want to hear what they
already believe but professor david rand
at mit
actually partly debunked that with a
really nice study showing that people
tend to be interested
in hearing things that go against what
they believe
if it's presented in a respectful way
like suppose for example
that um you have a company
and you're just about to launch this
project and you're convinced it's going
to work and someone says you know
lex i hate to tell you this but this is
going to fail and here's why
would you be like shut up i don't want
to hear it
you would be interested
right and also
if you're on an airplane back in
pre-covid times you know
and the guy next to you is clearly from
the opposite side of the political
spectrum
but is very respectful and polite to you
wouldn't you be kind of interested to
hear a bit about
how he or she thinks about things of
course
but it's not so easy to find
respectful disagreement now because
for example if you
are a democrat and you're like i want to
see something on the other side so you
just go
to breitbart.com and then after the first
10 seconds
you feel deeply insulted by something
and it's not going to work or if you
take someone who votes republican
and they go to something on the left and
they just get very offended very quickly
by them having put a deliberately ugly
picture of donald trump on the front
page or something
it doesn't really work so this news
aggregator also has this nuance slider
which you can pull to the right
to make it easier to get exposed
to more sort of academic style
or more respectful
portrayals of different views and
finally the one
kind of bias i think people are mostly
aware of is
the left-right one because it's so
obvious
both left and right are very powerful
here right
both of them have well-funded tv
stations and newspapers and it's kind of
hard to miss
but there's another one the
establishment slider
which is also really fun i love
to play with it
that's more about corruption yes because
if you have a society
where almost all the powerful
entities
want you to believe a certain thing
that's what you're going to read in
both the big media mainstream media on
the left and on the right of course
and the powerful companies can push back
very hard
like tobacco companies pushed back very
hard back in the day when
some newspapers started writing articles
about tobacco being dangerous
so it was hard to get a lot of coverage
about it initially
and also if you look geopolitically
right of course
in any country when you read their media
you're mainly going to be reading a lot
of articles about how
our country is the good guy and the
other countries are the bad guys
right so if you want to have a really
more nuanced understanding you know
the british used to be told that
the french were the bad guys
and the french used to be told that the
british were the bad guys now they
visit each other's countries a lot and
have a much more nuanced understanding i
don't think there's going to be any more
wars between france and germany
but on the geopolitical scale
it's just as bad as ever you know
a big cold war now between the us and china
and so on and if you want
to get a more nuanced
understanding of what's happening
geopolitically then it's really fun to
look at this
establishment slider because it turns
out there are tons of
little newspapers both on the left and
on the right
who sometimes challenge establishment
and say
you know maybe we shouldn't actually
invade iraq right now maybe this weapons
of mass destruction thing is bs
if you look at the journalism research
afterwards you can actually see that
quite clearly that both
cnn and fox were very pro
let's go get rid of saddam there are
weapons of mass destruction
then there were a lot of smaller
newspapers that were like wait a minute
this evidence seems a bit sketchy and
maybe we shouldn't
but of course they were so hard to find
most people didn't even know they
existed right
yet it would have been better for
american national security if those
voices had also come up
i think it harmed america's national
security actually that we invaded iraq
and arguably there's a lot more interest
in
that kind of thinking too from those
small sources
so when you say big it's more
about
the reach of the broadcast
but it's not big in terms of the
interest i think
i think there's a lot of interest in
that kind of anti-establishment or like
skepticism towards
you know out-of-the-box thinking there's
a lot of interest in that kind of thing
do you see this news project or
something like it
basically taking over the world as
the main way we consume information
like how do we get there
so okay the idea is brilliant
you're calling it your little project
in 2020 but how does that become the new
way we consume
information i hope first of all just to
plant the little seed there because
normally the big barrier of doing
anything in media is you need a ton of
money but
this cost me no money at all i've just
been paying it myself
you pay a tiny amount of money each month to
amazon to run the thing in their cloud
there are no ads and there never
will be any ads the point is not to make any money
off of it
and we just train machine learning
algorithms to classify the articles
so it just kind of runs by itself
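as a minimal sketch of what such an article classifier could look like, assuming a simple supervised bag-of-words setup, the library choice, toy corpus, and labels below are purely illustrative, not the actual improvethenews.org code:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy labeled corpus, made up for illustration
articles = [
    "tax cuts will spur growth and shrink government",
    "expand public healthcare and raise the minimum wage",
]
labels = ["right", "left"]

# tf-idf text features feeding a logistic regression classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(articles, labels)

# predict_proba yields a soft left-right score per article
# that could drive a slider like the one described above
print(clf.predict_proba(["a bill on universal childcare funding"]))

a real system would need far more data and careful labeling, but the basic shape, text features plus one supervised classifier per bias axis, is plausible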
if it actually gets good enough at some
point that it starts catching on it
could
scale and if other people carbon copy it
and make other versions that are better
the more the merrier
i think there's a real
opportunity for machine learning to
empower the individual
instead of the powerful players
as i said in the beginning
it's been mostly the other way around so
far that the big players have the ai
and then they tell people this is the
truth
this is how it is but it can just as
well
go the other way around and when the
internet was born actually a lot of
people had this hope that maybe this
will
be a great thing for democracy make it
easier to find out about things
and maybe machine learning and things
like this can
actually help again and i have to say
i think it's more important than
ever
now because this is very linked
also to the whole
future of life as we discussed earlier
we're getting this ever more
powerful tech
and frankly it's pretty clear if you look
on the one two or three
generation time scale that there are
geopolitically yeah either it ends great
for all humanity
or ends terribly for all of us
there's really no in between
and we're stuck with this because you know
technology knows no borders
and you can't have people fighting
when the weapons just keep getting ever
more powerful
indefinitely eventually
the luck runs out and you know
right now i love
america
but the fact of the matter is
what's good for america is not opposed
in the long term to what's good for
other countries it would be if this
was some sort of
zero-sum game like it was thousands of
years ago when the only way one country
could get more resources
was to take land from other countries
because that was basically the resource
right you look at the map of europe some
countries kept getting bigger and
smaller
endless wars but then since
1945 there hasn't been any war in
western europe and they all got way
richer because of tech so the optimistic
outcome is
that the big winner in this century is
going to be america and china
and russia and everybody else because
technology just makes us all
healthier and wealthier and we just find
some way of
keeping the peace on this planet but i
think
unfortunately there are some pretty
powerful forces right now that are
pushing in exactly the opposite
direction and trying to demonize other
countries
which just makes it more likely that
this ever more powerful tech
we're building is gonna be used in
disastrous ways
yeah for aggression versus cooperation
that kind of thing yeah even look at
military ai now right
in 2020 it was so awesome to see these
dancing robots i loved it
right but one of the biggest growth
areas in robotics now
is of course autonomous weapons right
and 2020 was like the
best marketing year ever for autonomous
weapons because
in both the libyan civil war and in
nagorno-karabakh
they made the decisive difference right
and everybody else is like watching this
oh yeah we want to build autonomous
weapons
too and in libya
you had on one hand our ally the united
arab emirates
flying autonomous weapons that they
bought from
china
bombing libyans and on the other side
you had our other ally turkey flying
their
drones
neither of them had any skin in the game
and of course
it was the libyans who really got
screwed
in nagorno-karabakh you had again
turkey sending drones built
by this company that was actually
founded by a guy who went to mit
aeroastro did you know that
yeah so mit has a direct
responsibility for
ultimately this and a lot of civilians
were killed there you know and
because it was militarily so
effective
now suddenly there's a huge push oh
yeah let's
go build ever more autonomy
into these weapons and it's going
to be great
and i think actually
people who are obsessed about some sort
of future terminator scenario
should instead start focusing on the fact
that we have
two much more urgent threats happening
with machine learning one of them is the
whole
destruction of democracy that we've
talked about now where
where our flow of information is being
manipulated by machine learning and the
other one is
that right now you know this is the year
when the big out-of-control arms race
in lethal autonomous weapons is either
going to start or
it's going to stop so you have a sense
that
2020 was an instrumental catalyst
for the autonomous weapons race yeah
because it was the first year
when they proved decisive on the
battlefield
and these ones are still not fully
autonomous mostly they're remote
controlled right
but you know we could
very quickly make things
the size and cost of a smartphone
into which you just put the gps
coordinates or the face of the one you
want to kill their skin color or whatever
and it flies away and does it
and
the really good reason why the u.s and
all the other superpowers
should put the kibosh on this is the
same reason
we decided to put the kibosh on
bioweapons
so you know we gave the future of life
award which we can talk more about later
to matthew meselson from harvard
for convincing nixon to ban bioweapons
and
i asked him how did you do it and he was
like well
i just said look we don't want there to
be a 500 dollar
weapon of mass destruction that
all our enemies can afford even
non-state actors
and nixon was like
good point you know it's in america's
interest that the most powerful weapons
are all really expensive
so only we can afford them or maybe some
more stable adversaries right
nuclear weapons are like that but
bioweapons were not like that that's why
we banned them
and that's why you never hear about them
now that's why we love biology
so you have a sense that it's possible
for the big
powerhouses in terms of the the big
nations in the world to agree that
autonomous weapons is not a race we want
to be on that it doesn't end well
yeah because we know it's just going
to end in mass proliferation and every
terrorist
everywhere is going to have these super
cheap weapons that they will use against
us
and our politicians will have
to constantly worry about being
assassinated every time they go outdoors
by some anonymous
little mini drone you know we don't want
that and
if even if the u.s and china and
everyone else could just agree that
you can only build these weapons if they
cost at least 10 million bucks
that would that would be a huge win for
the superpowers
and frankly for everybody
people often push back and
say well it's so hard to prevent
cheating but hey you could say the same
about bioweapons you know
take any of your mit colleagues in
biology
of course they could build some nasty
bioweapon if they really wanted to
but first of all they don't want to
because they think it's disgusting
because of the stigma
and second even if there's some sort of
nutcase who wants to
it's very likely that some other grad
student or someone would rat them out
because everyone else thinks it's so
disgusting right
and in fact we now know there was even a
fair bit of cheating on the bioweapons
ban
but no countries used them because
it was so stigmatized
that it just wasn't worth revealing that
they had cheated
you talk about drones and you kind of
think
of drones as remote operation which they are
mostly yes but you're not taking the
next
intellectual step of where does
this go
you're kind of saying the problem with
drones is that you're removing yourself
from
direct violence therefore you're not
able to maintain the common
humanity required to make the proper
decision strategically
but that's the criticism as opposed to
what happens if this is
automated and exactly as you said
if you automate it and there's a race
then the technology is going to get
better and better and better
which means getting cheaper and cheaper
and cheaper yeah
and unlike perhaps nuclear weapons which
are connected to
resources in a way that's hard to get
and hard to engineer
it feels like there's too much
overlap between the tech industry
and autonomous weapons to where you
could have smartphone-type
cheapness if you look at drones
for a thousand dollars you can have an
incredible system that's able to
maintain flight autonomously for you and
take pictures and stuff
you could see that going into the
autonomous weapons space
so why is that not thought
about or discussed enough
in the public do you think you see those
in the public do you think you see those
dancing boston dynamics robots
and everybody has this kind of um
like as if this is like a far future
yeah they have this like
fear like oh this will be terminator in
like some
i don't know unspecified 20 30 40 years
and they don't think about well this is
like some
much less dramatic version of that is
actually happening
now it's not going to have it's not
going to be legit it's not going to be
dancing
but it's this already has the capability
to use artificial intelligence to
kill humans yeah the boston dynamics
leg robots i think the reason we imagine
them holding guns is just because you've
all seen arnold schwarzenegger right
right
that's our reference point that's pretty
useless
that's not going to be the main military
use of them they might be useful in law
enforcement
in the future and then there's a whole
debate about whether you want robots showing
up at your house with guns
robots that will be perfectly obedient to
whatever dictator controls them
but let's leave that aside for a moment
and look at what's actually relevant now
so there's a spectrum of things you can
do with ai in the military and again
to put my cards on the table i'm not a
pacifist i think we should
have good defense
so for example a predator drone
is basically a fancy little
remote-controlled airplane
right there's a human piloting it
and the decision ultimately about
whether to kill somebody with it is made
by a human still
and this is a line i think we should
never cross
there's a current dod policy that again you
have to have a human in the loop
i think algorithms should never make
life or death decisions they should be
left to humans
now why might we cross that line
well first of all these are expensive
right so
for example when
azerbaijan had all these drones and
armenia didn't have any they started
trying to jerry-rig little cheap things
to fly around but then of course
the azeris would jam them and remote
controlled things can be jammed
that makes them inferior also there's a
bit of a time delay
if we're piloting
something far away
the speed of light and the human's
reaction time as well
it would be nice to eliminate the
jamming possibility and the time delay by
having it
fully autonomous but then
you might be crossing
that exact line you might program it to
just say
hey drone go hover over
this country for a while and whenever
you find someone who
is a bad guy you know kill them
now the machine is making these sorts of
decisions and some people who
defend this
still say well that's morally fine
because we are the good guys
and we will tell it the definition of
bad guy that we think
is moral but now
you'd be very naive to think that if
isis
buys that same drone they're gonna
use our definition of bad guy
maybe for them a bad guy is someone
wearing a us army uniform right
or maybe there will be some
ethnic group who decides that
people of another ethnic group are
the bad guys right
the thing is human soldiers with all
our faults right we still have some
basic wiring
in us like no it's not okay to kill kids
and civilians an autonomous weapon has none
of that
it's just going to do whatever it's
programmed to do it's like the perfect adolf
eichmann
on steroids they told adolf
eichmann you know we want you to do this and
this and this to make the holocaust
more efficient and he was like yes
and off he went and did it right yeah do
we really want to make machines
that are like that completely
amoral that will take the user's
definition of who's the bad guy
and do we then want to make them so
cheap that all our adversaries can have
them like
what could possibly go wrong
that's i think the big argument for why
we want to
really put the kibosh on this this year
and i think you can tell there's a lot
of
very active debate even going on within
the u.s military and undoubtedly
in other militaries around the world
also about whether we should have some
sort of international agreement
to at least require that these weapons
have to be above a certain
size and cost you know so that
things just don't totally spiral
out of control
and finally to your question
is it possible to stop it
because some people tell me oh just give
up you know
but matthew meselson again
from harvard
the bioweapons hero
faced exactly this criticism with
bioweapons people were like
how can you check for sure that the
russians aren't cheating
and he told me this
i think really ingenious insight he said
you know max
some people think you have to have
inspections and things and
make sure that you can catch
the cheaters with a hundred percent
chance you don't need a hundred percent
he said
one percent is usually enough
because if it's another big state
like suppose china and the u.s have
signed a treaty
drawing a certain line and saying yeah
these kind of drones are okay but these
fully autonomous ones are not
now suppose you are
china and you have cheated and secretly
developed some clandestine little thing
or you're thinking about doing it you
know what's your calculation that you do
well
you're like okay what's the probability
that we're going to get caught
if the probability is 100 percent of course
we're not going to do it
but if the probability is five percent
that we're going to get caught then it's
going to be a huge embarrassment
for us
and we still have our nuclear weapons
anyway so cheating doesn't really
make an enormous difference in terms
of deterring the u.s
you know and that feeds the stigma that
you kind of
establish like this fabric this
universal stigma over the thing
exactly it's very reasonable for them to
say well you know we'd probably get away
with it but if we don't
then the u.s will know we cheated
and then they're going to go full tilt
with their own program saying look the
chinese are cheaters
and now we have all these american
weapons against us and that's bad for us
so the stigma alone is very very powerful
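as a rough sketch of the cheater's calculation being described here, the symbols are mine and purely illustrative: let G be the gain from cheating, C the cost of getting caught, and p the detection probability, then

\[
\text{cheating pays only if } (1-p)\,G > p\,C
\quad\Longleftrightarrow\quad
p < \frac{G}{G + C}
\]

when C is large, the stigma plus the rival going full tilt with their own program, even a detection probability of around one percent can tip the balance against cheating, which is meselson's point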
look what happened with bioweapons right
it's been 50 years now
yeah when was the last time you read
about a bioterrorism attack
the only deaths i really know about with
bioweapons that have happened
when we americans managed to kill some
of our own with anthrax you know the
idiot who sent it to tom daschle and
others in letters right
and similarly in sverdlovsk in
the soviet union they had some anthrax
in some lab there maybe they were
cheating or who knows
and it leaked out and killed a bunch of
russians i'd say that's a pretty good
success right 50 years
just two own goals by the superpowers
and then nothing
and that's why whenever i ask anyone
what they think about biology they
think it's great
they associate it with new cures for
diseases maybe a good vaccine
this is how i want us to think about ai in
the future and i want others to
think about ai this way too
as a source of all these great solutions
to our problems
not as oh ai
that's the reason i feel scared going
outside these days
yes it's kind of brilliant that with
bioweapons and nuclear weapons we've
figured out
i mean of course they're still a huge
source of danger but we figured out
some way of creating rules
and social stigma over these weapons
that then
creates a stability whatever that
game-theoretic stability is
exactly and we don't have that with ai
and you're kind of
screaming from the top of the mountain
about this that we need to find that
because
as the future of life institute
awards point out
with nuclear weapons
we could have destroyed ourselves quite
a few times and it's
a learning experience
that is very costly we gave
this future of life award
the first time to this guy
vasily arkhipov
someone most people haven't even heard of
yeah can you say who he is
vasily arkhipov he um
has in my opinion made the
greatest positive contribution to
humanity of any human
in modern history and maybe it sounds
like hyperbole here like i'm just
over the top but let me tell you the
story and i think maybe you'll agree
so during the cuban missile crisis
we americans first didn't know that the
russians had sent four submarines
but we caught two of them
and we dropped practice depth
charges on the one that arkhipov was on
to try to force it to the surface
we didn't know that this
submarine actually was a nuclear
submarine with a nuclear torpedo
we also didn't know that they had
authorization to launch it without
clearance from moscow
and we also didn't know that they were
running out of electricity their
batteries were almost dead they were
running out of oxygen sailors were
fainting left and right
the temperature was about 110
120 fahrenheit on board really
hellish conditions really just
kind of doomsday and at that point these
giant explosions start happening from
the americans dropping these depth charges
the captain thought world war iii had
begun they
decided that they were going to launch
the nuclear torpedo and one of them
shouted you know we're all going to die
but we're not going to disgrace our navy
you know we don't know what would have
happened if there had been a giant
mushroom cloud all of a sudden
against the americans but since
everybody had their hands on the
triggers
you don't have to be too
creative to think that it could have led
to an all-out nuclear war
and in which case we wouldn't be having
this conversation now right what
actually took place
was they needed three people to
approve this
the captain had said yes there was the
communist party political officer he
also said yes let's do it
and the third man was this guy
vasily arkhipov who said nyet
yeah for some reason he was just
more chill than the others and he was
the right man at the right time
i don't want us as a species to rely on the
right person being there at the right
time
you know we tracked down his family
living in relative poverty outside
moscow
he himself had passed away
so we flew his daughter and the family to london
they had never even been to the west
it was incredibly
moving to get to honor them for
this
uh the next year we gave this future
life award to
stanislav petrov have you heard of him
yes so he
was in charge of the soviet early
warning
station which was built with soviet
technology
and honestly not that reliable it said
that there were five us missiles coming
in
again if they had launched at that point
we probably wouldn't be having this
conversation he decided
based mainly on gut instinct
not to escalate this and i'm very
glad he wasn't replaced by an ai that
was just
automatically following orders and then
we gave the third one to matthew
meselson
last year we gave this award to
these guys who actually used technology
for good
not avoiding something bad but doing
something good the guys who
eliminated this disease which is way
worse than covid
a disease that had killed half a billion
people in its final century
smallpox right so
as we mentioned earlier covid on average
kills less than one percent of people
who get it smallpox about 30 percent
and the two of them viktor zhdanov
and bill foege most of my colleagues have
never heard of either of them
one russian one american they
did this amazing effort
zhdanov was
able to get the us and the soviet union
to team up against smallpox during the
cold war
and bill foege came up with this
ingenious strategy for actually
going all the way and defeating the disease
without funding for vaccinating
everyone and as a result
we went from 15 million smallpox deaths the
year i was born
so what do we have with covid now a
little bit short of two million deaths right
compared to zero smallpox deaths this year and
forever
there are 200 million people they
estimate who would have died
since then from smallpox had it not been
for this so isn't science awesome
when you use it for good and the reason
we want to celebrate these sorts of
people is
to remind ourselves that science is so
awesome
when you use it for good
and those awards actually the variety
there paints a very
interesting picture so the first
two
it's kind of exciting to
think that these
average humans in some sense
products of billions of
other humans that came before them
evolution and some
little thing you said gut but there's
something
in there that
stopped the annihilation of the human
race
and that's a magical thing that's
like this deeply human thing
and then there's the other aspect
that's also very human
which is to build solutions to the
existential crises that we're facing
to take
responsibility and
come up with different technologies and
so on yeah and both of those are
deeply human the gut and the mind
whatever that is the best is when they
work together arkhipov
i wish i could have met him of course
but he had passed away
he was really a fantastic military
officer
combining all the best traits that
we in america admire in our military
because first of all he was very loyal
of course he never even told anyone
about this
during his whole life even though you'd
think he had some bragging rights right
but he just was like this is just
business just doing my job
it only came out later after his death
and second
the reason he did the right thing was
not because he was some sort of liberal
not because he was
just all peace and love
it was partly because he had been the
captain on another submarine that had a
nuclear reactor meltdown
and it was his heroism that
helped contain it that's why he died
of cancer later but he had seen many
of his crew members die and
i think for him that gave him this gut
feeling that you know
if there's a nuclear war between the u.s
and the soviet union
the whole world is going to go through
what i saw
my dear crew members suffer through it
wasn't just an abstract thing for him i
think
it was real and second not just
the gut but the mind right he
had for some reason a very level-headed
personality a very smart guy
which is exactly what we want our best
fighter pilots to be also right i'll
never forget neil armstrong when he's
landing on the moon and almost running
out of gas
and when they say 30 seconds
he doesn't even change the
tone of his voice just keeps going
arkhipov i think was just like that so when the
explosions start going off and his
captain is screaming we should nuke
them
he's like
i don't think the americans are trying
to sink us
i think they're trying to send us a
message
that's pretty badass yes coolness
because he said if they wanted to
sink us they would have
and he said listen it's
alternating
one loud explosion on the left one on
the right
one on the left one on the right he was
the only one who noticed this pattern
and he's like i think this is them
trying to send us a signal
that they want us to surface and they're
not going to sink us
and somehow
this is how he then managed
ultimately with his
combination of gut and just
cool analytical thinking
to de-escalate the whole thing
yeah so this is some of the best
in humanity i guess coming back to what
we talked about earlier it's the
combination of the neural network
you know with i'm getting teary
here getting emotional but
he is one of my superheroes
having both the
heart and the mind combined
especially in that time
there's something about this i mean
in america people are used to this kind
of idea of being the individual
of thinking on your own yeah i think
in the soviet union under
communism it's actually much harder
to do that oh yeah
he didn't get any accolades either when
he came back for this right
they just wanted to hush the whole
thing up yeah there's echoes of that
with chernobyl
that's a really hopeful thing
that amidst big centralized powers
whether it's companies
or states there's still the power of the
individual
to think on their own to act but i think
we need to think of people like this
not as a panacea we can always count on
but rather as
a wake-up call you know so
because of them because of arkhipov we
are alive to learn from this lesson
to learn from the fact that we shouldn't
keep playing russian roulette and almost
have a nuclear war by mistake
now and then because relying on luck is
not a good long-term strategy
if you keep playing russian roulette over
and over again the probability of
surviving just drops exponentially with
time
yeah and if you have some probability of
having an accidental nuclear war every year
the probability of not having one
also drops exponentially
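as a quick worked version of that point, the one percent figure here is mine and purely illustrative:

\[
P(\text{no accidental nuclear war in } n \text{ years}) = (1-p)^n,
\qquad p = 0.01:\ (0.99)^{100} \approx 0.37
\]

so even a one percent annual chance of an accidental war leaves only about a one in three chance of getting through a century unscathed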
i think we can do better than that
so the message is very clear
these near misses happen once in a while
and there's a lot of very concrete
things we can do
to reduce the risk of things like that
happening in the first place
on the ai front if we could just linger
on it
yeah so you're friends with and often
talk with elon musk you've
done a lot of interesting things together
he has a
set of fears about the future of
artificial intelligence
and agi do you have a sense
we've already talked about the
things we should be worried about with
ai do you have a sense of the shape of
his fears
in particular about ai
which subset of what we've talked
about is it
the direction of creating
these giant computational
systems that are not explainable
not intelligible intelligence
or as a branch of that is it
the manipulation by
big corporations or individual
evil people
using that for destruction or the
unintentional consequences
do you have a sense of where his
thinking is on this from
my many conversations with elon yeah
i certainly have a model of how he
thinks and it's actually very
much like the way i think also i'll
elaborate on it a bit i just want to
push back on when you said evil people
i don't think evil people is a very
helpful concept yes sometimes people
do very very bad things
but they usually do it because they
think it's a good thing
because somehow other people had told
them that it was a good thing or given
them
incorrect information or whatever
right
i believe in the fundamental goodness of
humanity that if we
educate people well and they find out
how things really are
people generally want to do good and be
good
hence the value alignment yeah it's
about information about knowledge and
then once we have that we'll
likely be able to
do good in a way that's aligned
with everybody else
yeah and it's not just the individual
people we have to align so
we we don't just want people to
be educated to know the way things
actually are and to treat each other
well
but we also need to align other
non-human entities we talked about
corporations there have to be
institutions so that what they do is
actually good for the country they're in
and we should make sure that what
countries do is actually good for
the species as a whole etc coming
back to elon yeah
my understanding of
how elon sees this is really quite
similar to my own which is
one of the reasons i like him so much
and enjoy talking with him so much i
feel he's
quite different from most people in that
he
thinks much more than most people about
the really big picture
not just what's going to happen in the
next election cycle but
in millennia millions and billions of
years from now
right and when you look from this
more cosmic perspective it's so obvious
that we are gazing out into this
universe that as far as we can tell
is mostly dead with life being an almost
imperceptibly tiny perturbation right
and he sees this enormous
opportunity for our universe to come
alive
for us to become an interplanetary
species mars is obviously just the
first stop on this cosmic
journey and precisely because he thinks
more long-term
it's much more clear to him than to most
people that this russian
roulette thing we keep playing with our
nukes
is a really poor strategy a really
reckless strategy
and also that just building
these ever more powerful ai systems that
we don't understand
is a really reckless strategy
i feel elon is
very much a humanist in the sense that
he wants an awesome
future for humanity he wants it to be
us that control the machines
rather than the machines that control us
yes you know
and why shouldn't we insist on that
we're building them after all right
why should we build things that just
make us into some little cog in the
machinery that has no further say in the
matter right it's not my idea of
an inspiring future either yeah if
you think on the cosmic scale in
terms of both time and space
so much is put into perspective yeah
whenever i have a bad day that's what i
think about it immediately makes me feel
better
it makes me sad that for us individual
humans at least for now the ride ends
too quickly
we don't get to experience the cosmic
scale
yeah i mean i think of our universe
sometimes as an organism that has only
begun to wake up a tiny bit
just like the very first
little glimmers of consciousness you
have in the morning when you start
coming around before the coffee
even before you get
out of bed before you even open your
eyes
you start to wake up a little bit
and there's something here you know
that's very much how i think of
where we are you know all those
galaxies
really beautiful but why are they
beautiful
they're beautiful because conscious
entities are actually
observing them and experiencing them
through our telescopes
you know i define consciousness
as subjective experience whether it be
colors or emotions or sounds
so beauty is an experience meaning is an
experience
purpose is an experience if there was no
conscious experience of
observing these galaxies they wouldn't
be beautiful if we do something dumb
with advanced ai in the future here and
earth-originating life goes extinct
and that was it and if there is
nothing else with telescopes in our
universe then
it's kind of game over for beauty and
meaning and purpose in our whole
universe and i think that would be just
such an opportunity lost frankly and i
think
when elon points this out he gets very
unfairly
maligned in the media for all the dumb
media bias reasons we talked about right
they print precisely the things about
elon out of context
that are really click-baity like he has
gotten so much flack
for this summoning the demon statement
yeah i happen to know exactly the
context because i was in the front row
when he gave that talk it was at mit
you'll be pleased to know it was the
aeroastro
anniversary they had buzz aldrin there
from the moon landing
a full house kresge auditorium packed
with mit students
and he had this amazing q and a that might
have gone for an hour he
talked about rockets and mars and
everything and at the very end
this one student who was actually in my
class asked him what about ai
elon made this one comment and they took
it out of context
printed it and it went viral
with ai we're summoning the demon
something like that
and try to cast him as some sort of doom
and gloom dude you know
yeah you know elon yes he's not the doom
and gloom
dude he is such a positive visionary
and the whole reason he warns about this
is because he realizes more than most
what the opportunity cost is of screwing
up that there is so
much awesomeness in the future that we
can we can and our descendants can
enjoy if we don't screw up right i get
so pissed off when people try to cast
him as some sort of
technophobic luddite
at this point it's kind of ludicrous
when i hear people say that people
who worry about
artificial general intelligence are
luddites because
of course if you look more closely
some of the most
outspoken people making these warnings are
people like professor stuart russell
from berkeley who's written the
best-selling ai textbook you know
so claiming that he is a luddite who
doesn't understand ai
the joke is really on the people who
say it but i think more broadly
the message of what it is that people
actually worry about has really not
sunk in they
think that elon
and stuart russell and others are
worried about
the dancing robots picking up an
ar-15 and going on a rampage right
they think they're worried about robots
turning evil
they're not i'm not the risk is
not malice
it's competence the risk is just
that we build some systems that are
incredibly competent which means they're
always going to get
their goals accomplished even if they
clash with our goals
that's the risk why did
we humans drive the west
african black rhino extinct
is it because we're malicious evil
rhinoceros haters
no it's just because our goals didn't
align with the goals of those rhinos and
tough luck for the rhinos you know
the point is just we don't want
to put ourselves in the position of
those rhinos
creating something more powerful
than us
if we haven't first figured out how to
align the goals and i am optimistic i
think we could
do it if we worked really hard on it
because i spent a lot of time
around intelligent entities that were
more intelligent than me
my mom and my dad when i was little and
that was fine
because their goals were actually
aligned with mine quite well
but we've seen today many examples of
where the goals of our powerful systems
are not so aligned those
click-through optimization algorithms
that have polarized social media right
they were actually
pretty poorly aligned with what was good
for democracy it turned out
and again almost all the problems we've
had in machine learning came
so far not from malice but from poor
alignment
that's exactly why we should
be concerned about it in the future do
you think it's possible that
with systems like neuralink and
brain-computer interfaces
you know again thinking on the cosmic
scale elon's talked about this
but others have as well throughout
history
of figuring out the exact mechanism of
how to achieve that kind of alignment
so one of them is having a symbiosis
with ai
coming up with clever ways
where we're
stuck together in this weird
relationship
whether it's biological or in some kind
of other way
do you think that's a
possibility of
having that kind of symbiosis or do we
want to instead focus on
distinct entities
of us humans talking to these
intelligible
self-doubting ais maybe like stuart
russell thinks about it
where we're self-doubting
and full of uncertainty and the
ai systems are also full of uncertainty
and we communicate back and forth
and in that way achieve symbiosis
i honestly don't know i would say that
we don't know
for sure which if any of our ideas will work
but i'm pretty convinced that if we don't get
any of these things to work and just
barge ahead then our species is you know
probably going to go extinct this
century
this century you think
we're facing
this crisis as a 21st-century crisis
this century would be remembered
on a hard drive somewhere
or maybe by future generations
like there will be
future life institute awards for
people that have done something about ai
it could also end even worse where
we're not even superseded by leaving
any ai behind we just
totally wipe ourselves out like on easter
island our century is long
there are still 79 years left of
it right think about how
far we've come just in the last 30 years
so
we can talk more about what might go
wrong but you asked me this really good
question about what's the best strategy
is it neuralink
or russell's approach or whatever
i think
you know when we did the
manhattan project
we didn't know if any of our four ideas
for enriching uranium
and getting out the uranium-235
were going to work but we felt this was
really important
to get there before hitler did so you know
what we did we tried all four of them
here i think it's analogous
this is the greatest threat that's ever
faced our species and of course u.s
national security by implication
we don't have any
method that's guaranteed to work but we
have a lot of ideas
so we should invest pretty heavily in
pursuing all of them with an open mind
and hope that one of them at least works
the good news is the century is long
you know
and it might take decades until we have
artificial general intelligence so we
have some time hopefully
but it also takes a long time to solve
these very very difficult problems
it's actually the most
difficult problem we've ever tried to
solve as a species
so we have to start now
rather than beginning to think
about it the night before
some people who've had too much red bull
switch it on and
coming back to your question we have to
pursue all of these different avenues
and see
if you're my investment advisor and i
was trying to invest
in the future how do you think the human
species
is most likely to destroy itself in
this century
yeah so if many of the
crises we're facing are really
before us within the next hundred years
how do we make
known the unknowns and solve those
problems
starting with the biggest
existential crisis so as your investment
advisor how are you planning to make
money on us
destroying ourselves i have to ask
i don't know it might be my
russian origins
somehow involved at the micro level of
detailed strategies of course these are
unsolved problems
for ai alignment we can break it into
three sub-problems
that are all unsolved
you want first to make machines
understand our goals then adopt our
goals
and then retain our goals so
to hit on all three real quickly
the problem when andreas lubitz told his
autopilot to fly into the alps
was that the computer didn't even
understand
anything about his goals it was
too dumb it could have understood
actually
but we would have had to put some effort
in as
system designers to program in
don't fly into mountains
so that's the first challenge
how do you program into computers
human values human goals
rather than saying oh it's so hard
we should start with the simple stuff
as i said self-driving cars airplanes
just put in all the goals that we all
agree on already
and then make a habit of whenever
machines get smarter so they can
understand one level higher goals
putting those in too
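as a minimal sketch of what programming in such an agreed-on goal could look like for that autopilot example, the function name, numbers, and safety margin below are all made up for illustration, this is not real avionics code:

# a toy hard constraint: the autopilot obeys altitude commands
# except it refuses to descend below the terrain ahead,
# the "don't fly into mountains" goal
def safe_altitude_command(requested_alt_m: float,
                          terrain_ahead_m: float,
                          margin_m: float = 300.0) -> float:
    """clamp a requested altitude so it never dips below terrain plus a safety margin"""
    floor_m = terrain_ahead_m + margin_m
    return max(requested_alt_m, floor_m)

# the pilot asks for 100 m over a 2000 m mountain;
# the command gets clamped to 2300 m
assert safe_altitude_command(100.0, terrain_ahead_m=2000.0) == 2300.0

real flight software is of course far more involved, but the shape, a hard safety floor wrapped around the human command channel, is the kind of goal we all already agree on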
the second challenge is
getting them to adopt the goals it's
easy for situations like that where you
just program it in but when you have
self-learning systems like children you
know
any parent knows that um
there's a difference between getting our
kids to understand what we want them to
do and to actually
adopt our goals right with humans
with
children fortunately
they go through this phase where first
they're too dumb to understand what
our goals are and then they have this
period of
some years when they're both smart
enough to understand them and malleable
enough that we have a chance to
raise them well and then they become
teenagers and it's
kind of too late but we have this window
with machines the challenge is
the intelligence might grow so fast that
that window is pretty short
so that's a that's the research problem
the third one is how do you make sure
they keep the goals
if they keep learning more and getting
smarter
many sci-fi movies are about how you
have something which initially was
aligned but then
things kind of go off keel you know
my kids
were very very excited about their legos
when they were little
now they're just gathering dust in the
basement if we
create machines that are really on
board with the goal of taking care of
humanity we don't want them to get as
bored with us
as my kids got with their legos
so this is another research challenge
how can you make some sort of
recursively self-improving system retain
certain basic goals that said a lot of
adults still play with legos so
maybe we succeeded with the legos
i like your optimism so not all ai
systems have to maintain the goals right
just some fraction yeah
so there's a lot of
talented ai researchers now who have
heard of this and want to work on it
but not so much funding for it yet
of the billions that go into building
ai more powerful
it's only a minuscule fraction so far
going into the safety research
my attitude is generally we should not
try to slow down the technology but we
should greatly accelerate the investment
in this sort of safety research
and also make sure it's spent well
this was very embarrassing last year
you know the nsf decided to give out
six of these big institutes we got one
of them for ai and science
which you asked me about another one was
supposed to be for ai safety research
and they gave it to people studying oceans
and climate and stuff
i'm all for studying oceans and
climate but we need to actually
have some money that goes into
ai safety research also and doesn't just
get grabbed by whatever else
that's a fantastic investment and
then at the higher level
you ask this question okay what can we
do what are the biggest risks
i think we cannot just consider
this to be only a technical problem
like again because if you solve only the
technical problem
can i play with your robot yes please
if we just
get our machines to
blindly obey
the orders we give them so we can always
trust that they will do what we want
that might be great for the owner of the
robot but might not be so great for the
rest of humanity
if that person is your least favorite
world leader
or whoever you imagine right so we
have to
apply alignment not just to
machines
but to all the other powerful structures
that's why it's so important to
strengthen our democracy again as i said
to have institutions
make sure that the playing field is not
rigged so that
corporations are given the right
incentives to do the things
that both make profit and are good for
people to make sure that countries have
incentives to do things
that are both good for their people and
don't screw up the rest of the world
and this is not just something for ai
nerds to geek out on this is an
interesting challenge for political
scientists economists and
so many other thinkers so one of the
magical things
that perhaps makes
this earth quite unique is that
it's home to conscious beings so you
mentioned consciousness
perhaps as a small aside
because we didn't really get specific
about how we might
do the alignment like you said it's just
a really important research problem
but do you think engineering consciousness
into ai systems
is a possibility
something that we might one day
do or is there something
fundamental to consciousness
something about consciousness
that is fundamental to
humans and humans only i think it's
possible
i think both consciousness and
intelligence
are information processing certain types
of information processing
and that fundamentally it doesn't matter
whether the information is processed by
carbon atoms in neurons in brains or by
silicon atoms
in our technology
some people disagree but this is what i
think as a physicist
and that consciousness is the same kind of
you said consciousness is information
processing so meaning
you know i think you had a quote of
something like
it's information
knowing itself that kind of thing
i think consciousness is
the way information feels when it's
being processed in certain complex ways
we don't know exactly what those complex
ways are it's clear that
most of the information processing in
our brains does not create an experience
we're not
even aware of it right like for example
um
you're not aware of your heartbeat
regulation right now even though it's
clearly being done by your body
right it's just kind of doing its own
thing when you go jogging
there's a lot of complicated stuff
about how you put your foot down and we
know it's hard that's why robots used to
fall over so much
but you're mostly unaware of it your
ceo consciousness module just sends
an email hey i'm going to keep
jogging along this path and the rest is
on autopilot right
so most of it is not conscious but
somehow
some of the information
processing is conscious
and we don't know exactly which
i think this is a science problem that
i hope one day
we'll have some equation for or
something so we'll be able to build a
consciousness detector and say yeah here
there is some consciousness and here
there's not
oh don't boil that lobster because it's
feeling pain or
it's okay because it's not feeling pain
right now we treat this as sort of just
metaphysics
but it would be very useful in emergency
rooms to know if a patient
has locked in syndrome and is conscious
or if they are
actually just out and in the future if
you build a very very intelligent
helper robot to take care of you you
know i think you'd like to know
if you should feel guilty by shutting it
down or if
or if it's just like a zombie going
through the motions like a fancy tape
recorder right
and and once we can make progress on the
science of consciousness and
figure out what is conscious and what
isn't
then um we
assuming we want to create positive
experiences and not
suffering we'll probably choose to build
some machines that are deliberately
unconscious
that do you know incredibly
boring repetitive jobs in an iron mine
somewhere or whatever
and maybe we'll choose to create helper
robots for the elderly that are
conscious so
so that people don't just feel creeped
out that the you know the robot is just
faking it
when it acts like it's sad or happy like
i said not just the elderly i think everybody gets
pretty deeply lonely in this world
and uh so there's a place i think for
everybody to have a connection with
conscious beings whether they're human
or otherwise but i know for sure that i
would
if i had a robot if i was going to
develop any kind of personal
emotional connection with it i would be
very creeped out if i knew on an
intellectual level that the whole thing
was just a fraud
now today you can buy a little talking
doll
for a kid which will say things
and the little child will often think
that this is actually conscious
yes and even tell real secrets to it that
then go on the internet
with all sorts of creepy repercussions
uh you know
i would not want to be just hacked and
tricked
like this if i was going to be
developing real emotional connections
with
the robot i would want to know that this
is actually real
it's acting conscious acting happy
because it actually feels it
and i i think this is not sci-fi i think
it's possible to measure to come up with
tools and make after we understand the
science of consciousness
you're saying is we'll be able to come
up with tools that can measure
consciousness
and definitively say like this thing is
experiencing
the things it says it's experiencing
kind of by definition if it is a
physical phenomenon
information processing that and we know
that some information processing is
conscious and some isn't well then there
is something there to be discovered with
the methods of science
giulio tononi has stuck his neck out the
farthest and written down some equations
for a theory maybe that's right maybe
it's wrong
we certainly don't know but i applaud
that kind of effort to
sort of say this is
not just something
that philosophers can have a beer and muse
about but something we can
measure and study
i think what we would probably choose to
do as i said once we can figure this
out
is to be quite mindful about
what sort of consciousness
if any we put in different machines
that we have
and certainly
we should
not be making machines that
suffer without us even knowing it right
and if
if at any point someone decides to
upload themselves
like ray kurzweil wants to do i don't
know if you've had him on your show
we agreed to but then covid happened
so we're waiting it out a little bit you
know suppose he uploads himself into
this
robo-ray and it talks like him
and acts like him and laughs like him
and before he powers off his biological
body
he would probably be pretty disturbed if
he realized that there's no one home
this robot is not having any subjective
experience
right if humanity gets
replaced
by machine descendants
which do all these cool things and build
spaceships and go to
intergalactic rock concerts and it
turns out that they are all unconscious
just going through the motions
wouldn't that be like the ultimate robo
zombie apocalypse
right just a play for empty benches yeah
i have a sense that there's some
kind of once we understand consciousness
better we'll understand that there's
some kind of continuum
and it would be a greater appreciation
and we'll probably understand
just like you said it'd be unfortunate
if it's the trick
we'll probably definitely understand
that love is indeed a trick
that we play on each other that we
humans
convince ourselves we're conscious
but really
you know us and trees and dolphins are
all the same kind of conscious can i try
to cheer you up a little bit with the
philosophical thought here about the
last part
yes let's do it you know you might say
okay yeah love is just a
collaboration enabler
and then maybe you can go and get
depressed about that but i
i think that would be the wrong
conclusion actually you know like i know
that the only reason i enjoy food is
because
my genes hacked me and they don't want
me to starve to death
not because they care about me
consciously enjoying the succulent
delights of pistachio ice cream they
just want me to make copies of
them
so in a sense
the whole enjoyment of food is also a
scam
like this but does that mean i shouldn't
take pleasure in this pistachio ice
cream
i love pistachio ice cream and i can
tell you i
i have i know this is an experimental
fact i enjoy pistachio ice cream
every bit as much even though i
scientifically know exactly
what kind of scam this is your
genes
really appreciate that you like the
pistachio ice cream well but i my mind
appreciates it too you know and
i have a conscious experience right now
ultimately all of my brain
is also just something the genes built
to copy themselves but so what
you know i'm grateful that yeah thanks
jeans for doing this but you know
now it's my brain that's in charge here
and i'm gonna enjoy my conscious
experience thank you very much and not
just pistachio ice cream
but also the love i feel for my amazing
wife
and all the other delights of being
conscious i don't
actually richard feynman i think said
this
so well he is also the guy you know who
really got me into physics
some artist friend of his said that oh science
kind of just
is the party pooper it kind of ruins the
fun right
when like you have a beautiful flower
as the artist
and then the scientist is going to
deconstruct that into just a blob of
quarks and electrons and
and feynman just pushed back on that in
such a beautiful way
which i think also can be used to push
back and make you
not feel guilty about falling in love
so here's what feynman basically said
he said to his friend you know yeah i can
also
as a scientist see that this is a
beautiful flower thank you very much
maybe
i can't paint as good a painting as you
because i'm not as talented an artist
but yeah
i can really see the beauty in it and it
just it also looks beautiful to me
but in addition to that feynman said as a
scientist i see even more beauty
that the artist did not see right
suppose this is
a flower on a blossoming apple tree you
could say this tree
has more beauty in it than just the
fruit the colors and the fragrance
this tree is made of air feynman wrote
this is one of my favorite feynman quotes
ever
and it took the carbon out of the air
and bound it in using the flaming heat
of the sun you know to turn the air into
a tree
and when you burn logs in your fireplace
it's really beautiful to think that this
is being reversed now
the wood is going back into
the air and this flaming beautiful
dance of the fire that the artist can
see is the flaming light of the sun that
was
bound in to turn the air into a tree and
then the ashes
are the little residue that didn't come
from the air that the tree sucked out of
the ground you know
feynman said these are beautiful things
and science just
adds it doesn't subtract and i
feel exactly that way about love and
about pistachio ice cream also
i can understand that there is
even more nuance to the whole thing yeah
right
at this very visceral level you can fall
in love just as much as someone who
knows nothing about neuroscience
but you can also appreciate
this even greater beauty
in it just like isn't it remarkable that
it came about from this completely
lifeless universe just
a hot blob of plasma expanding
and then over the eons you know
gradually
first the strong nuclear force decided
to combine
quarks together into nuclei and then the
electric force bound in
electrons and made atoms and then they
clustered from gravity and you got
planets and stars and
this and that then natural selection
came along and
and the genes had their little thing and
you started getting what
went from seeming like a completely
pointless universe that's just trying
to increase entropy and
approach heat death into something that
looked more goal-oriented
isn't that kind of beautiful and then
this goal-orientedness
through evolution got ever more
sophisticated where you got ever more
and then you started getting this thing
which is kind of like deepmind's
muzero on steroids the ultimate self
play
it's not just what deepmind's ai does
against itself to
get better at go it's what all these
little quark blobs did against each other
in the game of survival of the
fittest
you know when you had really dumb
bacteria living in a simple environment
there wasn't much incentive to get
intelligent but then
life made the environment more
complex
and then there was more incentive to get
even smarter
and that gave the other organisms more
incentive to also get smarter and then
here we are now just like muzero
learned to become a world class master at
go and chess
by just playing against itself all the
quarks here on our planet the electrons
have created giraffes and elephants
and humans and pistachio ice cream i just
find that
really beautiful and for me that just
adds to the enjoyment of
of love it doesn't subtract anything do
you feel a little better
i feel way better that was
incredible
so this self play of quarks
taking us back to the beginning of our
conversation a little bit
there are so many exciting possibilities
about artificial intelligence
understanding
the basic laws of physics do you think
ai will help us unlock this there's been
quite a bit of excitement throughout the
history of physics of
coming up with more and more general
simple laws that explain the nature of
our reality
and then the ultimate of that would be a
theory of everything that combines
everything together
do you think it's possible that well one
we humans
but perhaps ai systems will figure out
a theory of physics that unifies
all the laws of physics yeah i think
it's absolutely possible
i think it's it's very clear that we're
going to see a great boost to science
we're already seeing a boost actually
from machine learning helping
science
alphafold was an example you know on the
decades-old protein folding problem
so gradually yeah unless we go
extinct by doing something dumb like we
discussed i think
it's
very likely that our understanding of
physics will become so good
that uh our technology will no longer
be limited by human intelligence but
instead be limited by the laws of
physics
so our tech today is limited by what
we've been able to invent right
i think as ai progresses it'll just be
limited by the speed of light
and other physical limits which would
mean
it's going to be just dramatically
beyond you know where we are now
do you think it's a fundamentally
mathematical pursuit
of trying to understand like the laws
that govern our universe from a
mathematical perspective so almost like
the ai is exploring the space of
like theorems and those kinds of
things or are there some other more
computational ideas
more sort of empirical ideas they're
both i would say
it's really interesting to look out at
the landscape of everything we call
science today
so here we come out with this big new
hammer it says machine learning on it
and ask you know
where are there some nails that you can
help with here that you can hammer
ultimately if machine learning gets to the
point that it can do everything
better than us it will be able to help
across the whole space of
of science but maybe we can anchor it by
starting a little bit right now near
term
and see how we kind of move forward so
so like right now
first of all you have a lot of big data
science
where for example with telescopes we are
able to collect way more data
every hour than a grad student can just
pour over like in the old times right
and machine learning is already being
used very effectively even at mit to
find planets around other stars to
detect
exciting new signatures of new particle
physics in the sky and
to detect the ripples in the fabric of
space
time that we call gravitational waves
caused by enormous black holes crashing
into each other halfway across the
observable universe machine learning is
running and thinking
right now you know doing all these
things and it's really helping
all these experimental fields there is a
separate front of physics
computational physics which is getting
an enormous boost also
so we had to do all our computations
by hand right
people would have these giant books with
tables of logarithms and oh my god
it just pains me to even think how
long it would have taken to do
simple stuff then we started to get
little calculators and computers that
could do some basic math for us
now what we're starting to see is
kind of a shift from gofai
computational physics to
neural network computational physics
what i mean by that is
most computational physics would be done
by humans programming in the
intelligence of how to do the
computation into the computer
just as when gary kasparov got his
posterior kicked by ibm's deep blue in
chess
humans had programmed in exactly how to
play chess intelligence came from the
humans it wasn't learned right
muzero can beat not only
kasparov in chess but also stockfish which is
the best
sort of gofai chess program by learning
and we're seeing more of that now that
shift beginning to happen in physics so
let me give you
an example so lattice qcd is an area of
physics whose goal is basically to take
the periodic table
and just compute the whole thing from
first principles
this is not the search for theory of
everything we already know the theory
that's supposed to produce as output the
periodic table
which atoms are stable how heavy they
are all that good stuff
their spectral lines
the theory is called qcd you can put it
on your t-shirt
our colleague frank wilczek got the
nobel prize for working on it
but the math is just too hard for us to
solve we have not been able to start
with these equations and solve them to
the extent that we can predict oh yeah
and then there is carbon and this is
what the spectrum of the carbon atom
looks like
but awesome people are building these
supercomputer simulations
where you just put in these equations
and
you make a big cubic lattice of space
or actually a very small lattice
because you're going
down to the subatomic scale
and you're trying to solve this but it's
just so computationally expensive
that we still haven't been able to
calculate things as accurately as we
measure them in many cases
and now machine learning is really
revolutionizing this so my colleague
phiala shanahan at mit for example she's
been using this really cool machine
learning technique
called normalizing flows where she
realized she can actually speed up the
calculation
dramatically by having the ai learn how
to do things faster
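To make that concrete, here is a toy sketch of the core building block of a normalizing flow, an invertible transformation with a cheap Jacobian determinant (my own illustration in PyTorch, not the actual lattice QCD code):

```python
# Toy sketch of a normalizing flow (RealNVP-style affine coupling layer).
# This is NOT the lattice-QCD code discussed above, just the general idea:
# learn an invertible map from an easy-to-sample base distribution toward
# a hard target distribution, tracking the density change exactly.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        # small net that computes a scale and shift for half the variables
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),  # outputs scale and shift together
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)            # split the variables
        s, t = self.net(x1).chunk(2, dim=-1)   # condition on first half
        y2 = x2 * torch.exp(s) + t             # transform second half
        log_det = s.sum(dim=-1)                # log|det Jacobian|, cheap
        return torch.cat([x1, y2], dim=-1), log_det

flow = AffineCoupling()
z = torch.randn(1000, 2)     # cheap samples from a Gaussian base
x, log_det = flow(z)         # pushed-forward samples + exact density correction
# stacking many such layers and training them is what makes the
# expensive sampling problem dramatically faster
```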
another area like this
where we suck up an enormous amount of
supercomputer time to do physics
is black hole collisions so now that
we've done the sexy stuff of detecting
a bunch of them with ligo and other
experiments we want to be able to
know what we're seeing and so it's a
very simple
conceptual problem it's the two body
problem
newton solved it for classical gravity
hundreds of years ago but the two body
problem is still not
fully solved for black holes
black holes yes
in einstein's gravity because
they won't just orbit each other forever
anymore these things give off
gravitational waves
that make sure they crash into each other
and what you want to do is you
want to figure out okay
what kind of wave comes out as a
function of of the masses of the two
black holes
as a function of how they're spinning
relative to each other
etc and that is so hard it can take
months of super computer time on
massive numbers of cores to do it you
know
wouldn't it be great if you could use
machine learning
to greatly speed that up right
now you can use the expensive old gofai
calculation as the truth and then see if
machine learning can figure out a
smarter faster way of getting the right
answer
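A minimal sketch of that surrogate idea, under stated assumptions: expensive_waveform below is a hypothetical stand-in for a real numerical relativity solver, and the formula and numbers inside it are made up for illustration.

```python
# Toy sketch of the surrogate-model idea: treat the expensive solver's
# output as ground truth and train a fast regressor to imitate it.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_waveform(m1, m2, spin):
    # hypothetical stand-in for months of supercomputer time; returns a
    # couple of made-up summary numbers for the emitted wave
    chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2   # chirp-mass-like combo
    return np.array([chirp, chirp * (1 + 0.1 * spin)])

# sample (mass1, mass2, spin) parameters and run the "expensive" code once
params = np.random.uniform([5, 5, -1], [50, 50, 1], size=(500, 3))
truth = np.array([expensive_waveform(*p) for p in params])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(params, truth)                       # learn the expensive map
fast_prediction = surrogate.predict([[30.0, 25.0, 0.3]])  # ~instant reuse
```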
yet another area of computational
physics
these are probably the big three that
suck up the most computer
time lattice qcd black hole collisions
and cosmological simulations where you
don't take a subatomic
thing and try to figure out the mass of
the proton but you
take something enormous and
try to look at how all the galaxies get
formed in there oh wow
yeah there again there are a lot of very
cool ideas right now about how you can
use machine learning to
do this sort of stuff better
the difference between this and the big
data case is you kind of make the data
yourself
right so and then finally
we're looking over the physics landscape
and seeing what can we hammer with
machine learning right so we talked
about
experimental data big data discovering
cool stuff
that we humans then look more closely at
then we talked about taking the
expensive computations we're doing now
and figuring out how to do them much
faster and better
with ai and finally let's go really
theoretical
like so things like discovering
equations
having deep fundamental insights
this is something
closest to what i've been doing in my
group we talked earlier about the whole
ai feynman project where if you just
have some data
how do you automatically discover
equations that
seem to describe this well that you can
then go back as a human and
work with and test and explore
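Here is a toy sketch in the spirit of that kind of equation discovery, far simpler than the actual AI Feynman pipeline: fit a small library of candidate formulas to the data and keep the one that fits best.

```python
# Toy sketch of discovering a human-readable law from data (illustrative
# only, not the AI Feynman code): least-squares fit a few candidate
# functional forms and keep the best one.
import numpy as np

x = np.linspace(0.1, 10, 200)
y = 3.0 * x**2 + np.random.normal(0, 0.1, x.size)   # "measured" data

candidates = {
    "a*x":      lambda x, a: a * x,
    "a*x**2":   lambda x, a: a * x**2,
    "a*log(x)": lambda x, a: a * np.log(x),
    "a*exp(x)": lambda x, a: a * np.exp(x),
}

best = None
for name, f in candidates.items():
    basis = f(x, 1.0)                    # shape of this candidate form
    a = (basis @ y) / (basis @ basis)    # best coefficient, closed form
    err = np.mean((y - f(x, a)) ** 2)
    if best is None or err < best[2]:
        best = (name, a, err)

print(best)  # -> ('a*x**2', ~3.0, tiny error): an equation a human can use
```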
you asked a really good question also
about whether this is sort of a search
problem in some sense that's very deep
actually what you said because it is
suppose i ask you to prove some
mathematical theorem
what is a proof in math it's just a long
string of steps
logical steps that you can write out
with symbols
and once you find it it's very easy to
write a program to check whether it's a
valid
proof or not so why is it so hard to
prove it then well
because there are ridiculously many
possible candidate proofs you could
write down right
if the proof contains 10 000
symbols
even if there were only 10 options for
what each symbol could be
that's 10 to the power 10 000 possible
proofs
which is way more than there are atoms
in our universe right so
you could say it's trivial to prove
these things you just have a computer
generate all strings and then check is
this a valid proof
no is this a valid proof no
and then you just keep doing this
forever
but it is
fundamentally a search problem you just
want to search the space of
all strings
of symbols to find one
that is a valid proof
right and there's a whole area of
machine learning
called search how do you search through
some giant space to find the needle in
the haystack
it's easier in cases where there's a
clear measure
of good where you're not just right or
wrong but this is better
and this is worse you can maybe get some
hints as to which direction to go in
that's why as we talked about neural
networks work so well
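A toy sketch of that contrast between blind enumeration and search guided by a crude value signal; the "proofs" here are just strings over a tiny alphabet, purely illustrative.

```python
# Toy sketch: proof search as string search. The "theorem" is an
# arbitrary target string and a "proof" is any string that matches it;
# this only illustrates why blind enumeration explodes and why a
# value signal ("intuition") helps.
import itertools
import heapq

ALPHABET = "abcde"
TARGET = "decade"   # stands in for "a valid proof"

def blind_search():
    # brute force: check every string of length 1, 2, ... in order
    tried = 0
    for n in range(1, len(TARGET) + 1):
        for s in itertools.product(ALPHABET, repeat=n):
            tried += 1
            if "".join(s) == TARGET:
                return tried

def guided_search():
    # best-first search with a crude "value function": prefer prefixes
    # that already match the target (an intuition module in miniature)
    def value(prefix):
        return sum(a == b for a, b in zip(prefix, TARGET))
    tried = 0
    frontier = [(0, "")]
    while frontier:
        _, prefix = heapq.heappop(frontier)
        tried += 1
        if prefix == TARGET:
            return tried
        if len(prefix) < len(TARGET):
            for c in ALPHABET:
                heapq.heappush(frontier, (-value(prefix + c), prefix + c))

print(blind_search(), guided_search())  # thousands of tries vs a handful
```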
i mean that's such a human thing of that
moment of genius of figuring out
the intuition of of good
essentially i mean we thought that that
is it
maybe it's not right we thought that
about chess right that exactly
that the ability to see like 10 15
sometimes 20 steps ahead was not a
calculation that humans were performing
it was some kind of weird intuition
about different patterns about board
positions about the relative positions
exactly
somehow stitching stuff together and a
lot of it is just like intuition
but then you have alphazero i guess
the first one that did
the self-play
and through the self-play
mechanism it learned this kind of intuition
exactly but just like you said it's so
fascinating to think
well they're in the space of totally new
ideas can that be done
in developing theorems we know it can be
done by neural networks because we did
it with the neural networks in the
cranium those are the great
mathematicians of our
of humanity right and and i'm so glad
you brought up alpha zero because that's
the counter example
it turned out we were flattering
ourselves when we said
intuition is something different that
only humans can do that it's not
information processing
the way it used to be
again it's really instructive
i think to compare
the chess computer deep blue that beat
kasparov
with alphago that beat lee sedol at
go
because for deep blue there was no
learned intuition
humans had programmed
in some intuition
after humans had played a lot of games
they told the computer you know
count the pawn as one point the bishop
as three points
the rook as five points and so on you add it
all up and then you add some extra
points for passed pawns and
subtract if the opponent has them and
blah blah blah
and then what deep blue did was
just search
just very brute force it tried many many
moves ahead all these combinations in a
pruned tree search
and it could think much faster than
kasparov and it won
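For concreteness, a toy sketch of a hand-coded evaluation function in that spirit (illustrative only; Deep Blue's real evaluation had thousands of hand-tuned terms):

```python
# Toy sketch of a Deep Blue-style hand-coded evaluation function:
# all the "intuition" is written in by humans, nothing is learned.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    """board: dict of piece letter -> (my_count, opponent_count)."""
    score = 0.0
    for piece, (mine, theirs) in board.items():
        score += PIECE_VALUES.get(piece, 0) * (mine - theirs)
    return score  # positive = good for me, by human decree

# e.g. up a rook but down a pawn:
print(evaluate({"P": (7, 8), "R": (2, 1), "B": (2, 2)}))  # -> 4.0
```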
right and that i think
inflated our egos in a way it shouldn't
have because people started to say
yeah yeah it's just brute force search
it has no intuition yeah
alphazero really
popped our bubble there yeah because
what alphazero does yes it does also do
some of that
search but it also has this intuition
module
which in geek speak is called a value
function where it just looks at the
board
and comes up with a number for how good
that position is
the difference was no human told it how
good the position
is it just learned it
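And, by contrast, a toy sketch of the learned alternative: a tiny value network fit to game outcomes rather than hand-written rules (a minimal illustration, nothing like the real AlphaZero architecture or training loop):

```python
# Toy sketch of an AlphaZero-style value function: a neural network that
# maps a board position to one number estimating how good it is, trained
# from (self-play) outcomes instead of human-written piece values.
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    def __init__(self, board_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(board_size, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Tanh(),   # output in [-1, 1]: lose..win
        )

    def forward(self, board):
        return self.net(board)

value = ValueNet()
# pretend self-play data: random positions labeled with game outcomes
positions = torch.randn(256, 64)
outcomes = torch.rand(256, 1) * 2 - 1
opt = torch.optim.Adam(value.parameters(), lr=1e-3)
for _ in range(100):   # the "intuition" emerges from fitting outcomes
    loss = nn.functional.mse_loss(value(positions), outcomes)
    opt.zero_grad(); loss.backward(); opt.step()
```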
and muzero is the coolest or
scariest
of all depending on your mood because
the same
basic ai system will learn
what a good board position is
regardless of whether it's chess
or go or shogi or pac-man
or ms pac-man or breakout or space
invaders or any number of other
games you don't tell it anything and it
gets this intuition
after a while for what's good so this is
very hopeful for science i think
because if it can get intuition for
what's a good position there
maybe it can also get intuition for what
are some good directions to go if you're
trying to prove something
one of the most fun things
in my science career is when i've been
able to prove some theorem about
something and
it's very heavily guided of course i
don't sit and try all random strings i
have a hunch that you know
this reminds me a little bit about this
other proof i've seen
for this thing so maybe first what if
i try this
nah that didn't work out but the way this
failed reminds me of that so
combining that intuition
with all these brute force
capabilities i think it's going
to be able to help physics too
do you think that there will be a day
when an ai system
being the primary contributor let's say
wins the nobel prize in physics
obviously they'll give it to the humans
because we humans don't like to give
prizes to machines
they'll give it to the humans
behind the system you could argue that
ai has already been involved in some
nobel prizes probably maybe something
with black holes and stuff like that
yeah we don't like giving prizes to
other life forms
if someone wins a horse racing contest
they don't give the prize to the horse
either it's true
but do you think we might be
able to see something like that in our
lifetimes
so like the first system i would say
that
makes us think about a nobel prize
seriously
is alphafold which is making us think
about
a nobel prize in medicine or physiology
perhaps for discoveries that are a direct
result of something that's discovered by
alphafold do you think in physics
we might be able to see that in our
lifetimes i think
what's probably going to happen is more
of a blurring of the distinctions
so today
if somebody uses a computer to do a
computation that gives them the nobel
prize nobody's going to dream of giving
the prize to the computer they're going
to be like that was just a tool
i think for these things also people
are just going to
for a long time view the computer as a
tool but what's going to
change is the
ubiquity
of machine learning i think
at some point in my lifetime finding a
human physicist who knows nothing about
machine learning is going to be about
almost as hard as
it is today finding a human physicist
who says oh i don't know
anything about computers
or i don't use math it would just
be a ridiculous concept
you see but the thing is there is a
magic moment though
like with alpha zero when the system
surprises us in a way where the best
people in the world
truly learn something from the system in
a way where
you feel like it's another entity yeah
like the way people
the way magnus carlsen the way certain
people are looking at the work of
alphazero
it's like it truly is no longer a
tool
in the sense that it doesn't feel
like a tool
it feels like some other entity so
there's a magic difference like where
where you're like you know if an ai
system is able to come up with an
insight
that surprises everybody
in some major way that's a phase
shift
in our understanding of some particular
science or
some particular aspect of physics i feel
like that
is no longer a tool and then you can
start to say
uh that like it perhaps deserves the
prize
so for sure the more important the more
fundamental
transformation of 21st century
science is exactly what you're saying
which is that
probably everybody will be doing machine
learning
to some degree like if you want to
be successful
at unlocking the mysteries of science
you should be doing machine learning but
it's just exciting to think about like
whether there will be one that comes
along that's super surprising
and that'll make us question
like who the real inventors are in this
world yeah
yeah i think the question isn't if
it's going to happen but when
but it's important in my mind
the time when that happens is also more
or less the same time
when we get artificial general
intelligence yes and then we have a lot
bigger things to worry about than
whether they should get the nobel prize
or not right
yeah because when you have
machines that can outperform our best
scientists
at science they can probably outperform
us
at a lot of other stuff as well
which can at a minimum you know make
them incredibly powerful agents in
the world you know
and i think it's a mistake
to think we only have to start worrying
about
loss of control when machines get to agi
across the board where they can do
everything all our jobs long before that
they'll be hugely influential we talked
at length
about how the hacking of our minds
with um
algorithms trying to get us glued to our
screens right has already had a big
impact on
society that was an incredibly dumb
algorithm in the grand scheme of things
right
just supervised machine learning yet
it had a huge impact
so i just don't want us to be
lulled into a false sense of security and
think there won't be any societal impact
yeah until things reach human level
because it's happening already
i was just thinking the other week
you know when i see some scaremonger
going oh the robots are coming the
implication
is always that they're coming to kill us
yeah and maybe you should have worried
about that if you were in
nagorno-karabakh
during the recent war there but more
seriously
the robots are coming right now but
they're mainly not coming to kill us
they're coming to hack us
they're coming to hack our minds
into buying things that maybe we
didn't need
into voting for people who may not have our
best interests in mind
and it's kind of humbling i think
actually
as a human being to admit that it turns
out that our minds are actually much
more hackable than we thought
and the ultimate insult is that we are
actually getting hacked by
the machine learning algorithms that are
in some objective sense much dumber than
us
you know but maybe we shouldn't be so
surprised because you know
how do you feel about the cute puppies
love them
so you know you would probably argue
that in some
across-the-board measure you're more
intelligent than they are but boy are
cute puppies good at hacking us
right yeah they move into our house
persuade us to feed them and do all
these things what do they ever do
for us
yeah other than being cute and making us
feel good right
so if puppies can hack us maybe we
shouldn't be so surprised if
pretty dumb machine learning
algorithms can hack us too not to speak
of cats which is another level
and to counter
your previous point that there are
no evil creatures in
this world i think we can all agree that cats
are as close to objective evil as we can
get
but that's just me saying that okay so
have you seen the
cartoon i think it's maybe from the onion
with this incredibly cute kitten and
underneath it just says
thinks
about murder all day
exactly that's accurate you've
mentioned offline that there might be a
link between
post-biological agi and seti so last
time we
talked you talked about
this intuition that
we humans might be quite unique
in our galactic neighborhood perhaps our
galaxy
perhaps the entirety of the observable
universe we might be the only
intelligent civilization here
and you argue pretty well for that
thought
so i have a few little questions
around this
one the scientific question
if you were wrong in that intuition
in which way do you think you would be
surprised like
why were you wrong if we find out that
you ended up being wrong
like in which dimensions so like is it
because we can't
see them is it because the nature of
their
intelligence or the nature of their life
is
totally different than we can possibly
imagine
is it uh because the
i mean something about the great filters
and surviving them
or maybe because we're being protected
from signals all those explanations
for why we haven't
heard a big loud signal
that says yeah we're here yeah
so there are actually two separate
things there that i could be wrong about
two separate claims that i made right
one of them is i made the claim i
think
most civilizations
when you're going from simple bacteria
like things to
space colonizing civilizations they
spend
only a very very tiny fraction
of their life being where we are now that
i could be wrong about
the other one i could be wrong about is
a quite different statement that i think
that actually
i'm guessing that we are the only
civilization in our observable universe
from which light has reached us so far
that that's actually
gotten far enough to invent telescopes
so let's talk about maybe both of them
in turn because they really are
different
the first one if we look at
the n equals one data point we have
on this planet
right so we spent four and a half
billion years fussing around on this
planet with life
and most of it was pretty lame stuff
from an intelligence perspective
you know it was bacteria but things
gradually accelerated
the dinosaurs spent over 100
million years stomping around here
without even inventing smartphones
and then very recently
you know we've only spent 400
years going from newton
to us right yeah in terms of technology
and
look what we've done you know
when i was a little kid there was no
internet even
so i think it's pretty likely
in the
case of this planet right that we're
either gonna
really get our act together and
start spreading life into space this
century and doing all sorts of great
things or we're gonna
wipe ourselves out
i could be wrong in the sense
that maybe what happened on this
earth is very atypical and for some
reason what's more common on other
planets is that
they spend an enormously long time
futzing around with the ham radio and
things but they just never really
take it to the next level for reasons i
haven't understood
i'm humble and open to that but i would
bet
at least ten to one that our situation
is more typical
because the whole thing with moore's law
and accelerating technology it's pretty
obvious why it's happening
everything that grows exponentially we
call it an explosion whether it's a
population explosion or a nuclear
explosion it's always caused by the same
thing it's that
the next step triggers a step after that
so
today's
technology enables tomorrow's technology
and that enables
the next level and
because the technology is
always getting better of course the steps can
come faster and faster
on the other question that i might be
wrong about that's the much more
controversial one i think
but before we close out on
the first one if it's true that most
civilizations spend only a very short
amount of their total time in the stage
say between
inventing telescopes or
mastering electricity and
doing space travel
if that's actually generally true
then that should apply also elsewhere
out there so we should
be very very surprised if we find
some random civilization and we happen
to catch them exactly in that very very
short stage
it's much more likely that we find a
planet full of bacteria
yes or that we find some civilization
that's already post-biological and has
done some really cool
galactic construction projects
in their galaxy would we be able to
recognize them do you think is it
possible
that we just can't i mean this post
biological world
could it be just existing in some other
dimension it
could be just all a virtual reality game
for them or something i don't know
if it's so different that we won't be
able to detect them
we have to be honestly very humble about
this yeah i think i
said earlier the number one
principle of being a scientist is you
have to be humble and willing to
acknowledge that everything we think or
guess might be totally wrong
of course you can imagine some
civilization where they all decide to
become buddhists and very inward looking
and just move into their little virtual
reality and not disturb the
flora and fauna around them and we might
not notice them
but this is a numbers game right if you
have millions of civilizations out there
or billions of them
all it takes is one with a more
ambitious mentality
right that decides hey we are going to
go out and
settle a bunch of other solar systems
and maybe galaxies
and then it doesn't matter if they're a
bunch of quiet buddhists we're still
going to notice
yeah that expansionist one right and it
seems like
quite the stretch to assume that you
know we know even in our own galaxy that
there are
probably a billion or more planets that
are pretty earth-like
and many of them were formed over a
billion years before ours
so had a big head start so if you
actually assume also that life happens
kind of
automatically on an earth-like planet
i think it's quite the
stretch to then go and say okay so
there are another billion
civilizations out there that also have
our level of tech
and they all decided to become buddhists
and not a single one
decided to go hitler on the
galaxy and say we need
to go out and colonize and
not a single one decided for more
benevolent reasons to go out and get
more resources
that seems like a bit of a
stretch frankly and this leads into the
second thing you challenged me on
that i might be wrong about
how rare or common is life you know
so frank drake when he wrote down the
drake equation multiplied together a
huge number of factors
and we don't know any of them so we
know even less about
what you get when you multiply together
the whole product yeah
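For reference, the equation he is describing is conventionally written as

$$ N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L $$

where R_* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of habitable planets per such star, f_l, f_i, f_c the fractions on which life, intelligence, and detectable communication arise, and L the lifetime of a communicating civilization.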
since then a lot of those factors have
become much better known
one of his big uncertainties was how
common is it that a solar system even
has a planet right
well now we know it's very common
and earth-like planets we know
we have found many many of
them even in our galaxy
at the same time you know i'm a big
supporter of the seti
project
and its cousins and i think we should
keep doing this and we've learned a lot
we've learned that so far all we
have is still
unconvincing hints nothing more right
and there are certainly many
scenarios where it would be
dead obvious if there were
100 million other human-like
civilizations in our galaxy
it would not be that hard to notice some
of them with today's technology and we
haven't right so
so what we can say is well
okay
we can rule out that there is a human
level civilization on the moon
and in fact in many nearby solar systems
we cannot rule out of course that there
is something like earth
sitting in a galaxy 5 billion light
years away
but we've ruled out a lot and that's
already kind of shocking
given that there are all these planets
there you know so like where are they
where are they all that's the
classic fermi paradox
yeah so my argument which
might very well be wrong
it's very simple really it just goes
like this okay we have no clue about
the probability of
getting life on a random planet it could
be
10 to the minus one a priori or 10 to
the minus five
10 to the minus 20 10 to the minus
30 10 to the minus
40 basically every order of magnitude
is about equally likely
when you then do the math and ask how
close is our nearest neighbor
it's again equally likely that it's 10
to the 10 meters away 10 to the 20 meters
away 10 to the 30 meters away
we have some nerdy ways of
talking about this with bayesian
statistics and a uniform log prior but
that's irrelevant
this is the simple basic argument and
now comes the data so we can say okay
out of all these orders of
magnitude 10 to the 26 meters away
is the edge of our observable
universe if it's farther than that light
hasn't even reached us yet
if it's less than 10 to the 16 meters
away well that's
within about a light year of us
we can definitely rule
that out you know
so i think about it like this a
priori before we look through telescopes you
know
it could be 10 to the 10 meters 10 to the 20 10 to
the 30 10 to the 40 10 to the 50
equally likely anywhere here yeah
and now we've ruled out this chunk
yeah and most of what's left is outside here
the edge of our observable universe
yes yep so
i'm certainly not saying i don't think
there's any life elsewhere in space
if space is infinite then you're
basically 100 percent guaranteed that there is
but the probability that
the nearest neighbor happens to
be in this little region between
where we would have seen it already yeah
and where we will never see it is
actually
significantly less than one i think
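A toy version of that calculation, with made-up cutoffs standing in for the real astrophysics (my own gloss on the argument, not Tegmark's published numbers):

```python
# Toy sketch of the log-uniform-prior argument: if each order of
# magnitude for the nearest civilization's distance is a priori equally
# likely, how much prior mass lands in the window between "close enough
# we'd have noticed" and "beyond our observable universe"? The cutoff
# exponents 16 and 26 are illustrative placeholders.
import numpy as np

exponents = np.arange(10, 51)     # distance = 10**k meters, k in [10, 50]
prior = np.ones_like(exponents, dtype=float)
prior /= prior.sum()              # uniform over orders of magnitude

ruled_out = exponents < 16        # nearby region already searched
observable = exponents <= 26      # inside our observable universe

# posterior after ruling out the nearby region
posterior = np.where(ruled_out, 0.0, prior)
posterior /= posterior.sum()

p_window = posterior[(~ruled_out) & observable].sum()
print(f"P(nearest neighbor in the visible-but-unseen window) ~ {p_window:.2f}")
# with these toy numbers most of the posterior lies beyond the horizon,
# i.e. the nearest neighbor is probably one we will never see
```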
i think there's a moral lesson from this
which is really important
which is to be good stewards
of this planet and this shot we've had
you know it can be very dangerous to say
oh you know it's fine if we nuke our
planet or ruin the climate
or mess it up with an unaligned ai
because
i know there is this nice star trek
fleet out there they're going to swoop
in
and take over where we failed just like
it wasn't a big deal that the easter
islanders wiped themselves out it's a
dangerous way of lulling yourself into a
false sense of security
if it's actually the case that it might
be up to us
and only us the whole future of
intelligent life in our observable
universe
then i think it really
puts a lot of responsibility on our
shoulders it's a little bit
terrifying but it's also inspiring
but it's empowering for most people i think
because the biggest problem
today is
i see this even when i teach right
so many people feel that it doesn't
matter what they do
we feel disempowered oh it makes no
difference
this is about as far from that as you
can come when we realize that what we do
on our little spinning ball here in our
lifetime you know
could make the difference for the entire
future of life in our universe you know
how empowering is that yeah survival of
consciousness i mean and
the other very similar kind of
empowering aspect of the drake equation
is
say that there is a huge number of
intelligent civilizations that spring up
everywhere
but because of the l term in the drake equation which
is the lifetime of a civilization
yeah maybe many of them hit a wall
yeah
and just like you said it's clear that
for us the great
filter the one possible great filter
seems to be coming
you know in the next 100 years so it's
also empowering to
say okay well
we have a chance to make it through i mean the way
great filters work it does just get most
of them
exactly nick bostrom has articulated
this really beautifully too you know
every time
yet another search for life on mars
comes back negative or something i'm
like
yes our odds for
surviving just went up yes you already made
the argument in broad strokes there
right
but just to unpack it right the point is we
already know
there's a crap ton of planets out there
that are earth-like and we also know
that
most of them do not seem to have
anything like our kind of life on them
so
what went wrong there's clearly at least one
filter a roadblock somewhere
along the evolutionary path from no life
to space-faring life and
where is it is it in front of us or is
it behind us
right if there's no filter behind us
and we keep finding all sorts
of little mice on
mars or whatever right that's
actually very depressing because that
makes it much more likely that the
filter is in front of us
and that what actually is going on is
like the ultimate
dark joke that whenever a
civilization
invents sufficiently powerful tech
you just set your clock and
then after a while it goes poof
for one reason or other and wipes itself
out wouldn't that be
utterly depressing
if we're actually doomed whereas if it
turns out
that there really is a great
filter early on
where for whatever reason it seems to be
really hard to get to the stage of
sexually reproducing organisms or even
the first
ribosome or whatever right
or maybe you have lots of planets with
dinosaurs and cows but for some reason
they tend to get stuck there and never
invent smartphones
all of those are huge boosts for our
own odds
because been there done that you know
it doesn't matter how hard or unlikely
it was that we got past that
roadblock because we already did yeah
and
then that makes it likely that the
future is in our own hands we're not
doomed so that's why i think
the idea that life is rare
in the universe is not just something
that there's some evidence for
but also something we should actually
hope for
so that's the mortality the
death of human civilization that we've
been discussing and life maybe prospering
beyond
any kind of great filter do you think
about your own
death does it make you sad that you may
not witness some of this
you know you lead a research group
working on some of the biggest questions in
the universe
both on the physics and the ai side
does it make you sad that you may not be
able to see some of these
exciting things come to fruition that
we've been talking about
of course of course it sucks the fact
that i'm going to die
and i remember once when i was much
younger
my dad made this remark that life is
fundamentally tragic and i'm like what
are you talking about dad you know
many years later now i feel
like i totally understand what he means
you know
we grow up as very little kids and
everything is infinite and it's so cool
and then suddenly
we find out that actually you know
it's game over at some point
so of course it's
it's something that's sad are you
afraid
no not in the sense that i think
anything terrible is going to happen
after i die or anything like that no i
think it's really going to be game over
but it's more that
it makes me very acutely aware of
what a wonderful gift this is that
i get to be alive right now
and it's a steady reminder to just
live life to the fullest and really
enjoy it because it is finite you know
and i think actually we all
get
regular reminders when someone near and
dear to us dies
that
you know one day it's going to be our
turn it adds this kind of focus
i wonder what it would feel like
actually to be an immortal being
if they might even enjoy some of the
wonderful things of life a little bit
less
just because there isn't that um
finiteness yeah do you think that could
be a feature not a bug the fact that we
beings are finite maybe there are lessons
for
engineering artificial intelligence
systems as well
that are conscious like do you think
is it possible
that the reason the pistachio ice cream
is delicious
is the fact that you're going to die one
day and you will not have
all the pistachio ice cream that you
could eat
because of that fact well let me say two
things first of all
it's actually quite profound what you're
saying i do think i appreciate the
pistachio ice cream a lot more
knowing that there's only a finite
number of times i get to enjoy it
and i can only remember a finite number
of times in the past
and moreover my life is not so long that
it just starts to feel like things are
repeating themselves in general
it's so new and fresh i also think
though that
death is a little bit overrated in
the sense that
it comes from a sort of outdated view of
physics and what life actually is
because
if you ask okay what is it that's going
to die exactly what
am i really when i say i feel sad
about
the idea of myself dying am i really sad
that this skin cell here is going to die
of course not because it's going to die
next week anyway and then i'll grow a
new one
right and it's not any of my cells that
i'm
really associating with who i
really am
nor is it any of my atoms or quarks or
electrons
in fact basically all of my atoms get
replaced
on a regular basis right so what is it
that's really me
from a modern physics perspective it's
the information
processing in me that's where my
memories are
that's my values
my dreams my passions my love
that's what's
really fundamentally me and
frankly not all of that will die
when my body dies like richard
feynman for example
his body died of cancer you know but
many of the
ideas that made him very
him actually live on
this is my own little personal tribute
to richard feynman right i try to keep a
little bit of him alive in myself
and i've even quoted him today right
yeah he almost came alive for a brief
moment in this conversation
yeah and this honestly gives me
some solace you know when i work as a
teacher i feel
if i can actually share a bit of
myself
that my students feel is worthy enough to
copy and
adopt some part of what i know
or
believe or aspire to
now i live on also a little bit in them
right
and so
being a teacher
is something also that contributes
to making me
a little teeny bit less mortal right
because
at least i'm not all gonna die
at once right
and i find that a beautiful tribute to
people we generally respect
if we can remember them and
carry
in us the things that we felt were the
most
awesome about them right then they live
on
and i'm getting a bit emotional here
but it's a very beautiful idea you bring
up there
i think we should stop this
old-fashioned materialism where we just
equate
who we are with our quarks and electrons
there's no scientific basis for that
really and
it's also very uninspiring
now if you look a little bit towards the
future
right one thing which really sucks about
humans dying
is that even though some of their
teachings and memories and stories
and ethics and so on will be copied by
those around them hopefully
a lot of it can't be copied and just
dies with their brain and that
really sucks
that's the fundamental reason why
we find it so tragic
when someone goes from having all this
information there
to it just being gone ruined right
with more post-biological
intelligence
that's gonna shift a lot right
the only reason it's so hard to make a
backup of your brain
in its entirety is exactly because it
wasn't built for that right
if you have a future machine
intelligence
there's no reason why it has to die
at all
if you want to copy it whatever it is
into some other quark blob
right you can copy not just some of it but
all of it right
and so in that sense
you can get immortality because all the
information can be copied out of any
individual
entity and it's not just
mortality that will change if we get
more post-biological life it's also
with that i think very much the
whole
individualism we
have now right the reason that we make
such a big distinction between me
and you is exactly because we're a
little bit limited in how much we can
copy like i would just love to go like
this
and copy your russian
speaking skills yeah
wouldn't it be awesome but i can't i
have to actually work for years if i
want to get better at it
but if we were robots and could
just copy and paste freely then that
completely
washes away the sense of what
mortality is
and also individuality a little bit
right maybe we would feel much more
collaborative with each other if we could
just say
hey you know you
give me your russian and i'll give
you whatever i can
and suddenly you can speak swedish maybe
that's a worse trade for you but
you can take whatever else you want from my brain
right yeah and
there have been a lot of sci-fi stories
about hive minds and so on
where experiences can
be more broadly shared
and i don't pretend to know what it
would feel like
to be a
superintelligent machine but i'm quite
confident that
however it feels about mortality and
individuality it will be very very
different from how it is for
us well
for us mortality and finiteness seem to
be pretty important
at this particular moment and so all
good things must come to an end
just like this conversation i saw that
coming
sorry this is the world's worst
transition i could talk to you forever
it's such a huge honor that you spend
time with me
the honor is mine thank you so much for
getting me essentially to start this
podcast by doing the first conversation
and making me
fall in love with conversation
itself
and thank you so much for inspiring so
many people in the world with your books
with your research with your talking and
with
this ripple effect
of friends
including elon and everybody else that
you inspire so
thank you so much for talking today
thank you i feel like we're
so fortunate that you're doing this
podcast and getting so many
interesting voices out there into the
ether
and not just the five second sound bites
but in so many of the interviews
you do
you really let people go into depth
in a way which we surely need in this
day and age and that i got to be number
one i feel
super honored yeah you started it thank you
so much max
thanks for listening to this
conversation with max tegmark and thank
you to our sponsors
the jordan harbinger show for sigmatic
mushroom coffee
better help online therapy and
expressvpn
so the choice is wisdom caffeine sanity
or privacy choose wisely my friends and
if you wish
click the sponsor links below to get a
discount and to support this podcast
and now let me leave you with some words
from max tegmark
if consciousness is the way that
information feels
when it's processed in certain ways then
it must be
substrate independent it's only the
structure of information processing that
matters
not the structure of the matter doing
the information processing
thank you for listening and hope to see
you next time