Scott Aaronson: Computational Complexity and Consciousness | Lex Fridman Podcast #130
nAMjv0NAESM • 2020-10-12
The following is a conversation with Scott Aaronson, his second time on the podcast. He is a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT. Last time we talked about quantum computing; this time we talk about computational complexity, consciousness, and theories of everything.

I'm recording this intro, as you may be able to tell, in a very strange room in the middle of the night. I'm not really sure how I got here or how I'm going to get out, but a Hunter S. Thompson saying, I think, applies to today, the last few days, and actually the last couple of weeks: life should not be a journey to the grave with the intention of arriving safely in a pretty and well-preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming, "Wow! What a ride!" So I figured, whatever I'm up to here (and yes, lots of wine is involved), I'm gonna have to improvise, hence this recording.

Okay, quick mention of each sponsor, followed by some thoughts related to the episode. First sponsor is SimpliSafe, a home security company I use to monitor and protect my apartment, though of course I'm always prepared with a fallback plan, as a man in this world must always be. Second sponsor is Eight Sleep, a mattress that cools itself, measures heart rate variability, has an app, and has given me yet another reason to look forward to sleep, including the all-important power nap. Third sponsor is ExpressVPN, the VPN I've used for many years to protect my privacy on the internet. Finally, the fourth sponsor is BetterHelp, online therapy for when you want to face your demons with a licensed professional, not just by doing David Goggins-like physical challenges, like I seem to do on occasion. Please check out these sponsors in the description to get a discount and to support the podcast.

As a side note, let me say that this is the second time I've recorded a conversation outdoors. The first one was with Stephen Wolfram, when it was actually sunny out; in this case it was raining, which is why I found a covered outdoor patio. But I learned a valuable lesson, which is that raindrops can be quite loud on the hard metal surface of a patio cover. I did my best with the audio; I hope it still sounds okay to you. I'm learning, always improving. In fact, as Scott says, if you always win, then you're probably doing something wrong. To be honest, I get pretty upset with myself when I fail, small or big, but I've learned that this feeling is priceless: it can be fuel when channeled into concrete plans of how to improve.

So if you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter at lexfridman. And now, here's my conversation with Scott
Aaronson.

Let's start with the most absurd question, but I've read you write some fascinating stuff about it, so let's go there: are we living in a simulation?

What difference does it make, Lex?

I mean, I'm serious.

What difference? Because if we are living in a simulation, it raises the question of how real something has to be, in a simulation, for it to be sufficiently immersive for us humans.

But even in principle, how could we ever know if we were in one? A perfect simulation, by definition, is something that's indistinguishable from the real thing.

Well, we didn't say anything about perfect. It could be...

No, no, that's right. If it were an imperfect simulation, if we could hack it, find a bug in it, then that would be one thing. If this were like The Matrix, and there were a way for me to do flying kung fu moves or something by hacking the simulation, well, then we would have to cross that bridge when we came to it, wouldn't we? At that point, it's hard to see the difference between that and what people would ordinarily call a world with miracles.

What about from a different perspective: thinking about the universe as a computation, like a program running on a computer? That's kind of a neighboring concept.

It is. It is an interesting and reasonably well-defined question to ask: is the world computable? Does the world satisfy what we would call in CS the Church-Turing thesis? That is, could we take any physical system and simulate it to any desired precision by a Turing machine, given the appropriate input data? So far, I think the indications are pretty strong that our world does seem to satisfy the Church-Turing thesis; at least, if it doesn't, then we haven't yet discovered why not.
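The Church-Turing claim here, that a physical system can be simulated to any desired precision, can be illustrated with a toy sketch (my own example, not anything from the conversation): integrate a harmonic oscillator, shrinking the time step until the result is within the requested tolerance of the known exact solution.

```python
import math

def simulate_oscillator(t, eps):
    """Approximate x(t) for the oscillator x'' = -x, x(0) = 1, x'(0) = 0,
    whose exact solution is cos(t). The step count is doubled until the
    result is within eps of the exact answer, illustrating 'simulation to
    any desired precision' by a finite computation."""
    n = 64
    while True:
        dt = t / n
        x, v = 1.0, 0.0
        for _ in range(n):
            v -= x * dt  # semi-implicit Euler step
            x += v * dt
        if abs(x - math.cos(t)) < eps:
            return x
        n *= 2  # halve the step size and try again

print(simulate_oscillator(1.0, 1e-4))  # close to cos(1) ≈ 0.5403
```

The same pattern, refine the discretization until the error bound is met, is what "to any desired precision" means for less trivial physical systems.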
But now, does that mean that our universe is a simulation? That word seems to suggest that there is some other, larger universe in which it is running. And the problem there is that if the simulation is perfect, then we're never going to be able to get any direct evidence about that other universe; we will only be able to see the effects of the computation that is running in this universe.

Well, let's imagine an analogy. Let's imagine a PC, a personal computer. Is it possible, with the advent of artificial intelligence, for the computer to look outside of itself, to understand its creator? Is that a ridiculous connection?

Well, with the computers that we actually have, first of all, we all know that humans have done an imperfect job of enforcing the abstraction boundaries of computers. You may try to confine some program to a playpen, but as soon as there's one memory-allocation error in the C program, the program has gotten out of that playpen and it can do whatever it wants. This is how most hacks work, the viruses and worms and exploits, and you would have to imagine that an AI would be able to discover something like that. Now, of course, if we could actually discover some exploit of reality itself, then in some sense we wouldn't have to philosophize about this; this would no longer be a metaphysical conversation.

Right, but that's the question: what would that hack look like?

I have no idea. Peter Shor, the very famous person in quantum computing, of course, has joked that maybe the reason why we haven't yet integrated general relativity and quantum mechanics is that the part of the universe that depends on both of them was actually left unspecified, and if we ever tried to do an experiment involving the singularity of a black hole or something like that, then the universe would just generate an overflow error or something.

Yeah, we would just crash the universe.

Now, the universe has seemed to hold up pretty well for 14 billion years, so my Occam's razor kind of guess has to be that it will continue to hold up: the fact that we don't know the laws of physics governing some phenomenon is not a strong sign that probing that phenomenon is going to crash the universe.
Right. But of course, I could be wrong.

Do you think, on the physics side of things, that once we understand physics more deeply at the fundamental level, not necessarily the unification of the laws, we'll be able to start, and part of this is humorous, looking to see if there are any bugs in the universe that can be exploited for traveling faster than our current spaceships can travel, all that kind of stuff? There have recently been a few folks, Eric Weinstein and Stephen Wolfram, who came out with theories of everything, and there's a history of physicists dreaming about and working on the unification of all the laws of physics.

Well, to travel faster than our current spaceships can travel, you wouldn't need to find any bug in the universe: the known laws of physics let us go much faster, up to the speed of light. And when people want to go faster than the speed of light, we actually know something about what that would entail, namely that, according to relativity, it seems to entail communication backwards in time. So then you have to worry about closed timelike curves and all of that stuff. In some sense, we sort of know the price that you have to pay for these things.

With our current understanding of physics.

That's right. We can't say that they're impossible, but we know that a lot else in physics breaks. Now, regarding Eric Weinstein and Stephen Wolfram, I wouldn't say that either of them has a theory of everything; I would say that they have ideas that they hope could someday lead to a theory of everything.

Is that a worthy pursuit?

Well, certainly, if by "theory of everything" we don't literally mean a theory of cats and of baseball, but mean it in the more limited sense of a fundamental theory of physics, of all of the fundamental interactions of physics. Of course, such a theory, even after we had it, would leave the entire question of all the emergent behavior to be explored, so it's only everything for a specific definition of everything. But in that sense, I would say of course it's worth pursuing; that is the entire program of fundamental physics. All of my friends who do quantum gravity, who do string theory, who do anything like that: that is what's motivating them.
Yeah. It's funny, though, and Eric Weinstein talks about this. I don't know much about the physics world, but I know the AI world, and there it is a little bit taboo to talk about AGI, for example, to talk about the big dream of the community, because it seems so far away. It's almost taboo to bring it up, because the people who dream about creating a truly superhuman-level intelligence are seen as really far out there, since we're not even close to that. And it feels like the same thing is true for the physics community.

I mean, Stephen Hawking certainly talked constantly about a theory of everything, and people used those terms who were some of the most respected people in the whole world of physics. But I think the distinction I would make is that people might react badly if you use the term in a way that suggests that you, thinking about it for five minutes, have come up with some major new insight about it.

Right, it's difficult. Stephen Hawking is not a great example, because I think you can do whatever the heck you want when you get to that level. And I certainly see, with senior faculty, that one of the nice things about getting older is you stop giving a damn. But the community as a whole tends to roll their eyes very quickly at stuff that's outside the quote-unquote mainstream.

Well, let me put it this way. If you asked, say, Ed Witten, who you might consider the leader of the string theory community, and thus very, very mainstream in a certain sense, he would have no hesitation in saying that of course they're looking for a unified description of nature: of general relativity, of quantum mechanics, of all the fundamental interactions of nature. Now, whether people would call that a theory of everything, whether they would use that term, might vary. Lenny Susskind would definitely have no problem telling you that that's what we want.

For me, who loves human beings and psychology, it's kind of ridiculous to say that a theory that unifies the laws of physics gets you to understand everything. I would say you're not even close to understanding everything.

Yeah, well, the word "everything" is a little ambiguous here, because then people will get into debates about reductionism versus emergentism and so on. In not wanting to say "theory of everything," people might just be trying to short-circuit that debate and say: look, yes, we want a fundamental theory of the particles and interactions of nature.

Let me bring up the next topic that people don't want to mention, although they're getting more comfortable with it: consciousness. You mentioned that you have a talk on consciousness, which I watched five minutes of, but the internet connection was really bad.

Was this my talk about refuting integrated information theory?

Yes.

Which is a particular account of consciousness that I think one can just show doesn't work. It's much harder to say what does work.

Let me ask, then: maybe it'd be nice to comment on what you call the semi-hard, or almost-hard, or kind-of-hard problem of consciousness...

The "pretty hard" problem, I think I called it.

So maybe you can talk about that, and about IIT's approach to modeling consciousness and why you don't find it convincing. What is it, first of all?

Okay, so what I called the "pretty hard problem" of consciousness (this is my term, although many other people have said something equivalent to it) is just the problem of giving an account of which physical systems are conscious and which are not, or, if there are degrees of consciousness, of quantifying how conscious a given system is.

Oh, awesome. So that's the "pretty hard" problem. I'm adopting it; it has a good ring to it.

And the infamous "hard problem" of consciousness is to explain how something like consciousness could arise at all in a material universe: why does it ever feel like anything to experience anything? So I'm trying to distinguish from that problem and say: no, I would merely settle for an account that could say, is a fetus conscious, and if so, at which trimester? Is a dog conscious? What about a frog?

Or even, as a precondition, taking it that both of those things are conscious: tell me which is more conscious.

Yeah, for example. If consciousness is some multi-dimensional vector, just tell me in which respects these things are conscious and in which respects they aren't. And have some principled way to do it, where you're not carving out exceptions for things that you like or don't like, but could take a description of an arbitrary physical system and then, just based on the physical properties of that system, or the informational properties, or how it's connected, or something like that, in principle calculate its degree of consciousness. This is the kind of thing we would need if we wanted to address questions like: what does it take for a machine to be conscious? When should we regard AIs as being conscious?

So now, IIT, this integrated information theory, which has been put forward by Giulio Tononi and a bunch of his collaborators over the last decade or two, is noteworthy, I guess, as a direct attempt to address that question, the pretty hard problem. And they give a criterion that's just based on how a system is connected. So it's up to you to abstract the system, like a brain or a microchip, as a collection of components connected to each other by some pattern of connections, and to specify how the components can influence each other: where the inputs go, how they affect the outputs. But then, once you've specified that, they give this quantity that they call phi, the Greek letter Φ. The definition of phi has actually changed over time; it changes from one paper to another. But in all of the variations, it involves something about what we in computer science would call graph expansion. Basically, what this means is that, in order to get a large value of phi, it should not be possible to take your system and partition it into two components that are only weakly connected to each other. Whenever we take our system and try to split it into two, there should be lots and lots of connections going between the two components.
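The partition criterion Aaronson describes is graph expansion, and it can be sketched in a few lines of brute force (my own toy illustration; IIT's actual phi is defined differently): score a network by its worst bipartition, counting the connections that cross the cut.

```python
from itertools import combinations

def expansion(n, edges):
    """Minimum, over all bipartitions of n nodes into two nonempty halves,
    of crossing edges per node on the smaller side. A high value means the
    graph cannot be split into weakly connected components."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for side in combinations(range(n), k):
            s = set(side)
            crossing = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, crossing / len(s))
    return best

# A 4-cycle can be split into two halves joined by only 2 edges...
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
# ...while the complete graph on 4 nodes crosses every cut many times.
complete = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(expansion(4, cycle), expansion(4, complete))  # 1.0 2.0
```

The brute-force search over all bipartitions is exponential in the number of nodes, which is also why computing such quantities exactly for brain-sized systems is hopeless.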
Okay, I understand what that means on a graph. Do they formalize how to construct such a graph, or data structure, whatever? One of the criticisms I've heard you make is that a lot of the very interesting specifics are usually communicated through natural language, through words, so the details aren't always...

Well, it's true. They have nothing even resembling a derivation of this phi. What they do is state a whole bunch of postulates, axioms that they think consciousness should satisfy, and then there's some verbal discussion, and then at some point phi appears. This was one of the first things that really made the hair stand up on my neck, to be honest, because they are acting as if there is a derivation; you're supposed to think that this is a derivation, and there's nothing even remotely resembling one. They just pull the phi out of a hat, completely.

Is one of the key criticisms for you that details are missing, or is it something more fundamental?

That's not even the key criticism; that's just a side point. The core of it is that they want to say that a system is more conscious the larger its value of phi, and I think that that is obvious nonsense as soon as you think about it for like a minute, as soon as you think about it in terms of: could I construct a system that had an enormous value of phi, even larger than the brain has, but that is just implementing an error-correcting code, doing nothing that we would associate with intelligence or consciousness or any of it? The answer is yes, it is easy to do that.

Right.
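Aaronson's counterexample can be made concrete. In a parity-based code, every output bit depends on nearly every input bit, so every bipartition of the system cuts many dependency edges (high "integration" in the graph sense), yet the behavior is a triviality. A hypothetical sketch, not the specific construction from his blog posts:

```python
def parity_encode(bits):
    """Each output j is the XOR of all inputs except input j: a trivially
    redundant encoding, but one where every output depends on every other
    unit, so the dependency graph is almost complete."""
    total = 0
    for b in bits:
        total ^= b
    return [total ^ b for b in bits]

def crossing_dependencies(n, side):
    """Dependency edges (input i -> output j, i != j) cut by putting the
    units listed in `side` on one half and the rest on the other."""
    s = set(side)
    return sum(1 for i in range(n) for j in range(n)
               if i != j and ((i in s) != (j in s)))

print(parity_encode([1, 0, 1, 1]))       # -> [0, 1, 0, 0]
print(crossing_dependencies(4, [0, 1]))  # every split cuts many edges
```

Scaled up, such a device has huge integration by any partition-based measure while computing nothing anyone would call thought, which is the heart of the objection.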
And so I wrote blog posts just making this point: that it's easy to do that. Now, Tononi's response to that was actually kind of incredible. I admired it in a way, because instead of disputing any of it, he just bit the bullet, in one of the most audacious bullet-bitings I've ever seen in my career. He said: okay, then, fine, this system that just applies this error-correcting code is conscious, and if it has a much larger value of phi than you or me, it's much more conscious than you or me. We just have to accept what the theory says, because science is not about confirming our intuitions, it's about challenging them; this is what my theory predicts, that this thing is conscious, or super-duper conscious, and how are you going to prove me wrong?

See, the way I would argue against your blog post is I would say: yes, sure, you're right in general, but for naturally arising systems, developed through the process of evolution on Earth, this rule of larger phi being associated with more consciousness is correct.

So that's not what he said at all, because he wants this to be completely general.

Right, to apply even to computers.

I mean, the whole interest of the theory is the hope that it could be completely general: apply to aliens, to computers, to animals, to coma patients, to any of it. So he just said: well, Scott is relying on his intuition, but I'm relying on this theory. And to me it was almost like, are we being serious here? Yes, in science we try to learn highly non-intuitive things, but what we do is first test the theory on cases where we already know the answer. If someone had a new theory of temperature, then maybe we could check that it says that boiling water is hotter than ice, and then, if it says that the sun is hotter than anything you've ever experienced, maybe we trust that extrapolation. But this theory is now saying that a gigantic regular grid of exclusive-OR gates can be way more conscious than a person, or than any animal, even if it is so uniform that it might as well just be a blank wall. And so the point is, if this theory is getting wrong the question "is a blank wall more conscious than a person?", then I would say: what is there for it to get right?

So your sense is that a blank wall is not more conscious than a human being.

You could say that I am taking that as one of my axioms. I'm saying that if a theory of consciousness is getting that wrong, then whatever it is talking about at that point, I'm not going to call it consciousness; I'm going to use a different word.

You have to use a different word. It's possible, just as with intelligence, that us humans conveniently define these very difficult-to-understand concepts in a very human-centric way, just as the Turing test really seems to define intelligence as a thing that's human-like.

Right, but I would say that with any concept, we first need to define it, and a definition is only a good definition if it matches what we thought we were talking about prior to having a definition. And I would say that phi, as a definition of consciousness, fails that test. That is my argument.
Okay, so let's take a further step. You mentioned that the universe might be like a Turing machine, that it might be computational, or simulatable by one anyway. What's your sense about consciousness? Do you think consciousness is computation, so that we don't need to go to any place outside of the computable universe to understand consciousness, to build consciousness, to measure consciousness, all those kinds of things?

I don't know. These are what have been called the "vertiginous" questions: the questions where you get a feeling of vertigo in thinking about them. I certainly feel like I am conscious in a way that is not reducible to computation, but why should you believe me? And if you said the same to me, then why should I believe you?

But as computer scientists, I feel like a computer could achieve human-level intelligence. And that's actually a feeling and a hope, not a scientific belief; it's just that we've built up enough intuition, the same kind of intuition you use in your blog. That's what scientists do: some of it is the scientific method, but some of it is just damn good intuition. I don't have a good intuition about consciousness.

Yeah, I'm not sure that anyone does, or has, in the 2,500 years that these things have been discussed, Lex.

But do you think we will? I got a chance to attend, and I can't wait to hear your opinion on this, the Neuralink event, and one of the dreams there is to push neuroscience forward. The hope with neuroscience is that we can inspect the machinery from which all this fun stuff emerges and notice something special, some special sauce from which something like consciousness or cognition emerges.

Well, it's clear that we've learned an enormous amount about neuroscience. We've learned an enormous amount about computation, about machine learning, about AI, how to get it to work. We've learned an enormous amount about the underpinnings of the physical world. From one point of view, that's an enormous distance we've traveled along the road to understanding consciousness; from another point of view, the distance still to be traveled on that road maybe seems no shorter than it was at the beginning. So it's very hard to say. In trying to have a theory of consciousness, there's a problem where it feels like it's not just that we don't know how to make progress; it's that it's hard to specify what could even count as progress. Because no matter what scientific theory someone proposed, someone else could come along and say: well, you've just talked about the mechanism; you haven't said anything about what breathes fire into the mechanism, what really makes there something that it's like to be it. And that seems like an objection you could always raise, no matter how much someone elucidated the details of how the brain works.
Okay, let's go to Turing tests and the Loebner Prize. I have this intuition, call me crazy, that a machine that passes the Turing test in full, whatever the spirit of it is, and we can talk about how to formulate the perfect Turing test, has to be conscious. Or at least, I have a very low bar for what consciousness is, admittedly. I tend to think that the emulation of consciousness is as good as consciousness: that consciousness is just a dance, a social shortcut, a nice useful tool. But I tend to connect intelligence and consciousness together. So maybe, just to ask: what role do you think consciousness plays in passing the Turing test?

Well, look, it's almost tautologically true that if we had a machine that passed the Turing test, then it would be emulating consciousness. So if your position is that emulation of consciousness is consciousness, then by definition, any machine that passed the Turing test would be conscious. But you could say that that is just a way to rephrase the original question: is an emulation of consciousness necessarily conscious? And here I'm not saying anything new that hasn't been debated ad nauseam in the literature. But you could imagine some very hard cases, like a machine that passed the Turing test, but did so just by an enormous, cosmological-sized lookup table that just cached every possible conversation that could be had.

The old Chinese Room.

Well, yeah, but the Chinese Room would actually be doing some computation, at least in Searle's version; here I'm just talking about a table lookup. Now, it's true that for conversations of a reasonable length, this lookup table would be so enormous that it wouldn't even fit in the observable universe. But suppose you could build a big enough lookup table, and then just pass the Turing test by looking up what the person said.
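The thought experiment is easy to sketch in miniature: a "chatbot" that is nothing but a cached table keyed by the conversation so far. The entries below are made-up filler; the real point is the scaling, since the number of possible conversation prefixes grows exponentially with their length.

```python
# A conversation is keyed by the full history of what was said so far.
TABLE = {
    ("hello",): "Hi there!",
    ("hello", "Hi there!", "how are you?"): "Can't complain.",
}

def lookup_bot(history):
    """No computation at all: just retrieve the canned reply, if any."""
    return TABLE.get(tuple(history), "...")

print(lookup_bot(["hello"]))  # -> Hi there!

# Why the full table would never fit in the observable universe: even with a
# 1,000-word vocabulary, the number of possible 50-word openings alone is
print(1000 ** 50)  # 10^150, dwarfing the ~10^80 atoms in the observable universe
```

So the machine's behavior is indistinguishable from a conversationalist's, while its internals are a pure dictionary lookup, which is exactly what makes it a hard case for behavioral definitions of consciousness.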
Are you going to regard that as conscious?

Okay, let me try to make this formal, and then you can shut it down. I think that the emulation of something is that something if there exists in that system a black box that's full of mystery.

Full of mystery to whom? To human inspectors? So does that mean that consciousness is relative to the observer? Could something be conscious for us but not conscious for an alien that understood better what was happening inside the black box?

Yes. So if inside the black box is just a lookup table, the alien that saw that would say: this is not conscious. To us, another way to phrase the black box is layers of abstraction, which make it very difficult to see the actual underlying functionality of the system. We observe just the abstraction, and so it looks like magic to us; but once we understand the inner machinery, it stops being magic. So that's a prerequisite: you can't know how some part of it works, because there has to be, in our human mind, an entry point for the magic. That's a formal definition of the system.

Well, look, I explored a view, in an essay I wrote called "The Ghost in the Quantum Turing Machine" seven years ago, that is related to that, except that I did not want to have consciousness be relative to the observer. Because I think that if consciousness means anything, it is something that is experienced by the entity that is conscious. I don't need you to tell me that I'm conscious, nor do you need me to tell you that you are. But basically, what I explored there is: are there aspects of a system, like a brain, that just could not be predicted, even with arbitrarily advanced future technologies, because of chaos combined with quantum-mechanical uncertainty and things like that? That actually could be a property of the brain, and if true, it would distinguish it in a principled way, at least from any currently existing computer, though not from any possible computer.
let's do a thought experiment so yeah if
i gave you
information that you're in the entire
history of your life
basically explain away free will with a
look-up table say that
this was all predetermined that
everything you experienced has already
been predetermined
wouldn't that take away your
consciousness wouldn't you yourself that
wouldn't
experience of the world change for you
in a way that's
you you can't well let me put it this
way if you could do like in a greek
tragedy where you know you would just
write down a prediction for what i'm
going to do
and then maybe you put the prediction in
a sealed box
and maybe you know you you uh open it
later and you
show that you knew everything i was
going to do or you know
of course the even creepier version
would be you tell me the prediction
and then i try to falsify it and my very
effort to falsify it makes it come true
right but let's let's you know let's
even forget that you know that version
is as convenient as it is for fiction
writers right let's just
let's just do the version where you put
the prediction into a sealed envelope
okay but uh if you could reliably
predict everything that i was going to
do
i'm not sure that that would destroy my
sense of being conscious
but i think it really would destroy my
sense of having free will
you know and much much more than any
philosophical conversation could
possibly do that
And so I think it becomes extremely interesting to ask: could such predictions be done, even in principle? Is it consistent with the laws of physics to get enough data about someone that you could actually generate such predictions, without having to kill them in the process, to slice their brain up into little slivers or something?

It's theoretically possible, right?

Well, I don't know. It might be possible, but only at the cost of destroying the person. It depends on how low you have to go in the substrate. If there were a nice digital abstraction layer, if you could think of each neuron as a kind of transistor computing a digital function, then you could imagine some nanorobots that would go in and just scan the state of each transistor, of each neuron, and then make a good enough copy. But if it was actually important to get down to the molecular or the atomic level, then eventually you would be up against quantum effects, against the unclonability of quantum states. So I think it's a question of how good the replica has to be before you're going to count it as actually a copy of you, or as being able to predict your actions.

That's a totally open question, then?
Yeah. And especially once we say that, well, maybe there's no way to make a deterministic prediction, because we know that there's noise buffeting the brain around, presumably even quantum mechanical uncertainty affecting the sodium ion channels, for example, whether they open or they close. There's no reason why, over a certain time scale, that shouldn't be amplified, just as we imagine happens with the weather or with any other chaotic system. So if that stuff is important, then we would say, well, you're never going to be able to make an accurate enough copy. But now the hard part is: what if someone can make a copy that no one else can tell apart from you? It says the same kinds of things that you would have said, maybe not exactly the same things, because we agree that there's noise, but the same kinds of things. And maybe you alone would say, no, I know that that's not me. It doesn't share my... I haven't felt my consciousness leap over to that other thing; I still feel it localized in this version. Then why should anyone else believe you?
What are your thoughts on Roger Penrose's work on consciousness? I'd be curious, because you're a good person to ask. He's saying that with axons and so on, there might be some biological places where quantum mechanics can come into play, and through that create consciousness somehow.
Okay, well, I'm familiar with his work, of course. I read Penrose's books as a teenager; they had a huge impact on me. Five or six years ago I had the privilege to actually talk these things over with Penrose at some length, at a conference in Minnesota. He is an amazing personality, and I admire the fact that he was even raising such audacious questions at all. But to answer your question, the first thing we need to get clear on is that he is not merely saying that quantum mechanics is relevant to consciousness. That would be tame compared to what he is saying. He is saying that even quantum mechanics is not good enough. Because supposing, for example, that the brain were a quantum computer, that's still a computer. In fact, a quantum computer can be simulated by an ordinary computer; it might merely need exponentially more time in order to do so. So that's simply not good enough for him.
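The claim that an ordinary computer can simulate a quantum one, at exponential cost, can be made concrete: an n-qubit state is a vector of 2^n complex amplitudes, and each gate is a linear map on that vector. Here is a minimal sketch (hypothetical toy code, not any particular library's API), applying Hadamard gates to a two-qubit register:

```python
import math

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target` of an n-qubit
    state vector holding 2**n complex amplitudes."""
    h = 1 / math.sqrt(2)
    new = state[:]
    step = 1 << target
    for i in range(1 << n):
        if i & step == 0:
            a, b = state[i], state[i | step]
            new[i] = h * (a + b)
            new[i | step] = h * (a - b)
    return new

# Two qubits, starting in |00>. The memory already scales as 2**n,
# which is exactly the exponential overhead of classical simulation.
n = 2
state = [0j] * (1 << n)
state[0] = 1 + 0j
state = apply_hadamard(state, 0, n)
state = apply_hadamard(state, 1, n)
# Now every basis state has amplitude 1/2: a uniform superposition.
```

Running this for, say, 50 qubits would already require 2^50 amplitudes, around a petabyte of memory, which is why the simulation is possible in principle but exponentially expensive in practice.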
So what he wants is for the brain to be a quantum gravitational computer; he wants the brain to be exploiting as-yet-unknown laws of quantum gravity.

Which would be uncomputable. That's the key point?

Yes, that would be literally uncomputable. And I've asked him to clarify this: uncomputable even if you had an oracle for the halting problem, or as high up as you want to go in the usual hierarchy of uncomputability. He wants to go beyond all of that.
So just to be clear, if we're keeping count of how many speculations are involved, there are probably at least five or six of them. First, that there is some quantum gravity theory that would involve this kind of uncomputability. Most people who study quantum gravity would not agree with that; they would say that what little we know about quantum gravity, from the AdS/CFT correspondence for example, has been very much consistent with the broad idea of nature being computable. But supposing he's right about that, then what most physicists would say is that whatever new phenomena there are in quantum gravity, they might be relevant at the singularities of black holes, they might be relevant at the Big Bang, but they are plainly not relevant to something like the brain, which operates at ordinary temperatures, with ordinary chemistry. The fundamental physics underlying the brain, they would say, we've pretty much completely known for generations now.
Because quantum field theory lets us sort of parametrize our ignorance. Sean Carroll has made this case in great detail: whatever new effects are coming from quantum gravity, they are screened off by quantum field theory. And this brings us to the whole idea of effective theories. In the Standard Model of elementary particles, we have a quantum field theory that seems totally adequate for all terrestrial phenomena. The only things it doesn't explain are, first of all, the details of gravity, if you were to probe it at extremes of curvature or at incredibly small distances; it doesn't explain dark matter; and it doesn't explain black hole singularities. But these are all very exotic things, very far removed from our life on Earth.
So for Penrose to be right, he needs these phenomena to somehow affect the brain. He needs the brain to contain antennae that are sensitive to this as-yet-unknown physics. And then he needs a modification of quantum mechanics; he needs quantum mechanics to actually be wrong. What he wants is what he calls an objective reduction mechanism, or an objective collapse: the idea that once quantum states get large enough, they somehow spontaneously collapse. This is an idea that lots of people have explored. There's something called the GRW proposal that tries to say something along those lines. And these are theories that actually make testable predictions, which is a nice feature. But the very fact that they're testable may mean that in the coming decades we may well be able to test these theories and show that they're wrong. We may be able to test some of Penrose's ideas, if not his ideas about consciousness, then at least his ideas about an objective collapse of quantum states. And people like Dirk Bouwmeester have actually been working to try to do these experiments. They haven't yet been able to test Penrose's proposal.
But Penrose would need more than just an objective collapse of quantum states, which would already be the biggest development in physics for a century, since quantum mechanics itself. He would need consciousness to somehow be able to influence the direction of the collapse, so that it wouldn't be completely random, but your dispositions would somehow influence the quantum state to collapse more likely this way or that way.
Finally, Penrose says that all of this has to be true because of an argument he makes based on Gödel's incompleteness theorem. Now, I would say the overwhelming majority of computer scientists and mathematicians who have thought about this don't think that Gödel's incompleteness theorem can do what he needs it to do here. I don't think that argument is sound. But that is sort of the tower that you have to ascend if you're going to go where Penrose goes.

And the intuition he draws from the incompleteness theorem is basically that there's important stuff that's not computable?

It's not just that, because everyone agrees that there are problems that are uncomputable; that's a mathematical theorem.
What Penrose wants to say is that, for example, given any formal system for doing math, there will be true statements of arithmetic that that formal system, if it's adequate for math at all, if it's consistent and so on, will not be able to prove. A famous example is the statement that the system itself is consistent: no good formal system can actually prove its own consistency. That can only be done from a stronger formal system, which then can't prove its own consistency, and so on forever. That's Gödel's theorem.
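The tower Aaronson describes can be written compactly. A standard formulation (not specific to this conversation): for any consistent, computably axiomatized theory T containing enough arithmetic,

```latex
% Goedel's second incompleteness theorem:
% a consistent theory T cannot prove its own consistency.
T \nvdash \mathrm{Con}(T)

% Passing to a stronger theory only pushes the problem one level up:
T_0 = T, \qquad T_{k+1} = T_k + \mathrm{Con}(T_k),
\qquad T_{k+1} \nvdash \mathrm{Con}(T_{k+1})
```

Each theory in the tower proves the consistency of the one below it, but never its own, which is the "and so on forever" in the transcript.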
But now, why is that relevant to consciousness? Well, the idea that it might have something to do with consciousness is an old one. Gödel himself apparently thought that it did. Lucas argued so, I think, in the 60s, and Penrose is really just updating what they and others had said. In fact, by 1950, when Alan Turing wrote his article about the Turing test, he was already treating the idea that Gödel's theorem has something to do with consciousness as an old and well-known one, and as a wrong one that he wanted to dispense with.
But the basic problem with this idea is that Penrose, and all of his predecessors, want to say that even though a given formal system cannot prove its own consistency, we as humans, looking at it from the outside, can just somehow see its consistency. And the rejoinder to that, from the very beginning, has been: well, can we really? Maybe Penrose can, but can the rest of us?
And I'd note that it is perfectly plausible to imagine a computer that would not be limited to working within a single formal system. It could say: I am now going to adopt the hypothesis that my formal system is consistent, and see what can be done from that stronger vantage point, and so on, adding new axioms to its system. Totally plausible. Gödel's theorem has absolutely nothing to say against an AI that could repeatedly add new axioms. All it says is that there is no absolute guarantee that when the AI adds new axioms, it will always be right. And that's of course the point that Penrose pounces on, but the reply is obvious, and it's one that Alan Turing made 70 years ago: namely, we don't have an absolute guarantee that we're right when we add a new axiom either. We never have, and plausibly we never will.
Speaking of Alan Turing: you took part in the Loebner Prize?

Not really, no, I didn't. There was this kind of ridiculous claim made almost a decade ago about a chatbot called Eugene Goostman.

I guess you didn't participate as a judge in the Loebner Prize, but you participated in, I guess, an exhibition event or something like that, with Eugene Goostman?

No, that was just me writing a blog post, because some journalists called me to ask about it.

Did you ever chat with it? I thought...

I did chat with Eugene Goostman. The chat was available on the web.
Oh, interesting. I didn't.

So all that happened was that a bunch of journalists started writing breathless articles about the first chatbot that passes the Turing test. It was this thing called Eugene Goostman that was supposed to simulate a 13-year-old boy. Apparently someone had done some tests where people were, let's say, less than perfect at distinguishing it from a human, and they said, well, if you look at Turing's paper and the percentages he talked about, it seemed like we're past that threshold. I had a different way to look at it, instead of the legalistic way: let's just try the actual thing out and see what it can do with questions like, is Mount Everest bigger than a shoebox? Just the most obvious questions. And the answer is, it just kind of parries you, because it doesn't know what you're talking about.
Just to clarify in exactly which way they're obvious: they're obvious in the sense that you convert the sentences into the meaning of the objects they represent, and then, by obvious, we mean common-sense reasoning with the objects that the sentences represent.

Right. It was not able to answer, or even intelligently respond to, basic common-sense questions. But let me say something stronger than that.
There was a famous chatbot in the 60s called ELIZA that managed to actually fool a lot of people. People would pour their hearts out to ELIZA, because it simulated a therapist, and most of what it would do was just throw back at you whatever you said. This turned out to be incredibly effective. Maybe therapists know this; it's one of their tricks. It really had some people convinced. And yet this thing was, I think, literally just a few hundred lines of Lisp code. Not only was it not intelligent, it wasn't especially sophisticated; it was a simple little hobbyist program. And Eugene Goostman, from what I could see, was not a significant advance compared to ELIZA. That was really the point I was making.
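ELIZA's central trick of throwing the user's words back as a question fits in a few lines. A minimal sketch in Python (hypothetical; Weizenbaum's 1966 original was more elaborate, with ranked keywords and decomposition templates):

```python
import re

# Swap first- and second-person words so the reply mirrors the input.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(text):
    """Echo the user's statement back as a therapist-style question."""
    words = re.findall(r"[a-z']+", text.lower())
    reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
    return f"Why do you say {reflected}?"

print(eliza_reply("I am sad about my job"))
# -> Why do you say you are sad about your job?
```

There is no model of meaning anywhere in this: a substitution table plus a question template is enough to produce replies that some users experienced as empathy.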
In some sense, you didn't need a computer science professor to say this; anyone who looked at it and had an ounce of sense could have said the same thing. But these journalists were calling me, and the first thing I said was, well, no, I'm a quantum computing person, I'm not an AI person, you shouldn't ask me. Then they said, look, you can go here and try it out. So I said, all right, I'll try it out. But now this whole discussion got a whole lot more interesting in just the last few months.
Yeah, I'd love to hear your thoughts about GPT.

In the last few months, the world has seen a chat engine, or a text engine I should say, called GPT-3. It still does not pass a Turing test, and there are no real claims that it does. It comes out of the group at OpenAI, and they've been relatively careful in what they've claimed about the system. But as clearly as Eugene Goostman was not an advance over ELIZA, it is equally clear that this is a major advance over ELIZA, or really over anything the world has seen before.
This is a text engine that can come up with kind of on-topic, reasonable-sounding completions to just about anything you ask. You can ask it to write a poem about topic X in the style of poet Y, and it will have a go at that. It will do not a great job, not an amazing job, but a passable job, and in many cases, I would say, better than I would have done. You can ask it to write a student essay about pretty much any topic, and it will produce something that I am pretty sure would get at least a B-minus in most high school or even college classes.
And in some sense, the way that it achieves this... Scott Alexander, of the much-mourned blog Slate Star Codex, had a wonderful way of putting it: he said that they basically just ground up the entire Internet into a slurry. To tell you the truth, I had wondered for a while why nobody had tried that. Why not build a chatbot by just doing deep learning over a corpus consisting of the entire web? Now they finally have done that, and the results are very impressive. People can argue about whether this is truly a step toward general AI or not, but it is an amazing capability that we didn't have a few years ago. If a few years ago you had told me we would have it now, that would have surprised me. And I think that anyone who denies that is just not engaging with what's there.
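The "grind up a corpus and learn completions" idea can be illustrated at toy scale with a word-bigram model: count which word follows which, then complete a prompt by always taking the most frequent successor. This is a hypothetical sketch, orders of magnitude simpler than GPT-3's transformer trained on web-scale text, but the ingredients (a corpus, counted statistics, greedy completion) are the same in spirit:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for w, nxt in zip(words, words[1:]):
        counts[w][nxt] += 1
    return counts

def complete(counts, word, length):
    """Greedily extend `word` by the most frequent successor each step."""
    out = [word]
    for _ in range(length):
        if word not in counts:
            break
        word = counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(complete(model, "the", 2))
# -> the cat sat
```

Real language models replace the count table with a learned neural network and sample rather than always taking the top choice, but the task, predicting plausible continuations from corpus statistics, is the same.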
So their model, it takes a large part of the inte