François Chollet: Scientific Progress is Not Exponential | AI Podcast Clips
-I6plWpbbSQ • 2019-10-09
What is your intuition why an intelligence explosion is not possible? Like, taking the scientific, the technological revolution, why can't we slightly accelerate that process?

So, you can absolutely accelerate any problem-solving process, so recursive self-improvement is absolutely a real thing. But what happens with a recursively self-improving system is typically not an explosion, because no system exists in isolation, and so tweaking one part of the system means that suddenly another part of the system becomes a bottleneck. And if you look at science, for instance, which is clearly a recursively self-improving, clearly a problem-solving system, scientific progress is not actually exploding. If you look at science, what you see is the picture of a system that is consuming an exponentially increasing amount of resources, but it's having a linear output in terms of scientific progress. And maybe that will seem like a very strong claim. Many people are actually saying that, you know, scientific progress is exponential, but when they're claiming this, they're actually looking at indicators of resource consumption, resource consumption by science: for instance, the number of papers being published, the number of patents being filed, and so on, which are just completely correlated with how many people are working on science today.

Yeah, right.

So it's actually an indicator of resource consumption. But what you should look at is the output: progress in terms of the knowledge that science generates, in terms of the scope and significance of the problems that we solve. And some people have actually been trying to measure that, like Michael Nielsen, for instance. He had a very nice paper, I think it was last year, about it.
So his approach to measure scientific progress was to look at the timeline of scientific discoveries over the past, you know, 100, 150 years, and for each major discovery, ask a panel of experts to rate the significance of the discovery. And if the output of science as an institution were exponential, you would expect the temporal density of significance to go up exponentially, maybe because there's a faster rate of discoveries, maybe because the discoveries are, you know, increasingly more important. And what actually happens, if you plot this temporal density of significance measured in this way, is that you see very much a flat graph. You see a flat graph across all disciplines, across physics, biology, medicine, and so on. And it actually makes a lot of sense if you think about it, because think about the progress of physics 110 years ago, right? It was a time of crazy change. Think about the progress of technology, you know, 130 years ago, when we started, you know, replacing horses with cars, when we started having electricity, and so on. It was a time of incredible change. And today is also a time of very fast change, but it would be an unfair characterization to say that today technology and science are moving way faster than they did 50 years ago, 100 years ago. And if you do try to rigorously plot the temporal density of significance, you do see very flat curves, basically. And you can check out the paper that Michael Nielsen had about this idea.
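To make the measurement concrete, here is a minimal sketch of the kind of computation this approach implies. The ratings below are invented placeholders, not data from the actual paper; the point is only the shape of the calculation: bucket expert-rated discoveries by decade and sum their significance, which gives the temporal density whose curve turns out flat.

```python
from collections import defaultdict

# Hypothetical (year, expert significance score) pairs, for illustration only.
ratings = [
    (1905, 9.5), (1915, 9.0), (1925, 8.5), (1928, 8.0),
    (1953, 9.0), (1964, 7.5), (1998, 8.0), (2012, 7.0),
]

def significance_per_decade(ratings):
    """Total expert-rated significance per decade (the 'temporal density')."""
    density = defaultdict(float)
    for year, score in ratings:
        density[year // 10 * 10] += score
    return dict(sorted(density.items()))

print(significance_per_decade(ratings))
```

If scientific output were exponential, these per-decade totals would have to grow exponentially over time; the expert panels described in the paper found them roughly flat instead.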
And so the way I interpret it is, as you make progress, you know, in a given field, or in a given subfield of science, it becomes exponentially more difficult to make further progress. Like, the very first person to work on information theory: if you enter a new field in its very early years, there's a lot of low-hanging fruit you can pick.

That's right, yeah.

Yeah, but the next generation of researchers is going to have to dig much harder, actually, to make smaller discoveries, probably a larger number of smaller discoveries. And to achieve the same amount of impact, you're going to need a much greater headcount. And that's exactly the picture you're seeing with science: the number of scientists and engineers is, in fact, increasing exponentially; the amount of computational resources that are available to science is increasing exponentially; and so on. So the resource consumption of science is exponential, but the output, in terms of progress, in terms of significance, is linear. And the reason why is that, even though science is recursively self-improving, the bottlenecks still catch up. Scientific progress turns into technological progress, which in turn helps science: if you look at computers, for instance, they are products of science, and computers are tremendously useful in speeding up science. The internet, same thing. The internet is a technology that's made possible by very recent scientific advances, and itself, because it enables, you know, scientists to network, to communicate, to exchange papers and ideas much faster, it is a way to speed up scientific progress. So even though you're looking at a recursively self-improving system, it is consuming exponentially more resources to produce the same amount of problem-solving.

Very much so.

So that's the first thing, anyway.
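One toy way to see how exponential input can produce linear output (all numbers here are invented for illustration): suppose the n-th discovery in a field costs e**n units of research effort, so each successive discovery is exponentially harder, while the effort budget available by year t also grows like e**t. The cumulative number of discoveries then grows roughly linearly in t.

```python
import math

def discoveries_made(total_effort):
    """Toy model: the n-th discovery costs e**n effort units, so count
    how many discoveries fit inside a given total effort budget."""
    n, spent = 0, 0.0
    while spent + math.e ** (n + 1) <= total_effort:
        spent += math.e ** (n + 1)
        n += 1
    return n

# An exponentially growing budget e**t yields a linearly growing count.
for t in (6, 11, 16):
    print(t, discoveries_made(math.e ** t))
```

Each additional five years of exponentially growing budget buys roughly the same number of additional discoveries: linear output from exponential input.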
And certainly that holds for the deep learning community, right? If you look at the temporal, what did you call it, the temporal density of significant ideas, if you look at that in deep learning, they might even be decreasing.

I'd have to think about that, but if you really look at significant ideas in deep learning, I do believe the per-paper significance is decreasing, while the amount of papers is still today exponentially increasing. So if you look at it in aggregate, my guess is that you would see linear progress: if you were to sum the significance of all papers, you would see roughly linear progress.
And in my opinion, it is not a coincidence that you're seeing linear progress in science despite exponential resource consumption. I think the resource consumption is dynamically adjusting itself to maintain linear progress, because we as a community expect linear progress, meaning that if we start investing less and seeing less progress, it means that suddenly there are some lower-hanging fruits that become available, and someone's going to step up and pick them.

Right, right.

So it's very much like a market, right, for discoveries and
ideas.

But there's another fundamental part, which you're highlighting, which is a hypothesis: as science, or like the space of ideas, any one path you travel down, it gets exponentially more difficult to, you know, develop new ideas.

Yes.

And your sense is that that's going to hold across our mysterious universe?

Yes. Well, exponential progress triggers exponential friction, so that if you tweak one part of a system, suddenly some other part becomes a bottleneck. For instance, let's say you develop some device that measures its own acceleration, and then it has some engine, and it outputs even more acceleration in proportion to its own acceleration, and you drop it somewhere. It's not going to reach infinite speed, because it exists in a certain context, so the air around it is going to generate friction and, you know, block it at some top speed. And even if you were to consider the broader context and lift the bottleneck there, like the bottleneck of air friction, then some other part of the system would start stepping in and creating friction, maybe the speed of light or, you know, whatever. And it definitely holds true when you look at the problem
solving algorithm that is being run by science as an institution, science as a system. As you make more and more progress, despite having this recursive self-improvement component, you are encountering exponential friction. Like, the more researchers you have working on different ideas, the more overhead you have in terms of communication across researchers. You were mentioning quantum mechanics, right? Well, if you want to start making significant discoveries today, significant progress in quantum mechanics, there is an amount of knowledge you have to ingest which is huge, so there's a very large overhead to even start to contribute. There's a large amount of overhead to synchronize across researchers, and so on. And of course, the significant practical experiments are going to require exponentially expensive equipment, because the easier ones have already been run, right?

So in your sense, is there no way of escaping this kind of friction with artificial intelligence systems?

Yeah. I think science is a very good way to model what would happen with a superhuman, recursively self-improving AI.
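The runaway-device thought experiment from a moment ago can be simulated directly. This is a toy sketch with made-up parameters: the engine adds thrust in proportion to the acceleration it measures (the recursive self-amplification), while quadratic air drag grows with speed (the friction from the surrounding context), so the feedback stalls at a finite top speed instead of exploding.

```python
def top_speed(thrust=1.0, drag=0.5, feedback=1.0, dt=0.001, steps=200_000):
    """Euler simulation of a self-accelerating device with quadratic drag.

    Each step, net acceleration = thrust - drag * v**2 (unit mass), and the
    engine amplifies its thrust in proportion to that measured acceleration.
    All parameter values are arbitrary illustration choices.
    """
    v = 0.0
    for _ in range(steps):
        a = thrust - drag * v * v    # friction grows with speed
        thrust += feedback * a * dt  # recursive self-amplification
        v += a * dt
    return v

print(top_speed())  # settles at a finite top speed
```

Setting drag=0 removes the bottleneck and the same loop grows without bound; in a richer model, lifting one bottleneck would simply expose the next, which is the point of the example.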
That's my intuition as well. I mean, it's not, it's not like a mathematical proof of anything.

That's not my point. Like, I'm not trying to prove anything. I'm just trying to make an argument to question the narrative of intelligence explosion, which is quite a dominant narrative, and you do get a lot of pushback if you go against it. Because, for many people, right, AI is not just a subfield of computer science, it's more like a belief system: this belief that the world is headed towards an event, the singularity, past which, you know, AI will become, it will go exponential very much, and the world will be transformed, and humans will become obsolete. And if you go against this narrative, because it is not really a scientific argument but more of a belief system, and it is part of the identity of many people, if you go against this narrative, it's like you're attacking the identity of the people who believe in it. It's almost like saying God doesn't exist, or something, right? So you get a lot of pushback if you try to question these ideas.