Transcript
KllCrlfLuzs • Richard Karp: Algorithms and Computational Complexity | Lex Fridman Podcast #111
Kind: captions
Language: en
the following is a conversation with Richard Karp, a professor at Berkeley and one of the most important figures in the history of theoretical computer science. in 1985 he received the Turing Award for his research in the theory of algorithms, including the development of the Edmonds-Karp algorithm for solving the max flow problem on networks, the Hopcroft-Karp algorithm for finding maximum cardinality matchings in bipartite graphs, and his landmark paper in complexity theory called Reducibility Among Combinatorial Problems, in which he proved 21 problems to be NP-complete. this paper was probably the most important catalyst in the explosion of interest in the study of NP-completeness and the P versus NP problem in general.
quick summary of the ads. two sponsors: Eight Sleep mattress and Cash App. please consider supporting this podcast by going to eightsleep.com/lex and downloading Cash App and using code lex podcast. click the links, buy the stuff, it really is the best way to support this podcast. if you enjoy this thing, subscribe on youtube, review it with 5 stars on apple podcast, support it on patreon or connect with me on twitter at lexfridman. as usual i'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation.
this show is sponsored by Eight Sleep and its Pod Pro mattress, which you can check out at eightsleep.com/lex to get $200 off. it controls temperature with an app, it can cool down to as low as 55 degrees, and each side of the bed separately. research shows that temperature has a big impact on the quality of our sleep. anecdotally it's been a game changer for me, i love it.
it's been a couple weeks now i just been
really enjoying it
both in the fact that i'm getting better
sleep and then it's a
smart mattress essentially i kind of
imagine this being the early days of
artificial intelligence being a part of
every aspect of our lives
and certainly infusing ai in one of the
most important aspects of life which is
sleep
i think has a lot of potential for being
beneficial
the pod pro is packed with sensors that
track heart rate
heart rate variability and respiratory
rate
showing it all in their app the app's
health metrics are amazing
but the cooling alone is honestly worth the money. i don't always sleep, but when i do i choose the Eight Sleep Pod Pro mattress. check it out at eightsleep.com/lex to get two hundred dollars off, and remember, just visiting the site and considering the purchase helps convince the folks at Eight Sleep that this silly old podcast is worth sponsoring in the future.
this show is also presented by the great
and powerful cash app the number one
finance app in the app store
when you get it use code lex podcast
cash app lets you send money to friends
buy bitcoin and invest in the stock
market with as little as one dollar
it's one of the best designed interfaces
of an app that i've ever used
to me good design is when everything is
easy and natural
bad design is when the app gets in the
way either because it's buggy
or because it tries too hard to be
helpful i'm looking at you clippy from
microsoft
even though i love you anyway there's a
big part of my brain and heart that
loves to design things
and also to appreciate great design by
others so again
if you get Cash App from the app store or google play and use the code lex podcast, you get ten dollars, and Cash App will also donate ten dollars to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. and now, here's my conversation with Richard Karp.
you wrote that at the age of 13 you were first exposed to plane geometry and was wonderstruck by the power and elegance of formal proofs. are there problems, proofs, properties, ideas in plane geometry from that time that you remember being mesmerized by, or just enjoying to go through to prove various aspects? so Michael Rabin told me this story about an experience he had when he was a young student who was tossed out of his classroom for bad behavior and was wandering through the corridors of his school
and came upon two older students
who were studying the problem of finding
the shortest distance between
two non-overlapping circles
and michael thought about it and said
you take the straight line between the
two centers
and the segment between the two circles
is the shortest
because a straight line is the shortest distance between the two centers, and any other line connecting the circles would be part of a longer path. and he thought, and i agreed, that this was just elegant, that pure reasoning could come up with such a result. certainly the shortest distance between the two centers of the circles
is a straight line could you once again
say
what's the next step in that proof? well, any segment joining the two circles, if you extend it by taking the radius on each side, you get a path with three edges which connects the two centers, and this has to be at least as long as the shortest path between the centers, which is the straight line.
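Karp's argument can be sanity-checked numerically; a minimal sketch in Python (the coordinates and radii are made-up examples, assuming the circles do not overlap):

```python
import math

def circle_gap(c1, r1, c2, r2):
    """Shortest distance between two non-overlapping circles:
    the straight line between the centers, minus both radii."""
    return math.dist(c1, c2) - r1 - r2

# circles of radius 1 and 2 whose centers are 10 apart
print(circle_gap((0, 0), 1, (10, 0), 2))  # -> 7.0
```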
yeah wow yeah that is that's quite
quite simple so what what is it about
that elegance that
you just find uh compelling well just
that you could
establish a a fact
about geometry beyond dispute by pure
reasoning
i also enjoyed the challenge of solving puzzles in plane geometry. it was much more fun than the earlier mathematics courses, which were mostly about arithmetic operations and manipulating them.
was was there something about geometry
itself
the slightly visual component of it
yes absolutely although i lacked
three-dimensional vision i wasn't very
good at
three-dimensional vision you mean being
able to visualize three-dimensional
objects
three-dimensional objects or or um
surfaces hyperplanes and so on um
so so there there i didn't have an
intuition
but
for example the fact that the sum of the
angles of a triangle is 180 degrees
is proved convincingly
um and it comes as a surprise that that
can be done
why is that surprising the the
well it is a surprising uh is a
surprising idea
i suppose uh why is that proved
difficult it's not
that's the point it's so easy and yet
it's so convincing
do you remember the proof that it adds up to 180? you start at a corner and draw a line parallel to the opposite side, and that line, together with the two sides meeting at that corner, creates three angles that fill a half plane, which has to add up to 180 degrees. and by the equality of alternate angles, you get a correspondence between the angles created along the side of the triangle and the three angles of the triangle.
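Written out, the argument he is recalling is the classic alternate-angles proof; in symbols, with the triangle's angles as named in the comments:

```latex
% Triangle ABC with angles \alpha, \beta, \gamma at A, B, C.
% Draw the line through C parallel to AB.
% By equality of alternate angles, that line makes angle \alpha with CA
% and angle \beta with CB; together with \gamma they fill a straight angle:
\alpha + \beta + \gamma = 180^\circ
```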
has geometry had an impact on when you
look into the future of
your work with combinatorial algorithms
has it had some kind of
impact in terms of yeah being able
the puzzles the visual aspects that were
first
so compelling to you not euclidean
geometry
particularly i think
i use tools like linear programming and
integer programming a lot and
but those require
high dimensional visualization and so i
tend to go by the
algebraic properties um right the
you you go by the algebra the linear
algebra and not by the
the visualization well the
interpretation in terms of
for example finding the highest point on
a polyhedron
as in linear programming
is motivating but again
it i don't have the high dimensional
intuition
that would particularly inform me, so i sort of lean on the algebra
so to linger on that point what kind of
visualization
do you like do you do when you're trying
to think about
we'll get to combinatorial algorithms
but just algorithms in general
yeah what kind of what what's inside
your mind when you're thinking about
designing algorithms
or or even just tackling any any
mathematical problem
well i think that usually an algorithm
is uh involves a repetition of some
inner loop and
and so i can sort of visualize the um
the distance from the desired solution
as iteratively reducing until you
finally hit the exact solution and try
to take steps that get you closer to the
try to
take steps that get closer and having
the certainty of
converging. so the mechanics of the algorithm is often very simple,
but especially when you're trying
something out on the computer so for
example
i did some work on the traveling
salesman problem and
i could see there was a particular
function that had to be
minimized and it was fascinating to see
the successive approaches to the minimum
to the optimum
you mean, so first of all the traveling salesman problem is where you have to visit every city exactly once. yeah, that's right, find the shortest path through the cities. which is sort of a canonical standard, a really nice problem that's really hard.
right exactly so can you say again
what was nice about the objective being
able to think about the objective
function there and
maximizing it or minimizing it well just
that the um as the algorithm proceeded
it was you were making progress
continual progress
and and eventually getting to the
optimum point
so there's two two parts maybe
maybe you can correct me but first is
like getting an intuition about what the
solution would look like
and or even maybe coming up with a
solution and two is proving that this
thing is
actually going to be pretty good uh what
part is harder for you where's the magic
happen
is it in the first sets of intuitions or
is it
in the detail the messy details of
actually showing
that it is going to get to the exact
solution
and it's going to run at this
at a certain complexity
well the magic is just the fact that it
the the gap from the optimum
decreases monotonically and you can see
it happening
and various metrics of what's going on are improving all along, until you finally hit the optimum. perhaps later we'll talk about the assignment problem and i can illustrate a little better.
yeah now zooming out again
as you write, Don Knuth has called attention to a breed of people who derive great aesthetic pleasure from contemplating the structure of computational processes. Don calls these folks geeks, and you write that you remember the moment you realized you were such a person, when you were shown the Hungarian algorithm to solve the assignment problem.
right so perhaps you can explain what
the assignment problem is and what
uh the hungarian algorithm is
so in the assignment problem you have n boys and n girls, and you are given the desirability, or the cost, of matching the ith boy with the jth girl, for all i and j. you're given a matrix of numbers, and you want to find the one-to-one matching of the boys with the girls such that the sum of the associated costs will be minimized. so the best way
to match the boys with the girls or men
with jobs or any
two sets um no any possible matching is
possible
or yeah all one-to-one correspondences
are
permissible if there is a connection
that is not allowed
then you can think of it as having an
infinite cost
so what you do depends on the observation that the identity of the optimal assignment, or as we call it the optimal permutation, is not changed if you subtract a constant from any row or column of the matrix. you can see that the comparison between the different assignments is not changed by that, because if you decrease all the elements of a particular row by some constant, the cost of every solution decreases by an amount equal to that constant.
so the idea of the algorithm is to start with a matrix of non-negative numbers and keep subtracting from entire rows or entire columns, in such a way that you subtract the same constant from all the elements of that row or column, while maintaining the property that all the elements are non-negative.
simple. yeah, and so what you have to do is find small moves which will decrease the total cost while subtracting constants from rows or columns, and there's a particular way of doing that by computing a kind of shortest path through the elements in the matrix. and you just keep going in this way until you finally get a full permutation of zeros while the matrix is non-negative, and then you know that that has to be the cheapest.
is that as simple as it sounds
so the the shortest path of the matrix
part
yeah, the simplicity lies in how you find the moves. i oversimplified slightly: what you will end up doing is subtracting a constant from some rows or columns and adding the same constant back to other rows and columns, so as not to reduce any of the zero elements, which you leave unchanged. each individual step modifies several rows and columns by the same amount, but
overall decreases the cost so there's
something about
that elegance that made you go aha this
is a beautiful
like it's it's uh it's amazing that
something like this
something so simple can solve a problem
like this yeah it's really cool
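To make the objective concrete (this is just the problem statement, not the Hungarian algorithm's row and column reduction machinery), here is a brute-force solver over all n! matchings, fine only for tiny n; the cost matrix is a made-up example:

```python
from itertools import permutations

def assignment_brute_force(cost):
    """Minimize sum(cost[i][p[i]]) over all one-to-one matchings p.
    O(n!) time, so only usable for very small n."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for p in permutations(range(n)):
        c = sum(cost[i][p[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = p, c
    return best_perm, best_cost

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(assignment_brute_force(cost))  # -> ((1, 0, 2), 5)
```

Note that subtracting a constant from any row of `cost` shifts every candidate matching's total by the same amount, which is the invariant the Hungarian algorithm exploits.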
if i had mechanical ability i would
probably like to do
woodworking or other activities where
you sort of
shape something into something beautiful and orderly. and there's something about the orderly, systematic nature of that iterative algorithm that is pleasing to me.
so what do you think about this idea of
geeks as don knuth calls them
what do you think of is it something uh
specific
to a mindset that allows you to discover
the elegance and
computational processes or is this all
of us can all of us discover this beauty
are you born this way
i think so i always like to play with
numbers i i
i used to amuse myself by multiplying
four digit
decimal numbers in my head and
putting myself to sleep by starting with
one and
doubling the number as long as i could
go and uh
testing my memory my ability to retain
the information
and i also read somewhere that you uh
you wrote
that you enjoyed uh showing off to your
friends
by i believe multiplying four digit
numbers
uh right a couple of four digit numbers
yeah i had a summer job at a beach
resort outside of boston
and uh the other employee i
i was the barker at a skee-ball game. i used to sit at a microphone saying, come one, come all, come in and play skee-ball, five cents to play, a nickel to win, and so on. so that's what a barker is, i was gonna
i wasn't sure if i should know but
barker that's so you're the
the charming outgoing person is getting
people to uh come in
yeah well i wasn't particularly charming
but i could be very
repetitious and loud and
the other employees were
sort of juvenile delinquents who had no
academic bent but somehow
i found that i could impress them by performing this mental math, or mental arithmetic.
you know there's something too that you
know one of
some of the most popular videos on the internet, there's a youtube channel called Numberphile that shows off different mathematical ideas.
there's still something really
profoundly interesting to people
about math the the beauty of it
something
even if they don't understand the basic
concept
even being discussed there's something
compelling to it what do you think that
is
any lessons you drew from the early teen
years when you were
showing off to your friends with the
numbers
like is what is it that attracts us to
the beauty of mathematics do you think
the general population, not just the computer scientists and mathematicians.
i think that it you know you can do
amazing things you can
test whether large numbers are prime
you can uh um
you can solve little puzzles about
cannibals and missionaries
and there's a kind of achievement it's
it's it's
puzzle solving. and at a higher level, the fact that you can do this reasoning, that you can prove in an absolutely ironclad way that the sum of the angles of a triangle is 180 degrees.
yeah it's a nice escape from the
messiness of the real world
where nothing can be proved. so, and we'll talk about it, but sometimes the ability to map the real world into such problems, where you can prove things, is a powerful step. yeah, it's amazing that we can do this.
another attribute of geeks is they
they're not necessarily uh endowed with
emotional intelligence
so they can live in a world of
abstractions without having to
uh master the complexities of uh dealing
with people
so just to linger on the historical note, as a phd student in 1955 you joined the computation lab at harvard, where Howard Aiken had built the Mark I and the Mark IV computers.
just to take a step back into that
history what were those computers like
the Mark IV filled a large room, much bigger than this large office that we're talking in now, and you could walk around inside it. there were rows of relays, you could just walk around the interior.
and uh the
machine would sometimes fail because of
bugs which literally meant flying
creatures landing on the switches
so i never i never used that machine for
any
practical purpose the
lab eventually acquired a uh
one of one of the earlier um commercial
computers
this is already in the 60s? no, in the mid 50s. in the mid 50s, or mid-to-late 50s, there were already usable computers then? yeah, we had a Univac with 2000 words of storage.
and so you had to work hard to allocate the memory properly. also, the access time from one word to another depended on the particular words, and so there was an art to arranging the storage allocation to make fetching data rapid.
were you attracted to this actual
physical world implementation
of mathematics so it's a mathematical
machine that's
actually doing the math physically
no not at all i think i was a
i was attracted to the underlying
algorithms
so but did you draw any inspiration so
could you have imagined like what did
you imagine was the future of these
giant computers could you imagine that
60 years later would have billions of
these computers
all over the world i couldn't imagine
that
but there was a sense in the
laboratory that this was the wave of the
future
in fact my mother influenced me she she
told me that
data processing was going to be really
big and i should get into it
she's a smart woman yeah she was a smart
woman
and there was just a feeling that
this was going to change the world but i
i didn't think of it in terms of
personal computing i hadn't that
i had no anticipation that we would be
walking around with computers in our
pockets or anything like that
did you see computers as
tools as mathematical mechanisms to
analyze sort of
sort of theoretical computer science or
as the ai folks which is an entire
other community of dreamers yeah that's
something that could
one day have human level intelligence
well, ai wasn't very much on my radar. i did read Turing's paper about the Turing test, Computing Machinery and Intelligence.
yeah the turing test um what'd you think
about that paper was that just like
science fiction
um i thought that it wasn't a very good
test because it was too subjective so i
i didn't
feel that i didn't feel that the turing
test was really the right way to
calibrate how intelligent an algorithm
could be
to linger on that, do you think it's possible, because you've come up with some incredible tests later on, tests on algorithms, right,
yeah that are
like strong reliable robust across a
bunch of different classes of algorithms
but returning to this emotional
mess that is intelligence do you think
it's possible
to come up with the test that's as
iron-clad as
some of the computational complexity
work
well, i think the greater question is whether it's possible to achieve human-level intelligence.
right so that's so first of all let me
at the philosophical level do you think
it's possible to create algorithms
that reason and
would seem to us to have the same kind
of intelligence as human beings
it's an open question. it seems to me that most of the achievements so far operate within a very limited set of ground rules and for a very limited, precise task, which is a quite different situation from the processes that go on in the minds of humans, where they have to function in changing environments.
they have emotions, they have physical attributes for exploring their environment, they have intuition, they have desires, emotions, and
i don't see anything in the current
achievements of what's called
ai that come close to that capability i
don't think there's any
computer program which
surpasses a six-month-old child in terms
of
comprehension of the world
do you think this complexity of human
intelligence
all the cognitive abilities we have all
the emotion
do you think that could be reduced one
day or just
fundamentally, can it be reduced to a set of algorithms, or an algorithm? so can a Turing machine achieve human-level intelligence?
i am doubtful about that i guess the
argument
in favor of it is that the human brain
seems to achieve
what we call intelligence cognitive
abilities of different kinds
and if you buy the premise that the
human brain is
just an enormous interconnected set of
switches so to speak
then in principle you should be able to
diagnose what that interconnection
structure is like
characterize the individual switches and
build a
simulation outside. but while that may be true in principle, that cannot be the way we're eventually going to tackle this problem. you know, that does not seem like a feasible way to go about it.
there is, however, an existence proof: if you believe that the brain is just a network of neurons operating by rules, i guess you could say that that's an existence proof of the ability to build such capabilities into a mechanism.
but it would be almost impossible to
acquire the information unless we got
enough insight into the operation of the
brain but there's so much mystery there
do you think what do you make of
consciousness for example
there's something as an example of
something we completely have no clue
about
the fact that we have this subjective
experience right is it possible that
this
network of uh this circuit of switches
is able to create something like
consciousness to know its own identity
yeah to know to know the algorithm to
know itself
to know itself i think if you try to
define that
rigorously you'd have a lot of trouble
yeah that's interesting
so i know that there are
many who um
believe that general
intelligence can be achieved and there
are even some who are
feel certain that uh
the singularity will come and uh we will
be surpassed by the machines which will
then
learn more and more about themselves and
reduce humans to an inferior breed
i am doubtful that this will ever be
achieved
just for the fun of it could you linger
on why
what's your intuition why you're
doubtful so there are
quite a few people that are extremely
worried about this
uh existential threat of artificial
intelligence of us being
left behind by the super intelligent
new species what's your intuition why
that's not quite
likely just because
none of the achievements in speech
or robotics or
natural language processing or creation
of
flexible computer assistance or any of
that comes
anywhere near close to
that level of cognition
what do you think about ideas as a sort
of uh if we look at moore's law
and exponential improvement uh to allow
us to
that would surprise us sort of our
intuition fall apart with with
exponential improvement
because i mean we're not able to kind of
we kind of think in linear improvement
yeah, we're not able to imagine a world that goes from the Mark I computer to an iPhone 10.
yeah so do you think it would be we
could be really surprised
by the exponential growth or
or on the flip side is is it possible
that also intelligence is
actually way way way way harder
even with exponential improvement to be
able to crack
i don't think any constant factor improvement could change things. given our current comprehension of what cognition requires, it seems to me that multiplying the speed of the switches by a factor of a thousand or a million will not be useful until we really understand the organizational principle behind the network of switches.
well let's jump into the network of
switches and talk about combinatorial
algorithms if we could
let's step back with the very basics
what are combinatorial algorithms
and what are some major examples of
problems they aim to solve? a combinatorial algorithm is one which deals with a system of discrete objects that can occupy various states, or take on various values from a discrete set of values, and need to be arranged or selected in such a way as to minimize some cost function, or to prove the existence of some combinatorial configuration. so an example
would be um coloring the vertices of a
graph
what's a graph let's step back so what
uh
and it's fun to uh to ask
one of the greatest computer scientists
of all time the most basic questions in
the beginning of most books
but for people who might not know but in
general how you think about it what is
what is a graph
a graph, that's simple: it's a set of points, certain pairs of which are joined by lines called edges, and they represent, in different applications, the interconnections between discrete objects. so they could be the interconnections between switches
in a digital circuit or
interconnections indicating the
communication patterns of a human
community
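The vertex-coloring example he mentioned can be sketched with a simple greedy heuristic (not optimal in general; the example graph is made up), using the adjacency-list representation that matches the "points joined by edges" picture:

```python
def greedy_coloring(graph):
    """Assign each vertex the smallest color (0, 1, 2, ...) not already
    used by one of its colored neighbors. Fast, but may use more colors
    than the true chromatic number."""
    color = {}
    for v in graph:                       # fixed vertex order
        used = {color[u] for u in graph[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# undirected graph as an adjacency list
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(greedy_coloring(graph))  # -> {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```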
um and they could be directed or
undirected and then as you've mentioned
before
might have costs right they can be
directed or undirected
they can be you can think of them as if
if you think if a graph were
representing a
communication network then the edge
could be undirected meaning that
information could flow along it in both
directions
or it could be directed with only
one-way communication
a road system is another example of a
graph with weights
on the edges and then a lot of problems
of optimizing
the efficiency of such networks or
learning about the performance of such
networks
um uh are the
the objective combinatorial algorithm so
it could be
scheduling classes at a school where
the the
vertices the nodes of the network are
the individual
classes and uh the edges indicate the
constraints which say that certain
classes cannot take place at the same
time or
certain teachers are available only for certain classes, etc. or
i talked earlier about the assignment
problem of matching the boys with the
girls
where you have a bipartite graph with an edge from each boy to each girl
with a weight indicating the cost
or in logical design of computers
you might want to find a
set of so-called gates switches
that perform logical functions which can
be interconnected to realize some
function
so you might ask how many gates do you need for a circuit to give a yes output if at least a given number of its inputs are ones, and a no output if fewer are present. my favorite is probably all the work with network flows,
so anytime you have
uh i don't know why it's so compelling
but there's something just beautiful
about it it seems like there's so many
applications and communication networks
in uh traffic right flow
that you can map into these and then you
can think of pipes
and water going through pipes and you
could optimize it in different ways
there's something
always visually and intellectually
compelling to me about it and of course
you've done work there
yeah yeah so so there
the edges represent channels along which
some commodity can flow it might be
gas it might be water it might be
information
maybe supply chain as well like products
being products flowing from one
operation to another
and the edges have a capacity which is
the rate at which the commodity can flow
and a central problem is to
determine given a network of these
channels in this case the edges are
communication channels
the the challenge is to find the maximum
rate
at which the information can flow along
these channels to get from
a source to a destination and
that's a fundamental combinatorial problem that i've worked on jointly with the scientist Jack Edmonds. i think we were the first to give a formal proof that this maximum flow problem through a network can be solved in polynomial time,
which uh i remember the first time i
learned that
just learning that in um
maybe even grad school i don't think it
was even undergrad no
algorithm. yeah, do network flows get taught in basic algorithms courses?
yes probably okay so yeah i've i
remember being very surprised that max
flow
is a polynomial time algorithm yeah that
there's a nice fast algorithm that
solves max flow
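The Edmonds-Karp refinement of Ford-Fulkerson, augmenting along shortest paths found by breadth-first search, which is what yields the polynomial bound, can be sketched roughly like this (the capacity network is a made-up example):

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow from s to t via shortest (BFS) augmenting paths.
    cap: dict of dicts of edge capacities."""
    # build residual capacities, including reverse edges at 0
    nodes = set(cap) | {v for u in cap for v in cap[u]}
    res = {u: {} for u in nodes}
    for u in cap:
        for v, c in cap[u].items():
            res[u][v] = res[u].get(v, 0) + c
            res[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path with spare residual capacity
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                  # no augmenting path left
        # recover the path, push the bottleneck amount along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(edmonds_karp(cap, "s", "t"))  # -> 5
```

Always choosing a shortest augmenting path is the key: it bounds the number of augmentations by O(VE), giving O(VE^2) overall, independent of the capacity values.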
but so there is an algorithm named after you and Edmonds, the Edmonds-Karp algorithm for max flow. so what was it like tackling that problem and trying to arrive at a polynomial time solution,
and maybe you can describe the algorithm
maybe you can describe what's the
running time complexity
that you showed yeah well
first of all what is a polynomial time
algorithm yeah perhaps we could
discuss that so yeah let's let's
actually just even
yeah that's what is algorithmic
algorithmic complexity what are the
major
classes of algorithm complexity
so we in in a problem like the
assignment problem
or scheduling schools or
any of these applications um
you have a set of input data
which might for example be um
a set of vertices connected by edges
with
being you're given for each edge the
capacity of the edge
and you have algorithms
which are think of them as computer
programs with operations such as
addition subtraction
multiplication division comparison of
numbers and so on
and you're trying to construct an
algorithm
based on those operations
which will determine in a minimum number
of
computational steps the answer to the
problem in this case the computational
step
is one of those operations and the
answer to the problem
is let's say the um
the configuration of the network that
carries the maximum amount of flow
and an algorithm is said to run in
polynomial time
if as a function of the size of the
input the number of vertices the number
of edges and so on
the number of basic computational steps
grows
only as some fixed power of that size
a linear algorithm would
execute a number of steps linearly
proportional to the size
quadratic algorithm would be steps
proportional to the square of the size
and so on
and algorithms that whose running time
is bounded by some
fixed power of the size are called
polynomial algorithms
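A toy way to see the difference between a linear and a quadratic step count, counting comparisons rather than measuring wall-clock time:

```python
def max_comparisons(xs):
    """Find the max with a single pass: len(xs) - 1 comparisons (linear)."""
    comps, best = 0, xs[0]
    for x in xs[1:]:
        comps += 1
        if x > best:
            best = x
    return best, comps

def all_pairs_comparisons(xs):
    """Compare every pair of elements: n*(n-1)/2 comparisons (quadratic)."""
    comps, n = 0, len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            comps += 1
    return comps

xs = list(range(50))
print(max_comparisons(xs)[1])     # -> 49, grows linearly with n
print(all_pairs_comparisons(xs))  # -> 1225, grows as n squared
```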
and that's supposed to be relatively
fast class of algorithms that's right we
theoreticians take that to be
the definition of an algorithm being
um efficient and and
we're interested in which problems can
be solved by such
efficient algorithms one can argue
whether that's
the right definition of efficient
because you
could have an algorithm whose running
time is the ten thousandth
power of the size of the input and that
wouldn't be
really efficient and in practice it's
oftentimes
reducing from an n squared algorithm to
an n log n or a linear time
is practically the jump that you want to
make
to allow a real world system to solve a
problem
yeah that's also true because especially
as we get very large networks
the size can be in the millions and uh
and then anything above uh
n log n where n is the size would be
uh too much for a practical solution
okay so that's polynomial time
algorithms what other classes of
algorithms are there
so usually they designate polynomial time by the letter P. yeah, there's also NP, NP-complete and NP-hard.
so can you try to disentangle those and
by trying to define them simply
right, so a polynomial time algorithm is one whose running time is bounded by a polynomial in the size of the input, and the class of such algorithms is called P. in the worst case, by the way, we should say.
right yeah for every case of the problem
and that's very important that
in this theory when we measure the
complexity of an algorithm
we really measure the
number of step the growth of the number
of steps
in the worst case so you may have an
algorithm that
runs very rapidly in most cases, but if there is any case where it gets into a
very long computation, that would increase the computational complexity by this
measure. and that's a very important issue, because, as we may discuss later,
there are some very important algorithms which don't have a good standing from
the point of view of their worst-case performance and yet are very effective. so
theoreticians are interested in p, the class of problems solvable in polynomial
time.
then there's np, which is the class of problems which may be hard to solve, but
where, when confronted with a solution, you can check it in polynomial time. let
me give you an example there. if we look at the assignment problem, you have n
boys and n girls, and the number of numbers that you need to write down to
specify a problem instance is n squared. the question is how many steps are
needed to solve it, and jack edmonds and i were the first to show that it could
be done in time n cubed; earlier algorithms required n to the fourth. so as a
polynomial function of the size of the input, this is a
fast algorithm. now, to illustrate the class np: the question is how long it
would take to verify that a solution is correct. so, for example, if the input
was a graph, we might want to find the largest clique in the graph, where a
clique is a set of vertices such that each vertex in the set is adjacent to each
of the others. so the clique is a complete subgraph. yeah. so if it's a facebook
social network where everybody's friends with everybody else, that's a clique.
no, that would be what's called a complete graph. no, i mean within that clique.
yeah, within that clique they're all friends. so a complete graph is when
everybody is friends with everybody. yeah.
so the problem might be to determine whether in a given graph there exists a
clique of a certain size. well, that turns out to be a very hard problem. but if
somebody hands you a set of vertices and asks you to check whether it's a
clique, you could do that simply by exhaustively looking at all of the edges
between the vertices of the clique and verifying that they're all there. and
that's polynomial time. that's polynomial. so the problem of finding the clique
appears to be extremely hard, but the problem of verifying a clique, to see if
it reaches the target number of vertices, is easy. easy to verify. so finding
the clique is hard, checking it is easy. problems of that nature are called
non-deterministic polynomial time problems,
and that's the class np
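the verification step just described, exhaustively checking all edges among a
candidate set of vertices, can be sketched in a few lines (an illustrative
fragment, not code from the conversation; the little graph is made up):

```python
from itertools import combinations

def is_clique(edges, vertices):
    """check whether every pair of the given vertices is joined by an edge.

    this takes O(k^2) edge lookups for k vertices: polynomial time,
    even though *finding* a large clique appears to be hard.
    """
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset(pair) in edge_set
               for pair in combinations(vertices, 2))

# a triangle (1, 2, 3) plus a pendant vertex 4
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
```

so `is_clique(edges, [1, 2, 3])` is true while `is_clique(edges, [1, 2, 4])` is
false, and the check is fast either way.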
and what about np-complete and np-hard? okay, let's talk about problems where
you're getting a yes or no answer rather than a numerical value: either there is
a perfect matching of the boys with the girls, or there isn't. it's clear that
every problem in p is also in np: if you can solve the problem exactly, then you
can certainly verify the solution. on the other hand, there are problems in the
class np, the class of problems that are easy to check although they may be hard
to solve, and it's not at all clear that problems in np lie in p. so, for
example, if we're looking at scheduling classes at a school, the fact that you
can verify, when handed a schedule for the school, whether it meets all the
requirements, that doesn't mean that you can find the schedule rapidly.
so intuitively, np, non-deterministic polynomial, checking rather than finding:
checking is easier, and therefore the class of problems that can be checked
appears to be much larger than the class of problems that can be solved. and you
keep adding "appears to" and these additional words that designate that we don't
know for sure yet.
we don't know for sure. so the theoretical question, which is considered to be
the most central problem in theoretical computer science, or at least in
computational complexity theory and combinatorial algorithm theory, is whether p
is equal to np. if p were equal to np, it would be amazing: it would mean that
every problem where a solution can be rapidly checked can actually be solved in
polynomial time. we don't really believe that's true. if you're scheduling
classes at a school, we expect that if somebody hands you a satisfying schedule,
you can verify that it works; that doesn't mean that you should be able to find
such a schedule. so intuitively, np encompasses a lot more problems than p. so
can
we take a small tangent and break apart that intuition? do you, first of all,
think that the biggest open problem in computer science, maybe in mathematics,
is whether p equals np? and do you think p equals np, or that p is not equal to
np, if you had to bet all your money on it? i would bet that p is unequal to np,
simply because there are problems that have been around for centuries and have
been studied intensively in mathematics, and even more so in the last 50 years
since p versus np was stated, and no polynomial time algorithms have been found
for these easy-to-check problems. so one example is a problem that goes back to
the mathematician gauss, who was interested in factoring large numbers. we know
a number is prime if it cannot be written as the product of two or more numbers
unequal to one.
so we can factor a number like 91: that's 7 times 13. but if i give you 20-digit
or 30-digit numbers, you're probably going to be at a loss to have any idea how
they can be factored. so the problem of factoring very large numbers does not
appear to have an efficient solution, but once you have found the factors and
expressed the number as a product of two smaller numbers, you can quickly verify
that they are factors of the number.
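that asymmetry, hard to find, quick to verify, is easy to make concrete (a
minimal sketch, not from the conversation):

```python
def verify_factorization(n, factors):
    """checking a proposed factorization is easy: multiply and compare.
    each factor must exceed 1 so the factorization is non-trivial."""
    product = 1
    for f in factors:
        if f <= 1:
            return False
        product *= f
    return product == n
```

verifying that 91 = 7 * 13 is immediate, whereas producing the factors of a
30-digit number in the first place is the part that appears to be hard.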
and your intuition is... a lot of brilliant people have tried to find algorithms
for this one particular problem, and there are many others like it that are
really well studied, and it would be great to find an efficient algorithm for
them. right, and in fact we have some results that i was instrumental in
obtaining,
following up on work by the mathematician stephen cook, to show that within the
class np of easy-to-check problems there's a huge number that are equivalent, in
the sense that either all of them or none of them lie in p, and the first
alternative holds only if p is equal to np. so if p is unequal to np, we would
also know that virtually none of the standard combinatorial problems can be
solved in polynomial time. can you explain how it's possible to tie together so
many problems in a nice bunch, so that if one is proven to be efficiently
solvable, then all are? the first and most important stage of progress was a
result by stephen cook, who showed that a certain problem, called the
satisfiability problem of propositional logic, is as hard as any problem in the
class np.
so the propositional logic problem is expressed in terms of formulas involving
the logical operations and, or, and not, operating on variables that can be
either true or false. an instance of the problem would be some formula involving
and, or, and not, and the question would be whether there is an assignment of
truth values to the variables in the formula that would make it true. so, for
example, if i take the formula (a or b) and (a or not b) and (not a or b) and
(not a or not b), the conjunction of all four of those so-called clauses, you
can determine that no assignment of truth values to the variables a and b will
allow that conjunction of clauses to be true. so that's an example of a formula
in propositional logic, involving expressions based on the operations and, or,
and not, which is not satisfiable: there is no solution that satisfies all of
those constraints.
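that little formula can be checked mechanically by trying every truth
assignment, which is fine for two variables even though this brute force scales
exponentially (an illustrative sketch; the clause encoding is my own):

```python
from itertools import product

def satisfiable(clauses, variables):
    """clauses: list of clauses, each a list of (variable, wanted_value)
    literals. a clause is satisfied when at least one literal matches
    the assignment; the formula is the conjunction of all clauses."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == want for v, want in clause)
               for clause in clauses):
            return True
    return False

# (a or b) and (a or not b) and (not a or b) and (not a or not b)
clauses = [[("a", True), ("b", True)],
           [("a", True), ("b", False)],
           [("a", False), ("b", True)],
           [("a", False), ("b", False)]]
```

every one of the four assignments fails some clause, so `satisfiable(clauses,
["a", "b"])` comes out false, as in the example.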
and that's one of the cleanest and most fundamental problems in computer
science. it's a nice statement of a really hard problem. a nice statement of a
really hard problem. and what cook showed is that every problem in np can be
re-expressed as an instance of the satisfiability problem.
so to do that, he used the observation that a very simple abstract machine,
called the turing machine, can be used to describe any algorithm: an algorithm
for any realistic computer can be translated into an equivalent algorithm on one
of these turing machines, which are extremely simple. a turing machine: there's
a tape, you walk along the data on the tape, and you have a finite list of basic
instructions which say, if you're reading a particular symbol on the tape and
you're in a particular state, then you can move to a different state and change
the symbol in the cell of the tape that you were looking at. and that was a
metaphor and a mathematical construct that turing put together to represent all
possible computation. all possible computation. now, one of these so-called
turing machines is too simple to be useful in practice, but for theoretical
purposes we can depend on the fact that an algorithm for any computer can be
translated into one that would run on a turing machine.
right. and then, using that fact, he could describe any possible
non-deterministic polynomial time computation: any algorithm for a problem in np
could be expressed as a sequence of moves of the turing machine, described in
terms of reading a symbol on the tape while you're in a given state, moving to a
new state, and leaving behind a new symbol. and given the fact that any
non-deterministic polynomial time algorithm can be described by a list of such
instructions, you could translate the problem into the language of the
satisfiability problem.
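a minimal sketch of the kind of machine being described: states, a tape, and a
finite table of (state, symbol) -> (new state, new symbol, move) rules. the
little machine below, which appends a 1 to a block of 1s, is my own toy example,
not anything from the conversation:

```python
def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). blank cells read as "_".
    the machine halts when no rule applies."""
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(max_steps):
        key = (state, cells.get(head, "_"))
        if key not in rules:
            break
        state, cells[head], move = rules[key]
        head += move
    return state, "".join(cells[i] for i in sorted(cells))

# toy machine: walk right over a block of 1s, then append one more 1
rules = {("start", "1"): ("start", "1", +1),
         ("start", "_"): ("done", "1", +1)}
```

running it on the tape "111" halts in state "done" with tape "1111"; cook's
construction encodes the step-by-step moves of such a machine as clauses of a
satisfiability formula.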
is that amazing to you, by the way? if you take yourself back, when you were
first thinking about the space of problems, how amazing is that? it's
astonishing. when you look at cook's proof, it's not too difficult to figure out
why this is so, but the implications are staggering. it tells us that all of the
problems in np, all the problems where solutions are easy to check, can be
rewritten in terms of the satisfiability problem. yeah, and it adds so much more
weight to the p equals np question, because all it takes is to find an efficient
algorithm for that one problem. that's right: the p versus np question can be
re-expressed as simply asking whether the satisfiability problem of
propositional logic can be solved in polynomial time.
but there's more. i encountered cook's paper when he published it at a
conference in 1971. yeah. so when i saw cook's paper, and saw this reduction of
each of the problems in np, by a uniform method, to the satisfiability problem
of propositional logic, that meant that the satisfiability problem was a
universal combinatorial problem. and it occurred to me, through experience i had
had in trying to solve other combinatorial problems, that there were many other
problems which seemed to have that universal structure, and so i began looking
for reductions from satisfiability to other problems.
one of the other problems would be the so-called integer programming problem:
determining whether there's a solution to a set of linear inequalities involving
integer variables. just like linear programming, but there's a constraint that
the variables must remain integers, in fact must be either zero or one. because
they could only take on those values, and that makes the problem much harder.
yes, that makes the problem much harder, and it was not difficult to show that
the satisfiability problem can be restated as an integer programming problem. so
can you pause on that? was that one of the first problem mappings that you tried
to do? and how hard is that mapping? you said it wasn't hard to show, but, you
know, that's a big leap. it is a big leap, yeah.
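the flavor of that restatement can be sketched: represent each variable by a
zero-one integer x, a negated literal by (1 - x), and require the literals of
each clause to sum to at least 1. this is an illustrative rendering of the
standard idea, not the original construction verbatim:

```python
def clause_inequality_holds(clause, x):
    """clause: list of (variable, positive?) literals; x: dict of 0/1
    values. the clause (v or not w) becomes the linear inequality
    x_v + (1 - x_w) >= 1 over zero-one variables."""
    total = sum(x[v] if positive else 1 - x[v] for v, positive in clause)
    return total >= 1

# (a or b) and (not a or b), with the 0/1 assignment a = 0, b = 1
x = {"a": 0, "b": 1}
clauses = [[("a", True), ("b", True)],
           [("a", False), ("b", True)]]
```

a zero-one vector satisfies all the inequalities exactly when the corresponding
truth assignment satisfies all the clauses, which is the heart of the reduction.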
well, let me give you another example. another problem in np is whether a graph
contains a clique of a given size, and now the question is, can we reduce the
propositional logic problem to the problem of whether there's a clique of a
certain size? well, if you look at the propositional logic problem, an instance
of it can be rewritten, using the operations of boolean logic, as the
conjunction of a set of clauses, the and of a set of ors, where each clause is a
disjunction, an or, of terms of the form a or b or c, each term being either one
of the variables in the problem or the negation of one of the variables.
so the question in the satisfiability problem is whether those clauses can be
simultaneously satisfied. now, to satisfy all those clauses, you have to find
one of the terms in each clause which is going to be true in your truth
assignment. but you can't make the same variable both true and false: if you
have the variable a in one clause and you want to satisfy that clause by making
a true, you can't also make the complement of a true in some other clause. and
so the goal is to make every single clause true, if it's possible to satisfy it,
and the way you make a clause true is that at least one term in the clause must
be true.
so now we convert this problem to something called the independent set problem,
where you're asking for a set of vertices in a graph such that no two of them
are adjacent, sort of the opposite of the clique problem. we've seen that we can
express satisfiability as finding a set of terms, one in each clause, without
picking both a variable and the negation of that variable, because if the
variable is assigned a truth value, the negated variable has to have the
opposite truth value. right. and so we can construct a graph where the vertices
are the occurrences of terms in all of the clauses, and you put an edge between
two occurrences of terms if they're both in the same clause, because you're only
picking one element from each clause, and also an edge between them if they
represent opposite values of the same variable, because you can't make a
variable both true and false. and so you get a graph where you have all of these
occurrences of variables, and edges which mean that you're not allowed to choose
both endpoints, either because they're in the same clause or because they're
negations of one another. all right.
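that construction can be written out directly: one vertex per occurrence of a
term, an edge within each clause, and an edge between opposite literals of the
same variable (an illustrative reconstruction; the two-clause formula is made
up):

```python
from itertools import combinations

def build_conflict_graph(clauses):
    """vertices are (clause_index, (variable, positive?)) pairs.
    edges join two occurrences in the same clause, and occurrences
    that are negations of one another; an independent set choosing
    one vertex per clause corresponds to a satisfying assignment."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for u, v in combinations(vertices, 2):
        same_clause = u[0] == v[0]
        negations = u[1][0] == v[1][0] and u[1][1] != v[1][1]
        if same_clause or negations:
            edges.add((u, v))
    return vertices, edges

# (a or b) and (not a or b)
clauses = [[("a", True), ("b", True)],
           [("a", False), ("b", True)]]
vertices, edges = build_conflict_graph(clauses)
```

here the two occurrences of b are not connected, so picking both of them gives
an independent set with one vertex per clause, matching the satisfying
assignment b = true.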
and, first of all, to zoom out: that's a really powerful idea, that you can take
a graph and connect it to a logic formula, and do that mapping for all possible
formulations of a particular problem on a graph. yeah. i mean, that still is
hard for me to believe, that that's possible, that there's such a union, such a
friendship, among all these problems akin to combinatorial algorithms, that
they're all somehow related. i know it can be proven, but what do you make of
it, that that's true?
well, they just have the same expressive power: you can take any one of them and
translate it into the terms of the other. the fact that they have the same
expressive power also somehow means that they're translatable into one another.
right. and what i did in the 1972 paper was to take 21 fundamental problems,
commonly occurring problems of packing, covering, matching and so forth, all
lying in the class np, and show that the satisfiability problem can be
re-expressed as any of those, that any of those have the same expressive power.
and that was like throwing down the gauntlet, saying there are probably many
more problems like this. right. but that's just saying that they're all the
same. they're all the same, but not exactly. yeah, they're all the same in terms
of whether they are rich enough to express any of the others, but that doesn't
mean that they have the same computational complexity. what we can say is that
either all of these problems or none of them are solvable in polynomial time.
yeah, so where do the np-complete and np-hard classes come in? well, that's just
a small technicality. when we're talking about decision problems, that means
that the answer is just yes or no: there is a clique of size 15, or there's not
a clique of size 15. on the other hand, an optimization problem would ask: find
the largest clique. the answer would not be yes or no, it would be 15. so when
you're putting a valuation on the different solutions and you're asking for the
one with the highest valuation, that's an optimization problem, and there's a
very close affinity between the two kinds of problems. but the counterpart of
being the hardest decision problem, the hardest yes-no problem, is to be hardest
at minimizing or maximizing an objective function, and problems that are hardest
in the class when viewed in terms of optimization are called np-hard, rather
than np-complete. and np-complete is for decision problems. and np-complete is
for decision problems. so
if somebody shows that p equals np, what do you think that proof will look like,
if it's possible to show that as a proof, or to demonstrate an algorithm? all i
can say is that it will involve concepts that we do not now have, and approaches
that we don't have. do you think those concepts are out there, in terms of
inside complexity theory, inside of computational analysis of algorithms? do you
think there are concepts that are totally outside the box that we haven't
considered yet? i think that if there is a proof that p is equal to np, or that
p is not equal to np, it'll depend on concepts that are now outside the box.
now, if that's shown, either way, p equals np or p unequal to np, what impact,
you kind of mentioned it a little bit, but can you linger on it, what kind of
impact would it have on theoretical computer science, and perhaps software
systems in general? well, i think it
would have enormous impact on the world in either case. if p is unequal to np,
which is what we expect, then we know that for the great majority of the
combinatorial problems that come up, since they're known to be np-complete,
we're not going to be able to solve them by efficient algorithms. however,
there's a little bit of hope, in that it may be that we can solve most
instances: all we know is that if a problem is not in p, then it can't be solved
efficiently on all instances. but basically, if we find that p is unequal to np,
it will mean that we can't expect always to get the optimal solutions to these
problems, and we have to depend on heuristics that perhaps work most of the
time, or give us good approximate solutions. so we would turn our eye towards
the heuristics with a little bit more acceptance and comfort in our hearts.
exactly. okay, so let me ask a romanticized question: what to you is one of the
most, or the most, beautiful combinatorial algorithms, in your own work or just
in general in the field, that you've ever come across or have developed
yourself? oh, i like the stable matching problem, or the stable marriage
problem, very much. what's the stable matching problem?
yeah. imagine that you want to marry off n boys with n girls. each boy has an
ordered list of his preferences among the girls, his first choice, his second
choice, through his nth choice, and each girl also has an ordering of the boys,
first choice, second choice, and so on. and we will say that a one-to-one
matching of the boys with the girls is stable if there are no two couples in the
matching such that the boy in the first couple prefers the girl in the second
couple to his mate, and she prefers that boy to her current mate. in other
words, the matching is stable if there is no pair who want to run away with each
other, leaving their partners behind.
gosh, yeah. actually, this is relevant to matching residents with hospitals and
some other real-life problems, although not quite in the form that i described.
so it turns out that for any set of preferences a stable matching exists, and
moreover it can be computed by a simple algorithm, in which each boy starts
making proposals to girls, and if a girl receives a proposal she accepts it
tentatively, but she can drop it later if she gets a better proposal from her
point of view. the boys go down their lists, proposing to their first, second,
third choices, stopping when a proposal is accepted. but the girls meanwhile are
watching the proposals that are coming in to them, and a girl will drop her
current partner if she gets a better proposal.
and the boys never go back; once they've been denied, they don't try again. they
don't try again, because the girls are always improving their status as they
receive better and better proposals, and the boys are going down their lists,
starting with their top preferences. and one can prove that the process will
come to an end, where everybody gets matched with somebody, and you won't have
any pair that want to abscond with each other. do you find the
proof or the algorithm itself beautiful, or is it the simplicity of the
underlying rule of the algorithm that is the beautiful part? both, i would say.
and you also have the observation that you might ask who is better off, the boys
who are doing the proposing or the girls who are reacting to proposals, and it
turns out that it's the boys who are doing the best: that is, each boy is doing
at least as well as he could do in any other stable matching. so there's a sort
of lesson for the boys that you should go out and be proactive and make those
proposals.
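the proposal process described above is short enough to write out; this is a
compact version of the gale-shapley procedure, with hypothetical preference
lists for illustration:

```python
def stable_matching(boy_prefs, girl_prefs):
    """boys propose down their lists; each girl tentatively holds her
    best proposal so far and drops it only for a boy she ranks higher."""
    rank = {g: {b: i for i, b in enumerate(prefs)}
            for g, prefs in girl_prefs.items()}
    next_choice = {b: 0 for b in boy_prefs}  # next girl to propose to
    engaged_to = {}                          # girl -> boy she is holding
    free_boys = list(boy_prefs)
    while free_boys:
        boy = free_boys.pop()
        girl = boy_prefs[boy][next_choice[boy]]
        next_choice[boy] += 1
        current = engaged_to.get(girl)
        if current is None:
            engaged_to[girl] = boy
        elif rank[girl][boy] < rank[girl][current]:
            engaged_to[girl] = boy           # she trades up
            free_boys.append(current)
        else:
            free_boys.append(boy)            # rejected; he proposes onward
    return {b: g for g, b in engaged_to.items()}

# hypothetical preferences for three boys and three girls
boy_prefs = {"x": ["a", "b", "c"], "y": ["a", "c", "b"], "z": ["b", "a", "c"]}
girl_prefs = {"a": ["y", "x", "z"], "b": ["x", "z", "y"], "c": ["z", "y", "x"]}
matching = stable_matching(boy_prefs, girl_prefs)
```

with these particular lists the boys end up matched x-b, y-a, z-c, and no pair
wants to run away together.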
go for broke. yeah, i don't know if this is directly mappable philosophically to
our society, but it certainly seems like a compelling notion, and, like you
said, there are probably a lot of actual real-world problems that this could be
mapped to. yeah, well, you get complications. for example, what happens when a
husband and wife want to be assigned to the same hospital? you have to take
those constraints into account, and then the problem becomes np-hard.
why is it a problem for the husband and wife to be assigned to the same
hospital? no, it's desirable. desirable, or at least to go to the same city. so
if you're assigning residents to hospitals, the residents, both male and female,
have their own preferences for the hospitals, and the hospitals have their
preferences, but if resident a, the husband, is going to philadelphia, then
you'd like his wife also to be assigned to a hospital in philadelphia. so which
step makes it an np-hard problem? the fact that you have this additional
constraint: it's not just the preferences of individuals, but the fact that the
two partners to a marriage have to be assigned to the same place.
i'm being a little dense. the perfect matching, no, not the stable matching, is
what you refer to, that's when two partners are trying to... okay, what's
confusing you is that in the first interpretation of the problem i had boys
matching with girls. yes. in the second interpretation, you have humans matching
with institutions, and there's a coupling within the humans. gotcha, so any
added little constraint will make it an np-hard problem? well, yeah. okay.
by the way, the algorithm you mentioned, was it one of yours? no, no, that was
due to gale and shapley. my friend david gale passed away before he could share
in a nobel prize, but his partner shapley shared in a nobel prize in economics
with somebody else, for ideas stemming from this stable matching idea.
so you've also developed yourself some elegant, beautiful algorithms. again,
picking your children: the rabin-karp algorithm for string searching, pattern
matching, the edmonds-karp algorithm for max flow, and we mentioned the
hopcroft-karp algorithm for finding maximum cardinality matchings in bipartite
graphs. are there ones that stand out to you as ones you're most proud of,
whether for their beauty, their elegance, or just being the right discovery or
development in your life, that you're especially proud of? i like the rabin-karp
algorithm, because it
illustrates the power of randomization. the problem there is to decide whether a
given long string of symbols from some alphabet contains a given word, whether a
particular word occurs within some very much longer string. and the idea of the
algorithm is to associate with the word that we're looking for a fingerprint,
some number or combinatorial object that describes that word, and then to look
for an occurrence of that same fingerprint as you slide along the longer string.
and what we do is associate with each word a number. first of all, we think of
the letters that occur in a word as digits, in, let's say, decimal or whatever
base; whatever the number of different symbols is, that's the base of the
numbers. yeah, right. so every word can then be thought of as a number, with the
letters being the digits of that number. and then we pick a random prime number
in a certain range, and we take that word, viewed as a number, and take the
remainder on dividing that number by the prime. so, coming up with a nice hash
function. it's a kind of hash function, yeah. it gives you a little shortcut for
that particular word. yeah, so that's very different from other algorithms of
its kind that are trying to do string matching, which usually are combinatorial
and don't involve the idea of taking a random fingerprint. yes.
and doing the fingerprinting has two advantages. one is that as we slide along
the long string, digit by digit, we keep a window of a certain size, the size of
the word we're looking for, and we compute the fingerprint of every stretch of
that length. and it turns out that just a couple of arithmetic operations will
take you from the fingerprint of one stretch to the one you get when you slide
over by one position, so the computation of all the fingerprints is simple. and
secondly, it's unlikely, if the prime is chosen at random from a certain range,
that you will get two of the segments in question having the same fingerprint.
right. so there's a small probability of error, which can be checked after the
fact, and there's also the ease of doing the computation, because you're working
with these fingerprints, which are remainders modulo some big prime.
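a compact sketch of that fingerprinting scheme: words viewed as base-256
numbers, fingerprints taken modulo a prime, and a couple of arithmetic
operations to slide the window. for simplicity the prime here is fixed rather
than drawn at random, and candidate matches are confirmed character by
character:

```python
def rabin_karp(text, word, base=256, prime=1_000_003):
    """find all positions where word occurs in text, via rolling
    fingerprints. in the randomized version the prime is drawn at
    random; any rare fingerprint collision is caught by the final
    character-by-character check."""
    n, m = len(text), len(word)
    if m == 0 or m > n:
        return []
    target = window = 0
    for i in range(m):
        target = (target * base + ord(word[i])) % prime
        window = (window * base + ord(text[i])) % prime
    high = pow(base, m - 1, prime)  # weight of the window's leading char
    matches = []
    for i in range(n - m + 1):
        if window == target and text[i:i + m] == word:
            matches.append(i)
        if i + m < n:
            # slide: drop the leading character, append the next one
            window = ((window - ord(text[i]) * high) * base
                      + ord(text[i + m])) % prime
    return matches

positions = rabin_karp("abracadabra", "abra")
```

so `rabin_karp("abracadabra", "abra")` reports the occurrences at positions 0
and 7, touching each character only a constant number of times.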
so that's the magical thing about randomized algorithms: if you add a little bit
of randomness, it somehow allows you to take a pretty naive, simple-looking
approach and allow it to run extremely well. so can you maybe take a step back
and say what is a randomized algorithm, this category of algorithms? well, it's
just the ability to draw a random number from some range, or to associate a
random number with some object, or to draw at random from some set.
so another example is very simple: if we're conducting a presidential election,
and we would like to pick the winner, in principle we could draw a random sample
of all of the voters in the country, and if it was a sample of substantial size,
say a few thousand, then the most popular candidate in that group would be very
likely to be the correct choice that would come out of counting all the millions
of votes. of course we can't do this, because first of all everybody has to feel
that his or her vote counted, and secondly we can't really do a purely random
sample from that population, and i guess thirdly there could be a tie, in which
case we wouldn't have a significant difference between the two candidates. but
those things aside, if you didn't have all that messiness of human beings, you
could prove that that kind of random picking would solve the problem with a very
low probability of error.
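that idealized version is easy to simulate; the 60/40 electorate below is
invented for illustration, and the fixed seed just makes the run repeatable:

```python
import random
from collections import Counter

def sample_winner(votes, sample_size, seed=0):
    """estimate the plurality winner from a random sample of the votes."""
    rng = random.Random(seed)
    sample = rng.sample(votes, sample_size)
    return Counter(sample).most_common(1)[0][0]

# a million-voter electorate split 60/40 between two candidates
votes = ["alice"] * 600_000 + ["bob"] * 400_000
winner = sample_winner(votes, sample_size=2_000)
```

a sample of a couple of thousand gets a 60/40 race right with overwhelming
probability, which is the basic statistical fact being leaned on.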
another example is testing whether a number is prime. so if i want to test
whether
17 is prime, i could pick any number a between 1 and 16 and raise it to the 16th
power modulo 17, and i should get back 1. that's a famous result due to fermat,
called fermat's little theorem: if you take any number a in the range 1 through
n minus 1, and raise it to the n minus first power modulo n, you'll get 1 if n
is prime. yeah. so if you don't get back 1, that's a proof that the number is
not prime.
well, and you can show that, suitably set up, if the number is not prime, the
probability that you will get a violation of fermat's result is very high, and
so this gives you a way of rapidly proving that a number is not prime. it's a
little more complicated than that, because there are certain values of n where
something a little more elaborate has to be done, but that's the basic idea:
taking an identity that holds for primes, so that if it ever fails on any
instance for a non-prime, you know that the number is not prime. a fast proof
that a number is not prime.
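the test just described can be sketched as follows; a failed identity is a proof
of compositeness, while passes only make primality likely (the "certain values
of n" needing more elaborate treatment are the carmichael numbers):

```python
import random

def probably_prime(n, trials=20, seed=1):
    """fermat test: if a**(n-1) % n != 1 for some base a, then n is
    certainly composite. if every random trial passes, n is prime
    with high probability (carmichael numbers excepted)."""
    if n < 4:
        return n in (2, 3)
    rng = random.Random(seed)
    for _ in range(trials):
        a = rng.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a witness: a fast proof that n is not prime
    return True
```

for 17 every base passes, since a to the 16th is 1 modulo 17; for a composite
like 100, random bases quickly violate the identity.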
can you maybe elaborate a little bit more on what's your intuition for why
randomness works so well and results in such simple algorithms? well, the
example of conducting an election, where in theory you could take a sample and
depend on the validity of the sample to really represent the whole, is just a
basic fact of statistics, which gives a lot of opportunities. and i actually
exploited that sort of random sampling idea in designing an algorithm for
counting the number of solutions that satisfy a particular formula in the
propositional calculus. so, some version of the satisfiability problem. a
version of the satisfiability problem. is there some interesting insight that
you want to elaborate on, some aspect of that algorithm that might be useful to
describe? so you have
a collection of
formulas and you want to
count the number
of solutions
that satisfy at least one of the
formulas
and you can count the number of
solutions that satisfy
any particular one of the formulas but
you have to account
for the fact that that solution might be
counted many times if it solves
more than one of the formulas
and so what what you do is you
sample from the formulas according to
the number of solutions that satisfy
each individual one in that way you draw
a random solution
but then you correct by looking at
the number of formulas that satisfy that
random solution
and uh and don't double count
so if if you you can think of it this
way so you have a
matrix of zeros and ones and you want to
know
how many columns of that matrix contain
at least one one
and you can count in each row how many
ones there are
so what you can do is draw from the rows
according to
the number of ones if a row has more
ones it gets drawn
more frequently but then
if you draw from that row you have to go
up the column and look at where that
same one is repeated in
different rows and only
count it as a success or a hit if it's
the
earliest row that contains the one right
and that gives you a robust
statistical estimate of the total number
of columns that contain at least one of
the ones
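the row-sampling estimator karp just described, drawing a uniformly random one from the matrix and counting it as a hit only when it lies in the earliest row covering its column, can be written out directly (function and variable names here are my own):

```python
import random

def estimate_covered_columns(matrix, samples=10000, rng=None):
    """estimate how many columns of a 0/1 matrix contain at least one 1.

    draw a random 1: pick a row with probability proportional to its
    count of ones, then a random 1 within that row. count a hit only if
    the sampled row is the earliest row with a 1 in that column, so each
    column is credited exactly once. the hit frequency times the total
    number of ones is an unbiased estimate of the answer.
    """
    rng = rng or random.Random(0)
    rows = [[j for j, x in enumerate(row) if x] for row in matrix]
    weights = [len(r) for r in rows]
    total_ones = sum(weights)
    if total_ones == 0:
        return 0.0
    hits = 0
    for _ in range(samples):
        i = rng.choices(range(len(rows)), weights=weights)[0]
        j = rng.choice(rows[i])
        # earliest row containing a 1 in column j
        first = next(i2 for i2, r in enumerate(rows) if j in r)
        if first == i:
            hits += 1
    return total_ones * hits / samples
```

with rows playing the role of formulas and columns the role of solutions, this is the same scheme karp describes for counting solutions of a union of formulas without double counting.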
so that that is an example of
the same principle that was used in
studying random sampling
another viewpoint is that
if you have a phenomenon that occurs
almost all the time
then if you sample one of the
occasions where it occurs you're most
likely to find it
if you're looking for an occurrence a
random choice is likely to work
so that comes up in solving
identities solving algebraic identities
you
you get um two formulas that may look
very different you want to know if
they're really identical
what you can do is just
pick a random value and evaluate the
two formulas
at that value and see if
they agree
and you depend on the fact
that if the formulas are distinct then
they're going to disagree a lot
and so therefore a random choice will
exhibit the disagreement
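this random-evaluation test is essentially the schwartz-zippel idea; a hedged sketch (the functions, trial count, and domain size here are illustrative choices, not fixed by anything karp says):

```python
import random

def probably_identical(f, g, trials=25, domain=10**9):
    """randomized identity test: evaluate both formulas at random points.

    distinct polynomial formulas disagree at almost all points (a degree-d
    polynomial has at most d roots), so a few random evaluations almost
    surely expose a difference; agreement on every trial is strong,
    though not absolute, evidence that the formulas are identical.
    f and g are callables taking one number.
    """
    rng = random.Random(1)
    for _ in range(trials):
        x = rng.randrange(domain)
        if f(x) != g(x):
            return False  # one disagreement proves they differ
    return True  # probably identical

# (x + 1)**2 versus its expansion: they look different but are the same
same = probably_identical(lambda x: (x + 1) ** 2,
                          lambda x: x * x + 2 * x + 1)
```

per trial the chance of a false agreement for degree-d polynomials is at most d divided by the domain size, so repeated trials drive the error down fast.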
if there are many ways for the two to
disagree
and you only need to find one
disagreement then random choice is
likely to yield it and in general so
we've just talked about randomized
algorithms but we can look at
the probabilistic analysis of algorithms
and
that gives us an opportunity to step
back and as we said
everything we've been talking about is
worst case analysis right
could you maybe comment on
the usefulness and the power of worst
case analysis versus
best case analysis average case
probabilistic how do we think about the
future of theoretical computer science
in the kind of analysis we do of
algorithms does worst case analysis
still have a place
an important place or do we want to try
to move forward towards kind of
average case analysis yeah and what
are the challenges there
so if worst case analysis shows that
an algorithm is always good that's fine
if worst case analysis
is used to show
that the solution is not always good
then you have to step back and do
something else to ask how often will
you get a good solution just to pause on
that for a second that
that's so beautifully put because i
think we tend to judge algorithms
we throw them in the trash the moment
their worst case is shown to be bad
right
and that's unfortunate i think
a good example is um going back to the
satisfiability problem
there are very powerful programs called
sat solvers
which in practice fairly reliably
solve instances with many millions of
variables that arise in
digital design or proving programs
correct and other applications
and so in in many application areas
even though satisfiability as we've
already discussed is
np complete the sat solvers will
work so well that the people
in that discipline tend to think of
satisfiability as an easy problem
so in other words just
for some reason that we don't entirely
understand
the instances that people formulate in
designing digital circuits or other
applications
are such that
satisfiability is not hard to check
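to make the problem concrete, here is a toy backtracking (dpll-style) search of the kind industrial sat solvers vastly refine; the encoding and names are mine, and this sketch has none of the unit propagation, clause learning, or heuristics that make real solvers handle millions of variables:

```python
def dpll(clauses, assignment=None):
    """tiny backtracking satisfiability search.

    clauses are lists of nonzero ints: k means variable k true, -k false.
    returns a satisfying assignment as a dict, or None if unsatisfiable.
    """
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        new_clause, satisfied = [], False
        for lit in clause:
            val = assignment.get(abs(lit))
            if val is None:
                new_clause.append(lit)  # still undecided
            elif (lit > 0) == val:
                satisfied = True        # clause already true
                break
            # else: literal falsified, drop it from the clause
        if satisfied:
            continue
        if not new_clause:
            return None                 # empty clause: dead end
        simplified.append(new_clause)
    if not simplified:
        return assignment               # every clause satisfied
    var = abs(simplified[0][0])         # branch on some unassigned variable
    for choice in (True, False):
        result = dpll(simplified, {**assignment, var: choice})
        if result is not None:
            return result
    return None
```

in the worst case this is exponential, which is exactly the np-completeness point; the surprise karp describes is how rarely the instances people actually formulate hit that worst case.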
and even searching for a satisfying
solution can be done efficiently
in practice and there are
many examples for example we talked
about the traveling salesman problem
so just to refresh our memories uh the
problem is you've got
a set of cities you have pairwise
distances between cities
um and you want to find a tour through
all the cities that
minimizes the total cost of
all the edges traversed
all the trips between cities the
problem is
np hard but people using
integer programming codes
together with some other mathematical
tricks
solve
geometric instances of the problem where
the cities are let's say points in the
plane
uh and get optimal solutions to problems
with tens of thousands of cities
actually it'll take a few computer
months to
solve a problem of that size but for
problems of size a thousand or two
it'll rapidly get optimal solutions
provably optimal solutions
even though again we know that it's
unlikely that the traveling salesman
problem can be solved in polynomial time
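for contrast with the integer-programming codes karp mentions, which reach provably optimal tours on tens of thousands of cities, an exhaustive search makes the problem statement concrete but dies at around a dozen cities; a toy sketch:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """exact tour for a tiny instance: try every ordering of cities
    1..n-1 starting and ending at city 0, and keep the cheapest.

    dist is a matrix of pairwise distances. with (n - 1)! tours to
    check, this is only feasible for roughly a dozen cities, which is
    exactly why the clever mathematical machinery is needed.
    """
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour
```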
are there methodologies like rigorous
systematic methodologies
for you said
in practice this algorithm
is pretty good are there systematic ways
of
saying in practice this sounds pretty
good so in other words average case
analysis
or you've also mentioned that average
case
kind of requires you to understand what
the typical cases
the typical instances are and that might be
really difficult that's very difficult
so
after i did my original work on
showing all these problems to
be np complete
i looked around for a way to
shed some positive light on
combinatorial algorithms
and what i tried to do was to study
problems behavior on the average or
with high probability but i had to make
some assumptions about
what's the probability space what's
the sample space
what do we mean by typical problems
that's very hard to say
so i took the easy way out and made some
very simplistic assumptions
so i assumed for example that if we were
generating a graph
with a certain number of vertices and
edges
then we would generate the graph by
simply choosing one edge at a time
at random until we got the right
number of edges
that's a particular model of
random graphs that has been studied
mathematically a lot
and within that model i could prove
all kinds of wonderful things
i and others who also worked on this
so we could show that we know
exactly how many edges there have to be
in order for
there to be a
so-called hamiltonian circuit that's a
cycle that
visits each vertex exactly once
we know that if the number of
edges is a little bit more than n log n
where n is the number of vertices then
such a cycle is very likely to
exist
and we can give a heuristic that will
find it with high probability
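the random graph model karp describes, adding distinct uniformly random edges one at a time until the target count, is easy to simulate, and for tiny graphs you can even check hamiltonicity by brute force and watch the threshold behavior he mentions (the fast heuristics he refers to are of course much cleverer than this exhaustive check):

```python
import random
from itertools import permutations

def random_graph(n, m, rng):
    """karp's model: draw distinct edges uniformly at random, one at a
    time, until the graph on n vertices has exactly m edges."""
    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return edges

def has_hamiltonian_cycle(n, edges):
    """brute force: try every cyclic order of the vertices (tiny n only)."""
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm + (0,)
        if all((min(a, b), max(a, b)) in edges
               for a, b in zip(cycle, cycle[1:])):
            return True
    return False
```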
and the community
in which i was working got a lot of
results along these lines
but the field tended to be rather
lukewarm about accepting these results
as meaningful
because we were making such a simplistic
assumption about the kinds of graphs
that we would be dealing with so we
could show all kinds of wonderful things
it was a great playground i enjoyed
doing it
but after a while i
concluded
that it didn't have a lot of bite in
terms of the practical application
okay so that's too much into the
world of toy problems
yeah okay but all right so
is there a way to find nice
representative real world
impactful instances of a problem on
which demonstrate
that an algorithm is good so this is
kind of like the machine learning world
that's kind of what it at its best
tries to do is
find a data set from like the real world
and show the performance
all the conferences are focused on
beating the performance
on that real world data set is there
an equivalent
in complexity analysis not
really um don knuth
started to collect examples of graphs
coming from various places so he would
have a whole
zoo of different graphs that he could
choose from and he
could study the performance of
algorithms on different types of graphs
but there it's really important
and compelling to be able to define
a class of graphs so the actual
act of defining a class of graphs that
you're interested in seems to be
a non-trivial step if we're talking
about instances that we should care
about in the real world
yeah there's nothing
available there that would be analogous
to the training set for supervised
learning
you know where you sort of assume that
the world has
given you a bunch of examples
to work with we don't really have that
for
problems for combinatorial problems on
graphs and networks
you know there's been a huge
growth of data sets
available do you think some aspect of
theoretical computer science
i might be contradicting my own question
while saying it but
will there be some aspect an empirical
aspect of theoretical computer science
which will allow the fact that these
datasets are huge we'll start using them
for analysis
sort of you know if you want to say
something about
a graph algorithm you might take
a social network like facebook
and look at subgraphs of that and
prove something about the facebook graph
and be respected and at the same time be
respected in the theoretical computer
science community that hasn't been
achieved yet i'm afraid
is that is that uh is it p equals np is
that impossible
is is it impossible to publish a
successful paper in the theoretical
computer science community
that shows some
some performance on a real-world data
set or is that really just those are two
different worlds
well they haven't really come together i
would say that there is
a field of experimental algorithmics
where people sometimes are given some
family of examples sometimes they just
generate them at random
and they report on performance
but there's no convincing
evidence that the sample is
representative of anything at all
so let me ask in terms of breakthroughs
and open problems what are the most
compelling open problems to you
and what possible breakthroughs do you
see in the near term in terms of
theoretical computer science
well there are all kinds of
relationships among complexity classes
that can be studied
just to mention one thing i wrote a
paper with
richard lipton in 1979
where we asked the following question um
if you take a problem a combinatorial
problem in np let's say
and you
pick the size of the
problem say it's a traveling salesman
problem
of size 52 and you ask
could you get an efficient small
boolean circuit tailored for that size
52 where you could feed the
edges of the graph in as boolean
inputs
and get as an output the question of
whether or not there's a tour of a
certain
length and that would in other words
briefly what you would say in that case
is that the problem has
small circuits polynomial size circuits
now we know that if p is equal to np
then
in fact these problems will have small
circuits
but what about the converse could a
problem have small
circuits meaning that an
algorithm tailored to any particular
size could work well
and yet not be a polynomial time
algorithm that is you couldn't write it
as a single uniform algorithm good for
all sizes
just to clarify small circuits for
problem of particular size or even
further constraint
small circuit for a particular
no for all the inputs of that
size is that a trivial
problem for a particular
instance so coming up with an automated
way of coming up with a circuit
i guess that would
be hard yeah
but you know but there's the existential
question
everybody talks nowadays about
existential questions
existential challenges yeah
you could ask the question
does the hamiltonian circuit problem
have a small circuit
for every size for each size a
different small circuit
in other words could you tailor
solutions
depending on the size and get
polynomial size
even if p is not equal to np right
and that would be fascinating if that's
true
yeah what we proved is that
if that were possible then something
strange would happen in complexity
theory
at some level of a hierarchy of
classes which i could briefly describe
something strange would happen so
i'll take a stab at describing what i
mean let's go there
so we have to define this hierarchy
in which the first level of the
hierarchy is p
and the second level is np and what is
np
np involves statements of the form there
exists
a something such that something holds
um so for example
there exists a coloring such
that a graph can be colored
with only that number of colors
or there exists a hamiltonian circuit
there's a statement about this graph
yeah so
np deals with statements of that kind
that there exists a solution
now you could imagine a more complicated
expression which says
uh for all x there exists a y
such that some uh
proposition holds involving both x and y
so that would say for example in game
theory for all
strategies for the first player there
exists a strategy for the second player
such that the second player wins that
would be at the second
level of the hierarchy
the third level would be there exists an
a such that for all b there exists a c
that something holds and you can imagine
going higher and higher in the hierarchy
and you'd expect that the
complexity classes
that correspond to those
different cases would get
bigger and bigger and
harder and harder to solve and what
lipton and i showed was that if np
had small circuits then this hierarchy
would
collapse down to the second level in
other words you wouldn't get any more
mileage by
complicating your expressions with three
quantifiers or four quantifiers or any
number
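the hierarchy karp sketches, and the collapse he and lipton proved, are usually written in the following standard notation (the quantified variables range over polynomially bounded strings and R is a polynomial-time predicate; this formalization is mine, not karp's wording):

```latex
% the polynomial hierarchy: one alternating quantifier block per level
\Sigma_1^p = \mathrm{NP}:\quad x \in L \iff \exists y\; R(x, y)
\Sigma_2^p:\quad x \in L \iff \exists y\,\forall z\; R(x, y, z)
\Sigma_3^p:\quad x \in L \iff \exists y\,\forall z\,\exists w\; R(x, y, z, w)
% the karp--lipton theorem: if np has polynomial-size circuits,
% the whole hierarchy collapses to its second level
\mathrm{NP} \subseteq \mathrm{P/poly} \;\Longrightarrow\; \mathrm{PH} = \Sigma_2^p
```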
i'm not sure what to make of that
exactly well i think it would be
evidence that
that np doesn't have small circuits
because
something because something so bizarre
would happen
but again it's only evidence not proof
well yeah
that's not even evidence
because
you're saying p is not equal to np
because something bizarre has to happen
i mean
that's proof by the
lack of bizarreness
in our science but it seems like
um it seems like just the very notion of
p equals np would be bizarre so any way
you arrive at
there's no way you have to fight the
dragon at some point
yeah okay well anyway for whatever it's
worth that's
what we proved awesome so
so that's a potential space of open
interesting problems
yeah let me ask you about the this other
world
that of machine learning of deep
learning
uh what's your thoughts on the history
and the current progress
of machine learning field that's often
progressed sort of
separately as a space of ideas and space
of people
than the theoretical computer science or
just even computer science world
yeah it's really um very different from
the theoretical computer science world
because
yeah the results about
algorithmic performance tend to be
empirical
it's more akin to the world of sat
solvers where we
observe that for formulas arising in
practice
the solver does well so it's of that
type it's
where we're moving into the empirical
evaluation of algorithms now it's clear
that there have been huge successes
in um image processing
robotics natural language processing
a little less so but across the spectrum
of game playing is another one there
have been great
successes um
and one of those effects is that it's
not too hard to become a millionaire if
you can get a reputation in machine
learning and there'll be all
kinds of companies that will be willing
to offer you the moon because they
they think that if they have
ai at their disposal then they can solve
all kinds of problems
but there are limitations
one is that the solutions that you get
to
supervised learning problems through
convolutional neural networks
uh seem to perform
amazingly well even for inputs that are
outside the training set
um but we don't
have any theoretical understanding of
why that's true
secondly the solutions the the networks
that you get
uh are very hard to understand and so
very little insight comes out
so yeah yeah they may seem to work on
your training set
and you may be able to discover
whether your photos occur in a
different sample of inputs or not
but we don't really know what's going
on we don't know
the features that distinguish the
photographs or the objects
are not easy to characterize
well it's interesting because you
mentioned coming up with a small circuit
yeah to solve a particular size problem
yeah it seems that neural networks are
kind of
small circuits in a way yeah uh but
they're not programs
sort of like the things you've
designed are algorithms programs
right algorithms neural networks aren't
able to
develop algorithms to solve a problem
well they are more of a function
they are algorithms it's just
that they're
sort of well yeah it could be
a semantic question but
there's not an algorithmic style
manipulation
of the input perhaps you could argue
there is
yeah well it feels a lot more like a
function
of the input yeah it's a function
it's a computable function
it's um once you have the network you
can
simulate it on a given input and figure
out the output
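karp's point that a trained network is just a fixed computable function you can simulate on any input is easy to make concrete; a toy forward pass, where the weights below are arbitrary illustrations rather than anything trained:

```python
def forward(x, layers):
    """evaluate a small fixed feedforward network.

    each layer is a (weights, biases) pair; a relu nonlinearity is
    applied between layers and the last layer's output is returned
    as-is. once the weights are fixed, this is simply a function of
    its input, however opaque its internal features may be.
    """
    for i, (w, b) in enumerate(layers):
        x = [sum(wij * xj for wij, xj in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]  # relu on hidden layers
    return x
```

for example, a two-layer network with hand-picked weights can compute xor of two binary inputs, yet nothing in the weight matrices announces "xor", which is karp's interpretability point in miniature.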
but what you you know if you're if
you're trying to recognize
images then you don't know what features
of the image are really
being
determinant of what the circuit is
doing the circuit is
sort of a very intricate and
you know it's not clear that
the simple characteristics
that you're looking for
the edges of the objects or whatever
they may be
are emerging from the structure
of the circuit
well it's not clear to us humans but
it's clear to the circuit
yeah well right i mean uh
it's not clear to sort of the um
the elephant how the human brain works
but it's clear to us humans we can
explain to each other
our reasoning and that's why the
cognitive science the psychology field
exists
maybe maybe the whole thing of being
explainable to humans is a little bit
overrated
well maybe yeah i guess i you know you
could say the same thing about our brain
that
when we perform acts of cognition we
have no idea how we do it really
we do though i mean for
at least for the visual system the
auditory system and so on we do
get some understanding of the principles
that they operate under but
uh for many deeper cognitive tasks we
don't have that
that's right so let me ask yeah
you've also been doing work on
bioinformatics
does it amaze you that the fundamental
building blocks
so if we take a step back and look at us
humans
the building blocks used by evolution to
build us intelligent
human beings is all contained there in
our dna
it's amazing and what's really
amazing
is that we are beginning
to learn how to edit
dna which is very
very fascinating this
this ability to
take a sequence find it in the genome
and
do something to it i mean that's really
taking our biological systems towards
the world
of algorithms yeah but it raises a lot
of questions
um you have to distinguish between doing
it on an individual
or doing it on somebody's germ line
which means that all of the
descendants will be affected
so that's like an ethical yeah so it
raises very severe
ethical questions and um
and even doing it on individuals
so there's a lot of
hubris involved that you can assume that
knocking out a particular gene is going
to be beneficial because you don't know
what the side effects are going to be
so we have this
wonderful new world
of gene editing
uh which is you know very very
impressive and it it could be used in
agriculture it could be used in
medicine in various ways um
but very serious ethical problems arise
what are to you the most interesting
places where algorithms
sort of the ethical side is an
exceptionally challenging thing that i
think
we're going to have to tackle with all
of uh
genetic engineering but on the
algorithmic side there's a lot of
benefit that's possible
so is there uh areas where you see
exciting possibilities for algorithms to
help model
optimize study biological systems
yeah i mean we we can certainly
analyze genomic data to figure out
which genes are operative in the cell
and under what conditions and
which proteins affect one another
which proteins physically interact
we can sequence proteins and modify them
um is there some aspect of that that's a
computer science problem
or is that still fundamentally a biology
problem
well it's a big data it's a statistical
big data problem for
sure so you know the biological data
sets are increasing our ability to
study our ancestry to study the
tendencies towards disease to
personalize treatment according to
what's in our genomes and what
tendencies for disease we have
to be able to predict what troubles
might come upon us in the future and
anticipate them to understand
for a woman whether her proclivity for
breast cancer is strong enough
that she would want to take
action to avoid it
you dedicate your 1985 turing award
lecture to the memory of your father
what's your fondest memory of your dad
seeing him standing in front of a class
at the blackboard drawing perfect
circles
by hand and
showing his ability
to attract the interest of
the motley collection of eighth grade
students that he was teaching
when when did you get a chance to see
him draw the perfect circles
on rare occasions i would get a
chance to
sneak into his classroom and observe
and i think he was at his
best in the classroom i think he really
came to life
and had fun um not only teaching but
but you know engaging in chit chat with
the students and
you know ingratiating himself with the
students
and what i inherited from that
is the great desire to be a teacher
i retired recently and a lot of my
former students came students
with whom i had done research or who had
read my papers or who
had been in my classes and when they
talked about
about me
they talked not about my
1979 paper or my 1992 paper but about
what they came away with from my classes and not
just the details but just the approach
and the
the manner of teaching
and so i sort of take pride in the fact that
at least in my early years as a faculty
member at berkeley
i was exemplary in preparing my lectures
and i always came in
prepared to the teeth and able therefore
to deviate according to what happened in
the class
and to really
really provide a model for the students
so is there advice you could give out
for others on how to be a good teacher
so preparation
is one thing you've mentioned being
exceptionally well prepared but there
are other things
pieces of advice that you can impart
well the top three would be preparation
preparation and preparation
why is preparation so important i guess
it's because it gives you the ease to
deal with any situation that comes up in
the
in the classroom and uh you know
if you're if you discover that you're
not getting through one way you can do
it another way
if the students have questions you can
handle the questions
ultimately you're also feeling the
the crowd the students of what
they're struggling with what they're
picking up just looking at them through
the questions but even just through
their eyes
yeah and because of the preparation you
can dance
you can
say it another way or give another angle
are there
particular ideas and algorithms in
computer science that you find
were big aha moments for students were
they
for some reason once they got it it
clicked for them and they fell in love
with computer science
or is it individual is it different for
everybody it's different for everybody
you have to work differently with
students some
some of them just don't need much
influence you know they're just
running with what they're doing and they
just need an ear and now and then
others need a little prodding others
need to be
persuaded to collaborate among
themselves rather than working alone
they have their personal ups and downs
so you have to
have to deal with each student as a
human being
and bring out the best
humans are complicated yeah perhaps a
silly question
if you could relive a moment in your
life outside of family
because it made you truly happy or
perhaps because it changed the direction
of your life in a profound way
what moment would you pick i was kind of
a lazy student
as an undergraduate and even in my first
year in graduate school and
i think it was when i started doing
research
i had a couple of summer jobs where i
was able to contribute
and i had an idea
and then there was one particular course
on mathematical methods in operations
research
where i just gobbled up the material and
i scored 20 points higher than
anybody else in the class then came to
the attention of the faculty
and it made me realize that i had some
ability
some ability that was going somewhere
uh you realize you're pretty good at
this thing
i don't think there's a better way to
end it richard was a huge honor thank
you for
decades of incredible work thank you for
talking thank you it's been a great
pleasure and
you're a superb interviewer
i'll stop it thanks for listening to
this conversation with richard karp
and thank you to our sponsors eight
sleep
and cash app please consider supporting
this podcast by going to
eightsleep.com lex to check out their
awesome mattress
and downloading cash app and using code
lex podcast
click the links buy the stuff even just
visiting the site
but also considering the purchase helps
them know that
this podcast is worth supporting in the
future it really is the best way to
support this journey i'm on
if you enjoy this thing subscribe on
youtube review it with five stars on apple
podcast
support it on patreon or connect with me
on twitter
at lexfridman if you can figure out
how to spell that
and now let me leave you with some words
from isaac
asimov i do not fear computers
i fear lack of them thank you for
listening
and hope to see you next time