Transcript
nG2_GhNdTek • Interview: Deepfake Detection and the Future of AI with Hany Farid | Particles of Thought
Kind: captions
Language: en
We have to put our hands back on the
steering wheel and we have to start
getting serious about how do we put
guard rails on the system because what
we know is that if you unleash Silicon
Valley they will burn the place to the
ground to get to the finish line first
and we've got to start putting
guardrails in place. And the thing is
that this is a borderless problem,
right? This can't be an only-US or an
only-EU problem. We have got to start
thinking about this globally
>> and I don't think we are doing that very
[Music]
[Applause]
[Music]
Hello everybody. I sat down with Hany
Farid, one of the leading voices in AI
research. He's a professor at UC
Berkeley where he works on digital
forensics and misinformation, especially
things like deep fakes, image analysis,
and how we perceive fake content. He's
also the chief science officer at GetReal
Labs, a company that focuses on the
authentication of digital media, telling
us what's real out there and what's
fake. So, yeah, he's busy, but he made
time for us. This isn't your standard AI
interview, cuz I know y'all have heard
too much about AI and you're tired of
that same old conversation. This time,
we talk about how AI actually works and
how he can detect the fake stuff. And of
course what we all want to know, what he
thinks the future holds. If you enjoy
the show, I'd love your help spreading
the word. So, take a moment to rate,
review, or leave a comment. And don't
forget to subscribe so you don't miss
anything. Your support really helps us
reach new audiences. So, let's go.
Hany, welcome to Particles of Thought.
>> It's great to be here, Hakeem.
>> All right, man. So, look, this isn't
going to be like your normal interview
because this first question I got for
you
>> Yeah.
>> is something that the people need to
know.
>> Good.
>> All right. So,
>> I can't tell you I'm a little nervous
already, but go ahead.
>> All right. So, listen. You're an
expert on AI
>> and you specialize in identifying deep
fakes.
>> So, there have been three occurrences
in recent history that have everybody in
a tizzy.
>> Yeah.
>> The first one was the movie The Matrix.
>> Yeah. The second one is when physicists
came up with their holographic theory
which seems to indicate that life could
be a simulation.
>> Yeah.
>> All right. So, and the third one now is
AI.
>> Yeah.
>> And the question everyone has is
>> is reality a deep fake?
>> And since you are an expert in
uncovering deep fakes, are we code, man?
Is reality what we think it is? And if
it is a deep fake, would a deep fake
expert have an idea of how to uncover
that?
>> No. I mean, if this was all a
simulation, my job would be a lot
easier, honestly, because then you sort
of give up, right? There's there's no
more reality anymore, I think. But this
is sort of where we are, Hakeem.
>> Yeah.
>> We are now questioning everything,
not just what we see on social media,
but our existence. We are starting to
question our existence today and
yesterday and in the future
>> and it does feel to me, and I've been
thinking about these problems for 25
years, that we went from one kind of
question to another. If I was having a
podcast with you 25 years ago, your
questions would have been of the form,
hey, let's talk about Photoshop and how
people splice together two images, and
maybe we would talk about the Lee Harvey
Oswald photo or the moon-landing photo.
Today, a perfectly reasonable question
is, let's talk about the nature of
reality. And that's happened in 25
years. Wow. Imagine the next 25 years,
the kind of conversations we're going to
be having. So, to get back to your
question, I don't know. I honestly don't
know. And I also don't know where this
latest AI
boom is taking us. And I I don't think
anybody knows honestly. But here's what
I can tell you.
>> If you look at the personal computer
revolution, that took roughly 50 years
to unfold, which at the time felt really
fast.
>> Where do you start from? Like
>> I'm going to start in, let's start in
1945, and let's go to about 2000, when
more than half of US homes had
a personal computer
>> Then you look at the internet
revolution, right? From the beginning of
the HTTP or HTML protocol, Tim
Berners-Lee, to about half the world's
population being online, that was 25
years. The mobile revolution was less
than 10 years. And the AI revolution,
well, the AI revolution of course
started in 1950, but the one we are
experiencing now has been about 2 to 3
years. We have gone from 0 to 100 miles
an hour, where we are now talking about
existential threats of AI.
We're talking about 50% of jobs being
eliminated in the next 5 years. We are
talking about general artificial
intelligence. We're talking about the
Terminator. And we would not have had
this conversation 5 years ago. And by
the way, on top of all of that, we
don't even know what's real anymore
because we consume all of our content
from online sources. Online sources have
been polluted for a while and are
getting more polluted thanks to AI, and
suddenly our whole notion of reality is
up in the air, and that is
unsettling. I think people feel
unanchored and I don't
>> I don't know how to help everybody with
that.
>> Yeah. All right. Well, listen man, let's
step back
>> because everybody listening may not know
what's even meant by AI, what's meant by
artificial general intelligence, what's
meant by deep fakes. So give us some
bearing and foundation and define AI and
I will tell you I started in
>> big data
>> back in 2008.
>> Yeah.
>> And I was doing what was called machine
learning, right? Classification, I mean
supervised and unsupervised learning, and
I was working on data sets in astronomy
and astrophysics. And it was for the
Vera Rubin telescope which has just
debuted its images this year.
>> And I was being told, hey, you're
building the software infrastructure for
analyzing that data now.
>> Yeah.
>> No, I wasn't.
>> No, I wasn't cuz AI didn't exist. So
>> good.
>> I think of machine learning as just
statistics. Define AI for us. Define
deep learning. I mean deep fakes and
define general intelligence.
>> All right. Let's start with AI because I
think you raised a really good point
which is that everything we are talking
about today is in fact not AI. It is
machine learning. It is statistics.
Almost everything we see, from the
ChatGPTs of the world to deep fakes, which
I'll define in a minute to all of the
things that we are seeing impacting our
day-to-day lives is machine learning. So
let me rewind to 1950. 1950, give or
take, was when Alan Turing and John
McCarthy conceived of the term, or the
concept, of AI. And the idea, quite bold
then given where computers were, was:
can we imbue intelligence, the way we
conceive of intelligence in humans,
into machines?
>> And the idea was, yes, we should be
able to do this. And I think it's
because when we think about our own
brain and how we operate, it seems like
it should be straightforward. But of
course it's not; talk to any
neuroscientist and they'll tell you
that. But for a long time, for about 30
years, the field of AI struggled to be
relevant
>> because there was no path to creating
human intelligence in a machine
>> And then around the 1980s the field
splintered, and the field of machine
learning came about, which was what you
were just referring to. And here the
idea is: we are not going to imbue
intelligence into machines, we are going
to learn it from data. So we are simply
going to present to a machine a bunch of
data and have it infer the rules and the
logic that we think that we have. And
frankly, it failed for about 20 years.
So from 1980 to about 2000 the field
really struggled for relevance, and it
struggled for two reasons. One, there
was not enough data, because there was
no internet; and two, there was not
enough computing power, because there
was no Nvidia.
>> So what happened at the turn of the
millennium right the internet rose. What
did the internet give rise to? Lots of
things, but one of the things it gave
rise to is a boatload of data
>> from us. Right? The ultimate irony, by
the way, is if AI comes for us, we
created it by giving it all of our
data to learn from.
>> So around the turn of the millennium,
around 2000, huge rise in data being
pushed online and then of course a huge
rise in computing power with things like
Nvidia.
>> So define Nvidia for the audience for
people who don't know.
>> Good. So Nvidia is the company of course
that makes GPUs, graphical processing
units. This is the core computing chip.
Yeah. That is particularly good at the
types of computation you need to do
machine learning.
>> And suddenly we saw this explosion. It
really started around 2015, 2016, and in
the last decade has really exploded with
phenomenal amounts of data, phenomenal
amounts of computing power. And you got
to give credit to the statistics and
machine learning community. Some real
insights on how to do machine learning
and how to do the types of things that
you did. You were way ahead of the curve
in the early aughts, and then we
got better at doing that because we had
some insights from statistics and
physics and mathematics and computer
science and engineering. So everything
you hear today that we talk about as AI
is, in my opinion, machine
learning. We've sort of conceded that
we're just going to call it AI because
that's just the term that has gotten
adopted. But underneath it, understand
that everything that is happening is
pattern matching.
>> That you take a bunch of data and you
learn the rules implicitly, not
explicitly from the data.
>> And that's both very powerful and very
dumb. Right? It's powerful because you
can learn complicated patterns, but it's
dumb because you don't know what the
rules are. Like it's not learning that
gravity is 9.8 m/s². It's
simply learning that if you film
something falling, this is how it will
fall. But it doesn't know what the
physics of it are.
>> Right? It simply knows this is where the
ball will be at any given moment. Okay?
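That falling-ball point can be made concrete. The sketch below is my own illustration, not anything shown in the interview: a "learner" fits a curve to three observed positions of a dropped ball and then predicts where the ball will be, without ever being told that g is 9.8 m/s². The constant 4.9 (g/2) simply falls out of the data as a pattern.

```python
def solve3(A, b):
    """Gaussian (Gauss-Jordan) elimination for a 3x3 linear system."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_quadratic(ts, ys):
    """Fit y = a*t^2 + b*t + c through three (t, y) observations."""
    return solve3([[t * t, t, 1.0] for t in ts], ys)

# Three observed positions of a ball dropped from rest (metres fallen).
ts, ys = [0.0, 1.0, 2.0], [0.0, 4.9, 19.6]
a, b, c = fit_quadratic(ts, ys)

def predict(t):
    """Where the ball will be at time t: pure pattern, no physics."""
    return a * t * t + b * t + c

print(round(a, 3), round(predict(3.0), 3))  # -> 4.9 44.1
```

The fit recovers the coefficient 4.9 and predicts the next position correctly, yet nothing in the code "knows" any law of gravity, which is exactly the distinction being drawn above.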
>> Now, so that's AI/Machine Learning.
>> Um, artificial general intelligence is a
term that I don't think anybody knows
how to define, or at least we don't
agree on a definition, but I'm going to
give you my definition of it. So,
typically, historically, when you've
looked at machine learning and AI
systems, they've been bespoke. So,
you'll have a system that does medical
imaging. You'll have a system that does
drug discovery. You'll have a system
that drives self-driving cars.
>> The idea of AGI, artificial general
intelligence, is that it does
everything.
>> What humans do, right?
>> Um
>> or stem cells
>> or that's right. Exactly. That's a good
example of it. Right. So, so is that a
year away, 5 years, 10 years? I don't
know. I don't think anybody knows. But
there's a lot of speculation about that.
I would argue, by the way, that we have
some notion of AGI, because if you go
over to your favorite large language
model, like ChatGPT or Claude, you can
ask it a lot of different questions,
about physics, about medicine, about
computer science, about televisions,
anything. And so is that AGI? I don't
know. I suppose it depends on how you
define it
>> That takes me to the Turing test,
right? That was the big... So what was
the Turing test? Is it if you can't tell
the difference between when you're
talking to a machine or
>> Yeah. By the way, you got to have
respect for Alan Turing.
>> Oh, absolutely.
>> In the 1950s, this guy was thinking
about things that were really outrageous
to be thinking about at the time.
>> Maybe he was a time traveler.
>> That's right. That's really funny. Don't
don't give anybody any ideas by the way.
>> Okay. So what is the Turing test? So
when Alan Turing first conceived of this
notion of imbuing intelligence in
machines, he came up with a mechanism to
determine if you have solved that
problem. And the idea was that you would
have two computer screens, and behind
one computer screen was a human and
behind the other was an AI
system. You of course couldn't see what
was behind it. And you were allowed to
interact with it with a keyboard. Okay?
And you can ask it questions,
you can have a conversation the way you
and I are having right now. And if you
could not tell the difference, then that
machine has passed the so-called Turing
test and it has what probably today we
would call AGI. Okay. So now we've done
AI and we've done AGI, right? Okay. So
let's talk about deep fakes which is
this sort of sliver of all of this.
>> Yeah.
>> So deep fakes is an umbrella term for
using machine learning and AI to create,
whole cloth, images, audio, and video of
things that have never existed or happened.
So for example, I can go to my favorite
deep fake generator and say give me an
image of Hakeem in a studio doing a
podcast with Professor Hany Farid
>> and actually it would do a pretty good
job because you have a presence online.
I have somewhat of a presence online. It
knows what we look like and it would
generate an image that's not exactly
this but something like that. Or I can
say please I by the way I still say
please when I ask AI for for things. One
of my students told me that this is a
good idea because when the AI overlords
come they're going to remember you were
polite to them. Ah,
>> I actually really like this advice.
>> Wait a minute. So, I read an article.
>> Yes. It cost tens of millions of
dollars.
>> The energy, ultimately. Yes. Just
saying please and thank you. I still do
it, by the way. And even in my head
right there, when I was asked, I still
in my head say please.
>> Well, listen. I have AI connected to my
AI, right? And so my AI corrects my AI
prompts
>> to proper grammar and it's like
>> please. It puts please in there.
>> I know. And it does cost tens of
millions of dollars for that extra
token. Okay.
>> So, I will ask it for an image of a
unicorn wearing a red clown hat
walking down the street at Times Square,
and it will generate that image. I
can ask it to generate an audio of
Professor Hany Farid saying the
following, right?
>> Um I can generate a video of me saying
and doing things I never did. And you
can clearly see the power of that
technology from a creative perspective.
If you and I are having a conversation
and in post we said something we didn't
mean to, we can just fill it in with AI
now.
>> Well, here here's the thing that makes
me you just mentioned how we're only two
three years into this. So, however good
it is now, you know, this is the worst
it will ever be,
>> right?
>> So, I can tell
you, by the way, how good it is.
>> So, in addition to being trained as a
computer scientist and applied
mathematician, I've been somewhat
trained as a cognitive
neuroscientist. And we do perceptual
studies. So what we do is we recruit
participants. We show them images, audio
clips and video. And we tell them half
of the things you're going to look at
are real. Half of the things are AI
generated. We explain to them what AI
generated is. We give them examples of
that.
>> And for images as of last year, people
are roughly at chance at distinguishing
a real photo from an AI generated photo.
>> So what you mean by that is if you had
a monkey behind a
keyboard,
>> flipping a coin.
>> Flipping a coin.
>> Yeah. Yeah. The monkey's probably
better than you. By the way, I'm
going to go off and guess. Um, so with
audio, so we play a clip of somebody
speaking like you and then we play an AI
generated version. They're slightly
above chance, at like 65%.
>> On images, at chance; on audio,
slightly better than chance.
>> And video, they're a little bit better,
but all of those trends are going
towards chance. So here's what we know.
everything in the next 12 months, 18
months, 24 months, I don't know what the
number is,
>> it will be indistinguishable to the
average person online, right? And that
is
>> that is a weird world we're living in
because, first of all, the vast
majority of Americans now get the
majority of their
information from online sources, and
unfortunately from social media too.
>> And because it is so easy to
create this content, understand all this
is a text prompt away. I type,
"Please give me an image of this,
generate this audio, generate this
video." There are dozens of services
that will do this extremely inexpensively
or for free. And you can carpet bomb the
internet with fake images of the
conflict in uh Gaza.
>> Fake images.
>> I have seen them too. Fake images of the
flood in Texas, fake images and video of
the fires in, you name it, across the board,
right? Fake images of people stuffing
ballot boxes. Now we have a threat to
our democracy.
>> Wow. So suddenly our sense of reality,
coming back to your first very good
question, is up in the air, because I can
create whatever reality I want and
understand that there's sort of three
things happening here when we talk about
deep fakes. There's the creation of it.
That's what we've been talking about.
>> There's the distribution which we
democratized 20 years ago. So anybody
can
>> publish to the world and that's very
powerful and very terrifying because
there's no editorial standards on social
media. And then there's the
amplification that we have become so
polarized as a society that when you see
things that conform to your world view,
you are more than happy to click like,
reshare. And now you have creation,
distribution, amplification.
>> Wow.
>> That's the ball game,
>> right? That's the ball game for
spreading massive lies, conspiracies,
and disinformation campaigns that affect
our global health, our planet's health,
our democracy, our economy, everything.
Everything. So let's get into how these
fakes are generated. So start with
images.
>> Good. So let's start with images because
in some ways it's the easiest one, but
all of these have a similar theme. And
one of my favorite techniques for
generating images is called a generative
adversarial network, or a GAN. And here's
how it works.
>> Wait a minute. Wait a minute.
Adversarial.
>> Adversarial.
>> So that means that you're fighting your
computer.
>> Two two computer systems are fighting
each other. And this is sort of the
genius of this technique. So here's how
it works.
>> You have two systems.
One system's job is to make an image of
a person or a landscape or whatever you
want. Yeah. And so what it does, it
starts by, this is literally true, it
just splats down a bunch of random
pixels. So I say, generate an image of
a person and it says, "Okay, here's a
bunch of..." So think of the monkeys at
the keyboard typing randomly; let's see
if this is Shakespeare,
>> right?
>> And then it takes that image and it
hands it to a second system and it says,
is this a face?" And that system has
access to millions and millions of
images that it scraped from the internet
that are faces.
>> I see.
>> And that system says that thing that you
generated doesn't look like these things
over here.
>> And it gives the feedback to the
generator and it says, "Nope, try again.
>> Modify some pixels. Send it back to
what's called the discriminator. Is it a
face? No, try again." And they work in
this adversarial loop. So it's like
somebody's checking your homework.
>> But it it it seems like it could get
stuck never getting to a face,
>> You would think. And that's what's
amazing about the GANs, is that they
converge.
>> They converge.
>> And part of that is the way they've
been trained. But the
genius of this is that the generator is
not very smart
>> because all it's doing is modifying
pixels. And the discriminator is
actually quite simple. It's simply
saying, does this thing look like these
things? And because you pit them against
each other in this adversarial game,
this sort of amazing thing happens out
the other side.
>> So here's the question. On average,
how many iterations does it take? And
then how much time does that translate
to?
>> Yeah, that's a real great question. So
typically the time is in seconds. So
there's two phases. First you train
the GAN; that's a really long process.
But then what we call inference, which
is that run this thing, it happens in
seconds. And by the way, that is hundreds
of thousands of iterations. Wow. But
it's on a GPU, which is very powerful
and very fast. And then there's these
tricks to make it even faster. You start
with small images and then you make them
bigger over time. So there's these
tricks, but it is literally seconds to
make that image.
>> And what the brilliance of that is the
two systems are competing with each
other.
>> And then this thing that seems like
intelligence comes out even though it's
not. If you think about those two
individual components, right?
>> They're pretty basic, but then you have
this like emergent behavior almost. It's
like you know how to generate images of
people. That's amazing.
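The loop described above can be caricatured in a few lines. This is a deliberately simplified sketch of my own, not a real GAN: in a real GAN both the generator and the discriminator are neural networks trained together with gradient descent, whereas here the "discriminator" is a fixed distance-to-examples check and the "generator" just keeps whichever pixel tweaks fool it better.

```python
import random

random.seed(0)  # deterministic toy run

# Stand-in "real face" data: a face here is just four pixel values.
REAL_FACES = [[0.2, 0.8, 0.5, 0.9], [0.3, 0.7, 0.4, 1.0]]

def discriminator(img):
    """Score how face-like an image is: distance to the nearest real example.
    Lower is better. (In a real GAN this judge is itself a trained network.)"""
    return min(sum(abs(a - b) for a, b in zip(img, face)) for face in REAL_FACES)

def generate(steps=5000):
    img = [random.random() for _ in range(4)]        # splat down random pixels
    score = discriminator(img)
    for _ in range(steps):
        cand = [p + random.gauss(0, 0.05) for p in img]  # modify some pixels
        cand_score = discriminator(cand)
        if cand_score < score:                       # "is it a face? no, try again"
            img, score = cand, cand_score
    return img, score

img, score = generate()
print(round(score, 3))  # near 0: the random pixels converged on a "face"
```

Even this crude version shows the shape of the idea: neither side is smart on its own, but the keep-it-if-it-fools-the-judge loop steadily drives random pixels toward the "real" examples.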
>> So let's have a little fun. I understand
good
>> that you brought me some fakes
>> and some real images
>> to put to the test. Good
>> to see if I can discern the difference.
>> So I'm going to play for you a
couple of audios. Before I do this, let
me say I've been doing this for a long
time and I'm pretty good at
it. I'm pretty good at what I do, and I
created three audio samples. I'm going
to play them for you.
>> Wait, are you allowed to say that that
you're you're good at what you do? I'll
say that. Hany is really good.
>> I said pretty good, by the way. Yeah, he's
amazing.
>> But this is amazing. This is this is
this is a true story, by the way. So, I
made three audio clips for you of me
talking and you and I have been talking
for a little while, so you now know what
my voice sounds like.
>> And uh I got off the plane and I was in
the car coming over here and I wanted to
make sure they worked. And I played all
three of them and I couldn't tell which
one of me was real or fake. I wasn't
100% sure.
>> Wow.
>> And I do this for a living and it's my
voice,
>> right?
>> So, okay. So, that is Okay.
>> So, wait a minute. Which AI did you use?
Is this something that you created or
something generally available?
>> So here's the thing you have to
understand about AI is this is so
readily available. So here's what I did.
I went to a service. It's a commercial
service. Um I uploaded I think it was
about 3 minutes of my voice.
>> I said, please clone my
voice. And it clones my voice. And by
what I mean by that is that it learns
the patterns of my voice, what I sound
like, the intonation, my cadence, how
fast I speak, where I put the pauses.
And then I can simply type and have it
say anything I want to say.
>> And so I'm going to have you listen
to three sentences. Okay.
>> Um and one of them is fake. I'm going to
give you a hint. One of them is fake and
two are real.
>> Okay. And let's see what you can
do. Okay. Here we go.
>> And in fairness, this is not the best uh
speaker, but okay.
>> Are there guardrails in our law?
>> Ah, good. Uh, so first of all, when I
went to do this service, I
uploaded my voice and there's a button
that says, "Do you have permission to
use this person's voice?" And and I did
because it was my voice, but I can
upload anybody's voice and click a
button.
>> The laws are very complicated and they
actually vary state-to-state and of
course internationally. Wow.
>> So, there are almost no guardrails on
grabbing people's likeness. And even if
there were,
>> there's,
>> you can still do it anyway.
>> There's there's no stopping this.
There's no stopping it. Okay. All right.
number one. Oh, and by the way, the
three, uh, they're part of a talk I gave
recently on deep fakes. So, you'll hear
a consecutive thing. Okay, ready?
>> And if you invite me back next year,
almost certainly everything will have
changed. Uh the nature of creation of
deep fakes, the risk of deep fakes.
>> That's the deep fake right there, man.
>> Is changing.
>> Hold on. Hold on. That was good.
>> It is a fast-moving field and we have to
start thinking seriously and carefully
about the threat of misinformation.
>> Okay, good. And one more. We are living
through an unprecedented time where we
are relying more and more on the
internet for information. For
information that affects our health, our
societies, our democracies, and our
economies.
>> Can I hear number one again?
>> Yep. You're a little less sure than you
were a minute ago.
>> Yeah.
>> And if you invite me back next year,
almost certainly everything will have
changed. Uh the nature of creation of
deep fakes, the risk of deep fakes, and
the detection of deep fakes is changing.
I think it's the first one still.
>> I got it right.
>> Yeah.
>> Yeah. I struggled with it, by the way.
Honestly, I couldn't remember. I'm from
the future.
>> You're the time traveler, it turns out.
>> Wow. Well, you know what? So, I
started my media work in audio, right?
Being a voice actor. And and very
quickly, I was able to pick up on music
and commercials and movies where they
were dropping in uh you know, pickups.
The reason I figured it out is
there's a difference in the background
noise. Like one had more reverb than the
other. Um which is how I I I then
remembered it. But you got to admit all
three of them sound like me.
>> Oh, they all do. They all sound like
you.
>> Oh, and by the way, so not only
>> Let me tell you what has gotten me
recently is I'll get these uh social
media announcements. Oh, there's a new
song by Tupac and Eminem. And I start
listening to it and halfway in I'm like,
no, this is this is Yeah. But at the
beginning,
>> it's coming from music. It's coming from
music as well, by the way. So, this is
one of my favorite videos, by the way.
Let me just show this to you.
>> And if you invite me back next year,
almost certainly everything will have
changed. The nature of the creation of
deep fakes, the risk of deep fakes,
real.
And your mouth is doing it. I don't
speak Japanese.
Doesn't it sound like me?
>> Yes, it does.
>> I know. So, now I can do full-blown
video.
>> Any language. Any language. By the way,
here's what's really cool about this.
Here's a really cool application. I like
foreign films a lot, but I can't stand
>> bad lip syncing. It makes me crazy. But
you don't need it anymore.
>> You don't need it.
>> We're now going to make videos in any
language you want, and it's going to be
perfect.
>> What? How did you do that? How
>> This is also commercial software. Um,
you upload a video, say that you have
permission to do it, and you say, "Please
translate this into Japanese, Korean,
Spanish, French, German, anything you
want."
>> It's amazing.
>> That is nuts. But the fact that the
mouth changes to voice the words,
>> by the way, the way this works, this is
really amazing, is you upload a video of
you talking. And what it does is it
takes the audio and transcribes it. So,
it goes from audio to words
>> and then it translates from English to
Spanish
>> and then it synthesizes a new audio in
Spanish and then it puts that audio back
into the video. Every one of those is an
AI system by the way and it does that in
about 3 minutes.
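The four-stage pipeline he describes, audio to words, words to translated words, translated words to new audio, and new audio back into the video, can be sketched as plain function composition. Every function name below is a placeholder of my own invention, not any real product's API; real systems put a speech-to-text model, a translation model, a voice-cloning synthesizer, and a lip-sync model at these four seams, while the toy bodies here exist only so the flow runs end to end.

```python
def transcribe(audio):
    """Stage 1: speech-to-text. Stand-in for an AI transcription model."""
    return audio["spoken_words"]

def translate(text, target):
    """Stage 2: machine translation. Toy two-word dictionary, not a model."""
    toy_dict = {("hello", "es"): "hola", ("world", "es"): "mundo"}
    return " ".join(toy_dict.get((word, target), word) for word in text.split())

def synthesize(text, voice):
    """Stage 3: text-to-speech, keeping the original speaker's cloned voice."""
    return {"spoken_words": text, "voice": voice}

def redub(video, new_audio):
    """Stage 4: put the new audio (and, in real tools, new lip sync) back."""
    return {**video, "audio": new_audio}

video = {"frames": "...", "audio": {"spoken_words": "hello world", "voice": "host"}}
words = transcribe(video["audio"])
dubbed = redub(video, synthesize(translate(words, "es"), video["audio"]["voice"]))
print(dubbed["audio"]["spoken_words"])  # -> hola mundo
```

The point of the composition is the one made above: each seam is a separate AI system, and swapping the target language is just a parameter change.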
>> Wow.
>> And it's amazing. So, if you wanted to
take this podcast
>> and distribute it in Spanish, French,
German,
>> Yeah.
>> upload it.
>> Man, I'm just hitting India, China,
Southeast Asia.
>> Two and a half billion people. Done.
Done. 10 cents each, we're good to go.
>> This podcast is from the producers of
Nova. Nova is supported by Carlisle
Companies, a manufacturer of innovative
building envelope systems. With
buildings responsible for over a third
of total energy use and energy demand on
the rise, Carlisle's mission is to meet
the challenge head on. Carlisle's energy
efficient solutions are built to reduce
strain on the grid. For example,
Carlisle's UltraTouch Denim Insulation,
made from sustainable recycled cotton
fibers, delivers high thermal performance
for improved energy efficiency while
being safe to handle and easy to
install. Made from 80% recycled denim,
UltraTouch diverts nearly 20 million
pounds of textile waste from landfills
each year. Operating nearly 100
manufacturing facilities across North
America, Carlisle is working towards a
more sustainable future. Learn more at
carlisle.com.
>> We have systems now to detect AI text,
AI audio, AI images, AI video. Give us
the nuts and bolts of how you detect
these fakes.
>> My bread and butter as an academic, and
also now as the chief science officer of
GetReal, is to
build technology to distinguish what is
real from what is fake. Okay. And so
here there is some AI, and there are
some more classic techniques. I want to
talk, if I may, about my favorite one,
and I think it may
resonate with you as a physicist. So
what you have to understand about um
generative AI deep fakes is that it is
fundamentally learning how to generate
images, audio and video by looking at
patterns in billions and billions of
images, audio and video.
>> Okay?
>> But it doesn't know what a lens is. It
doesn't know what the physics of the
world is. It doesn't know about
geometry. It doesn't know about the
physical world. It's not recreating this
thing that we you and I are in right
now.
>> Right?
>> Take any image outdoors. It's a sunny
day here in Virginia; go outdoors, and
because the sun is shining, you will see
shadows all over the place.
>> Yeah.
>> And those shadows have to follow a very
specific law of physics, which is that
there's a single dominant light source,
the sun,
>> and it is giving rise to all those
shadows.
>> So, we have geometric techniques that
can say given a point on a shadow and
the part of the object that is casting
it, tell me where the light is that's
consistent with that. And we can do that
not once, not twice, not three times,
but as many times for shadows that we
find,
>> for every shadow in the
>> every shadow. And if we find that they
are not converging on a single light
source, the sun, then we have a
physically implausible scene.
>> Yeah,
>> it seems like that would be easy for AI
to figure out.
>> You would think, but here's
why it can't. Because what I described to
you is a three-dimensional process
that's happening in the
three-dimensional world, but the AI
lives in 2D.
>> It lives in a two-dimensional world. And
reasoning about the three-dimensional
world is not something it does. Now, it
can sort of fake it pretty well the way
artists fake it. Right.
>> Right. Lots of things in paintings are
not physically plausible, but our visual
system doesn't really care. We're
looking at a pretty picture.
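The shadow check he describes can be sketched in 2D. This is a toy version of my own, not GetReal's implementation: each pair of points, one on an object and one on the shadow it casts, constrains the light to lie on the line through both, and all of those lines must agree on a single light position. The toy also assumes the light sits at a finite point; the sun is effectively at infinity, and real tools handle that case too.

```python
from math import hypot

def cross(a, b):
    """3-vector cross product. In homogeneous coordinates it doubles as
    "line through two points" and "intersection point of two lines"."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def light_is_consistent(pairs, tol=1e-6):
    """pairs: (object point, its shadow point) in 2-D scene coordinates.
    Each pair constrains the light to the line through both points; every
    line must pass through one common light position."""
    lines = [cross((px, py, 1.0), (sx, sy, 1.0)) for (px, py), (sx, sy) in pairs]
    x, y, w = cross(lines[0], lines[1])   # intersect the first two lines
    lx, ly = x / w, y / w                 # candidate light position
    # every remaining line must pass (nearly) through that same point
    return all(abs(a * lx + b * ly + c) / hypot(a, b) < tol
               for a, b, c in lines[2:])

# Scene lit by one source at (0, 10): every shadow agrees, so it's consistent.
scene = [((2, 2), (2.5, 0)), ((5, 3), (50 / 7, 0)), ((8, 1), (80 / 9, 0))]
print(light_is_consistent(scene))   # -> True

# Nudge one shadow tip: the lines no longer meet. Physically implausible.
forged = scene[:2] + [((8, 1), (9.5, 0))]
print(light_is_consistent(forged))  # -> False
```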
>> So, that's one of my favorite
techniques. Um, here's another one that
I love.
>> Um, go outside and, well, you shouldn't
actually do this, but stand on the
railroad tracks. I don't actually advise
doing that. I did this the other day
with one of my students. I'm like, what
are you doing standing on the railroad
tracks? I wanted to take a picture of
railroad tracks. And the reason I want
to take a picture of railroad tracks is
that when you're standing on the
railroad tracks, those railroad tracks
of course are parallel in the physical
world and they remain parallel as
long as the track continues going. But
if you take a picture of it, those uh
train tracks will converge to what's
called a vanishing point. This is a
notion that Renaissance painters have
understood for hundreds of years. And
why is that? It's because when you photograph something, the size on the image sensor is inversely proportional to how far the object is from the camera. So as the train tracks recede, they look like they're converging, right? This is called projective geometry, a vanishing point. It's a very specific geometry.
And this is true of the parallel lines
on the top and bottom of a window, on
the sides of a building, on a sidewalk,
anything where you have a flat surface like this table that we're at.
>> Take a photo of this table and all these parallel lines will converge to a vanishing point.
>> Right?
>> So we can make those measurements in an image. And when we find deviations from that,
>> something is physically implausible. The image is violating geometry.
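A toy version of that measurement: intersect two imaged parallel lines to get the vanishing point, then check how far a third supposedly parallel line passes from it. The coordinates and names here are made up for illustration:

```python
def vanishing_point(seg1, seg2):
    """Intersect the infinite lines through two image segments that are
    parallel in the scene; in a real photo they meet at the vanishing point."""
    (x1, y1), (x2, y2) = seg1
    (x3, y3), (x4, y4) = seg2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

def dist_to_line(pt, seg):
    """Perpendicular distance from a point to the infinite line through seg;
    a large distance for a supposedly parallel line flags bad geometry."""
    (x1, y1), (x2, y2) = seg
    px, py = pt
    num = abs((x2 - x1) * (y1 - py) - (x1 - px) * (y2 - y1))
    return num / ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

# Two rails converging toward (320, 100) in image coordinates.
left_rail = ((100, 480), (210, 290))
right_rail = ((540, 480), (430, 290))
```

A third scene-parallel line (a fence, a window edge) should pass through, or very near, the same vanishing point; if it doesn't, the image is violating projective geometry.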
>> Okay.
>> All right. Let me move to a different side of it. This is actually one of my favorite techniques: when you go to your favorite AI system and you ask it to make an image, it will create all the pixels, but then it has to bundle them up into a JPEG image or a PNG image or some format,
>> right?
>> And it actually does that in a very
specific way. And so here's an analogy.
When I buy something from an online
retailer, there's the product I get, but
that product is also packaged in a box.
Yes. And different retailers have
different ways of doing it. Apple has a
very specific way of doing beautiful
packaging, right? Other retailers, you
know, just shove it in a box and send it
off.
>> So the packaging: when I create an image on OpenAI or on Anthropic or on Midjourney, all these different generators, they package it up differently.
>> Um, and it's different than the way my phone packages up the pixels, and it's different than the way Photoshop packages up the pixels. So when we get an image, or an audio or video for that matter, we can look at the underlying package and ask: is this packaging consistent with OpenAI or Anthropic or a camera or whatever it is?
And so
>> So it doesn't have package emulators?
>> It does not. It doesn't know about it, because it doesn't care. Why would you care about it? I'm probably the only person in the world who cares about this. You certainly don't care how it's packaged, because what do you do? You open the package, you throw it away, and you've got your product, the image, right?
So, we can look at the packaging.
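You can see a slice of this "packaging" yourself by walking a JPEG's marker segments: different encoders emit different segments in different orders. A toy parser over a hand-built header (real forensic comparison goes much deeper, e.g. into quantization tables; the byte string below is synthetic):

```python
import struct

MARKERS = {0xD8: "SOI", 0xE0: "APP0", 0xE1: "APP1", 0xDB: "DQT",
           0xC0: "SOF0", 0xC4: "DHT", 0xDA: "SOS", 0xD9: "EOI"}

def jpeg_marker_sequence(data):
    """Walk the JPEG marker stream and return the sequence of segment names.
    Encoders differ in which segments they write and in what order, so this
    sequence acts as a coarse 'packaging' fingerprint."""
    seq, i = [], 0
    while i < len(data) - 1:
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        seq.append(MARKERS.get(marker, hex(marker)))
        if marker in (0xD8, 0xD9):          # standalone markers, no length field
            i += 2
        else:
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            i += 2 + length                 # length includes its own two bytes
        if marker == 0xDA:                  # entropy-coded data follows; stop
            break
    return seq

# A minimal synthetic header: SOI, a JFIF APP0, one quantization table, EOI.
toy = (b"\xff\xd8"
       + b"\xff\xe0" + struct.pack(">H", 16) + b"JFIF\x00" + b"\x00" * 9
       + b"\xff\xdb" + struct.pack(">H", 67) + b"\x00" + bytes(64)
       + b"\xff\xd9")
```

Two files with identical pixels but different marker sequences were packaged by different tools, which is exactly the kind of mismatch worth flagging.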
>> There's a whole other set of techniques. Everything I've described so far is after the fact, right? You wait for the content to land on your desk and you start doing these analyses. There's another set of techniques that are what are called active techniques. So, Google
recently announced that every single
piece of content that comes from their
generators, image, audio, or video will
have what's called an imperceptible
watermark. So, we don't use currency that much anymore, but take a $20 bill out of your wallet and hold it up to the light, and you'll see all kinds of watermarks that make it very difficult to counterfeit.
>> So, what Google has done is they have
inserted an invisible watermark into
images, audio, and video at the point of
creation that says, "We made this."
>> Yeah.
>> And then when I get that piece of
content, I have a specialized piece of
software because we over at Get Real
have a relationship with Google that
says, "Is there a watermark in there?"
It's a signal and you can't see it,
>> right? And the adversary can't see it,
but I can see it. So, that's really
cool. And by the way, if this comes into
the phones, so if Apple decides, we're
going to watermark every single piece of
content that is natural,
>> I've got a signal that is built in,
right?
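As a cartoon of the idea, a watermark is a signal written into the pixels that a human can't see but software can read back. This least-significant-bit toy is purely illustrative; real schemes like Google's are designed to survive compression and editing, which this does not:

```python
def embed_watermark(pixels, bits):
    """Write one watermark bit into the least-significant bit of each of the
    first len(bits) pixel values; each pixel changes by at most 1 out of 255,
    which is imperceptible."""
    out = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return out + pixels[len(bits):]

def read_watermark(pixels, n):
    """Recover the first n embedded bits from the pixel values."""
    return [p & 1 for p in pixels[:n]]
```

Specialized software that knows where to look reads the signal back out; anyone else just sees an ordinary image.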
>> So, we've got lots of different techniques, from things where we rely on third parties like the Googles of the world, to measurements that we can make in an image, a video, or an audio. I'll
give you one of my favorite audio ones,
by the way. So, if you're listening to
this, you won't be able to see us, but
if you're watching this on YouTube, you
will know we're in a really nice studio
>> and there are soft walls around us and
we have really nice microphones. And so,
the amount of reverberation,
>> yeah,
>> that you hear is quite minimal. This audio is going to sound really good because you guys are pros here, right?
But the amount of reverberation is
dependent on the physical geometry
around us, how hard those surfaces are,
and that should be fairly consistent
over an audio,
>> right? But what you see with AI generation is inconsistencies in the microphone and the reverberation, because it's not physically recording these things.
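One way to quantify that consistency: reverberation tails decay roughly exponentially at a rate set by the room, so the decay rate estimated at different points in one recording should agree. A minimal sketch on synthetic envelopes (real analysis works on band-filtered energy, not raw amplitudes):

```python
import math

def decay_rate(tail):
    """Estimate the exponential decay rate of a reverb tail by a
    least-squares fit of log-amplitude against sample index (the slope
    of the decaying envelope)."""
    n = len(tail)
    xs = range(n)
    ys = [math.log(abs(a) + 1e-12) for a in tail]
    mx = sum(xs) / n
    my = sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Two decays recorded in the same room match even at different loudness;
# a third, different rate inside one recording is the inconsistency to flag.
tail_a = [math.exp(-0.05 * k) for k in range(100)]
tail_b = [0.7 * math.exp(-0.05 * k) for k in range(100)]
tail_c = [math.exp(-0.20 * k) for k in range(100)]
```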
>> So even in a single
>> recording, you'll see modulations that are quote unquote unnatural. Yeah. So a lot of what we do is look for patterns you expect to see that mimic the physical world. Okay. Now, I talked about the active techniques, the watermarking, and then there's a whole other set of techniques that I'm going to talk about a little, but not a lot,
and you'll understand in a minute why
not. So, the other side of what we do is we try to understand the tools that our adversary uses. So, if you're using OpenAI or Anthropic or some open-source code, we actually go into the... well, we can't do this for OpenAI, but for anything that's open source.
>> So there are so-called face-swap deepfakes, where you can take somebody's face, eyebrow to chin, cheek to cheek, and replace it with another face,
>> and these are all open source libraries.
We can dig into the code and we can see, okay, what are they doing? All right, the first thing they're doing is this, and then they do this, and then this, and then this. And then we'll say, ah, that second step should introduce a very specific artifact.
>> Um so I'll give you one example but not
more than one. Yeah.
>> So, one of the things that a lot of these face swaps do is they put a square bounding box around the face.
>> They pull the face off, they synthesize
a new face, and then they put it back.
>> But when they put it back, it's with a
bounding box. And they do it very well.
>> Yeah.
>> You can't see it, but we know how to go into the video and discover that bounding box that was there.
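A crude way to hunt for such a splice boundary: score each row (and likewise each column) by how sharply pixel values jump to the next row; a pasted-in rectangle leaves peaks along its edges. A toy sketch on a synthetic image; real detectors work on far subtler statistics than raw intensity jumps:

```python
def row_seam_scores(img):
    """For each pair of adjacent rows, return the mean absolute jump in
    pixel value; peaks mark the horizontal edges of a pasted-in region."""
    w = len(img[0])
    return [sum(abs(img[r + 1][c] - img[r][c]) for c in range(w)) / w
            for r in range(len(img) - 1)]

# A 10x10 background of value 100 with a slightly brighter pasted band
# occupying rows 3 through 6: the seams sit above row 3 and below row 6.
img = [[104 if 3 <= r <= 6 else 100 for _ in range(10)] for r in range(10)]
```

Running the same scoring over columns completes the rectangle; four matching ridges outline the invisible bounding box.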
>> Wow.
>> Right. So, that's an example where we reverse engineer, because we understand how the adversary has made
something. Now we have lots of other
ones which I don't want to tell you
about.
>> Yes, I understand why.
>> You can see why, cuz it's adversarial.
>> Exactly. Right. Right. Man, it sounds very systematic. I have a decent understanding now, if I wanted to make a lab to do this, of some techniques to do it. But you know the average person out there isn't a scientist. How can people, how can I, how can my mother identify the real from the fake in the world of AI?
>> Yeah. Yeah. They can't. This is the
reality of where we are right now. And
this is important to understand because
I don't want you to walk away from this
podcast thinking, "Okay, I understand a
little bit now. Now, when I'm scrolling
through, you know, X or Blue Sky or
Facebook or Instagram, I'm going to be
able to tell." You won't.
>> You won't be able to tell. And even if I
could tell you something today that was
reliable, six weeks from now, it will
not be reliable and you'll have a false
sense of security.
>> Right?
>> So, I get this question a lot and the
thing you have to understand is this is
a hard job. It is really hard to do this
and it's constantly changing and the
average person doom scrolling on social
media cannot do this reliably. You can't
do it reliably. I can barely do it
reliably and this is what I do for a
living.
>> So here are some things.
>> Stop, for the love of God, getting your news and information from social media.
Yeah, this is not what it was designed
for, and it's not good. If you want to be on social media for entertainment, that's fine. I don't care. I don't think you should, by the way, but I don't care.
>> But this is not where you get news
information. You know where you get it?
Things like this,
>> right?
>> Right. You get it from news outlets that
have standards, that have serious, smart
journalists who work hard to get you
information. And we have to come back to
some sense of reality of where we get
our news.
>> Man, rigor around determining truth and measuring uncertainty is not something that we're generally taught. And when you become a scientist, like, it is unnatural.
>> Yeah.
>> It's an unnatural way to think.
>> That's right. I agree. Yeah. And look, you know, you and I have both fallen for it. You were telling me in the green room before this, right? You heard the story, you thought it was true, you assumed it was true. Somebody called you out on it, you went and figured out, like, "Oh, God, I was wrong."
>> Yeah. Yeah.
>> Right. And now imagine that at scale of
the thousands of posts you're seeing.
>> The reason why I thought it was true was
because everybody else was saying,
>> exactly. And that's what social media
is. Everybody is saying the same thing.
Millions of views. Oh, this must be
true. But that's the way social media
works. We have to get back to getting
reliable information from reliable
sources. That's number one.
>> Number two,
>> And I'll tell you the other thing about it, though: even though I assumed it was true because everyone else assumed it was true, and these others were scientists just like me,
>> I still knew that I didn't know,
>> that I had not confirmed it for myself.
And I think that is where the average person can, you know... if we know the difference between knowing and not knowing,
>> then you can check yourself. Even if everybody's saying it, you don't know that that's the truth. I agree, and this gets to number two. You said it in a nicer way than I was going to, but
understand that the business model of
social media is to draw your time and
attention by feeding you content that
outrages you and engages you to deliver
ads to get you to buy stuff you don't
need.
>> Yeah.
>> And so understand that you're being
manipulated,
>> right? And that you are being fed
information that the social media
companies believe you are going to
engage with. And first of all, that
should make you angry that you are being
manipulated. But you're 100% right is
that, you know, we live in these
distorted bubbles when it comes to
social media.
>> And it's very easy to forget
>> that you actually don't know what is
happening in Gaza, in Ukraine, in Texas,
in Los Angeles. You just think you do.
>> And that is incredibly dangerous. It
gives you this false sense of security.
>> And so it comes back to what I say: you've got to have some humility.
>> You have to have humility that this is a complicated world. It is fast-moving
and you have a responsibility to get
reliable information because not only
are you being lied to and being deceived
and making decisions on bad information,
you're also spreading that bad
information. So you are actually part of
the problem now because when you like
and share and send this to your friends,
>> you are now a carrier of disinformation. So, we have these AIs
that do things for us
>> and we're sort of managing them.
>> Yeah, that's right.
>> And you know, I might write something and I'm like, "Oh, give me an edit on this." Right. And then it comes back and it'll tighten it or whatever.
>> But you can move to a point where you're
like, "Give me the first draft."
>> Yeah. That's right. Yeah. And it's
coming.
>> And you get to the point where it's
like, "Okay, do it."
>> Yeah. Start responding to my emails. I
don't even have to read my emails
anymore. Right. And by the way, there's a really weird future you could imagine where emails are being sent by each of our agents and we're hanging out at the beach. I mean, what are we doing?
>> Exactly. That's the point. It's almost as if we are making ourselves obsolete in many ways, you know. We're not... you need a human being to build your house, right? You need humans there swinging hammers. AI can't do that. But yeah,
>> well, AI robots, right?
>> That's right. That's right. But this is sort of the ultimate joke in some ways, or the irony of all this:
>> I think if you go back 50 years, what were people worried about? That we were going to take blue-collar jobs away, manufacturing jobs, physical labor jobs, that we were going to build robots to do
those jobs. What did we end up doing? We
took out the white-collar jobs. We took out those high-paying computer science jobs. We took out those jobs that AI is
now doing better than most humans,
>> right?
>> And that is a that's a weird world. I
can tell you I I am on the campus almost
every day
>> and there is a lot of anxiety among
students about what the future holds for
them. Are there going to be jobs?
Because you're seeing unemployment go up in these historically high...
>> in my own house. My son, he's 20, he's a
senior in college this fall. Guess what
his major is? Computer science.
>> Yeah. And he's struggling. Everybody is. But I can tell you, I'm at UC Berkeley, one of the top, let's say, five CS programs in the world.
And our students typically had five
internship offers throughout their first
four years of college. They would
graduate with exceedingly high salaries,
multiple offers. They had the run of the place.
>> That is not happening today. They're happy to get one job offer. So
something is happening in the industry.
I think it's a confluence of many
things. I think AI is part of it. I
think there's a thinning of the ranks
that's happening part of it. But
something is brewing. And for people like your son, by the way, who four years ago were promised,
>> right,
>> go study computer science, it's going to be a great career, it is future-proof... that changed in four years.
>> That is astonishing. And I get this question, by the way, from students all the time:
>> how do I prepare for this?
>> Yeah, exactly.
>> And honestly, I'm sort of at a loss. My best advice, and I don't know if it's good advice, by the way,
>> is... I used to tell people: you want a broad education, you should know about physics and language and history and philosophy, but then you have to go deep,
>> like deep, deep, deep into one thing. Become really, really good at one thing.
>> Now, I think I'm telling people be good
at a lot of different things because we
don't know what the future holds.
>> You need options
>> and you need options. And so depth is in
some ways less relevant, particularly if
you know how to harness the power of AI
to get the depth that you need in a
particular area. But if you have broad knowledge, I think you're probably more future-proof today than if you're very narrow in one area. And the best line I heard about this was in the framing of the legal system: I don't think AI is going to put lawyers out of business, but I think lawyers who use AI will put those who don't use AI out of business. And I think you can say that about every profession.
>> So I think two things are going to
happen is that you're going to have to
learn like every technology how to
harness the power of AI. And that was
true of computers. That was true of the
internet. That was true of everything.
>> Yeah.
>> So, but
>> because of how powerful the AI systems
are, it is absolutely going to reduce
the workforce.
>> And I think the big question here is the
question always that happens with
disruption of technology is we are going
to eliminate and reduce certain jobs and
the question is do we create new jobs
and what do they look like? Exactly. And
I don't think anybody knows the answer
to that right now.
>> How is AI going to affect bluecollar
jobs? Is AI going to affect blue collar
jobs? We have this vision of white
collar jobs
>> but there could be an effect. What do
you think?
>> Yeah, it's a great question. So now let's come to blue collar.
>> So where is it coming first? Here's what I can tell you: it's coming with self-driving cars. It's coming for drivers,
>> taxis,
>> cuz go to San Francisco.
>> Right.
>> Right. And you can get a car that is self-driving. It's weird. By the way, you go to San Francisco and just stand on a street corner and look at how many cars drive by you without a driver.
>> What about trucking?
>> So I think it's coming for trucking. I think it's coming for the Lyfts, the Ubers, the taxis, the limos,
>> and I think that's in our lifetime.
>> What about shipping?
>> Yeah. Everything. Everything. Like, I think that for long-haul truckers that go a thousand miles, once that truck is on the highway,
>> Yeah.
>> I think that's completely autonomous
within the next 10 years. Wow.
>> So, I think when it comes to moving people and objects from A to B, that is probably coming. I don't think it's coming for plumbers. I don't think it's coming for electricians. I don't think it's coming for construction workers, because I don't think the robots are even close to that. But driving is relatively self-contained compared to what a plumber has to do, which is come into your house, find the bathroom, diagnose the problem, fix things, which is very bespoke,
>> right? When you drive on a highway from
point A to point B, the rules are pretty
clear, right? Get from A to B, and don't
hit anything, right? And you can't say
that about plumbing, right? It's a much
more complicated process. So, I do think
it's coming for some of those jobs, but
I think other ones, frankly, are
probably much more secure, which again
is the ultimate irony, is that the nerds
like me are putting themselves out of
business.
>> Wow. Yeah. Yeah. So, trade school is going to make a comeback.
>> I tell my students, and I'm only half
joking, you better learn how to fix a
toilet,
>> right?
>> I mean, no kidding. And you know, I spent my first 20 years at Dartmouth College, and I very much believe in the liberal arts education. I don't like when students go really deep very quickly, all engineering all the time for four years. I don't think it's good intellectually. But I think we need to start rethinking higher education. We have to rethink absolutely everything. We are in a brave new world, and
>> I think if we don't... and by the way, I think we have to start thinking about how we teach students
>> to use AI to help them think, right?
>> Because what you're seeing at universities right now is one of two things:
>> either bury your head in the sand and pretend ChatGPT and large language models don't exist, which is a bad idea, or ban it and say you're not allowed to use it. And neither of those is a good solution. We have to think about how to incorporate this into the curriculum,
>> and as an academic you know that the
university is not well known for being
very nimble and fast moving
>> and so we have to move a little bit
faster
>> But you know what also comes to mind is something I experienced myself, but then recently I read an article that says that, no, there's actually been a biological change.
>> And what is that?
>> I lived my life driving cross-country throughout my childhood, right? I never lived in the same state two years in a row. Bro, when I became an adult, I kept it up, kept going. And so I used to rely on paper maps,
>> and I would get a map when I moved to a new town, and I would study that map, and I would have a bird's-eye view of my town in my mind, right? And I'm the type of dude, I like to take the back streets. I don't like to take the main streets.
>> GPS comes out.
>> I could not navigate from where we are
now back home because I now rely on GPS.
And the article I read said that the part of our brains that has this memory map has actually
>> shrunk
>> shrunk, the hippocampus
>> it's modified it
>> because of this reliance on GPS. So now
AI is going to come around and it's
going to do so much more for us right
>> you don't have to calculate you don't
have to memorize as much anymore. So you know, we're evolving ourselves
>> in a very short period of time. I think this is 100% true. By the way, there's no question that's changing the way we think. And I think the question is: that's not fundamentally bad, or it doesn't have to be bad. So I get in a car now, and I'm the same way as you. I couldn't navigate my way, you know, half a mile from my home. So no matter where I am, I GPS it. But what I do during that time is I think about other things, right?
>> So the GPS enables me to think about the
problems I want to think about, right?
And so is that overall good or bad?
Okay, I'm not good at navigation. So, if my phone breaks down, I'm not going to be able to find my way out of a paper bag. But I get a lot of my time back in the car to contemplate and to think, and so that is really powerful. And will AI do the same thing? So, I'll give you an example. I use AI
>> every day in my work to help me write code.
>> Yeah.
>> And so, I'm probably becoming a worse
coder.
>> I think that's probably true.
>> But I can now build prototypes and build
systems at a scale that I couldn't do
two years ago.
>> So, okay. It's a trade-off, right? I'm less good at the nitty-gritty, but I'm good at building much more complicated systems because the AI can help. And here's the thing about computer science in particular that you have to understand: we've been moving in this direction for 80 years. Think about what computer science is all about. All of computer science, from the very early days, has been about abstracting more and more detail out of the system.
So, we used to start with punch cards.
We had to literally physically push
little things into punch cards to
program. And then we programmed in assembly language, a very low-level language where we had to know about the hardware to program. And then we went to
high level languages and then higher
level languages. And now it's natural
language.
>> And so it's part of an evolution.
>> Wow.
>> Right. But it's also okay. Some purists can say, well, that's bad, you should understand this. But I would say, hey, if I can empower somebody who is not a computer scientist to build systems, that's an incredible thing to do. It is. It is. And you know, it
reminds me of the day I stepped foot
into graduate school, right? I was a
rural dude in the deep woods,
>> and I had an academic education that did not size up to my peers when I arrived.
>> But I was like, can't nobody in here
skin a squirrel better than me?
>> Right. I'm the only one know how to grow
some crops. That's right. And and so
what I'm saying is that we are replacing one set of skills and knowledge with another set of skills and knowledge for the world we live in today. So if we're not going back to an agrarian society... you know, nobody knows how to shoe a horse anymore, right? Where it was once required. No one knows how to take a piece of flint and knap it to make a stone arrowhead anymore, right? And we continue on.
>> And I think the big question, because you're 100% right, we have been doing this for our entire existence. But I think you do have to acknowledge, or we should acknowledge, that AI is different. It
>> is different. It's different than flint versus a lighter, because it is so fundamental to so much of what we do. And I think the question is: will the disruption be so great that unemployment is 30%? Because that is a problem. Let's agree on that, because now the entire social contract
>> is broken.
>> Yeah.
>> You have instability in your society
now.
>> We have to start talking about universal
basic income if we're going to talk
about this. Right. So I don't know and I
don't think anybody knows. But here's
the thing that I would argue is
>> for the first 20 years of the technology
revolution we largely took our hands off
the steering wheel. We largely said,
"Hey, look, let's let the internet be
the internet." Right.
>> And I don't think most people are
looking at the internet today being
like, "Let's do more of this." Right.
Right. It is not great. There are lots of great things that came from it, don't get me wrong, but there are horrific things that have been happening.
>> You ain't got to tell me, man.
>> Horrific. And if you have kids, as you
do, you know.
>> Oh, myself, man. I started graduate school in '91. Yeah.
>> And at the time, it was newsgroups, right? Alt dot this, dot that. Sure.
>> And the people, some people on the
planet thought it was really funny
>> to label something as one thing
>> when it was really something horrific
that you didn't want to see.
>> And then you opened it. Yeah. And you
open up and it's only gotten worse.
Right.
>> So, I think we have to put our hands back on the steering wheel, and we have to start getting serious about how we put guardrails on the system, because what we know is that if you unleash Silicon Valley, they will burn the place to the ground to get to the finish line first. And we've got to start putting guardrails in place. And the thing is that
>> this is a borderless problem, right? This can't be an only-US or an only-EU problem. We have got to start thinking about this globally.
>> And I don't think we are doing that very
well. Honestly, the EU is probably the leader on this because they have the AI safety bill. The United Kingdom is doing fairly well. The Australians are doing well. We, I think, are lost at sea
right now. And that's not, by the way, a
partisan issue. I don't think either party has done particularly well on this regulation, because frankly nobody on the Hill, not too far from where we are right now, really understands this. So I think we have to get smart very fast, and we have to start thinking about how to put guardrails in place that allow for innovation but also protect individuals, societies, democracies, and economies. And I don't think we're doing that right now. So let's jump off to a part of AI that is not typically discussed in these sorts of forums,
>> and that is AI infrastructure. So when I look at it, there is, like, energy.
>> Yeah.
>> There are data centers, then you have companies like Nvidia with chips, then you have software.
>> Yeah.
>> And I imagine that all of that is going
to evolve right higher efficiencies new
architectures. So I can even imagine
that there's going to be new computer
hardware that's made specifically to
serve the needs of AI. So where do you
see that entire infrastructure? How do
you see that evolving?
>> Good. Okay. So let me preface what I'm about to say by saying nobody knows what the next 5 years are going to look like. And here's how I know that: if you had asked any of us 5 years ago whether we would be here, we would have said no.
>> So let's be honest about the future is
that it's very hard to predict. But
here's what we are seeing. Um, I just
saw an article the other day. Mark
Zuckerberg wants to build a data center
the size of Manhattan.
The size of Manhattan. This guy over
here is nodding his head. He read the
same article, by the way. Um, so we are
talking about data centers that are
massive
>> and they're also gobbling up water,
gobbling up energy at a time, by the
way, when we are in a crisis
>> and our climate.
>> And I'm not sure that this is the right
direction we want to be moving in with
respect to the broader picture. I'm not
anti- technology. I'm not anti-
innovation. But we have to think about a
broader picture here.
>> So right now, what troubles me... I think you asked the right question, which is how do we make these things more efficient? How do we make them require less power, less water, less data? And that is not what we are
doing because there is a race
>> to quote unquote win. And you are seeing
all the big tech companies just say
carpet bomb the place, build data
centers, gobble up all the water, gobble
up all the energy and go. And by the
way, what you should understand about energy is there are actually two places the energy is being consumed. One, of course, you have to power these machines,
>> but where the energy is really being used is to cool them. What you have to understand about GPUs and this specialized hardware is that they pump out a phenomenal amount of heat.
>> So the amount of cooling that has to
come into these systems is enormous.
>> Wow. And now you are literally gobbling up city-scale energy consumption. Now
>> does it matter if it's built in, like, Alta, Norway?
>> I mean, ideally that's what you would do, right? So, 5, 10 years ago there were discussions about building these data centers in the Pacific Ocean, where you could use the inherent cooling of the ocean.
>> Well, I've seen some technology where
the servers were actually inside of
liquid.
>> That's right. Yeah, exactly.
>> Conductive liquid. Is that how that
works?
>> I don't know. You're outside of my area of expertise. But now what they're doing is going into rural parts of the country and just taking over,
>> right? Sucking it all up. Right.
>> Yeah. Well, where I live in Northern
Virginia, the data centers are popping
up all over the place.
>> Everywhere. Yeah. So, there are some people who think that in order for this really to scale, we're going to have to figure out how to do this more efficiently, right?
>> I I this is outside of my area of
expertise. I'm not a hardware person.
>> Well, listen. From the energy side, isn't it coming about in the next 5 to 10 years, fusion energy, right? That's cheaper,
>> or quantum computing, right? I don't think that's going to happen in the next 5 to 10 years. We are not going to have machines that suddenly become 100 or a thousand times more efficient. The trend
right now is simply to take the things
that we have GPUs, Nvidia, and just
scale them to massive scale. Okay.
And you know this better than anybody: the advances in physics happen at a very different time scale. This is not happening over one or two years. These are decade-long efforts. And by the way, if I may, at a time when we are investing less and less and less in research in academia.
>> Big mistake.
>> Huge mistake. And we have we are going
to pay the price for that for
generations to come.
>> Yeah. So something you said earlier I found really interesting is that you have some experience with neuroscience. So now, we see large language models pretty much predict the next word, from my understanding, right?
>> And we have these brains that we think, oh, we're so complex and high-powered and spiritual and magical.
>> Has the research in AI thinking illuminated
>> in any way how our brains are working? And could our brains be as simple as
>> binary code?
>> All right, so first I have to apologize to my wife before I answer this question, because my wife is a very serious computational neuroscientist and it makes her crazy when I talk about the brain. So I'm sorry, sweetie.
So, "I don't know" is the simple answer, but here's the more complicated answer. There is a story
>> that although we feel like we are very
sophisticated, we are sophisticated
pattern matchers,
>> right? We see this, we do this,
>> right? Um, I don't think that
fundamentally we really understand how
the human brain does what it does, but I
think there is a story there that it's
actually simpler than we think it is.
And again, it's an emergent behavior
that a lot of very simple computations
give rise to very complicated human
beings.
>> I think right now, in my view,
>> AI has not illuminated a lot about the human brain. I think the human brain has motivated a lot of how AI architectures are built, using similar types of neural architectures.
>> Whether it will illuminate more about the human brain, I don't know.
>> But the human brain also there's a lot
going on, right? We are dextrous. I can
pick up this cup without bumping into
the microphone. I met you just a few
minutes ago, and I now recognize you. So, we
are pretty complicated. Um, but the
short answer is I don't know, right? I
honestly don't know. But I think, more
likely than not, we are simpler than we
think we are, right?
>> Um, I actually think so,
because the one thing you have seen
about AI is that it can generate
language, images, video, and audio fairly
simply,
>> right? And the thing that's amazing
about, you said it right, large language
models, they are basically one word
predictors. You start with a beginning
of a sentence and you predict what's the
next word. What's the next word? What's
the next word? What's the next word?
>> Is that what I'm doing? As I'm speaking
right now,
>> I don't know. In my head, it seems more
complicated, but that could be an
illusion.
>> Yeah.
>> So, I don't know.
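The "predict the next word, what's the next word" loop described above can be sketched with a toy bigram model (an illustration only, not how real LLMs work internally; they use learned neural networks, not raw word counts):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then repeatedly append the single most common next word.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n_words=5):
    """Greedily extend the sentence one predicted word at a time."""
    words = [start]
    for _ in range(n_words):
        candidates = following[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the cat"
```

A real model replaces the count table with a neural network trained on vastly more text, and samples rather than always taking the top word, but the generate-one-word-at-a-time loop is the same shape.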
>> And what we see in nature is sometimes
the same solution occurs via
different mechanisms. One that I was
just studying had to do with the
evolution of light skin.
>> Yeah.
>> Right. And how it happened in Europe and
Asia, but via different genes. Yeah.
Right. So the same problem, of, you
know, getting lighter skin to get more
vitamin D, was solved in two different
ways. So
>> two different paths to the same thing.
>> Two different paths to the same thing.
So it could be that we are generating a
new form of what we do ourselves. Right.
>> I think that's right. Yeah.
>> All right. So now I'm going to put you
to the test, my friend. You put me to
the test. I'm going to put you to the
test. Good. So, I'm going to give you
some AI headlines,
>> and there's going to be
>> two real and one fake.
>> Okay,
>> here we go.
>> I guess I had this coming.
>> Ah, man, listen. You brought it
on yourself. I didn't want to do this. I
did not want to do this. All right, so
here we go.
Google makes fixes to AI generated
search summaries after outlandish
answers went viral. Mhm.
>> Threaten an AI chatbot and it will lie,
cheat, and quote unquote let you die in
an effort to stop you. Study warns.
>> Good.
>> And the third one is Delaware court
rules an AI can be named as co-inventor
on a patent.
>> Okay, I know number two is real. Um,
which is, I know, I've read the Anthropic
report, so I'm cheating here. So first of
all, this is a terrifying study, and let
me first give credit to some of the AI
companies, like OpenAI and Anthropic,
which are two of the big AI companies
that are doing safety studies, mostly
internally, to understand how are
these AI bots and these agentic AIs
going to respond in real world
situations. And what both of these
companies, Anthropic and OpenAI, found
>> is that
>> Wait, Anthropic and OpenAI. So what is
Anthropic?
>> Yeah. So, Anthropic does Claude; OpenAI
does ChatGPT. Basically the same thing:
large language models. You ask
questions, it gives you responses.
>> Um, don't forget to say please.
>> Um, so what they found, and we
should understand this because some of
the media reports about this were
incorrect, is that this did not actually
happen. What Anthropic will do
is create
simulations and see how their AI
responds. So in a simulation, and,
to Anthropic's point, pushed to an
extreme, what we would call an edge case,
>> but nevertheless disconcerting that when
the AI was presented with information
>> that it was going to be shut down it
tried to blackmail an engineer that it
believed was trying to shut it down.
>> Wow. So, it had that notion somehow of
self-preservation, which was worrisome
to begin with, but it was nothing
compared to the second study that found
that given a situation where it could
automatically lock the door of a room
being deprived of oxygen, knowing an
engineer was in it, it would do that if
it thought the engineer was going to
shut the system down.
>> Holy cow.
>> So, this is this is what the science
fiction movies were made of.
>> Wow. That we were warned about, and the
AI systems are doing these things in a
way that we don't understand. They
haven't done them, please understand,
but they have the ability to do it,
which is disconcerting.
>> Um, and I think what is particularly
worrisome here is that it has
>> internalized this notion of
self-preservation without it being
programmed to do that. And that is the
very thing that people are concerned
about is that when you ingest these
massive amounts of data to train these
systems, you don't know what these
systems are learning. And this is a
perfect example of that.
>> Right. Right. So what really bothers me
is the story of blackmail. Does that
mean that the AI found personal
information about this person?
>> Yeah. So Anthropic did something very
clever: they put some information
into the model's knowledge base about an
extramarital affair by one of the
engineers. So they gave it the ability
to do it, and they watched to see: will you do it?
>> and it actually did.
>> It actually did, and I
think even the folks at Anthropic
>> Wait a minute. So Claude is a snitch.
That's what we're getting.
>> Claude is a rat.
>> It is going to rat you out. Um, yeah. So,
you know, it's funny to some
degree, but it is terrifying, because
here's a scenario which is not
that far off. We have self-driving cars
coming out. Yeah.
>> So, what if those self-driving cars are
being run by the very AIs that we were
just talking about, and there's an
engineer in a car that it thinks is
writing a kill switch? Is it going to
drive that car into a wall at 100 miles
an hour because it doesn't want that
engineer to do it? This is not
far-fetched. And I'm not saying the
probability is high, but if the
probability is greater than zero, we
have to have a very serious
conversation. And what we have learned
about these safety studies is that the
probability is non zero.
>> Wow.
>> And that I think is worrisome. Well,
this raises the obvious question of how
do we control potential bad outcomes
from AI?
>> Okay, so here's the reality of the
systems today: I don't think we know
how to do it, given the way the systems
have been trained and the way they're
being deployed. We don't know
technically how to control them. So
here's what I think has to happen, and
this is an imperfect solution, please
understand, but what we have learned from
the physical world is that when you
create liability for companies they
build safer products.
>> Yeah.
>> The reason why we have safer products
that we bring into our home that we
ingest medicines etc is because we
created liability. We told companies:
if you create a product that you knew
or should have known is dangerous, and it
creates harm, we're going to sue you back
to the dark ages.
>> Yeah.
>> We've not done that with the technology
sector. But if we tell these AI
companies, your AI systems start
creating damage, we're going to sue you
back to the dark ages. They will then
have to internalize that liability and
they will start to build safer products.
We know this. So, I think some of this
has to come from liability.
>> I think some of it has to come from
regulation. I think technology can get
you there to a certain degree.
>> Um, but none of these are perfect.
>> So, it's almost like you're
saying that there is an engineering
workflow fix that has to be done. It's
like, "Okay, you need to install
scrubbers in your smokestack." Yeah.
Right. But now you're saying in your
engineering, you need to
>> behave certain ways with your training
sets. You need to
>> or not release products until you know
they're safe.
>> Like when you when you when that plane
taxis down the runway for the first
time, it has gone through an awful lot
of safety testing. Cars are safety
tested to oblivion. But if the AI
determines, you know, so essentially if
if the AI determines a theory of mind
>> and an idea of deception,
>> then it could hide.
>> It can. And that is terrifying. So what
if it knows you're putting it through a
safety test and it's being deceptive?
What if it starts learning deception?
Yeah.
>> And this is exactly what science fiction
writers have been warning us about for 50
years.
>> So and we have an inkling that this is
starting to happen.
>> Yeah.
>> Right. So, does it mean you have to have
a kill switch that can't be
manipulated by AI? Is that possible? So,
I don't have great answers here. And
frankly, I don't think anybody has great
answers. But we now have an inkling of
the art of the possible. So, I think
number two is real.
>> Wait a minute now. Wait a minute. All
right. So, you know the the probability
test where if I give you three doors
>> Yes.
>> and I reveal that one is
>> Yes.
>> not the answer, you're supposed to
change your answer.
>> I know. I'm not going to. Um, I think
the first one is real and the third one
is fake.
>> You nailed it.
>> Yeah.
>> Not only did you nail which headline was
a fake headline, that headline was AI
generated. It was not created by the
Nova production team.
>> What was the prompt, by the way, to get
it? Do you know?
>> I do not know.
>> All right.
>> Give me that. Wasn't my job. I'm
>> sorry. I didn't mean to put you on the
spot, man.
>> I know. It's fine. The spot is fine by
me. I love when I say I don't
know. Uh when I was in graduate school,
>> I felt like the greatest thing that I
learned was the difference between
knowing and not knowing.
>> I agree.
>> Right. Because it had never struck me
before. And you'd be surprised how many
times I've said to people this thing
about the difference between knowing and
not knowing. And they immediately go
like, "Oh yeah, I know the difference."
And very rapidly illustrate that they
don't.
>> This is what I tell all my grad
students. The paradox of a PhD is you
will feel stupider graduating than when
you started because you realize how much
you don't know. And that's a gift. It's
a gift to know that you don't know.
>> And by the way, I'm not sure how long
that third one is going to be true for,
>> right?
>> Um, is, you know, will AI be able to
be named on a patent? Will you be able to
use AI as a lawyer
>> um in the court of law instead of hiring
a lawyer? I don't know for how long
we're going to be able to hold off.
>> Or for example, you know, as a person
who writes books.
>> Mhm.
>> Instead of having my blurb be what
Bill Nye thought of my book, maybe we
could say, "Oh, here's what Claude
thinks." Yeah, you're probably going to
start seeing that within the next six
months.
>> Wow.
>> Um, so, I don't know what the next 5 years are
going to look like but I think something
is brewing and I think it's going to
make the last 20 years of the computer
internet and mobile revolution look like
a day in the park.
>> I think something is happening. I don't
think anybody really fundamentally
understands it. But hardly a day
goes by where I don't have a
conversation about it.
>> Wait a minute. You are a
co-founder of an AI startup. So that
means that you do have to have some
sense of what's coming next and I think
you're holding back.
>> I think I can see 6 to 12
months out.
>> Um, but even then, we are more often
wrong than we're right.
>> So in the semiconductor field we had
Moore's law, right? That's right. Is
there a similar metric in
AI, and what is its basis?
>> Yeah, that's a great question.
>> So in Moore's law, it was that the
number of transistors on a chip would
double every 12 to 18 months. So, okay,
Moore's law, and by the way, it's
phenomenal that it continues to be true.
How many times have people said Moore's
law is over, and they were wrong, they
were wrong, they were wrong? It keeps
going.
>> So what's amazing about Moore's law, and
I'm glad you brought this up, is that we
would measure the doubling of
transistors on a chip in a 12-to-18-month
period, and that was really the
way we tended to think about changes in
technology: the cadence was
about every 12 to 18 months. It is now
every 12 to 18 days.
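As rough arithmetic on those two cadences (the 12-to-18-day figure is the speaker's impression, not a measured law like Moore's):

```python
# Back-of-the-envelope: growth factor after one year for different
# doubling periods.
def growth_after(days, doubling_period_days):
    return 2 ** (days / doubling_period_days)

year = 365
print(f"double every 18 months: x{growth_after(year, 18 * 30):.2f}")
print(f"double every 18 days:   x{growth_after(year, 18):.0f}")
```

An 18-month doubling gives roughly 1.6x in a year; an 18-day doubling gives over a million-fold, which is the scale of the contrast being drawn.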
>> No way.
>> I mean, no kidding. Think about how
quickly
the next version of ChatGPT, the next
version of Claude, the next version of, I
mean, it is happening. It is happening
>> at a pace of weeks.
>> Weeks. I'll give you an example of this.
Just before the
Christmas break last year, so this is
December of 2024, three students were
in my office saying, "We want
to work on detecting full-blown AI-
generated video, what's called text-to-
video." Yeah. Where you type a prompt
and instead of giving you an image, it
gives you a full-blown video with audio.
Think, essentially, making short
clips. I'm like, "This is a dumb
problem. There's no way this is going to
be out in the next 18 months."
About a month ago, Google's Veo 3
emerged, and they did it.
>> Wow.
>> And I was wrong.
>> I was wrong. There it was. A couple
months later, the students were right. I
got it wrong.
>> Um so, you know, but here's what I know
is I don't know what the equivalent of
Moore's law is. I don't know if it's 12
days or 24 days or 48 days, but we are
seeing a rapid development. Now, there
are some people out there who are saying
this is going to plateau, that you can
only get so far with these mechanisms,
and there are other people who are
saying it is endless. We're going to
keep going. But here's the thing you
have to understand about AI, I mean,
you know this as a former academic, as an
academic,
>> is that this all started in the
academy. Yeah. All of these technologies
um started in the academy but now where
is it? It's in Silicon Valley with
hundreds and hundreds of billions of
dollars being poured into it.
>> And that train doesn't stop. No. When
you start pouring that kind of resources
and that kind of talent and that kind of
money,
>> this is not stopping. Right.
>> Right. Um
>> So who's doing it now? Because
if there are AIs that have not been
released to us, that are being kept
back by these Silicon Valley
companies,
>> is it now AI that's building the AI?
Have we gotten to that point?
>> Good. Uh, first of all, yes. So if you
talk to leaders in Silicon Valley,
they will tell you somewhere between 20
and 50% of the code being written for the
next generation is now being written by
the AI systems. And now there are still
humans in the loop. There are still
humans that are part of the
design. But a lot of what is
happening and part of the reason for
acceleration is that when you have AI
that helps you write code or writes it
entirely,
>> the systems just move faster.
>> But what if they're keeping a secret
from you? Can the AI
do that? Can it make a leap?
>> Yeah. And by the way, the
thing you will hear from AI engineers is
we don't understand our systems anymore.
They have gotten so complicated. They
behave in ways and this is not what you
want to hear from engineers that we
don't understand why it does this. We
would not tolerate that, by the way, with
aerospace engineers who are building
airplanes. We're like, yeah, we don't
really understand why it did that. Like,
well, we're going to start grounding
planes until you figure that out. But we
don't do that with AI. We're like, well,
okay, you know, we'll figure it out
as the plane is taxiing down the
runway. Um, and I say that as a person
who's about to get on an airplane in a
few hours. So, um, so I I think you're
absolutely right, and let's come back to
that conversation is that if the AI
starts to have a sense of
self-preservation and you unleash it to
write the next generation of code, how
is it going to incorporate that? Is it
going to start putting its own guard
rails in place so you can't shut it
down?
>> So, I I come back to something I said
earlier. We have to start thinking about
how to put reasonable guard rails on
these technologies.
>> What about a filter? You can take your
old AI that had all this data and say
we're gonna pass that data through a
filter
>> in order to protect humanity. And
related to that, I feel like there are
clearly geopolitical implications here,
because it is a competitive world that
we live in, and we see now, with the wars
in Ukraine and in the Middle East, that
there is this rapid innovation taking
place.
What do you see as the future of
geopolitics?
>> You know, there's that document, AI
2027, that
>> takes that stage of geopolitical
competition, adds AI, and it ends with
human extinction.
>> Yeah, that's right. So, you you can't
look at what China is doing right now
and the investment that the Chinese
government is putting into AI and not
take that seriously. If you look in the
academic literature now, you see a
domination of published papers coming
out of China. They are investing a
phenomenal amount of resources and
talent into this at a time, by the way,
when we are doing almost exactly the
opposite here in the United States. We
are cutting funding. We are cutting
research. We are attacking universities
that are the the very place where this
innovation happens. We are moving in
exactly the wrong direction. I think
there are serious geopolitical
implications here, because these AIs
will do more than large
language models and image generation.
They will start flying our jets, driving
our tanks. There will be the
incorporation of AI into warfare and
it's already started. It started here in
the US. It's in China. It's in Russia.
This is coming.
>> Um and so we have to start thinking very
carefully. And by the way, if you go
look at the EU AI Act, they
did something that I really like. They
have what's called a risk-based
intervention.
>> So if you're using AI to predict what
movie Hany will want to watch next,
okay, fine. We don't need a lot of
guardrails on that, right? But if you
are using AI to make critical
life-saving decisions, hiring and
firing decisions, criminal justice
decisions, we need a
much higher bar. And they have banned
the use of AI in the military. They have
just said: this is a line too
far right now. We are not willing to go
there until we understand more. So I
think this is the way we want to think
about this: risk-based.
>> Yeah.
>> Right. You put the AI here. What is the
implication? Let me give you some
examples. People are using AI right now
to make hiring and firing decisions who
gets an interview.
>> Yeah.
>> Right. Do we fundamentally understand
these AI systems? Are they biased
against certain individuals, certain
categories?
>> We are using AI to make criminal justice
decisions where sentencing decisions and
bail decisions are being turned over to
AI. That's the thing that bugs me:
you know, it's almost like stereotypes.
They're based on averages, and averages
don't tell you anything
about an individual event.
>> 100% correct. And let's go back to where
we started this conversation. Yeah.
>> Which is what these systems do is
pattern matching, right?
>> They look at history and they repeat it.
>> Yeah. So if you have had historical
problems in a system in the criminal
justice system in Silicon Valley with
women making up less than 20% of the
technical workforce,
>> your AI is simply going to reproduce
that because it's not thinking about any
of these issues. It's simply saying what
is the pattern? Reproduce the
pattern. So when it comes to loan
decisions,
>> financial decisions, uh, hiring and
firing, university admissions, criminal
justice, we, in my opinion, should not
be unleashing AI systems without
understanding them better than we do
right now. If it comes to
>> helping somebody write code, great. If it
comes to making predictions on what
you're going to listen to on Spotify
next, fine. I don't care.
>> Right? Let's go even bigger with the
greatness. So it's not all bad, right?
AI has amazing potential. You know, when I
look at
>> society, I recognized as a young man in
my 20s that there are
certain things we create,
militaries, police forces, religions,
right? They have
great good built into them, and they have
the ability for great bad built into
them.
And our job is to harvest the good while
curbing the bad, recognizing that
both are going to occur
>> 100%. It's a mitigation strategy.
>> It's a mitigation strategy. So what does
the positive future of AI look like and
how can we bring that about?
>> Good. Okay. So first of all I 100% agree
with you that everything we should be
doing is how do we harness the power and
mitigate the harm. I would argue that
for 20 years in the technology sector we
have harnessed the power but we have not
done a good job of mitigating harm. I
think the harm has been on balance with
the good things that have happened, when
you look broadly at the issues
that have come out of technology. So
let's go forward. I have a prediction.
Um, and you'd think I would learn not to
make predictions, but here it is.
>> Um I think the next blockbuster movie is
going to come from some kid in their
bedroom making movies with this AI
technology. Cuz think about what you now
have access to as somebody who has a
creative mind. You can now make
full-blown videos with any character you
want, any storyline, any narrative, any
actor, any actress. And what else can
you do? You can distribute it.
>> You can put it on TikTok. You can put it
on YouTube.
>> Wow.
>> Right. So, I think that the ability to
create things without a
multi-million-dollar budget, whether
it's a book, uh, music, a podcast, a
full feature film is going to be fully
democratized.
>> So, essentially, as
the human, until now,
>> if you wanted to create something, you
had to take your idea Yeah.
>> and then it had to be backed by a bunch
of money and a bunch of skill and
experts to bring it to fruition.
>> 100% correct. So what you're saying now
is that your mind and your AI business
partner are all you're going to need. That's
all you need.
>> So human creativity is going to be
unleashed.
>> I think it's going to be unleashed. I
think and you're already seeing it and
that that's amazing.
>> Um, so we were
talking recently about the company I
co-founded. You know, I had to go to
somebody who had a lot of money and
convince them to give me some of that
money to start a company so I could hire
lots and lots of people to build this
company over a multi-year process. I
think you're going to be able to start a
company with one person.
>> Wow.
>> Right. And again, there are serious
employment questions. There's lots of,
you know, but that's an incredibly
empowering. And now imagine that this
can be somebody who is in sub-Saharan
Africa.
>> Yeah.
>> They don't need to have access to
venture capitalists. They don't need to
have access to Hollywood studios. They
can do amazing things. And so I think
creative energy will
have zero barriers to entry very soon.
And that's
>> How do we put that into our
education system broadly? So,
one of the criticisms of our current
education system that you see all over
the internet is: hey, it
created drones for
working in factories; that's what our
education system is designed to do. I
don't know if that's true, but that's
what's said all over. But now,
>> you know, if if there's anything that we
humans excel at, it is imagination,
>> right? And so now we have this tool to
bring our imagination to fruition. So
instead of having an education system
that people say sort of,
>> you know, tamps down our
imagination and creativity, how do we
>> use AI? Because another way that we
could use AI is in as an educator,
right? And here's another thing that's
being talked about in academic
spaces: kids and their iPads. Take away
the kid's iPad and
they're just all over the place.
>> How could we use AI, for example, to
keep a brain from becoming addicted in
that way?
>> Yeah. Good. So, I'm going to work back
through those questions. Let me talk
about the second one first. I'm glad
you reminded me of this. Um, one of the
hardest things about teaching is that in
a class of 50 students, you have a huge
range of, let's call them, the top
performing and the bottom performing
students. Yeah.
>> And it's not because they're smarter or
dumber. It's because they come with
different experiences.
>> That's right. Different training.
>> Different experiences. And some of it
it's they naturally come to the
material. Some of them it's less
natural. And one of the hardest things
is how do you teach all those
students and keep them all coming along,
so that the highest-performing students
are not bored, but the lowest-performing
students don't fall off the
cliff. Yeah.
>> Now imagine all my
lectures are recorded, and students
have an AI that summarizes them, and that
AI understands what each student
understands well and doesn't understand
well, and it becomes, essentially, their
personalized tutor.
>> So if you're high-performing,
you move fast; if you're lower-performing,
you move a little bit slower. I love
this idea as an
educator. I think it's incredible.
>> Bespoke education.
>> Bespoke education. And to do the things
that even in a class of 50 students I
can't do. I can't talk to every students
3 hours a week. It's physically
impossible.
>> Let alone a class by the way at UC
Berkeley with 2,000 students. I you
can't do this.
>> And so I think this is an enabling
technology. Now, of course, we have to be
careful about it. What
information is it collecting about
the kids? Is it privacy-respecting? You
know, and so you have to think
>> What about medicine? Because one of the
things that really is mind-boggling to
me is that, you know, I can't go to a
doctor every year and say, "Hey, give me
an ultrasound of all of my organs." So,
is it possible for us to have AI
>> as personalized medicine? Yeah.
>> Yeah. Personalized medicine.
>> And the answer is yes. And here's why.
Because a lot of that is pattern
matching,
>> right? So what is your age? What is your
ethnicity? What is your DNA?
>> DNA. What is your family history? Do you
smoke? Do you drink? What do you eat?
What do you not eat? A lot of that are
very good predictors.
>> And if the AI understands you and it is
watching what you do, how much do you
exercise? Are you getting enough of
this? Are you getting too much of that?
It can start to say, "All right, it's
time for an ultrasound." Right now, will
it be perfect? Of course not. Will it be
better than the system we have now?
Almost certainly, because it's highly
personalized
>> And now, again, we have to think:
how do you do that in a way that's
privacy-preserving? Are you giving all
that information up to an AI company
that is going to feed it
to an insurance company that's going to
affect your insurance rates, because
you're at high risk cuz you had a cigar
that evening?
>> Right. I say that as somebody who just
had a cigar last night. So, you know, we
have to think about that. We
have to think about the guardrails,
>> like how do we protect individuals while
helping them because a lot of what
happened in the last 20 years is we
enabled things. We enabled phenomenal
technologies with this, but we also
created problems.
>> And so I think we just have to think
about how to do that in a way that is
respectful and and careful.
>> Man, this conversation
>> is one that, for me, is almost
like when I watched the movie The
Matrix. I stepped out of that movie
thinking, am I living in reality or am I
living in the Matrix? I know
>> this is a conversation that just
really left me with so much more that I
want to explore and understand and keep
track of.
>> And also I feel like
I need to go make a movie.
>> Yeah, that's right. That's what's next.
And invite me back, cuz I like talking to
you. No kidding. And a
year from now
>> I think the future five years from now
is going to look unbelievably different
than today. I think we are going through
something.
>> Yeah. Every once in a
while, when I'm in San Francisco and I
see these self-driving cars, I think,
"Oh my god, we're living in the future."
>> And there's a generation being born
today. This is their normal.
>> Their normal is this. This is it. I
mean, think about it: you and I grew up
without these devices in our house.
>> That's right. Yeah.
>> And think about how different that world
was just not that long ago.
>> Well, listen, you know, you probably
confused the audience members, cuz they
look at me and they're like, "How has
this teenager established himself?"
You know, this youthful
beauty I possess, right? But no, I'm
a Gen Xer. So,
>> That is the best generation, by the way. Just
saying.
>> Gen X, I'm telling you, man, you
know, we did it without parenting.
>> That's right. That's right. You know
what we did? We had Nova and PBS to
raise us.
>> We had Nova and PBS. Absolutely.
Absolutely. Hey, man. I really
appreciate you coming out. This has been
amazing. Thank you so much.
>> Thanks, Hakeem.
[Music]
[Applause]
[Music]
[Applause]
[Music]