Project Hail Mary Author Andy Weir Breaks Down AI "You’ll NEVER Watch Movies the Same"
ZrdVpioZ5dU • 2025-05-06
Kind: captions
Language: en
As a grounded sci-fi writer, how do you think AI is going to change society in the next five years, 25 years, and a hundred years?

In the next five years, I think we're looking at disruptions in certain industries. Right now, for instance, people are still working out how AI art is going to be perceived. A lot of people say, "AI art is bad because it samples from millions of pieces of art it sees online, and those creators don't get any credit. They don't get any royalties. They don't get any money." So there's a big moral issue related to that, and there will be that kind of fight. But I think in the end, the technology solution that's easier and cheaper for everyone is the one that always wins. Over the next five years, people will argue about whether or not it's okay to train AI on human-made work, and "Hey, what about me? Do I get credit for that?" Then, probably within the next five to ten years, that argument is just going to go away. I think people will accept that AI can train on human art the same way humans train on human art. Every artist out there learned their craft by looking at other artists' work; nobody learned it in a vacuum. Your brain is a neural network. Why do you get to do this, but an artificial neural network doesn't? Eventually, the objection is going to go away. And it's going to be extremely disruptive to the graphic arts industry, because now I don't need to hire an artist to draw something for me, as long as it's not something requiring precision or carrying legal ramifications. If I just want an image of a crowd cheering for my product, I can have a computer make it, and then I own that image. So as a company trying to make advertisements, it sounds a lot better to get those results in one second and pay $10 a month than to get them in two months and pay $5,000. One thing I'm sure you know as well as anyone, probably better, is that whatever makes things cheaper for businesses always ends up taking over. It always ends up being the way things go, and protectionism never works.
Now we're getting into economic theories, and I'm a sci-fi writer, so maybe I shouldn't get too deep into that, but I think of economics as a type of physics: you can imagine money flow as being kind of like energy. It always tries to go to the lowest energy state. So, one way or another, the economically wise thing ends up being the socially acceptable thing, and that's been true throughout all of history. So even though a lot of artists will get kind of screwed in the short term, I think eventually people are going to stop talking about it and the issue is going to go away. Some artists will do human-made art for the purpose of being artistic, and others, the new artists, will be the people who use the AIs to refine and make better-looking images than a layman can. That's what I always say: any disruptive technology that puts jobs and careers out of business also creates jobs and careers. Artists might lose their jobs, but they're the ones who already have the built-in skill set to understand what looks good and to say, "No, no, no, I'm going to tell the AI to do this instead. Okay, I'm going to give it these little tweaks." They'll become the super users, the people who know how to use an AI to do this stuff. It's in the same vein as Photoshop: you can do all sorts of amazing things with it, but you have to have the skills; you have to know how to do them. So I think that's what we're going to see in the next five years: a transition through all of that. In the next 25 years, I think we're going to see big, tumultuous changes in the entire entertainment industry.
My prediction is that the concept of event-based entertainment is not going to go away, but it's going to go the way of horsemanship: from something everybody did to something that's a niche interest. Right now there's the notion of, "Oh, Avengers: Endgame is coming out on such-and-such a date. We're all going to see it. It's going to make billions of dollars, everybody's going to love it, everybody's going to be talking about it: 'What did you think about this scene, that scene?'" It's an event that affects millions of people; not a tumultuous event, just an entertainment event. But I think those days are going to go away. What's going to happen eventually, and we're talking in the 25-year time frame now, is that we'll have AIs that can write stories as well as people can. I think that's pretty straightforward: an AI can read a billion books and say, "Okay, I understand how stories get put together, and I can write a story that does this or that or the other thing."
Then you're also going to get AIs that learn what you are all about. You'll have an AI assistant on your computer, and in the same way that your search engine knows what products you're interested in, this AI will know what you like and don't like in entertainment: what you think, what your opinions are, political, ideological, personal, what things you find cool and interesting and what things you find lame and boring, based on your viewing history. It will be able to write a story that you personally will think is awesome. Maybe the guy sitting next to you would think it's lame. Maybe everyone else in the world would say, "This is a dumb story; I don't understand why anybody would like this." But it doesn't matter: this thing makes the story for you. And then it can also create it as a film. It can create all the imagery, it can create the animation, it can do everything. So imagine whatever your favorite movie in the world is, and imagine you tell your AI, "I want to see a sequel to that movie." And maybe your favorite movie in the world is one that was AI-generated, about purple bunnies on the planet Zorbback or whatever, and everybody says, "Why do you even like that?" And you say, "I don't know, I just think it's cool the way the taller purple bunny manipulates the whatever." So eventually, I think entertainment will become a very personalized experience: you will be watching a movie that was made for you and for no other consumer.
Do you think there will be societal consequences to having a lack of shared narrative?

Well, people will still communicate about the real world, but as for the lack of shared narrative, I think what it'll actually do is remove narrative control from the hands of a few. Right now, the people who make the movies, the people who create the entertainment, can attempt to guide your worldview. I think we're seeing a lot of that right now, and I try never to do that, but ideology and messaging get put into storytelling. I think it's become an issue in the industry lately that messaging is taking a front seat ahead of entertainment, plot development, and so on. I try never to have any political messaging in my stuff. But regardless, it is definitely there, and it means you have a small, cloistered group of people who live in their own political bubble, and they get to determine the messaging happening across the board in both the television and film industries. That's going to go away when it becomes completely democratized. We've seen this to a smaller extent with news media. There used to be just three news networks: you could watch ABC, CBS, or NBC. When I was a kid, that was all the news you could watch. But now there are specialist news channels that cater to the far right, far left, middle right, middle left, whatever, and everybody gravitates toward the channels that keep them in their own bubble. That's not yet the case with entertainment. You still have one kind of monolithic entity, mostly Hollywood, where people tend to be fairly like-minded. There's a little disagreement, but they tend to skew left. It happens to be left now; in the '50s, they skewed very far to the right. So whatever Hollywood feels is what you see in all the entertainment that gets created and validated. And that'll just go away: there won't be an arbiter of messaging in entertainment anymore.
And when you think about that, do you have any anxiety over the fact that there won't be this top-down, coordinated "this is our value system, everybody, and if you're not on board with it, there's a problem"? Do you see any downside to that breaking down?

No, not at all. I think it's great. I'm sort of an evangelist for this, but the way I write and the way I consume entertainment is: I just want to be entertained. I don't want to be preached at. I don't want to be told what my morals should be. I don't want to be made to think about anything, unless that's the sort of thing I'm going out of my way to watch. I watch entertainment to have fun, to enjoy myself. If I want the entertainment equivalent of fast food, then that's what I want. If I go to McDonald's, I don't want them to say, "Here's your Big Mac, and here's your broccoli." "Well, I didn't want broccoli." "Well, we went ahead and mixed the broccoli in with your Big Mac because it's better for you." I don't want what's better for me; I want what I'm going to enjoy. Stop trying to make things better for me. I came here with a purpose, and I want to eat a Big Mac. So I think it's great. Entertainment should, in my opinion, be about being entertained. In the end, this is a leisure-time activity that you're doing for fun, watching a movie, reading a book, whatever, and you should get to be the one who decides how much messaging there is, if any. If you really want it, you can ask your AI, "I want an action movie, but I want it to have social overtones about the wealth divide. Please include that." Then it will.
So when I look at that, one of the things I think a lot about is a guy named James Burnham, who wrote a book called The Machiavellians. In the book he puts a lot of things forward, but one of the most important is that the only way to get a large group of people to work together in a flexible manner is to have a shared narrative, so that everybody understands what we as a tribe believe and where we are pointed, and that there's something in the architecture of the human mind that wants to follow somebody. So I think through what the world looks like, because I think you're right that this is all going to be individualized: everybody's going to be engaging with everything at the level of "this is how I want it." This is how I want my news; I want my news to skew left, or I want it to skew right. I want the things I already believe echoed back to me. What you end up with is a massive spectrum: rather than people falling into easy camps, you have people all across it. And so I can't help but think: for all of human history we have looked up to leaders, and those leaders, through story and political maneuvering, always gave us a direction to move in. When we wanted opposition, it was very controlled opposition, the left versus the right, a very simplified notion. There's an inevitable scattering of humans, because I think they will still want to tribe up, and as the tribes become increasingly niche, how do we move forward effectively? I think there's going to be an element of chaos that comes from people not having a shared narrative. Now, I'm with you: I don't want it to be forced. But I do see an inevitable cultural fracturing happening, due to the way AI will be so singular in its message delivery.
Interesting. I kind of disagree with you a little on that. Or rather, I don't disagree with you; I guess I disagree with that book a bit. If you go back to the era before mass communication, we have always had nations. We've always had countries. Take World War I: we were this vast country, the United States, 3,000 miles across, and the fastest form of communication was a letter delivered by train. And we were still a cohesive nation with core ideologies that held us together. It didn't need daily reinforcement of narrative control from a centralized source; there wasn't one.

It's interesting. I would say that would make it easier, because whatever became the dominant story being passed on, which was traditionally via religion... religion gave you the oversimplified story that everybody could get behind: "We, around these parts, are Christian," or Catholic, or Muslim, whatever. You had a sort of ultimate self-help book that gave you an instruction manual for life, and it got passed on through the churches. And there was nobody who could fight for an alternative opinion: the odds that you even heard about an alternative way to view life, an alternative value stack, were next to nil, for the reasons you laid out. So yeah, again, both of us are obviously just prognosticating, but knowing what I know about the way the human mind works, it seems inevitable that there will be some sort of second- and third-order consequence to not having these really tight shared narratives. And I'll tie that to this moment: part of why people say we live in a post-truth moment is that once you get outside of physics, it is extremely hard to define what is objectively true. Most of it is just human interpretation. So if most of life is human interpretation, and that interpretation of what life is, what it means, and what one ought to pursue becomes so individualized, it feels like something weird, or at a minimum something unexpected, is going to happen from breaking apart into these individual narratives.
Yeah, I can see what you're saying, but I still have to go back and say that we've lived in that world before. And yes, it's true there was an overarching ideology and belief system; Christianity was prevalent all throughout the United States in the 1800s and early 1900s. But people in Maine didn't have a lot of interaction with people in California. They didn't interact on a daily or hourly basis like we do now. Basically those were two almost completely isolated societies. I think part of being a nation back then was a smaller list of core ideologies, and for the United States they were all codified in the Constitution. That's the one thing we say: okay, that document there, that's how we do things here in this whole big country. So you're right, there's that central thing, but it's static. It's not constantly shifting. When you have a group of people in charge of a narrative, and those people can change it any way they like, that's when I think the concept of central narrative control becomes disruptive for society, because you have a core group of people who can suddenly change morality. We saw this a lot with... well, I don't know what people are going to call this era in entertainment. Some people use the word "woke" a lot; some people hate it if you say the word "woke." But it's going to get some kind of name, like "the woke era," and people are going to study it in the future, because it's similar to McCarthyism. Things suddenly changed to the point where something you said ten years ago wouldn't have upset or affected anybody, and now if you say that exact same sentence, your entire career will be over and your life will be ruined. Social change at that pace only happens when you have a small core group of people controlling the narrative, I believe, and when those people either suddenly change their minds or are suddenly replaced by new people. That can't happen when you democratize ideology across everybody, across each individual computer. Your own computer, rather than YouTube or the movie industry, is no longer feeding you a narrative. Instead, it's just: "Okay, you wanted a movie that was at least 50% car chases. There you go."

All right, so let me break down and
make sure that I understand what you're
saying. The problem historically has been that people can control the narrative, and that kind of top-down control is deeply problematic: you can sway morality. And because people were not aware of what was going on elsewhere, they fell into a bit of a trap: once somebody gets hold of that narrative, it's all they know, and so they succumb to it. Now, as AI comes on board, we're seeing sort of the end of that: people are going to be unshackled from top-down narrative control and will have freedom of choice in how they structure the narrative under which they live. And you're not seeing any negative second- and third-order consequences that come from that?

I wouldn't say I'm not seeing any negatives. I'm just saying that, as far as I've taken it in my mind right this second, I don't see that it's bad to have the central narrative structure toppled. If you want to talk about second- and third-order effects: with the democratization of communication, thanks to the internet, we already have people able to self-sort into niches of ideology and belief. It used to be that if you didn't believe the moon landing happened, you'd be the only guy for ten city blocks with that opinion, and people would say you're crazy. But now you can find all the other people who think that way, and you can all hang out together, talk to each other, and reinforce your belief system. You could not do that at any earlier time in history: the town lunatics couldn't form a town-lunatics chat group with one lunatic per town. It didn't work that way. Now they can. So there's the downside: the democratization of information means people can form a bubble where it turns out their core assumptions are objectively wrong, but you can't tell them otherwise.
Do you think AI speeds that up or slows it down?

I think it speeds it up, because people are never going to tell a subordinate to challenge their beliefs. It's very rare. For instance, with movies: if you know a movie is pushing an ideology you disagree with, you're probably not going to watch it at all. People don't deliberately choose to have their ideology challenged. So if you can tell your computer, "Hey, I want to watch a movie. It's an action movie. I want gunfights and good guys and bad guys, and maybe somebody blows up a base," you're not going to add, "Oh, by the way, I'm pro-choice, and I want this movie to have a strong pro-life message." You're just not going to do that.
Yeah, no, I totally agree. I think it is going to exacerbate that. The thing I have a hard time wrapping my head around is just how far that goes. How fragmented do we become? How many tribes do we break into? But all right, staying in the 25-year range: as an author who has done an absolutely profound job, and for people who have not read your books, I am screaming from the rooftops: please, for the love of God, if you like sci-fi, read Andy Weir's books. They're some of my all-time favorite sci-fi. You're very good at grounding things in what is real. What do you see happening in, say, materials science over the next 25 years? And if you can encapsulate that in what your base assumptions are about what AI will be able to achieve, that would be really helpful.

The better AI gets at making physical models, the better off we're all going to be. They've already got, what is it, AlphaFold. It used to be that we only knew how a few proteins folded. Now we know something like 200 million of them, because AlphaFold can just solve it: it's an AI made specifically for figuring out how a protein is going to fold.
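The pattern being described here, an AI collapsing an enormous space of candidates down to a few plausible answers, can be sketched in a toy way. Everything below is hypothetical (the scoring function, the candidate alphabet, the target); it is a generic propose-score-keep illustration, not how AlphaFold actually works:

```python
import random

# Toy "narrow a huge candidate space" sketch: generate many random RNA-like
# strings, score each against a hypothetical target, and keep only the best.
ALPHABET = "ACGU"
TARGET = "ACGUACGU"  # hypothetical "ideal" sequence (stand-in for a desired shape)

def score(candidate: str) -> int:
    """Hypothetical fitness: number of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def narrow(pool_size: int, keep: int, seed: int = 0) -> list[str]:
    """Generate pool_size random candidates; return the `keep` best-scoring."""
    rng = random.Random(seed)
    pool = ["".join(rng.choice(ALPHABET) for _ in TARGET) for _ in range(pool_size)]
    return sorted(pool, key=score, reverse=True)[:keep]

best = narrow(pool_size=10_000, keep=3)
```

Real systems replace the random pool and the toy score with learned models, but the shape of the computation, propose widely, score everything, keep the best, is the same.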
Here are some ideas I have for things we might see AI do in the next 25 years. First off, imagine you say, "Okay, here's a new virus that's going around," call it COVID-32 or whatever. "We've sequenced its RNA genome. Okay, AI, what do we do about this?" And the AI says, "Well, first off, I know the shape the virus is going to take, because you told me the sequence, so I know how it goes together. Now I can model it interacting with human cells, and then I can say: here's the antigen that would take care of the problem, the one your body would make over time anyway. Here's how to manufacture it. Here's the answer." It would be ridiculous: "Oh, here's a new disease," and an hour later the computer says, "Here's the vaccine. Just put this genetic sequence together, throw it in some E. coli so it'll mass-produce it for you, and there you go. Let me know if you need anything else."

I also think, and 25 years might be a little optimistic for this, maybe somewhere between 25 and 100 years, we'll get to the point where they say, "Okay, AI, Bob here has cancer. It's an aggressive form of small-cell carcinoma from his lungs. We've taken one of the cancer cells and sequenced the entire DNA genome of that one cell with lab equipment." And the AI goes, "Okay, here's the cancer cell, here's how it's working, here's what would disrupt it, here's how it differs from healthy lung tissue. I've designed a variant of the influenza virus that attacks lung tissue cells, but only the cancerous version. Here you go: inject this into the patient, suppress his immune system so it doesn't kill the virus, and the virus will kill all the cancer cells, and only the cancer cells." This is the sort of stuff you can expect AI to be able to do, because when there's a cloud of seemingly infinite possible solutions, AI is very, very good at narrowing it down to tangible, real solutions. We'll get back to the show in
a moment. But first, let's talk about the one thing every founder, operator, and optimizer needs: clarity. NetSuite gives you one dashboard, one system, one source of truth, so you stop reacting and start anticipating. Over 41,000 businesses have already future-proofed their operations with NetSuite by Oracle, the number-one cloud ERP that brings accounting, financials, inventory, HR, and more into a single unified platform. With real-time insights and forecasting, you're not just tracking the past, you're predicting what's next. And when you're closing the books in days, not weeks, that's time you get back to actually run the business. Whether you're doing millions or hundreds of millions in revenue, NetSuite gives you the visibility and control to move fast and win bigger. Speaking of opportunity, download the CFO's guide to AI and machine learning at netsuite.com/theory. The guide is free at netsuite.com/theory. This is a paid advertisement. And now, let's get back to the show.

Yeah, yeah, now you're
getting into stuff that gets me super excited. One: you talked about AI being able to build physical models. Do you think there's a rate limiter on the amount of intelligence AI can gain? And if not, do you believe AI will ever be able to understand physics to the point where it can make novel breakthroughs in physics?

Oh, absolutely. I think it absolutely can. I mean, will be able to. You've got to remember: people think human brains are somehow magical, handed to us by the Lord, but you're just a neural network. So anything you can do is something a neural network can do, by definition. The real question I think you're getting at is: at what point will we have AIs that are comparable in complexity and intellect to a human brain? Well, an adult human brain has about 80 billion neurons, and a few billion years of evolution figuring out exactly how to connect them optimally. Granted, the vast majority of your brain is devoted to things like figuring out how not to be eaten by wolves, which an AI doesn't have to worry about as much. But the point is, there is nothing a human brain can do that an AI won't eventually be able to do, because a human brain is just a neural network, literally; that's it. And so is an AI.

Okay.
Do you think we will just need to continue to scale the clusters, plus increase efficiency, and we'll hit artificial superintelligence?

We don't have the technology to do that right now, but I think AI will help us make that technology. We're at our alpha levels of AI, so it'll start off with humans trying to figure it out: "Okay, what would be cool is if we could make this smaller, use less energy, be more efficient, and here are a hundred billion possible ways it might work." You were talking about materials science; that's a big part of it. Then you use AI to narrow it down, and with AI's help you end up figuring out, "Oh, I bet I could make a better AI this way." Then you make that better AI, and then you tell that better AI, "Hey, start working on making an even better AI." "Okay, I'm on it." So it can bootstrap itself up, which is something unique. It's not something that happened in nature. Or did it? Because our brains are neural networks, and we're sitting around trying to figure out how to make better neural networks. So you could say the singularity already began a few million years ago, when human minds started to become vastly superior to all the other animals on the planet. You could say that was the beginning of the singularity; it just took us a little while to get to this next step, where this neural network is working on new neural networks that are better than it is. It's kind of like how life evolved on Earth about four billion years ago, but it was only around two billion years ago that we had anything more complicated than a single cell. There's a dead period for a while before you get that exponential spike. I see all
sorts of big benefits coming in the hundred-year span. Is that where you were going next? Sorry.

Well, first, before we get to that, but yes, I very much want to hear your take on that: what is the rate limiter you see right now that you think AI is going to have to help us overcome? Because you said you don't know that we have the technology to do it now; we're going to need AI for that. Are you already aware of where we're going to hit a ceiling?

Yeah, I think it'll have to do with computational power: the ability to run massively parallel neural networks. AI is still very fresh. You know how, when they first invented really high-resolution graphics, really good stuff on your monitor, they did everything algorithmically? Then they figured out how to make graphics cards. Then they figured out how to make graphics cards wildly parallel, because they figured out how to make it so that every pixel is basically its own little computer, running pixel-shader algorithms. And then people started using graphics cards to do all sorts of weird things unrelated to graphics, like trying to mine Bitcoin, because of the massively parallel nature of graphics cards. I think the next thing that's going to happen is that they'll start inventing hardware optimized for running neural networks. We already have some of that, but it's very specially made, in labs and such. Eventually, though, we're going to have the graphics-card equivalent for neural networks. It'll be, "Okay, here's your neural-network card," your AI card, with just the hardware necessary to really quickly and efficiently run massive numbers of parallel nodes of an AI. I think that's one of the limiters we have right now. So you can think of us as being like video games before we had graphics cards: we're in our Duke Nukem 3D phase, where absolutely every pixel had to be calculated by the CPU instead of a graphics card. It's going to take more tailored technology to run this stuff more efficiently.
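The pixel-shader point, that every pixel runs the same small program independently of every other pixel, which is what makes massive parallelism possible, can be shown in miniature. The gradient function here is made up for illustration; a real shader would run on GPU hardware, not in a Python loop:

```python
# Each pixel's value depends only on its own (x, y), never on other pixels.
# That independence is exactly why a GPU can compute thousands of them at once.
WIDTH, HEIGHT = 4, 3

def shade(x: int, y: int) -> int:
    """Hypothetical per-pixel program: a simple diagonal brightness gradient."""
    return (x * 255 // (WIDTH - 1) + y * 255 // (HEIGHT - 1)) // 2

# A CPU does this as a serial loop; a GPU runs shade() for every pixel in parallel.
image = [[shade(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
```

Because shade() never reads another pixel's result, the loop order doesn't matter, and that is the property parallel hardware exploits.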
Okay, that's a fair assumption. Now, admittedly, I am not close enough to the technology to know if what I'm about to say is pure delusion. But look at the recent demonstration Elon put together with their supercluster. People thought it wasn't going to be easy to keep making these bigger, or that you wouldn't get any benefit out of making them bigger. Then he effectively doubled the size, and is now doubling it again, showing that you really can continue to daisy-chain these. So supposedly big breakthroughs are coming there just on the things we already understand. Then on top of that, you've got DeepSeek coming in and saying, "You guys are playing the wrong game. This is a game about compression and efficiency; look what we've been able to do just by improving the compression." And literally just a couple of days ago... so they came out around January 16th or something and blew everybody away. Everybody was freaked out at how inexpensive, certainly, the final leg of their training was, and said, "Okay, it's a game of efficiency, and they're playing it better than we're playing it in the US." But now they've come out with another one and re-upped the level of efficiency they're able to get. According to certain benchmarks, they can hit ChatGPT-4o levels on a 1.5-billion-parameter model, which is crazy given that the big models are 70 billion-plus parameters.
Uh so to be able to on a benchmark,
which again is different, but on a
benchmark to still be able to match the
performance of something so many times
bigger, um do you think we'll continue
to pull down the level of computation
that's necessary by developing more
efficient algorithms? Uh yes, but not at
the absurd rate we're seeing right now.
We are at the very very beginning of
this new technology. And right at the
beginning of any new technology, you see
this tremendous spike in efficiency,
cost effectiveness, all these things
like that. I want you to consider how
much the aviation industry changed
between 1935 and
1965. In 1935, you had propeller-driven planes and very, very small amounts of commercial air travel; just not a lot going on. And by 1965, you had jets that could take you from New York to London, and it was, like, routine and even boring. In the early days of the aviation industry, we had just wildly tremendous advances, such that you might think, "Holy crap, this is moving so fast." I mean, we went from figuring out how to do powered flight to landing on the moon in 66 years. It's crazy, right? But since then, it hasn't changed that much, because what happened was we got all the low-hanging fruit. It's like, "Wow, here's all the things you can do." Okay, yeah, we figured all that out. Now it's all about, like, okay, how can we use carbon fiber to make the hulls a little lighter? How do we make the engines more fuel efficient? It's like there's some asymptote that represents solving air travel, and we're always approaching it now, but man, right at the beginning, it's crazy. AI, I think, like any technology, is going to be the same. We're right at the beginning. So we're going to see these "Oh yeah, mine's twice as strong as yours." "Oh yeah, well, mine's twice as strong as that one." "Oh, mine's twice as strong as that one." Eventually, it's going to be like, "Oh, okay. Now we're just fighting over minor scraps." But I think that's good. We're rapidly, rapidly getting rid of all the low-hanging fruit we can before we get to the more difficult aspects of AI stuff. Earlier, you mentioned something
that we flew by and didn't talk about again: the idea of how AI can affect materials technology. I'm pretty excited by that, because I think materials technology is the solution to a lot of issues.
Um, most notably for my favorite things,
space travel. The most efficient possible spacecraft fuel is just hydrogen and oxygen, the simplest possible thing. It has a tremendous amount of specific impulse; there's a huge amount of heat and force generated just by burning hydrogen and oxygen. It's one of the simplest reactions there are. And we have a lot of it: all you have to do is use electricity on water and you get hydrogen and oxygen. Then you let the rocket put it back together, really, really fast and hard, right? So it gives us a method by which we can spend energy that we create however we like on Earth and ultimately turn that into propulsion on a rocket. Okay,
it's great. So, why aren't we doing
that? Well, we are. For the most part, we are. They're always, like, a variant of the hydrogen-oxygen reaction. But hydrogen and oxygen, if you just let it go with no limiter, burns so hot that it'll melt any engine. Like, it'll melt whatever it's in. It just gets so damn hot. They have to deliberately calm it down, put other things in there, maybe things that it can kick out the back to add a little more kick of propulsion, but they don't let it go wild, right? Because if you did, they can't dissipate the heat away. They just can't get rid of it fast enough. They can't cool the engine enough; it'll just melt everything that we have. Now,
imagine if you developed a material that was hard, that could stand up to a lot of force and a lot of shock, and wouldn't melt, or at least not at those temperatures. Then commercial space travel is just, like, invented that day. Literally, if you invent that material, then within two or three years you will have tickets to low Earth orbit for a middle-class person. I strongly believe that. Yeah. So, materials technology... so many things just come down to materials technology.
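For a sense of the numbers behind the hydrogen/oxygen cycle he describes: specific impulse is the standard figure of merit, and hydrolox engines are commonly cited at around 450 seconds in vacuum. The 450-second figure and the mass ratio below are illustrative assumptions, not from the conversation; this is just a back-of-the-envelope sketch:

```python
import math

# Electrolyze water on Earth, burn the hydrogen and oxygen back
# together in a rocket engine. Specific impulse (Isp) measures how
# efficiently that reaction turns propellant mass into thrust.

G0 = 9.80665          # standard gravity, m/s^2
ISP_HYDROLOX = 450.0  # s; typical vacuum Isp cited for H2/O2 engines

# Specific impulse relates directly to effective exhaust velocity:
exhaust_velocity = ISP_HYDROLOX * G0  # ~4413 m/s
print(f"effective exhaust velocity ~ {exhaust_velocity:.0f} m/s")

def delta_v(mass_ratio):
    """Tsiolkovsky rocket equation: delta-v for a given full/empty mass ratio."""
    return exhaust_velocity * math.log(mass_ratio)

# A (hypothetical) mass ratio of 10 gives roughly orbital-class delta-v:
print(f"mass ratio 10 -> {delta_v(10):.0f} m/s of delta-v")
```

The reason the reaction is attractive is visible right in the first line: nothing chemical beats hydrolox's exhaust velocity by much, which is why the limiting factor becomes the engine materials he goes on to discuss.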
Yeah. No doubt. Uh, okay. So I assume you're watching SpaceX. Are they talking about that kind of thing? Because they've said, "Look, we've built all this without AI. Now imagine what we're going to be able to do with AI." Do you know, is that the kind of thing that they're pursuing, or is this not really on... I don't know if they're pursuing it, but I imagine this sort of materials technology wouldn't be invented by a purpose-driven company. It would be invented in a lab somewhere. It would be invented by materials scientists who then use AI to figure it out. Okay, well, let's see. I remember I went to JPL... yeah, JPL, and I did a tour, and a bunch of Caltech labs as well. I was there during the height of The Martian, when people cared who I was. There was one group... I wish I could name the doctors involved, but I can't remember even the name of the group. What they were doing was trying to find better superconductors, or better conductors in general, and they were doing it with this kind of brute-force approach, where it's just, "Okay, we're going to try all these combinations of these four elements in different proportions and stuff like that, and we're going to check the conductivity." But that takes a long time, mixing these things together and doing all this. I mean, we want to do millions of different variations and check them out. And what they've done is repurpose this old-school printer from, like, the 1970s that had this really robust inkjet mechanism, and they changed it such that it's shooting the powder of these metals down. So they're printing little dots of different proportions of these metals onto, like, a ceramic sheet or something like that, and they bake it so it mixes together. So you get these little dots of metal that they're basically printing, and then they have a probe go and test the conductivity of each one of those dots, and just see how it's doing. "Did we find this one's interesting? Okay, now keep going." I just thought that was amazing. But
imagine if you could virtualize that. Imagine if you could figure out atomic interactions: you know, what's going on inside of metals that are coming together to become better conductors, and all these alloys and stuff. What if you could simulate that with AI, and then you could tell the AI, "Okay, spend, I don't know, the next year trying these billion possible variations, and in your modeling, tell me which one has the best conductivity."
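The brute-force sweep he describes, in the lab or virtualized, is just an exhaustive search over compositions with a scoring function. A toy sketch, where the four elements and the conductivity model are entirely made up stand-ins (in reality the score would come from a physical measurement or a learned simulation):

```python
import itertools

# Sweep every composition of a few elements at a fixed resolution and
# score each one, keeping the best. This mirrors the inkjet-printed
# "dots" approach: each candidate mixture gets one conductivity reading.

ELEMENTS = ["Cu", "Ag", "Ni", "Sn"]  # hypothetical four-element system
STEP = 0.1                           # composition resolution (10% steps)

def fake_conductivity(fractions):
    # Placeholder model: rewards silver/copper-rich mixes. Illustrative
    # only; a real pipeline would call a measurement or simulator here.
    weights = {"Cu": 0.9, "Ag": 1.0, "Ni": 0.3, "Sn": 0.2}
    return sum(weights[e] * f for e, f in fractions.items())

best = None
steps = int(round(1 / STEP))
for combo in itertools.product(range(steps + 1), repeat=len(ELEMENTS)):
    if sum(combo) != steps:  # fractions must sum to 1
        continue
    mix = {e: c * STEP for e, c in zip(ELEMENTS, combo)}
    score = fake_conductivity(mix)
    if best is None or score > best[0]:
        best = (score, mix)

print(best)  # highest-scoring composition under the toy model
```

The "virtualized" version he imagines is this same loop with the scoring function replaced by an atomic-scale simulation, which is what makes trying a billion variations plausible.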
That, to me, is very interesting. Um,
when we were talking about protein folding, one of the things we didn't touch on is that AlphaFold can actually predict novel proteins and say, "Oh, make a protein that moves like this." And given what proteins do in the body, that is pretty phenomenal, because now you can get novel things to happen inside of a cell based on creating these novel proteins, seeing them do the same thing, like you said, in a simulation, so that you can move really fast and test a lot of these things. It would be very interesting to see what that outputs. Now, when you think about this for space travel, okay, that's one thing. When you start thinking about this inside of a biological system, does that raise any ethical concerns for you? Like, if I said, hey, I think in the next 25 years, and I actually do believe this, that you're going to have designer babies, or certainly the ability to design a child. Do you think at all about that? Do you worry about that? Is that something that you'd want to see some tight restraint put on, or is that an exciting part of the future for you?
I think that's exciting, because the first quote-unquote designer babies will be like, "Hey, my wife and I both carry the recessive Tay-Sachs gene. We'd like it if our baby didn't have that, because that means you die by age 10 or whatever," or, "My wife and I both carry the sickle cell anemia gene, and we'd like our child not to hit that one-in-four chance of sickle cell anemia death," you know. So those are going to be the first designer things: correcting invariably fatal genetic flaws. And nobody's really going to argue about that, right? Nobody's going to say, "No, no, no, you must make a baby that will suffer for five years and die," right? The question becomes... now we're talking morality, right? So there is no objective truth on this. But for me, the only real ethical concern is: are you sure you're not
going to introduce some other problem into this baby that's going to make their life painful or unhappy or unpleasant? Like, it's, "Hey, I want my baby to have blue eyes and dark hair, and maybe an olive complexion, and I want him to be really tall, like six foot tall when he's an adult," and, you know, da da da. And they're like, "Okay, we made all those changes. Unfortunately, he has Crohn's disease, because, yeah, we made some mistakes. We changed these things, and it turns out that gives him Crohn's disease." That's the ethical concern that I'm worried about.
So, I mean, a lot of people would disagree with what I just said. A lot of people would say, like, "No, if you're changing a human being at all, you're messing with God's domain and you're doing a morally bad thing." My personal opinion is that it's only morally bad if you cause human suffering. So, as long as you are sure that what you're doing isn't going to end up making a human that has to suffer as a result...
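As an aside, the one-in-four odds mentioned earlier for two carrier parents come straight from Mendelian inheritance of a recessive gene, which is simple enough to enumerate directly (a minimal sketch; "A"/"a" are the standard textbook allele labels, not anything from the conversation):

```python
from itertools import product

# Each carrier parent has one normal allele ("A") and one recessive
# allele ("a"), and passes one of the two at random. The child is
# affected only if both inherited copies are the recessive one.

parent = ["A", "a"]  # a carrier's two alleles

# Every equally likely combination of one allele from each parent:
outcomes = list(product(parent, parent))
affected = [pair for pair in outcomes if pair == ("a", "a")]

print(f"{len(affected)} of {len(outcomes)} outcomes affected")  # 1 of 4
```

The same enumeration shows two of the four outcomes are carriers like the parents, which is why recessive conditions persist in families that never see an affected child.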
I think if we're able to use AI to create a simulation of human biology, full stop, like it knows it top to bottom: all the different interactions, how protein folding works, how novel proteins work, all of that; it can read DNA perfectly, understands the epigenetics of it all as well, and just really, really has a full-blown picture of how this is going to work. Then we could begin to optimize, not the sick, but we could actually optimize a child for, say, higher intelligence. There was a big kerfuffle with a Chinese doctor that did gene editing, who claimed it was about reducing the likelihood of HIV, but people were like, "Huh, but it's also likely to make them more intelligent." Is that something you would
want to see? I want to know what the mistakes are. When you invent the plane, you invent the plane crash, right? So I would be very cautious with any sort of human-related experimentation, because I believe the most valuable thing on Earth is the human experience, right? And so I think that whatever you're going to do with your designer baby, as long as you're not causing human suffering, I'm probably okay with it. But I really want you to be sure that you're not going to cause that child to suffer, either as a baby or as an adult. All right.
Um, so yeah. And then you start getting into all these moral or ethical things of, like, well, how much right does a parent have over their child's body, and so on. You know, someone might be like, "I'm deaf, my wife is deaf, that's the lifestyle we've chosen," or, "That's the lifestyle we have." There are deaf activists, you know, and some people might say, "I want our baby to be born deaf." And then you say, like, "Well, hang on." So now you're talking about deliberately giving a disability to a child. But the deaf activists would say it's not a disability, it's a lifestyle choice; how is it any different than circumcision, you know? So that's where you start getting into those morally gray areas. And I'm not interested in arguing about those, because I'm far more interested in the science. But those are the arguments that people will be having. So that's my prediction for the future. Um, a couple other things
when it comes to designer genes. You were talking about novel proteins. Well, imagine... I don't think we're too far away from novel proteins being able to go modify your DNA. Like, let's say you are a 40-year-old man and you have, I don't know, some genetic problem, right? Then maybe they could make a novel protein that can literally go in and change the DNA of every cell in your body. Like, it just goes in, and all this protein does is enter the cell, make that change, and then die, you know? And what if you could just no longer have the Tay-Sachs gene? You could no longer have anemia. You could... whatever. I'm coming up blank on genetic disorders, but you see what I'm saying? What if you could actually solve that? Then we
get into things of, like, okay, awesome. What about cosmetics? "I want to be black. I think it'd be cool. I think black skin is beautiful, and I want my skin to be black. So I can inject myself with this novel protein that will go change the melanin production in my skin, and I will become as black as a natural African man." A lot of people would really get upset about that. And I'm like, why? This is my body. Who are you to tell me what I can do with it? Who are you to tell me what I can and can't look like? I'm not even making a decision for a child here. This is me making a decision for me, you know? So,
there's an interesting argument that'll
come up someday. Cosmetic ethnicity, I
think, is an interesting argument
that's going to happen in the future.
And then we're going to see the concept
of identity politics just go away
because identity politics has no meaning
if you can change the identity that
you're in. The whole point of identity
politics is you're locked into an
identity, so you can't change it. That's
why we have political ideologies wrapped
around identity. But if you can just
change your identity, then nobody cares
anymore. So that's an interesting one.
Here's another one I've thought of that I think would be more disruptive: nobody likes to be fat. How would you like it if you could just get a shot, and it modifies your DNA or changes your body in some way such that, after a certain amount of processed calories, it'll just stop digesting food and pass it through? So you can eat whatever you want, and you will stay at your optimal weight. You know, you'll stay at your healthy weight. Okay, at first it seems like, oh, that's great. I'm going to stay healthy. Everybody who does this is going to stay healthy. But then you're like, okay, but as a society, we would be consuming way more calories than we need to. There's still starvation in other parts of the world, and we're going out of our way to just deliberately waste food energy? Like, "I think I'll have four cheeseburgers for dinner tonight. Make it five. I'm hungry. You know what? I like eating. I'm going to jab myself with something that makes me hungrier." You might end up with this incredibly wasteful society of people who are perfectly healthy, while other people are starving and we're eating all the food, you know. So these are kind of the sorts of things that biomedicine enabled by AI might lead to.
That is fascinating. Cosmetic ethnicity.
Uh, that is one that never made my radar. That's utterly fascinating.
And I think that whatever people can do, they will do. So, regardless of the ethics, you might be able to postpone it or whatever. But if we can edit genes, people are going to do it on a long enough timeline; that, I assure you.
Uh, so then it'll get weird. People will be like, "Oh, we found a sequence of genes that'll make your skin blue." And people are like, "Oh, I want to be blue. The new thing is being blue," you know. Guaranteed. There are already people injecting essentially dye into their eyes to make, like, their entire eye black; not just the pupil, but even the whites of their eyes. Yep. There are people that are altering the color of their irises. So you can get, like, "Oh, I want crystal-clear blue eyes." You can go get that surgery done right now, today. Uh, so that's really going to be interesting.
Now, going back to... people are going to edit, but I think they will largely do it in response to something. And I think one of the somethings that's going to drive people to want to edit the human genome is to be in a race with AI for ability. And if there is no upper bound and AI is able to achieve superintelligence... a stat I like to remind my audience of is that Einstein was 2.4 times smarter, by IQ, than a definitional [ __ ], who's, like, 82 or 83 points, something like that. And obviously the gap between the results given to the world by Einstein versus somebody who's definitionally a [ __ ] is vast. And so if that's only 2.4x, it seems self-evident to me that, given enough years, and I'll certainly say within 25 years, I cannot fathom a universe in which AI is not 10 times or more smarter than the average person. So now we're getting into a world where artificial intelligence absolutely dwarfs human intelligence. And I know that some people, myself included, are not just going to take that sitting down. And if there is a safe tec