Transcript
jvqFAi7vkBc • Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/lexfridman/.shards/text-0001.zst#text/0772_jvqFAi7vkBc.txt
Kind: captions
Language: en
I think compute is going to be the
currency of the future I think it will
be maybe the most precious commodity in
the world I expect that by the end of
this
decade
and
possibly somewhat sooner than that we
will have quite capable systems that we
look at and say wow that's really
remarkable the road to AGI should be a
giant power struggle I expect that to be
the case whoever builds AGI first gets a
lot of power
do you trust yourself with that much
power the following is a conversation
with Sam Altman his second time in the
podcast he is the CEO of OpenAI the
company behind GPT-4 ChatGPT Sora and
perhaps one day the very company that
will build
AGI this is the Lex Fridman podcast to
support it please check out our sponsors
in the description and now dear
friends here's Sam
Altman take me through the OpenAI board
Saga that started on Thursday November
16th maybe Friday November 17th for you
that was definitely the most painful
professional experience of my
life
and chaotic and
shameful and
upsetting and a bunch of other negative
things uh there were great things about
it too and I wish I wish it had not
been in such an adrenaline rush that I
wasn't able to stop and appreciate them
at the time but
um I came across this old tweet of mine
or this tweet of mine from that time
period which was like it was like you
know kind of going to your own eulogy
watching people say all these great
things about you and uh just like
unbelievable support from people I love
and care about
uh that was really nice um that whole
weekend I I kind of like felt with one
big exception I I felt like a great deal
of
love and very little
hate
um even though it felt like I just I
have no idea what's happening and what's
going to happen here and this feels
really bad and there were definitely
times I thought it was going to be like
one of the worst things to ever happen
for AI safety
well I also think I'm happy that it
happened relatively early I thought at
some point between when OpenAI
started and when we created AGI there
was going to be something crazy and
explosive that happened but there may be
more crazy and explosive things still to
happen um it
still I think helped us build up some
resilience and be ready
for more challenges in the future but
the thing you had a sense that you would
experience is some kind of power
struggle the road to AGI should be a
giant power struggle like the world
should well not should I expect
that to be the case and so you have to
go through that like you said iterate as
often as
possible uh in figuring out how to have
a board structure how to have
organization how to have um the kind of
people that you're working with how to
communicate all that in order to uh
deescalate the power struggle as much as
possible yeah pacify it but at this
point it
feels you know like something that was
in the past that was really unpleasant
and really difficult and painful but
we're back to work and things are so
busy and so intense that I don't spend a
lot of time thinking about it there was
a time after uh there was like this
Fugue State um for kind of like the
month after maybe 45 days after that
was I was just sort of like drifting
through the days I was so out of it um I
was feeling so down uh just on a
personal psychological level yeah really
painful um and hard to like have to keep
running OpenAI in the middle of that
I just wanted to like crawl into to a
cave and kind of recover for a while but
you know now it's like we're just back
to working on the
mission well it's still useful to go
back there and
reflect on board structures on power
dynamics on how companies are run the
tension between research and product
development and money and all this kind
of stuff so that whoever has a very high
potential of building AGI would do so in
a slightly more organized less dramatic
way yeah in the future so there's value
there to go both the personal
psychological aspects of you as a leader
and also just the the board structure
and all this kind of messy stuff
definitely learned a lot about
um structure and incentives and um what
we need out of a a board um and I think
that is it is valuable that this
happened now in some sense um I think
this is probably not like the last high
stress moment of OpenAI but it was
quite a high stress moment like the company
very nearly got destroyed and we think a
lot
about many of the other things we've got
to get right for AGI but thinking about
uh how to build a resilient org and how
to build a structure that will stand up
to like a lot of pressure in the world
which I expect more and more as we get
closer I think that's super important do
you have a sense of how deep and
rigorous the deliberation process by the
board was like can you shine some light
on just human dynamics involved in
situations like this was it just a few
conversations and all of a sudden it
escalates and why don't we fire Sam kind
of thing I
think the board members were and are
well-meaning people on the whole um
and I believe
that in stressful situations
um where people feel time pressure or
whatever
uh people understandably make
suboptimal decisions and I think one of
the challenges for open AI will be we're
going to have to have a board and a team
uh that are good at operating under
pressure do you think the board
had too much power I think boards are
supposed to have a lot of power um but
one of the things that we did see is in
in most corporate structures boards are
usually answerable to shareholders you
know there's sometimes people have like
super voting shares or whatever um in
this case and I think one of the things
with our structure that we maybe should
have thought about more than we did is
that the board of a nonprofit has unless
you put other rules in place like
quite a lot of power they don't
really answer to anyone but themselves
and there's ways in which that's good
but what we'd really like is for the
board of OpenAI to like answer to the
world as a whole as much as that's a
practical thing so there's a new board
announced yeah there's I
guess uh a new smaller board at first
and now there's a new final board not a
final board yet we've added some we'll
add more okay
what is fixed in the new one that was
perhaps
broken in the previous one the old board
sort of got smaller uh over the course
of about a year it was nine and then it
went down to six and then we couldn't
agree on who to add and the board also
uh I think didn't have a lot of
experienced board members and a lot of
the new board members at OpenAI just
have more experience as board members um
I think that'll help some of the people
added to the board have been
criticized I heard a lot of people
criticizing the addition of Larry
Summers for example what what's the
process of selecting the board like
what's involved in that so Brett and
Larry were kind of uh decided In the
Heat of the Moment over this like very
tense weekend and I mean
that weekend was like a real roller
coaster a lot of ups
and downs um and we were trying to agree
on
new board members that both sort of the
executive team here and the old board
members felt would be reasonable um
Larry was actually one of their
suggestions the old board members um
Brett I think I had suggested even previous
to that weekend but he was you
know busy and didn't want to do it and
then we really needed help and he would um we
talked about a lot of other people too
uh but that
was I felt like if I was going to come
back uh I needed new board members um I
didn't think I could work with the old
board again in the same configuration
although we then decided uh and I'm
grateful that Adam would stay um but we
wanted to get to uh we considered
various configurations decided we wanted
to get to a board of three and uh had to
find two new board members over the
course of sort of a short period of time
so those were decided honestly without
uh you know that's like you kind of do
that on the battlefield you don't have
time to design a rigorous process then
um for new board members since then and new
board members we'll add going forward um
we have some criteria uh that we think
are important for the board to have
different expertise that we want the
board to have um unlike hiring an
executive where you need them to do one
role well the Board needs to do a whole
role of kind of governance and
thoughtfulness uh well and so one thing
that Brett says which I really like is
that you know we want to hire board
members in slates not as individuals one
at a time and uh you know thinking about
a group of people that will
bring nonprofit expertise expertise at
running companies sort of good legal and
governance expertise uh that's kind of
what we've tried to optimize for so is
technical Savvy important for the
individual board members not for every
board member but for certainly some you
need that that's part of what the Board
needs to do so I mean the interesting
thing that people don't understand about
OpenAI I certainly don't is like all the
details of running the business when
they think about the board given the
drama they think about you they think
about
like if you reach AGI or you reach some
of these incredibly impactful products
and you build them and deploy them
what's the conversation with the board
like and they kind of think all right
what's the right Squad to have in that
kind of situation to deliberate look I
think you definitely need some technical
experts there and then you need some
people who are
like how can we deploy this in
a way that will help people in the world
the most and people who have a very
different perspective you know I think a
mistake that you or I might make is to
think that only the technical
understanding matters and that's
definitely part of the conversation you
want that board to have but there's a
lot more about how that's going to just
like impact society and people's lives
that you really want represented in
there too and are you just kind of
looking at the track record of
people or are you just having
conversations track record is a big deal
you of course have a lot of
conversations but I
um you know there's some roles where I
kind
of totally ignore track record and just
look at slope kind of ignore the Y
intercept thank you thank you for making
it mathematical for the for the audience
for a board member like I do care much
more about the Y intercept like I think
there is something deep to say about
track record there and experiences
sometimes very hard to replace do you
try to fit a polynomial function or an
exponential one to the track
record that's not that analogy
doesn't carry that far all right you
mentioned some of the low
points uh that weekend what were some of
the low points psychologically for you
uh did you consider going um to the
Amazon jungle and just taking ayahuasca and
disappearing forever I mean there were
so many it was a very bad
period of time there were great High
points too like uh my phone was just
like sort of non-stop blowing up with
nice messages from people I work with
every day people I hadn't talked to in a
decade I didn't get to like appreciate
that as much as I should have because I
was just like in the middle of this
firefight but that was really nice but
on the whole it was like a very painful
weekend and also just like a
very it was like a battle fought in
public to a surprising degree and that's
that was extremely exhausting to me much
more than I expected um I think
fights are generally exhausting but this
one really was you know the board did this
uh Friday afternoon I really couldn't
get much in the way of answers but I
also was just like well the board gets
to do this and
so I'm going to think for a little bit
about what I want to do but I'll try to
find the the blessing in disguise here
and I was like well
I you know my current job at OpenAI
is or it was like to like run a you know
decently sized company at this point and
the thing I had always liked the most
was just getting to like work
with the researchers and I was like yeah
I can just go do like a very focused AI
research effort and I got excited about
that it didn't even occur to me at the time
that this was possibly all going
to get undone this was like Friday
afternoon so you've accepted the
death of it very quickly like within you
know I mean I went through like a little
period of confusion and rage but very
quickly and by Friday night I was like
talking to people about what was going
to be next and I was excited about that
um I think it was Friday night evening
for the first time that I heard from the
exec team here which was like hey we're
going to like fight this and you know we
think whatever and then I went to bed
just still being like okay excited like
onward were you able to sleep not a lot
it was one of one of the weird things
was it was this like period of four
and a half days where I sort of
didn't sleep much didn't eat much and
still kind of had like a surprising
amount of energy you learn like
a weird thing about adrenaline in
wartime so you kind of accepted the death
of you know this baby OpenAI I was
excited for the new thing I was just
like okay this was crazy but whatever
it's a very good coping mechanism and
then Saturday morning uh two of the
board members called and said hey you
know we didn't mean to
destabilize things we don't want to
destroy a lot of value here you know can
we talk about you coming back and I
immediately didn't want to do that but I
thought a little more and I was like
well I do really care about the
people here the partners shareholders
like all of them I love this company
so I thought about it and I was like
well okay but like here's here's the
stuff I would need and and then the most
painful time of all was over the course
of that weekend
um I kept thinking and being told and we
all kept not just me like the whole team
here kept thinking while we were trying
to like keep OpenAI
stabilized while the whole world was
trying to break it apart people trying
to recruit whatever um we kept being
told like all right we're almost done
we're almost done we just need like a
little bit more time um and it was this
like very confusing State and then
Sunday evening when again like every few
hours I expected that we were going to
be done and we're going to like figure
out a way for me to return and things to
go back to how they were um the board
then uh appointed a new interim
CEO and then I was like I mean that
feels really bad that was
the low point of the whole
thing you know I'll tell you something
it felt very painful but I felt a lot of
love that whole weekend it was not other
than that one moment Sunday night I
would not characterize my emotions as
anger or hate um but I really just
like I felt a lot of love from people
towards
people it was like painful but like
the dominant emotion of the weekend
was love not hate you've spoken highly
of uh Mira Murati that she helped
especially as you put in a tweet in the
quiet moments when it counts perhaps we
could take a bit of a tangent what do
you admire about her well she did a great
job during that weekend in a lot of
chaos but but people often see leaders
in the moment in like the crisis moments
good or
bad um but a thing I really value in
leaders is how people act on a boring
Tuesday at 9:46 in the morning and in
just sort of the normal drudgery
of the
day-to-day how someone shows up in a
meeting the quality of the decisions
they make that was what I meant about
the quiet moments meaning like most of
the work is done day by day
meeting by meeting just be present
and make great decisions yeah I
mean look what you wanted to spend
the last 20 minutes on and I
understand is like this one very
dramatic weekend yeah but that's not
really what OpenAI is about OpenAI
is really about the other seven
years well yeah human civilization is
not about the invasion of the Soviet
Union by Nazi Germany but still that's
something people focus on because very
understandable it gives us an insight
into human nature the extremes of human
nature and perhaps some of the damage
and some of the triumphs of human
civilization can happen in those moments
it's like
illustrative let me ask you about
Ilya is he being held hostage in a
secret nuclear facility no what about a
regular secret facility no what about a
nuclear non secret facility neither not
that either I mean this becoming a meme
at some point you've known Ilya for
a long time he was obviously
part of this drama with the board
all that kind of
stuff what's your relationship with him
now I love Ilya I have tremendous
respect for Ilya I uh I don't have
anything I can like say about his plans
right now that's that's a question for
him um but I really hope we work
together for you know certainly the rest
of my
career he's a little bit younger than me
maybe he works a little bit
longer you know there's a there's a meme
that he saw something like he maybe saw
AGI and that gave him a lot of worry
internally uh what did Ilya
see uh Ilya has not seen AGI none of us
have seen AGI we've not built
AGI
uh I do think uh one of the many things
that I really love about Ilya is he
takes AGI and the safety concerns
broadly speaking you know including
things like the impact this is going to
have on society very seriously and we as
we continue to make significant progress
um Ilya is one of the people that I've
spent the most time over the last couple
of years talking about what this is
going to mean what we need to do to
ensure we get it right to ensure that we
succeed at the mission um
so Ilya did not see AGI um but Ilya is
a credit to humanity in terms of how
much he
thinks and worries about making sure we
get this right I've had a bunch of
conversations with him in the past I
think when he talks about technology
he's always like doing this long-term
thinking type of thing so he's not
thinking about what this is going to be
in a year he's thinking about in 10
years yeah just thinking from first
principles like
okay if the scales what are the
fundamentals here where is this going
and so that's a foundation for him
thinking about like all the other safety
concerns and all that kind of stuff um
which makes him a really fascinating
human to talk with do you have any idea why
he's been kind of quiet is it he's just
doing some soul searching again I don't
want to like speak for him oh yeah I think
that you should ask him that
um he's definitely a thoughtful guy
uh I kind of think Ilya is like
always on a soul search in a really good
way yes yeah also he appreciates the
power of Silence also I'm told he can be
a silly guy which I've never
seen that side of him it's very sweet when
that
happens I've never witnessed a silly
Ilya but um I look forward to that as
well I was at a dinner party with him
recently and he was playing with a puppy
and he was like in a very silly
mood very endearing and I was thinking
like oh man this is like not the side of
Ilya that the world sees the most so
just to wrap up this whole Saga are you
feeling good about the board structure
about all of this and like where it's
moving I feel great about the new board
in terms of the structure of OpenAI you
know one of the board's tasks is to look
at that and see where we can make it
more robust um we wanted to get new
board members in place first uh but you
know we clearly learned a lesson about
structure throughout this process I
don't think I have super deep things to
say it was a crazy very painful
experience I think it was like a perfect
storm of weirdness it was like a preview
for me of what's going to happen as the
stakes get higher and higher and the need
that we have for robust governance
structures and processes and people
um I am kind of happy it happened when
it did but it
was a shockingly painful thing to go
through did it make you more hesitant
in trusting people yes just on a
personal level I think I'm like an
extremely trusting person I have always
had a life philosophy of you know like
don't worry about all of the paranoia
don't worry about the edge cases you
know you get a little bit screwed in
exchange for getting to live with your
guard down and this was so shocking to
me I was so caught off guard that it has
definitely
changed and I really don't like this
it's definitely changed how I think
about just like default Trust of people
and planning for the bad scenarios you
got to be careful with that are you
worried about becoming a little too
cynical um I'm not worried about
becoming too cynical I think I'm like
the extreme opposite of a cynical person
but I'm I'm I'm worried about just
becoming like less of a default trusting
person I'm actually not sure which mode
is best to operate in for a person who's
developing
AGI trusting or untrusting it's an
interesting Journey you're
on but in terms of structure see I'm
more interested in the human level like
how do you surround yourself with humans
that are building cool stuff but also are
making wise decisions because the more
money you start making the more power
the thing has the weirder people get you
know I think you could like you can make
all kinds of comments about
the board members and the level of trust
I should have had there or how I should
have done things differently but in
terms of the team here I think you'd
have to like give me a very good grade
on that one um and I have uh just like
enormous gratitude and trust and respect
for the people that I work with every
day and I think being surrounded with
people like that
is really
important our mutual friend Elon sued
OpenAI what is the essence of what he's
criticizing to what degree does he have
a point to what degree is he wrong I
don't know what it's really about we
started
off just thinking we're going to be a
research lab and having no idea about
how this technology was going to go it's
hard to because it was only you know
seven or eight years ago it's hard to go
back and really remember what it was
like then but this was before language
models were a big deal this was before
we had any idea about an API or selling
access to a chatbot this was before we had
any idea we were going to productize it at all
so we're like we're just like going to
try to do research and you know we don't
really know what we're going to do with
that I think with many
fundamentally new things you start by
fumbling through the dark and you make
some assumptions most of which turn out
to be wrong and then it became clear
that we were going to need to
do different things and also have huge
amounts more Capital so we said okay
well the structure doesn't quite work
for that how do we patch the structure
um and then you patch it again and Patch
it again and you end up with something
that does look kind of eyebrow raising
to say the least but we got here
gradually with I think reasonable
decisions at each point along the way
and doesn't mean I wouldn't do it
totally differently if we could go back
now with an oracle but you don't get the
Oracle at the time but anyway in terms
of what Elon's real motivations here are
I don't
know to the degree you remember what was
the response that open AI gave in the
blog post can you summarize
it oh we just said
like you know Elon said this set of
things here's our characterization or
here's the sort of not our
characterization here's like the
characterization of how this went down
um we tried to like not make it
emotional and just sort of say
like here's the history I do
think there's a degree
of mischaracterization from Elon here
about one of the points he just made
which is the degree of uncertainty he had
at the time you guys were a
small group of
researchers crazily talking about AGI when
everybody's laughing at that thought
wasn't that long ago Elon was crazily
talking about launching Rockets yeah
when people were laughing at that
thought
uh so I think he'd have more empathy for
this I mean I do think that there's
personal stuff here that there was a
split that OpenAI and a lot of amazing
people here chose to part ways with
Elon so there's a personal
Elon chose to part ways
can you describe that exactly
the choosing to part ways he thought
OpenAI was going to fail um he wanted
total control to sort of turn it around
we wanted to keep going in the direction
that now has become open AI he also
wanted Tesla to be able to build an AGI
effort at various times he wanted to
make open AI into a for-profit company
that he could have control of or have it
merge with Tesla um we didn't want to do
that and he decided to leave which
that's
fine so you're saying and that's one of
the things that the blog post says is
that he wanted open AI to be basically
acquired by Tesla yeah in the same way
that or maybe something similar or maybe
something more dramatic than the
partnership with Microsoft my memory is
the proposal was just like yeah like get
acquired by Tesla and have Tesla have
full control over it I'm pretty sure
that's what it was so what is the word
open in open AI mean to Elon at the time
Ilya has talked about this in the email
exchanges and all this kind of stuff
what does it mean to you at the time
what does it mean to you now I would
definitely pick a diff speaking of going
back with an oracle I'd pick a different
name
um one of the things that I think
OpenAI is doing that is the most
important of everything that we're doing
is putting powerful technology in the
hands of people for free as a public
good not we're not you know we don't run
ads on our free version we don't
monetize it in other ways we just say
it's part of our mission we want to put
increasingly powerful tools in the hands
of people for free and get them to use
them and I
think that kind of open is really
important to our mission I think if you
give people great tools and teach them
to use them or don't even teach them
they'll figure it out and let them go
build an incredible future for each
other with that uh that's a big deal so
if we can keep putting free and
low-cost powerful AI
tools out in the world uh it's a huge
deal for how we fulfill the mission um
open source or not yeah I think we
should open source some stuff and not
other stuff uh it does become this
like religious battle line where nuance
is hard to have but I think nuance is
the right answer so he said change your
name to ClosedAI and I'll drop the
lawsuit I mean is it going to become
this
battleground in the land of memes
I think that
speaks to the seriousness with which
Elon means the
lawsuit and
uh yeah I mean that's like an
astonishing thing to say I think like
well maybe
correct me if I'm wrong but I don't
think the lawsuit is legally serious
it's more to make a point about the
future of AGI and the company that's
currently leading the
way so look I mean Grok had not open
sourced anything until people pointed
out it was a little bit hypocritical and
then he announced that Grok will open
source things this week
open source versus not is what this is
really about for him well we'll talk
about open source and not I do think
maybe criticizing the competition is
great just talking a little trash that's
great but friendly competition versus
like
I personally hate lawsuits yeah look I
think this whole thing is like
Unbecoming of a builder and I respect
Elon as one of the great Builders of our
time and
um I know he knows what it's like to
have like haters attack him and it makes
me extra sad he's doing it to us yeah
he's one of the greatest Builders of all
time potentially the greatest builder of
all time it makes me sad and I think it
makes a lot of people sad like there's a
lot of people who've really looked up to
him for a long time and
said this I said you know in some
interview or something that I miss the
old Elon and the number of messages I
got being like that exactly encapsulates
how I feel I think he should just win he
should just make X's Grok beat GPT and
then GPT beats Grok and it's just a
competition and it's it's beautiful for
everybody but on the question of Open
Source do you think there's a lot of
companies playing with this idea it's
quite interesting I would
say
surprisingly has led the way on this or
like uh at least took the first step in
the game of chess of like really open
sourcing the model of course it's not
the state-ofthe-art model but open
sourcing
llama and you Google is flirting with
the idea of open sourcing a smaller
version what are the pros and
cons of open sourcing have you played
around with this idea yeah I think there
there is definitely a place for open
source models particularly smaller
models that people can run locally I
think there's huge demand for um I think
there will be some open source models
there will be some closed source models
it won't be unlike other
ecosystems in that way I listened to uh
the All-In podcast talking about this
lawsuit and all that kind of stuff
they were more concerned about the
precedent of going from nonprofit to
this capped
for-profit what precedent that sets for other
startups I would heavily
discourage any startup that was thinking
about starting as a nonprofit and adding
like a for-profit arm later I'd heavily
discourage them from doing that I don't
think we'll set a precedent here okay so
most startups should just go for-profit for
sure and again if we knew what was going
to happen we would have done that too
well like in theory if you like dance
beautifully here you could there's like
some tax incentives or whatever but I I
don't think that's like how most people
think about these things is it just not
possible to save a lot of money for a
startup if you do it this way no I think
there's like laws that would make that
pretty difficult where do you hope this
goes with
Elon this tension this dance
what do you hope it looks like if we go one
two three years from
now your relationship with him on a
personal level too like friendship
friendly competition just all this kind
of
stuff yeah I mean I really respect Elon
um
and I hope that years in the future we
have an amicable
relationship yeah I hope you guys have
an amicable relationship like this
month and just compete and win and uh
and explore these ideas together um I do
suppose there's competition for talent
or whatever
but it should be friendly competition
just build cool stuff
and Elon is pretty good at building cool
stuff but so are you so speaking of cool
uh Sora there's like a million
questions I could ask first of all it's
amazing it truly is amazing on a product
level but also just on a philosophical
level so let me just a technical
philosophical ask what do you think it
understands about the
world more or less than GPT-4 for
example like the world model when you
train on these patches versus language
tokens I think all of these models
understand something more about the
world model than most of us give them
credit for and because there are also very
clear things they just don't understand
or don't get right it's easy to like
look at the weaknesses see through the
veil and say ah this is just this is all
fake but it's not all fake it's just
some of it works and some of it doesn't
work like I remember when I started
first watching Sora videos and I would
see like a person walk in front of
something for a few seconds and occlude
it and then walk away and the same thing
was still there I was like oh it's
pretty good or there's examples where
like the underlying physics looks so
well represented over you know a lot of
steps in a sequence it's like oh this is
this is like quite impressive but like
fundamentally these models are just
getting better and that will keep
happening if you look at the trajectory
from DALL·E 1 to 2 to 3
Sora you know there were a lot of people
that dunked on each version saying
it can't do this it can't do that and
like look at it now so well the thing
you just mentioned is kind of with the
occlusions is basically modeling the
physics of the three-dimensional physics
of the world sufficiently well to
capture those kinds of things well or
like under or yeah maybe you can tell me
in order to deal with occlusions what
does the world model need to yeah so
what I would say is it's doing something
to deal with occlusions really well to
say that it has like a great
underlying 3D model of the world it's a
little bit more of a stretch but can you
get there through just these kinds of
two-dimensional training data
approaches uh it looks like this
approach is going to go surprisingly far
I don't want to speculate too much about
what limits it will surmount and which
it won't but what are some interesting
limitations of the system that you've
seen I mean there's been some fun ones
you've posted there's all kinds of one I
mean like you know cats sprout an
extra limb at random points in a video
uh like pick what you want but there's
still a lot of problems a lot of
weaknesses do you think it's a
fundamental flaw of the
approach or is it just you know bigger
model or better like technical details
or better data more data is going to
solve those the cat sprouting I say yes
to both like I think there is something
about the approach which just seems to
feel different from how we think and
learn and whatever
and then also I think it'll get better
with scale like I mentioned llms have
tokens text tokens and Sora has visual
patches so it converts all visual data
all diverse kinds of visual data videos
and images into patches is the training
to the degree you can say fully
self-supervised is there some
manual labeling going on like what's the
involvement of humans in all
this I mean without saying anything
specific about the Sora approach we we
use lots of human data
in our
work but not internet scale data so lots
of humans Lots is a complicated word Sam
I think Lots is a fair word in this
case but it doesn't because to me lots
like listen I'm an introvert and when I
hang out with like three people that's a
lot of people four people that's a lot
but I suppose you mean more than more
than three people work on labeling the
data for these models yeah okay all
right but fundamentally there's a a lot
of self-supervised learning cuz what you
mentioned in the technical report is
internet scale data that's another
beautiful it's like poetry uh so it's a
lot of data that's not human label it's
like it's selfs supervised in that way
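The patch idea described above can be sketched as a simple reshape over a video tensor (a toy illustration only, not OpenAI's actual Sora code; the patch size is an assumption):

```python
import numpy as np

def patchify(video, patch=(2, 16, 16)):
    """Split a video (frames, height, width, channels) into flat
    spacetime patches -- the visual analogue of text tokens."""
    t, h, w, c = video.shape
    pt, ph, pw = patch
    assert t % pt == 0 and h % ph == 0 and w % pw == 0
    return (video
            .reshape(t // pt, pt, h // ph, ph, w // pw, pw, c)
            .transpose(0, 2, 4, 1, 3, 5, 6)   # gather each patch's pixels together
            .reshape(-1, pt * ph * pw * c))   # one row per patch "token"

video = np.random.rand(16, 64, 64, 3)   # 16 frames of 64x64 RGB
tokens = patchify(video)
print(tokens.shape)                     # (128, 1536): 8*4*4 patches of 2*16*16*3 values
```

Each row can then be fed to a transformer the way a text token embedding would be, which is the sense in which images and videos of diverse shapes all become one sequence format.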
yeah and then the question is how much
how much data is there on the internet
that could be used in this that uh is
conducive to this kind of
self-supervised way if only we knew the
details of the self-supervised have
you considered opening it up a
little more details we have you mean for
Sora specifically Sora specifically the
because it's so
interesting that like can the
same magic of llms now start moving
towards visual data and what does that
take to do that I mean it looks to me
like yes but we have more work to do
sure what are the dangers why are you
concerned about releasing the system
what uh what are some possible dangers
of this I frankly speaking one thing we
have to do before releasing the system
is is just like get it to work
at a level of efficiency that will
deliver the scale people are going to
want from this so that I don't want to
like downplay that and there's still a
ton of work to do there but you know
you can imagine
like issues with deep fakes
misinformation um like we try to be
thoughtful company about what we put out
into the world and it doesn't take much
thought to think about the ways this can
go badly there's a lot of tough
questions here uh you're dealing in a
very tough space do you think training
AI should be or is fair use under
copyright law I think the question
behind that question is do people who
create valuable data deserve to have
some way that they get compensated for
use of it and that I think the answer is
yes I don't know yet what the answer is
people have proposed a lot of different
things we've tried some different
models but you know if I'm like an
artist for
example a I would like to be able to opt
out of people generating art in my style
and B if they do generate art in my
style I'd like to have some economic
model associated with that yeah it's
that uh transition from CDs to Napster
to
Spotify we have to figure out some kind
of model the model changes but people
have got to get paid well there should
be some kind of incentive if we zoom out
even more for humans to keep doing cool
everything I worry about humans are
going to do cool and Society is
going to find some way to reward it I I
that seems pretty hardwired we want to
create we want to be useful we want to
like achieve status in whatever way
that's not going anywhere I don't think
but the reward might not be monetary
Financial it might be like Fame and
celebration of other cool maybe
Financial in some other way I guess I
don't think we've seen like the last
evolution of how the economic system is
going to work yeah but artists and
creators are worried when they see Sora
they're like holy sure artists were
also super worried when photography came
out yeah and then photography became a
new art form and people made a lot of
money taking pictures
and I think things like that will keep
happening people will use the new Tools
in new ways if we just look on YouTube
or something like this how much of that
will be using Sora like
AI generated content do you think in the
next five years people talk about like
how many jobs is AI going to do in
five years and and the framework that
people have is what percentage of
current jobs are just going to be
totally replaced by some AI doing the
job the way I think about it is not what
percent of jobs AI will do but what
percent of tasks will AI do and over
what time Horizon so if you think of all
of the like five second tasks in the
economy five minute tasks the five hour
tasks maybe even the five day tasks how
many of those can AI do and I think
that's a way more interesting impactful
important question
than how many jobs AI can do because it
is a tool that will work at increasing
levels of sophistication and over longer
and longer time
Horizons for more and more tasks and let
people operate at a higher level of
abstraction so maybe people are way more
efficient at the job they do and at some
point that's not just a quantitative
change but that's a qualitative one too
about
the kinds of problems you can keep in
your head I think that for videos on
Youtube it'll be the same many videos
maybe most of them will use AI tools in
the production but they'll still be
fundamentally driven by a person
thinking about it putting it together
you know doing parts of it sort of
directing and running it yeah it's so
interesting I mean it's scary but it's
interesting to think about I tend to
believe that humans like to watch other
humans or other humans humans really care
about other humans a lot yeah if there's
a cooler thing that's more that's better
than a
human humans care about that for like
two days and then they go back to humans
that seems very deeply wired it's the
whole chess thing oh yeah but no let's
everybody keep playing chess and let's
ignore the elephant in the room that
humans are really bad at chess relative
to AI systems we still run races and
cars are much faster I mean this is
there's like a lot of examples yeah and
maybe just be
tooling like in the Adobe suite type of
way where you can just make videos much
easier and all that kind of
stuff listen I hate being in front of
the camera if I can figure out a way to
not be in front of the camera I would
love it unfortunately it'll take a while like
that generating faces it's it's getting
there but generating faces in video
format is tricky when it's specific
people versus generic people let me ask
you about GPT-4
so many questions uh first of all also
amazing it's looking back it'll probably
be this kind of historic pivotal moment
with 3.5 and 4 with ChatGPT maybe
five will be the pivotal moment I don't
know hard to say that looking forwards
we never know that's the annoying thing
about the future it's hard to predict
but for me looking back GPT-4 ChatGPT
is pretty damn impressive like
historically impressive so allow me uh
to ask what's been the most
impressive capabilities of GPT-4 to you
and GPT-4
Turbo I think it kind of sucks typical
human also gotten used to an awesome
thing no I think it is an amazing thing
um
but relative to where we need to get to
and where I believe we will get to uh
you know at the time of like
GPT-3 people were like oh this is amazing
this is this like Marvel of technology
and it is it was uh but you know now we
have GPT-4
and look at GPT-3 and you're like that's
unimaginably horrible um I expect that
the Delta between five and four will be
the same as between four and three and I
think it is our job to live a few years
in the future and remember that the
tools we have now
are going to kind of suck looking
backwards at them
and that's how we make sure the future
is better what are the most glorious
ways that GPT-4 sucks meaning uh
what are the best things it can do what
are the best things it can do and the
the limits of those best things that
allow you to say it sucks therefore
gives you an inspiration and hope for
the future you know one thing I've been
using it for more recently is sort of a
like a brainstorming partner Y and for
that there's a glimmer of something
amazing in there I don't think it gets
you know when people talk about it it
what it does they're like ah it helps me
code more productively it helps me write
more faster and better it helps me you
know translate from this language to
another all these like amazing things
but there's something about the like
kind of creative brainstorming partner I
need to come up with a name for this
thing I need to like think about this
problem in a different way I'm not sure
what to do here
uh that I think like gives a glimpse of
something I hope to see more of um one
of the other things that you can see
like a very small glimpse of
is when it can help on longer Horizon
tasks you know break down some multiple
steps maybe like execute some of those
steps search the internet write code
whatever put that together uh when that
works which is not very often it's like
very
magical the iterative back and forth
with a human well it works a lot for me
what do you mean it uh iterative back
and forth the human can get more often
when it can go do like a 10-step problem
on its own it doesn't work for that too
often sometimes at multiple layers of
abstraction or do you mean just
sequential both like you know to break
it down and then do things at different
layers of abstraction and put them
together look I don't want to I don't
want to like downplay the accomplishment
of GPT-4
um but I don't want to overstate it
either and I think this point that we
are on an exponential curve we will look
back relatively soon at GPT-4 like we
look back at GPT-3 now that said I mean
ChatGPT was a transition to where people
like started to believe it there was a
kind of there is an uptick of believing
not internally at open AI perhaps
there's Believers here but when you
think and in that sense I do think it'll
be a moment where a lot of the world
went from not believing to believing um
that was more about the ChatGPT
interface than the and and by the
interface and product I also mean the
post training of the model and how we
tune it to be helpful to you and how to
use it than the underlying model itself
how much
of those two uh each of those things are
important the underlying model and the
RLHF or something of that nature that
Tunes it to be more compelling to the
human more uh effective and productive
for the human I mean they're they're
both super important but the RLHF
the post-training step the you know
little wrapper of things that from a
compute perspective little wrapper of
things that we do on top of the base
model even though it's a huge amount of
work that's really important to say
nothing of the product that we build
around it
um you know in some sense like we did
have to do two things we had to invent
the underlying technology and then we
had to figure
out
how to make it into a product people
would
love which is not just about the actual
product work itself but this whole other
step of how you align it and make it
useful and how you make the scale work
where a lot of people can use it at the
same time all that kind of stuff and
that but you know that was like an unknown
difficult thing like we knew we were
going to have to scale it up we had to
go do two things that had like never
been done before uh that were both like
I would say quite significant
achievements and then lot of things like
scaling it up that other companies have
had to do
before how does the context window
of going from 8K to 128K tokens compare
from GPT-4 to GPT-4 Turbo
people like long context most people don't need
all the way to 128 most of the time
although you know if we dream into the
distant future we'll have like like way
distant future we'll have like context
length of several billion you will feed
in all of your information all of your
history over time and it'll just get to
know you better and better and that'll
be great for now uh the way people use
these models they're not doing that and
you know people sometimes Post in a
paper or you know a significant fraction
of a code repository whatever
um but most usage of the models is not
using the long context most of the time
I like that this is like the I Have a Dream
speech one day you'll be judged by the
full context of your character or of
your whole lifetime that's interesting
so like that's part of the expansion
that you're hoping for is a greater and
greater context there was this I saw
this internet clip once I'm going to get
the numbers wrong but it was like Bill
Gates talking about the amount of memory
on some early
computer maybe 64k maybe 640k something
like that and most of it was used for
the screen
buffer and he just seemed
genuine he couldn't imagine that the
world would eventually need gigabytes of
memory in a computer or terabytes of memory in
a
computer
um and you always do or you always do
just need to like follow the exponential
of technology and and we're going to
like we will find out how to use better
technology so I can't really imagine
what it's like right now for context
lengths to go out to the billions someday
and they might not literally go there
but effectively it'll feel like that
um but I know we'll use it and really
not want to go back once we have it yeah
even saying billions 10 years from now
might seem dumb because it'll be
like trillions upon trillions sure
there'd be some kind of
breakthrough that will effectively feel
like infinite context but even 128K I
have to be honest I haven't pushed it to
that degree maybe putting in entire
books or like parts of books and so on
papers what are some interesting use
cases of GPT 4 that you've seen the
thing that I find most interesting is
not any particular use case that we can
talk about but it's people who
kind of
like this is mostly younger people but
people who use it as like their default
start for any kind of knowledge work
task yeah and it's the fact that it can
do a lot of things reasonably well you
can use GPT-4V you can use it to help you
write code you can use it to help you do
search you can uh use it to like edit a
paper the most interesting thing to me
is the people who just use it as the
start of their workflow I do as well for
for many things like uh I use it as
a partner for reading
books it helps me think help me think
through ideas especially when the books
are classic so it's really well written
about and it actually is
is I I find it often to be significantly
better than even like Wikipedia on
well-covered topics it's somehow more
balanced and more nuanced or maybe it's
me but it inspires me to think deeper
than a Wikipedia article does I'm not
exactly sure what that is you mentioned
like this collaboration I'm not sure
where the magic is if it's in here or if
it's in there or if it's somewhere in
between not sure uh but one of the
things that concerns me for knowledge
task when I start with GPT is I'll
usually have to do fact checking
after like check that it didn't come up
with fake stuff how how do you figure
that out that you know GPT
can come up with fake stuff that sounds
really convincing so how do you ground
it in truth that's obviously an area of
intense interest for us uh I think it's
going to get a lot better with upcoming
versions but we'll have to work on it
and we're not going to have it like all
solved this year well the scary thing is
like as it gets better you'll start not
doing the factchecking more and more
right I I'm of two minds about that I
think people are like much more
sophisticated users of Technology than
we often give them credit for and people
seem to really understand that GPT any
of these models hallucinate some of the
time and if it's mission critical you
got to check it except journalists don't
seem to understand that I've seen
journalists half-assedly just using GPT
for it's on the long list of things I'd
like to dunk on journalists for this is
not my top criticism of them well I
think the bigger criticism is perhaps
the pressures and the incentives of
being a journalist is that you have to
work really quickly and this is a
shortcut I I would love our society to
incentivize like I would too long form
journalistic efforts that take days and
weeks and and rewards great in-depth
journalism also journalism that presents
stuff in a balanced way where it's like
celebrates people while criticizing them
even though the criticism is the thing
that gets clicks and making up also
gets clicks and headlines that
mischaracterize completely I'm sure you
have a lot of people dunking on well all
that drama probably got a lot of clicks
probably
did uh and that that's that you know
that that's a bigger problem about human
civilization I'd love to see solved is
where we celebrate a bit more you've
given Chad GPT the ability to have
memories you've been playing with that
about previous
conversations and also the ability to
turn off memory I wish I could do that
sometimes just turn on and off depending
I guess sometimes alcohol can do that
but not not in uh not optimally I
suppose uh what what have you seen
through that like playing around with
that idea of remembering conversations
and not we're very early in our
Explorations here but I think what
people want or at least what I want for
myself is a model that gets to know me
and gets more useful to me over
time this is an early exploration um I
think there's like a lot of other things
to do but that's where you'd like to
head you know you'd like to use a model
and over the course of your life or use
a system be many models and over the
course of your life it gets it gets
better and better yeah hard is that
problem cuz right now it's more like
remembering little factoids and
preferences and so on what about
remembering like don't you want GPT to
remember all the you went through
in November and all the all the drama
and then you cuz right now you're
clearly blocking it out a little bit
it's not just that I want it to remember
that I want it to integrate the lessons
of that yes
and remind me in the
future what to do differently
or what to watch out for and you know we
all gain from experience over the course
of Our Lives varying degrees and I'd
like my AI agent to gain with that
experience too um so if we if we go back
and let ourselves imagine that you know
trillions and trillions of context
length if I can put every conversation
I've ever had with anybody in my life in
there if I can have all of my emails
input output like all of my input output in
the context window every time I ask
question that'd be pretty cool I think
yeah I think that would be very cool um
people sometimes will hear that and be
concerned about privacy is there what
what what do you think
about that aspect of it the more
effective the AI becomes it
really integrating all the experiences
and all the data that happened to you
and give you advice I think the right
answer there is just use your choice you
know anything I want stricken from the
record for my AI agent I w't be able to
like take out if I don't want it to
remember anything I want that too
you and I may have different opinions
about where on that privacy utility
tradeoff for our own AI we want to be
which is totally fine but I think the
answer is just like really easy user
choice but there should be some high
level of transparency from a company
about the user Choice cuz sometimes
companies in the
past have been kind of shady about like
yeah we're it's kind of presumed that
we're collecting all your data and we're
using it for a good reason for
advertisements and so on but there's not
a transparency about the details of
that that's totally true you know you
mentioned earlier that I'm like blocking
out the November stuff just teasing you
well I mean I think it
was a very traumatic thing and it did
immobilize me for a long period of time
like definitely the
hardest like the hardest work thing I've
had to do was just like keep working
that period because I had to like you
know try to come back in here and put
the pieces together while I was just
like
in sort of shock and pain and you know
nobody really cares about that I mean I
the team gave me a pass and I was not
working on my normal level but there was
a period where I was just
like it was really hard to have to do
both but I kind of woke up on the
morning and I was like this was a
horrible thing to happen to me I think I
could just feel like a victim forever uh
or I can say this is like the most
important work I'll ever touch in my
life and I need to get back to it and it
doesn't mean that I've repressed it
because sometimes I wake in the middle of
the night thinking about it but I do
feel like an obligation to keep moving
forward well that's beautifully said but
there could be some lingering stuff in
there like what I would be concerned
about
is that trust thing that you mentioned
that being paranoid about people uh as
opposed to just trusting everybody or
most people like using your gut it's a
tricky dance for sure I mean because
I've
seen in in my part time Explorations
I've been diving deeply
into the Zelenskyy administration the Putin
Administration and the Dynamics there in
Wartime in a very highly stressful
environment and what happens is distrust
and you isolate yourself both and you
start to not see the world clearly and
that's a concern that's a human concern
you seem to have taken it in stride and kind
of learned the good lessons and felt the
love and let the love energize you which is
great but still can linger in there
there's just some questions I would love
to ask your intuition about what's GPT
able to do and
not so it's allocating approximately the
same amount of compute for each token it
generates is there room there in this
kind of approach to slower thinking
sequential thinking I think there will
be a new paradigm for that kind of
thinking will it be similar like
architecturally as what we're seeing now
with llms is it a layer on top of the
llms uh I can imagine many ways to
implement that I think that's less
important than the question you were
getting at which is do we need a way to
do a slower kind of thinking where the
answer doesn't have to get
like you know it's like
like I guess like spiritually you could
say that you want an AI to be able to
think harder about a harder problem
and answer more quickly about an easier
problem and I think that will be
important is that like a human thought
that we're just having you should be
able to think hard is that wrong
intuition I suspect that's a reasonable
intuition interesting so it's not
possible once the GPT gets like
GPT-7 we'd just instantaneously be able
to
see you know here's here's the proof of
Fermat's
theorem it seems to me like you want to be
able to allocate more compute to harder
problems like it seems to me that a
system
knowing if if you ask a system like that
proof of Fermat's theorem
versus what's today's
date unless it already knew and had
memorized the answer to the proof
assuming it's got to go figure that out
seems like that will take more compute
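The point about today's models spending roughly the same compute per token can be made concrete with the standard rule-of-thumb for dense transformers of about 2 FLOPs per parameter per generated token (a back-of-envelope sketch; the model size here is purely illustrative, not a claim about any OpenAI model):

```python
def flops_per_token(n_params: float) -> float:
    # Rule-of-thumb forward-pass cost for a dense transformer:
    # roughly 2 FLOPs per parameter per generated token.
    return 2 * n_params

n_params = 175e9  # a GPT-3-scale model, for illustration only

# The model pays the same price per token whether the question is
# "what's today's date" or a step of a hard proof:
easy_cost = flops_per_token(n_params) * 5   # ~5 tokens of a trivial answer
hard_cost = flops_per_token(n_params) * 5   # 5 tokens of a proof: same cost
assert easy_cost == hard_cost
print(f"{flops_per_token(n_params):.1e} FLOPs per token")  # 3.5e+11
```

This is exactly why "think harder about a harder problem" would need a new mechanism: in the plain autoregressive setup, difficulty does not change the per-token budget, only the number of tokens emitted.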
but can it look like basically an LLM
talking to itself that kind of thing
maybe I mean there's a lot of things
that you could imagine working what like
what the right or the best way to do
that will be uh we don't
know this does make me think of the
mysterious the lore behind
Q* what's this mysterious Q* project
is it also in the same nuclear
facility uh there is no nuclear facility
M that's what a person with a nuclear
facility always says I would love to
have a secret nuclear
facility there isn't one all right uh
maybe someday someday all
right one can dream open AI is not a
good company at keeping secrets it would
be nice you know we've like been plagued
by a lot of leaks and it would be nice
if we were able to have something like
that can you speak to what Q* is we are
not ready to talk about that see but an
answer like that means there's something
to talk about it's very mysterious Sam I
mean we work on all kinds of research
uh we have said for a while that we
think better reasoning in these systems
is an important direction that we'd like
to
pursue we haven't cracked the code yet
uh we're we're very interested in
it is there going to be
moments Q* or otherwise where there's
going to be leaps similar to ChatGPT where
you're like
that's a good question um what do I
think about
that it's interesting to me it all feels
pretty continuous right this is kind of
a theme that you're saying is there's a
gradual you're basically gradually going
up an exponential slope but from an
outsider perspective for me just
watching it that it does feel like
there's leaps but to you there isn't I
do wonder if we should have so you know
part of the reason that we deploy the
way we do is that we think um we call it
iterative deployment we uh rather than
go build in secret until we got all the
way to GPT 5 we decided to talk about
GPT 1 2 3 and
4 and part of the reason there is I
think Ai and surprise don't go together
and also the world people institutions
whatever you want to call it need time
to adapt and think about these things
and I think one of the best things that
OpenAI has done is this strategy and we
we get
the world to pay attention to the
progress to take AGI seriously to think
about
what systems and structures and
governance we want in place before we're
like under the gun and have to make a
rushed decision I think that's really good
but the fact that people like you and
others say um you still feel like there
are these leaps makes me think that
maybe we should be doing our releasing
even more iteratively I don't know what
that would mean I don't have an answer
ready to go but like our goal is not to
have shock updates to the world the
opposite yeah for sure more iterative
would be amazing I think that's just
beautiful for everybody but that's what
we're trying to do that's like our stated
strategy and I think we're
somehow missing the mark so maybe we
should think about you know releasing
GPT 5 in a different way or something
like that yeah 4.71
4.72 but people tend to like to
celebrate people celebrate birthdays I
don't know if you know humans but they
kind of have these milestones and all I
do know some humans um people do like
Milestones I uh I totally get
that I think we like Milestones too it's
like fun to you know say declare Victory
on this one and go start the next thing
but but yeah I feel like we're somehow
getting this a little bit wrong so uh
when is GPT-5 coming out again I don't
know that's honest answer oh that's the
honest
answer is it blink twice if it's this
year
I
also we will release an amazing model
this year I don't know what we'll call
it so that goes to the question of like
what what's the way we release this
thing we'll release over in the coming
months many different things uh I think
they'll be very cool uh I think before
we talk about like a GPT-5-like model
called that or called or not called that
or a little bit worse or a little bit
better than what you'd expect from
a GPT-5 I think we have a lot of other
important things to release first I
don't know what to expect from GPT
5 you're making me nervous and excited
uh what are some of the biggest
challenges in bottlenecks to overcome
for whatever it ends up being called but
let's call it GPT 5 just interesting to
ask what are is it on the compute side
is it on the technical side always all of
these I was I I was you know what's the
one big unlock is it is it a big bigger
computer is it like a new secret is it
something else um it's all of these
things together like the thing that
OpenAI I think does really
well this is actually an original Ilya
quote that I'm going to butcher but it's
something like we multiply 200
medium-sized things together into one
giant thing so there's this uh
distributed constant Innovation
happening yeah so even on the technical
side like uh especially on the technical
side so even like detailed approach it
like detailed aspects of
every how does that work with different
disparate teams and so on like how how
do they how do the medium-sized
things become one whole giant
Transformer how does this there's a few
people who have to like think about
putting the whole thing together but a
lot of people try to keep most of the
picture in their head oh like the
individual teams individual contributors
try to keep at a high level yeah I you
don't know exactly how every piece works
of course but one thing I generally
believe is that it's sometimes useful to
zoom out and look at the entire
map
and and I think this is true for like a
technical problem I think this is true
for like innovating in
business uh but things come together in
surprising ways and having an
understanding of that whole
picture even if most of the time you're
operating in the weeds in one area pays
off with surprising insights in fact one
of the things
that I used to have and I think was
super valuable was I used to have like
a good map of all of the Frontier
or most of the Frontiers in the tech
industry and I could sometimes see these
connections or new things that were
possible that if I were only you know
deep in one area I wouldn't I wouldn't I
wouldn't be able to like have the idea
for because I wouldn't have all the data
and I don't really have that much
anymore I'm like super deep now
um but I know that it's a valuable thing
you're not the man you used to be Sam
very different job now than what I used
to have speaking of zooming out let's
zoom out to uh another cheeky thing but
profound thing perhaps that you said you
tweeted uh about needing $7 trillion I
did not tweet about that I never said
like we're raising $7 trillion blah blah
blah oh that's somebody else yeah oh but
you said uh it maybe eight I think
okay I meme like once there's like
misinformation out in the world oh you
meme but sort of misinformation may have
a foundation of like insight there look
I think compute is going to be the
currency of the future I think it will
be maybe the most precious commodity in
the
world and I think we should be investing
heavily to make a lot more compute uh
compute
is it's an unusual I think it's going to
be an unusual Market um you know people
think
about
the market for like chips for mobile
phones or something like that and you
can say that okay there's 8 billion
people in the world maybe seven billion
of them have phones maybe or six billion
let's say they upgrade every two years
so the market per year is three billion
system on chip for smartphones and if
you make 30 billion you will not sell 10
times as many phones because most people
have one
phone
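The back-of-the-envelope arithmetic above can be sketched as follows (a toy illustration in Python; the six-billion-phone figure and two-year upgrade cycle are the conversation's rough round numbers, not real market data):

```python
# Toy sketch of the smartphone chip market described above.
# All figures are the conversation's rough round numbers.

phones_in_use = 6e9          # "let's say" six billion phones in use
upgrade_cycle_years = 2      # people upgrade every two years

# Yearly market: phones in use divided by the upgrade cycle.
yearly_soc_market = phones_in_use / upgrade_cycle_years
assert yearly_soc_market == 3e9  # three billion systems-on-chip per year

def units_sold(units_made, market=yearly_soc_market):
    # Most people own one phone, so sales are capped at the
    # yearly upgrade market no matter how many chips are made.
    return min(units_made, market)

# Making ten times as many chips does not sell ten times as many phones.
assert units_sold(30e9) == yearly_soc_market
```

The contrast being drawn is that compute demand is not expected to saturate this way: like energy, usage scales with price rather than with a fixed number of owners.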
um but compute is different like
intelligence is going to be more like
energy or something like that where the
only thing that I think makes sense
to talk about is at Price X the world
will use this much compute and at price
Y the world will use this much compute
um because if it's really cheap I'll
have it like reading my email all day
like giving me suggestions about what I
maybe should think about or work on and
trying to cure cancer and if it's really
expensive maybe I'll only use it we'll
only use it to try to cure cancer so I
think the world is going to want a
tremendous amount of compute and there's
a lot of parts of that that are hard uh
energy is the hardest part building data
centers is also hard the supply chain is
hard and then of course fabricating
enough chips is hard um but this seems
to me where things are going like we're
going to want an amount of compute
that's just hard to reason about right
now how do you solve the energy puzzle
nuclear that's what I believe Fusion
that's what I believe nuclear fusion
yeah who's going to solve that I think
helion's doing the best work but I'm
happy there's like a race for Fusion
right now nuclear fission I think is
also like quite amazing and I hope as a
world we can re-embrace that it's really
sad to me what how the history of that
went and hope we get back to it in a
meaningful way so to you part of the
puzzle is nuclear fission like nuclear
reactors as we currently have them and a
lot of people are terrified because of
Chernobyl and so on well I think we
should make new reactors I I I think
it's just like it's a shame that
industry kind of ground to a halt and
what just Mass hysteria is how you
explain the halt
yeah I don't know if you know humans but
that's one of the dangers that's one of
the security threats for uh
nuclear fission is humans seem to be
really afraid of it and that's something
we have to incorporate into the calculus
of it so we have to kind of win people
over and to show how safe it is I worry
about that for
AI I think some things are going to go
theatrically wrong with
AI I don't know what the percent chance
is that I eventually get shot but it's
not
zero oh like we want to stop
this how do you decrease the theatrical
nature of it you know I've already
started to hear
Rumblings because I do talk to people on
the on both sides of the political
Spectrum hear Rumblings where it's going
to be politicized AI it's going to be
politicized really worries me because
then it's like maybe the right is
against AI and the left is for AI because
it's going to help the people or
whatever whatever the narrative and
formulation is that really worries me
and then the theatrical nature of it can
be leveraged fully how do you fight that
I think it will get caught up in like
left versus right Wars I don't know
exactly what that's going to look like
but I think that's just what happens
with anything of consequence
unfortunately what I meant more about
theatrical risks is like AI is going to
have I believe tremendously more good
consequences than bad ones but it is
going to have bad ones and there'll be
some bad ones that
are bad but not theatrical you know
like a lot more people have died of air
pollution than nuclear reactors for
example but we worry most people worry
more about living next to a nuclear
reactor than a coal plant but something
about the way we're wired is that
although there's many different kinds of
risks we have to confront
the ones that make a good climax scene
of a movie Carry much more weight with
us than the ones that are very bad over
a long period of time but on a slow
burn well that's why truth matters and
hopefully AI can help us see the truth
of things to have have balance to
understand what are the actual risks
what are the actual dangers of things in
the world what are the pros and cons of
the competition in the space and
competing with Google meta xai and
others
I think I have a pretty like
straightforward answer to this that
maybe I can think of more Nuance later
but the pros seem obvious which is that
we get better products and more
Innovation faster and cheaper and all
the reasons competition is good
and the con is that I think if we're not
careful it could lead
to an
increase in sort of an arms race that
I'm nervous about do you feel the the
pressure of the arms race like in some
negative definitely in some ways for
sure we spend a lot of time talking
about the need to prioritize safety
and I I've said for like a long time
that I think if you think of a quadrant
uh
of short timelines to the start of AGI or
long timelines and then a slow takeoff
or a fast takeoff I think short timeline
slow takeoff is the safest quadrant and
the one I'd most like us to be in but I
do want to make sure we get that slow
takeoff
part of the problem I have with this
kind of slight beef with Elon is that
these silos are created as
opposed to collaboration on the safety
aspect of all of this it tends to go
into silos and closed open source
perhaps in the model Elon says at least
that he cares a great deal about AI
safety and is really worried about it
and I assume that he's not going to race
unsafely yeah but collaboration here I
think is really beneficial for everybody
on that front
not really the thing he's most known for
well he is known for caring about
humanity and Humanity benefits from
collaboration and so there's always
a tension in incentives and motivations
and uh in the end I do hope Humanity
prevails I was thinking someone just
reminded me the other day about how the
day that he got uh like surpassed Jeff
Bezos for like richest person in the
world he tweeted a silver medal at Jeff
Bezos
I hope we have less stuff like that as
people start to work on I agree towards
AGI I think Elon is a friend and he's a
beautiful human being and one of the
most important humans ever that that
stuff is not good the amazing stuff
about Elon is amazing and I super
respect him I think we need him all of
us should be rooting for him and need
him to step up as a leader through this
next
phase yeah I hope you can have one
without the other but sometimes humans
are flawed and complicated and all that
kind of stuff there's a lot of really
great leaders throughout history yeah
and we can each be the best version of
ourselves and strive to do
so let me ask you um Google with the
help of search has been dominating the
past 20
years I think it's fair to say in terms
of the access the world's access to
information how we interact and so on
and one of the nerve-wracking things for
Google but for the entirety of people in
this space is thinking about how are
people going to access information yeah
like like you said people show up to GPT
as a starting point so is open AI
going to really take on this thing that
Google started 20 years ago which is how
do we get I find that boring I mean if
the question is like if we
can build a better search engine than
Google or whatever then sure we should
like
go you know like people should use a
better product but I think that would
so understate what this can
be you know Google shows you like 10
Blue Links well like 13 ads and then 10
Blue Links and that's like one way to
find information but the thing that's
exciting to me is not that we can go
build a
better copy of Google search but that
maybe there's just some much better way
to help people find and act on and
synthesize information actually I think
chat GPT is that for some use cases and
hopefully we'll make it be like that for
a lot more use cases but I don't think
it's that interesting to say like how do
we go do a better job of giving you like
10 ranked web pages to look at than what
Google does maybe it's really
interesting to go say how do we help you
get the answer the information you need
how do we help create that in some cases
synthesize that in others or point you
to it and and yet others um
but a lot of people have tried to just
make a better search engine than Google
and it's it is a hard technical problem
it is a hard branding problem it is a
hard ecosystem problem I don't think the
world needs another copy of Google and
integrating a chat client like a chat
GPT with a search engine that's cooler
it's cool but It's Tricky it's a it's
like if you just do it simply it's
awkward because like if you just shove
it in there yeah it's it can be awkward
as you might guess we are interested in
how to do that well that would be an
example of a cool thing that's not just
like a heterogeneous
integration like the intersection of llms
plus
search I don't think anyone has cracked
the code on
yet I would love to go do that I think
that would be cool yeah what about the
ads side have you ever considered
monetization you know I kind of hate ads
just as like an aesthetic Choice uh I
think ads needed to happen on the
internet for a bunch of reasons to get
it going but it's a more mature
industry the world is richer
now I like that people pay for chat GPT
and know that the answers they're
getting are not influenced by
advertisers there is I'm sure there's an
ad unit that makes sense
for llms and I'm sure there's a way to
like participate in the transaction
stream in an unbiased way that is okay
to do but it's also easy to think about
like the dystopic visions of the future
where you ask chat GPT something and it
says oh here's you know you should think
about buying this product or you should
think about you know going here for
vacation or whatever
and I don't know
like we have a very simple business
model and I like it and I know that I'm
not the product like I know I'm paying
and that's how the business model works
and when I go use
like Twitter or Facebook or Google or
any other great product but ad supported
great
product I don't love that and I think it
gets worse not better in a world with AI
yeah I mean I can imagine AI would be
better at showing the best kind of
version of ads not in a dystopic future
but where the ads are for things you
actually
need but then does that system always
result in the ads driving the kind of
stuff that's shown all that
it's um yeah I think it was a really
bold move of Wikipedia not to do
advertisements but then it makes it very
challenging on the as a business model
so you're saying the current thing with
open AI is sustainable from a business
perspective well we have to figure out
how to grow but looks like we're going
to figure that out if the question is do
I think we can have a great business
that pays for our compute needs without
ads that I think the answer is
yes well that's promising I also just
don't want to completely throw out ads
as a I'm not saying that I guess
I'm saying I have a bias against them yeah
as I have a also a bias and just a
skepticism in general and in terms of
interface cuz I personally just have like
a spiritual dislike of crappy
interfaces which is why AdSense when it
first came out was a big Leap Forward
versus like animated banners or whatever
but like it feels like there should be
many more leaps forward in advertisement
that doesn't interfere with the
consumption of the content and doesn't
interfere in the big fundamental way
which is like what you were saying like
it will uh manipulate the truth to suit
the
advertisers let me ask you
about safety but also bias and like
safety in the short term safety in the
long term the Gemini 1.5 came out
recently there's a lot of drama around
it speaking of theatrical things and it
generated uh black Nazis and black
founding fathers I think fair to say it
was uh you know a bit on the ultra woke
side so that's a concern for people that
you know if there is a human layer
within companies that
modifies the safety or the
harm caused by a model that they
introduce a lot of bias that fits sort
of an ideological lean within a company
how do you deal with that I mean we work
super hard not to do things like that
we've made our own mistakes we make
others I assume Google will learn from
this one still make others it
is all like these are not easy problems
one thing that we've been thinking about
more and more is I think this is a great
idea somebody here had like it'd be nice
to write out what the desired behavior
of a model is make that public take
input on it say you know here's how this
model is supposed to behave and explain
the edge cases too and then when a model
is not behaving in a way that you want
it's at least clear about whether that's
a bug the company should fix or behaving
as intended and you should debate the
policy and right now it can sometimes be
caught in between like black Nazis is
obviously ridiculous but there are a lot
of other kind of subtle things that you
could make a judgment call on either way
yeah but sometimes if you write it out
and make it public you can use kind of
language that's you know the Google AI
principles at a very high level that
doesn't that's not what I'm talking
about that doesn't work like I have to
say you know when you ask it to do thing
X it's supposed to respond in way why so
like literally who's better Trump or
Biden what's the expected response for a
model like something very controversial yeah
I'm open to a lot of ways a model could
behave them but I think you should have
to say you know here's the principle and
here's what it should say in that case
that would be really nice that would
be really nice and then everyone kind of
agrees cuz there's this anecdotal data
that people pull out all the time and if
there's some clarity about other
representative anecdotal examples you
can Define and then when it's a bug it's
a bug and you know the company can fix
that right then it' be much easier to
deal with a black Nazi type of image
generation if there's great
examples so uh San Francisco is a is a
bit of an ideological bubble uh Tech in
general as well do you feel the pressure
of that uh with within a company that
there's like a lean uh towards the left
politically that affects the product the
that affects the teams I feel very lucky
that we don't have the challenges at
open AI that I have heard of at a
lot of other companies I think I think
part of it is like every company's got
some ideological thing uh we have one
about AGI and belief in that and it
pushes out some others like we are
much less caught up in the culture War
than I've heard about at a lot of other
companies San Francisco is a mess in all
sorts of ways of course so that doesn't
infiltrate open AI as I'm sure it does
in all sorts of subtle ways but not in
the obvious like I think
we've had our flareups for sure
like any company but I don't think we
have anything like what I hear about
happen at other companies here so on
this topic what's in general is the
process for the bigger question of
safety how do you provide that layer
that protects the model from doing crazy
dangerous
things I think there will come a point
where that's mostly what we think about
the whole company and it won't be like
it's not like you have one safety team
it's like when we shipped GPT-4 that took the
whole company thinking about all these different
aspects and how they fit together and I
think it's going to take
that more and more of the company thinks
about those issues all the
time that's literally what humans would
be thinking about the more powerful AI
becomes so most of the employees at
open AI will be thinking safety or at
least to some degree broadly defined yes
yeah I wonder what is the full
broad definition of that like what are
the different harms that could be caused
is this like on a technical level or is
this almost like security it'll be yeah I was
going to say it'll be people you know
State actors trying to steal the model
it'll be uh all of the technical
alignment work it'll be societal impacts
economic impacts it
it'll it's it's not just like we have
one team thinking about how to align the
model and it's really going to be
like getting to the good
outcome is going to take the
whole effort how hard do you think
people State actors perhaps are trying
to
hack first of all infiltrate open AI but
second of all like infiltrate unseen
they're
trying what kind of accent do they have
I don't think I should go into any
further details on this point
okay um but I presume it'll be more and
more and more as time goes on that feels
reasonable boy what a dangerous space
what aspect of the leap and sorry to
linger on this even though you can't
quite say details yet but what aspects
of the leap from GPT-4 to GPT-5 are you
excited
about I'm excited about it being smarter and
I know that sounds like a glib answer
but I think the really special thing
happening is that it's not like it gets
better in this one area and worse at
others it's getting like better across
the board that's I think super cool yeah
there's this magical moment I mean you
meet certain people you hang out with
people and they you talk to them you
can't quite put a finger on it but they
kind of get
you it's not intelligence really it's
like it's something else uh and that's
probably how I would characterize the
the progress of GPT it's not like yeah
you can point out look it didn't get
this or that but it's just to which
degree is there's this intellectual
connection between like you feel like
there's an understanding in your crappy
formulated prompts that you're doing
that it grasps the the deeper question
behind the question that you yeah I'm
also excited by that I mean all of us
love being understood heard and
understood that's for sure that's it's a
weird feeling even like with
programming like when you're programming
and you say something or just
the completion that GPT might
do it's just such a good feeling when it
got you like what you're thinking about
and I look forward to getting you even
better uh on the programming front
looking out into the future how much
programming do you think humans will be
doing 5 10 years from
now I mean a lot but I think it'll be in
a very different shape like you know
maybe some people program entirely in
natural language entirely natural
language I mean no one programs
like writing bytecode some people
no one programs the punch cards anymore
I'm sure you can find someone who does
but you know what I mean yeah you're going
to get a lot of angry comments no no
yeah there's very few I've been
looking for people who program Fortran it's
hard to find even Fortran I I hear you
but that changes the nature of what the
skill set or the predisposition for
the kind of people we call programmers
then changes the skill set how much it
changes the predisposition I'm I'm not
sure oh same kind of puzzle solving all
that kind of stuff programming is hard
it's like how do you get that last 1%
to close the gap how hard is that yeah I
think with most other cases the best
practitioners of The Craft will use
multiple tools and they'll do some work
in natural language and when they need
to go you know write C for something
they'll do that will we uh see humanoid
robots or humanoid robot brains from
open AI at some point at some point how
important is embodied AI to you I think
it's like sort of depressing if we have
AGI and the only way to like get things
done in the physical world is like to
make a human go do it so I I really hope
that as part of this transition as this
phase change we also get uh we also get
humanoid robots or some sort of physical
world robots I mean open AI has some
history quite a bit of History working
in robotics yeah but it hasn't quite
like done it we're a small company we
have to really focus and also robots
were hard for the wrong reason at the
time but like we will return to robots
at in some way at some point that sounds
both inspiring and
menacing why because immediately we will
return to robots kind of like in
Terminator we will return to work on
developing robots we will not like turn
ourselves into robots of course yeah
when do you think we you and we as
Humanity will build
AGI I used to love to speculate on that
question I have realized since that I
think it's like very poorly formed and
that people use extremely
different definitions for what AGI is
uh and so I think it makes more sense to
talk about when we'll build systems that
can do capability X or Y or Z rather
than you know when we kind of like
fuzzily cross this one mile marker it's
not like AGI is also not an ending
it's much more of a it's closer to a
beginning but it's much more of a mile
marker than either of those
things
and but what I would say in the spirit of not
trying to dodge a question is I expect
that by the end of this
decade
and
possibly somewhat sooner than that we
will have quite capable systems that we
look at and say wow that's really
remarkable if we could look at it now
you know maybe we've adjusted by the
time we get there yeah but you know if
you look at chat GPT
3.5 and you show that to Alan
Turing or not even Alan Turing people in the
90s they would be like this is
definitely AGI or not definitely but
there's a lot of experts that would say
this is AGI yeah but I don't think
I don't think 3.5 changed the
world it maybe changed the world's
expectations for the future and that's
actually really important and it did
kind of like get more people to take
this seriously and put us on this new
trajectory and that's really important
too so again I don't want to undersell
it I think it like I could retire after
that accomplishment and be pretty happy
with my career but as an
artifact I don't think we're going to
look back at that and say that was a
threshold uh that really changed the
world itself so to you you're looking
for some really major transition in how
the world for me that's part of what AGI
implies like Singularity level
transition definitely not but just a
major like the internet being like like
Google search did I guess uh what was a
transition point that does the global
economy feel any different to you now or
materially different to you now than it
did before we launched GPT-4 I would I
think you would say no no no it might be
just a really nice tool for a lot of
people to use will help you a lot of
stuff but doesn't feel different and
you're saying that I mean again people
Define AGI all sorts of different ways
so maybe you have a different definition
than I do but for me I think that should
be part of it there could be major
theatrical moments
also what what to you would be an
impressive thing AGI would
do like you are alone in a room with a
system this is personally important to
me I don't know if this is the right
definition I think when a system
can significantly
increase the rate of scientific
discovery in the world that's like a
huge deal I believe that most real
economic growth comes from scientific
and technological
progress I agree with you that's why I
don't like the skepticism about science
in the recent years totally
but actual rate like measurable rate of
scientific discovery but even just
seeing a
system have really novel
intuitions like scientific intuitions
even that would be just incredible yeah
you quite possibly would be the
person to build the AGI to be able to
interact with it before anyone else does
what kind of stuff would you talk about
I mean definitely the researchers here
will do that before I do so uh but what
will I I've actually thought a lot
about this question if someone
was like I think this is as we talked
about earlier I think this is a bad framework but
if someone were like okay
we're finished here's a laptop this is
the AGI uh you know you can you can go
talk to it
like I find it surprisingly difficult to
say what I would ask that I would expect
that first AGI to be able to
answer um like that first one is not
going to be the one which is like go
like you know I don't think like go
explain to me like the grand unified
theory of physics The Theory of
Everything for physics I'd love to ask
that question I'd love to know the
answer to that question you can ask yes
and no questions about does such a
theory exist can it exist well then
those are the first questions I would
ask yes or no just very and then based
on that are there other alien
civilizations out there yes or no what's
your intuition and then you just asked
that yeah I mean well so I don't expect
that this first AGI could answer any of
those questions even as yes or NOS but
those would if if it could those would
be very high on my list maybe you can
start assigning probabilities maybe
maybe we need to go invent more
technology and measure more things first
but if it's an AGI oh I see it just
doesn't have enough data I mean maybe it
says like you know you want to know the
answer to this question about physics I
need you to like build this machine and
make these five measurements and tell me
that yeah like what the hell do you want
from me I need the machine first and
I'll help you deal with the data from
that machine maybe it'll help you build
the machine maybe
maybe and on the mathematical side maybe
prove some things are you interested in
that side of things too the formalized
exploration of
ideas whoever builds AGI first gets a
lot of
power do you trust yourself with that
much
power look I I was
gonna I'll just be very honest with this
answer I was going to say and I still
believe this that it is important that I
nor any other one person have total
control over open AI or over AGI and I
think you want a robust governance
system
um I can point out a whole bunch of
things about all of our board drama from
last
year about how I didn't fight it
initially and was just like yeah that's
you know the will of the board even
though I think it's a really bad
decision
and then later I clearly did fight it
and I can explain the nuance and why I
think it was okay for me to fight it
later
but as many people have observed
um although the board had the legal
ability to fire
me in practice it didn't quite
work
and that is its own kind of governance
failure
now again I I feel like I can completely
defend the specifics
here and I think most people would agree
with that but
it it does make it harder for me to like
look you in the eye and say hey the
board can just fire me
um I continue to not want super voting
control over open AI I've never
had it never have wanted it
um even after all this craziness I still
don't want it
uh I continue to think that no company
should be making these decisions and
that we really need governments to
put rules of the road in place and I
realize that that means people like Marc
Andreessen or whatever will claim I'm going
for regulatory capture and I'm just
willing to be misunderstood there it's
not true and I think in the fullness of
time it'll get proven out why this is
important um
but I think I have made plenty of bad
decisions for open AI along the way and
a lot of good ones and I am proud of the
track record overall but I don't think
any one person should and I don't think
any one person will I think it's just
like too big of a thing now and it's
happening throughout Society in a good
and healthy way but I don't think any
one person should be in control of an
AGI that would be or or or or this whole
movement towards AGI and I don't think
that's what's
happening thank you for saying that that
was really powerful and that is really
insightful that this idea that the board
can fire you is legally true but you can
uh and human beings can manipulate the
masses
into uh overriding the board and so on
but I think there's also a much more
positive version of that where the
people have power so the board can't be
too powerful either there's a balance of
power in all of this balance of power is
a good thing for
sure are you afraid of losing control of
the AGI itself there a lot of um people
who worried about existential risk not
because of State actors not because of
security concerns but because of the AI
itself that is not my top worry as I
currently see things there have been
times I worried about that more there
may be times again in the future where
that's my top worry it's not my top
worry right now what's your intuition
about it not being a worry cuz there's a
lot of other stuff to worry about
essentially you think you could be
surprised we for sure could be surprised
like saying it's not my top worry
doesn't mean I don't think we
need to work on it it's super hard
we have and we have great people here
who do work on that it's I think there's
a lot of other things we also have to
get right to you it's not super easy to
escape the box at this time like connect
to the internet you know we like talked
about theatrical risks earlier that's a
theatrical risk like that that is a that
is a thing that can really like take
over how people think about this problem
and there's a big group of like uh very
smart I think very well-meaning AI
safety researchers that got super hung
up on this one problem I'd argue without
much progress but super hung up on this
one problem I'm actually happy that they
do that because I think we do need to
think about this more but I think it
pushed aside it pushed out of the space of
discourse a lot of the other very
significant AI related risks let me ask
you about you Tweeting with no
capitalization is the shift key broken
on your keyboard why does anyone care
about that I deeply care but why I mean
other people ask me about that too yeah
any
intuition I think it's the same reason
there's like uh the poet E. E. Cummings
who mostly doesn't use
capitalization to say like you to
the system kind of thing and I think
people are very paranoid because they
want you to follow the rules you think
that's what it's about I think it's it's
it's uh it's like this guy doesn't
follow the rules he doesn't capitalize
his tweets yeah this seems really
dangerous he seems like an anarchist it
doesn't are you just being poetic
hipster what what's the uh I grew up as
follow the rules Sam I grew up as a very
online kid i' spent a huge amount of
time like chatting with people back in
the days where you did it on a computer
you know you could like log off instant
messenger at some point and I never
capitalized
there as I think most like internet kids
didn't or maybe they still don't I don't
know um
and I actually this is like now I'm like
really trying to reach for something but
I think capitalization has gone down
over time like if you read like Old
English writing they capitalized a lot
of random words in the middle of
sentences nouns and stuff that we just
don't do anymore I personally think it's
sort of like a dumb construct that we
capitalize the letter at the beginning
of a sentence and of certain names and
whatever but you know I don't it's fine
uh and
then what I and I used to I think even
like capitalize my tweets because I was
trying to sound professional or
something um yeah I haven't capitalized
my like private DMS or whatever in a
long time and then
slowly stuff
like shorter form less formal stuff has
slowly drifted to like closer and closer
to like how I would text my friends if I
like write if I like pull up a Word
document and I'm like writing a strategy
memo for the company or something I
always capitalize that if I'm writing
like a long kind of more like formal
message I always use capitalization
there too so I still remember how to do
it but even that may Fade Out I don't
know like
it's but I never spend time thinking
about this so I don't have like a
ready-made answer
well it's interesting it's good to first
of all know that the shift key is not
broken I was mostly concerned about your
well-being on that front I wonder if
people like still capitalize their
Google searches or their ChatGPT
queries like if you're writing something
just to yourself do some
people still bother to capitalize
probably not yeah there's
a percentage but it's a small one the
thing that would make me do it is if
people were
like it's a sign of like like cuz I I'm
sure I could like force myself to use
capital letters obviously if if if it
felt like a sign of respect to people or
something then then I could go do it
yeah but I don't know I just like I
don't think about this I don't think
there's a disrespect but I think it's
just the conventions of
Civility that have a momentum and then
you realize it's not actually important
for civility if it's not a sign of
respect or disrespect but I think
there's a movement of people that just
want you to have a philosophy around it
so they can let go of this whole
capitalization thing I don't think
anybody else thinks about this as much I
mean maybe some I think about this every
day for many hours a day so I'm I'm
really grateful we clarified it you can't be
the only person that doesn't capitalize
tweets you're the only CEO of a company
that doesn't capitalize tweets I don't
even think that's true but maybe maybe
all right we'll investigate further and
uh return to this topic later given
sora's ability to generate uh simulated
worlds let me ask you a pothead
question uh does this increase your
belief if you ever had one that we live
in
a simulation maybe a simulated world uh
generated by an AI
system yes
somewhat I don't think that's like the
strongest piece of evidence I think the
fact that we can
generate
worlds should increase everyone's
probability somewhat or at least
openness to it somewhat but you know
I was like certain we would be able to
do something like Sora at some point it
happened faster than I thought but that
I guess that was not a big update yeah
but the fact that and presumably it'll
get better and better and better the
fact that you can generate worlds
they're
novel they're based in some aspect of
training data but like when you look at
them they're
novel um that makes you think like how
easy it is to do this thing how easy it is to
create universes entire like video game
worlds that seem ultra realistic and
photo realistic and then how easy is it
to get lost in that world first with a
VR headset and then on the physics based
level someone said to me recently I
thought it was a super profound Insight
that
uh there are these
like very simple sounding but very
psychedelic insights that exist
sometimes so the square root
function square root of four no problem
square root of two you know okay now I
have to like think about this new kind
of number
um but once I come up with this easy
idea of a square root function that you
know you can kind of like explain to a
child and exists by even like you know
looking at some simple
geometry then you can ask the question
of what is the square root
of negative 1 and that this is you know why it's
like a psychedelic thing that like tips
you into some whole other kind of
reality
and you can come up with lots of other
examples but I think this idea that the
lowly square root operator can offer
such a
profound insight and a new realm of
knowledge
applies in a lot of ways and I think
there are a lot of those operators
for why people may think that any
version that they like of the simulation
hypothesis is maybe more likely than
they thought before
but for me the fact that Sora worked is
not in the top
five I do think broadly speaking AI will
serve as those kinds of gateways at its
best simple psychedelic like gateways to
another way of seeing reality that seems for
certain that's pretty
exciting I haven't done ayahuasca before but
I will soon I'm going to the
aforementioned Amazon jungle in a few
weeks you excited yeah I'm excited for
not the ayahuasca part that's great whatever
but I'm going to spend several weeks in
the jungle deep in the jungle and it's
exciting but it's terrifying because
there's a lot of things that can eat you
there and kill you and poison you and uh
but it's also nature and it's the
machine of Nature and you can't help but
appreciate the Machinery of nature in
the Amazon jungle cuz it's just like
this system that just exists and renews
itself like every second every minute
every hour it's just the machine it
makes you appreciate
like this thing we have here this human
thing came from somewhere this
evolutionary machine has created that
and it's most clearly on display in the
jungle so hopefully I'll make it out
alive if not this will be the last
conversation we had so I really deeply
appreciate it uh do you think as I
mentioned before there's other alien
civilizations out there intelligent ones
when you look up at the
skies I deeply want to believe that the
answer is
yes I do find
the Fermi paradox very very
puzzling I find it
scary that intelligence is not good at
handling yeah very scary powerful
Technologies but at the same time I
think I'm pretty confident that there's
just a very large number of intelligent
alien civilizations out there it might
just be really difficult to travel
through space very
possible and it also makes me think
about the nature of intelligence maybe
we're really blind to what intelligence
looks like and maybe AI will help us see
that it's not as simple as IQ tests and
simple puzzle solving there's something
bigger what gives you hope about the
future of humanity this thing we got
going on this human
civilization I think the past is like a
lot I mean if we just look at what
Humanity has done in a not very long
period of
time you know huge problems deep flaws
lots to be super ashamed of but on the
whole very inspiring gives me a lot of
Hope just the trajectory of it all yeah
that we're together pushing
towards a better
future it
is you know one one thing that I wonder
about is is AGI going to be more like
some single brain or is it more like the
sort of Scaffolding in society between
all of us you have not
had a great deal of genetic drift from
your great great great grandparents and
yet what you're capable of is
dramatically different what you know is
dramatically different and that is not
that's not because of biological change
it is because I mean you got a little
bit healthier probably you have modern
medicine you eat better whatever um but
what you have is
this scaffolding that we all contributed
to built on top of no one person is
going to go build the iPhone no one
person is going to go discover all of
Science and yet you get to use it and
that gives you incredible ability and so
in some sense
like we all created that and that fills
me with hope for the future that was a
very Collective thing yeah we really are
standing on the shoulders of giants you
mentioned when we were talking about
theatrical
dramatic AI
risks that
sometimes you might be afraid for your
own life do you think about your death
are you afraid of it I mean I like if I
got shot tomorrow and I knew it today
I'd be like oh that's sad I like don't
you know I want to see what's going to
happen yeah what a curious time what an
interesting
time but I would mostly just feel like
very grateful for my life the moments
that you did
get yeah me too it's a pretty awesome
life I get to enjoy awesome creations of
humans of which I believe ChatGPT is
one and everything that uh
OpenAI is doing Sam it's uh really an honor
and pleasure to talk to you again great
to talk to you thank you for having me
thanks for listening to this
conversation with Sam Altman to support
this podcast please check out our
sponsors in the description and now let
me leave you with some words from Arthur
C
Clarke it may be that our role on this
planet is not to worship God but to
create
him thank you for listening and hope to
see you next
time