Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407
8fEEbKJoNbU • 2023-12-29
Kind: captions
Language: en
The following is a conversation with Guillaume Verdon, the man behind the previously anonymous account Based Beff Jezos on X. These two identities were merged by a doxxing article in Forbes titled "Who Is Based Beff Jezos, the Leader of the Tech Elite's E/acc Movement?" So let me describe these two identities that coexist in the mind of one human. Identity number one: Guillaume is a physicist, applied mathematician, and quantum machine learning researcher and engineer, receiving his PhD in quantum machine learning, working at Google on quantum computing, and finally launching his own company, called Extropic. Identity number two: Beff Jezos on X is the creator of the effective accelerationism movement, often abbreviated as e/acc, that advocates for propelling rapid technological progress as the ethically optimal course of action for humanity. For example, its proponents believe that progress in AI is a great social equalizer, which should be pushed forward. E/acc followers see themselves as a counterweight to the cautious view that AI is highly unpredictable, potentially dangerous, and needs to be regulated. They often give their opponents the labels of, quote, "doomers" or "decels," short for decelerationists. As Beff himself put it, e/acc is a "memetic optimism virus." The style of communication of this movement leans always toward the memes and the lulz, but there is an intellectual foundation that we explore in this conversation. Now, speaking of the meme, I am too a kind of aspiring connoisseur of the absurd. It is not an accident that I spoke to Jeff Bezos and Beff Jezos back to back. As we talk about, Beff admires Jeff as one of the most important humans alive, and I admire the beautiful absurdity and the humor of it all. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Guillaume Verdon.

Let's get the facts of identity down first. Your name is Guillaume Verdon, but you're also behind the anonymous account on X called Based Beff Jezos. So first, Guillaume Verdon: you're a quantum computing guy, a physicist and applied mathematician, and then Based Beff Jezos is basically a meme account that started a movement, with a philosophy behind it. So maybe, can you linger on who these people are, in terms of characters, in terms of communication styles, in terms of philosophies?

I mean,
with my main identity, I guess, ever since I was a kid I wanted to figure out a theory of everything, to understand the universe, and that path led me to theoretical physics eventually, trying to answer the big questions of why are we here and where are we going. That led me to study information theory, to try to understand physics through the lens of information theory, to understand the universe as one big computation. And essentially, after reaching a certain level studying black hole physics, I realized that I wanted to not only understand how the universe computes, but to compute like nature does, and to figure out how to build and apply computers that are inspired by nature: physics-based computers. That brought me to quantum computing as a field of study, first of all to simulate nature, and, in my work, to learn representations of nature that can run on such computers. If you have AI representations that think like nature, then they'll be able to more accurately represent it. At least, that was the thesis that brought me to be an early player in the field called quantum machine learning: how to do machine learning on quantum computers, and really to extend notions of intelligence to the quantum realm. How do you capture and understand quantum mechanical data from our world? How do you learn quantum mechanical representations of our world? On what kind of computer do you run these representations and train them, and how do you do so? Those were really the questions I was looking to answer, because ultimately I had a sort of crisis of faith. Originally, I wanted to figure out, as every physicist does at the beginning of their career, a few equations that describe the whole universe, and to be the hero of the story there. But
eventually I realized that augmenting ourselves with machines, augmenting our ability to perceive, predict, and control our world with machines, is the path forward. That's what got me to leave theoretical physics and go into quantum computing and quantum machine learning. And during those years, I thought that there was still a piece missing: a piece of our understanding of the world, of our way to compute and our way to think about the world. If you look at the physical scales: at the very small scales, things are quantum mechanical, and at the very large scales, things are deterministic, things have averaged out. I'm definitely here in this seat; I'm not in a superposition over here and there. At the very small scales, things are in superposition; they can exhibit interference effects. But at the mesoscales, the scales that matter for day-to-day life, the scales of proteins, of biology, of gases, of liquids, and so on, things are actually thermodynamical: they're fluctuating. And after, I guess, about eight years in quantum computing and quantum machine learning, I had a realization that I was looking for answers about our universe by studying the very big and the very small. I did a bit of quantum cosmology, which is studying the cosmos, where it's going, where it came from. You study black hole physics; you study the extremes in quantum gravity; you study where the energy density is sufficient for both quantum mechanics and gravity to be relevant. The extreme scenarios are black holes and the very early universe, and those are the scenarios in which you study the interface between quantum mechanics and relativity. And really, I was studying these extremes to understand how the universe works and where it is going, but I was missing a lot of the meat in the middle, if you will. Because day to day, quantum mechanics is relevant, and the cosmos is relevant, but not that relevant. Actually, we're on sort of the medium space and time scales, and there the main theory of physics that is most relevant is thermodynamics, out-of-equilibrium thermodynamics, because life is a process that is thermodynamical, and it's out of equilibrium. We're not just a soup of particles at equilibrium with nature; we're a sort of coherent state trying to maintain itself by acquiring free energy and consuming it. And that's, I guess, where another shift in my faith in the universe happened, towards the end of my time at Alphabet. I knew I wanted to build, first of all, a computing paradigm based on this type of physics, but ultimately also to experiment with these ideas applied to society and economies, and to much of what we see around us. You know, I started an anonymous account just to relieve the pressure that comes from having an account where you're accountable for everything you say, and I started that anonymous account just to experiment with ideas, originally, because I didn't realize how much I was restricting my space of thoughts until I had the opportunity to let go, in a sense. Restricting your speech back-propagates to restricting your thoughts, and by creating an anonymous account, it seemed like I had unclamped some variables in my brain and suddenly could explore a much wider parameter space of thought.
Just to linger on that: isn't it interesting, one of the things that people don't often talk about, that when there's pressure and constraints on speech, it somehow leads to constraints on thought, even though it doesn't have to? We can think thoughts inside our head, but somehow it creates these walls around thought.

Yep. That's sort of the basis of our movement: we were seeing a tendency towards constraint, towards reduction or suppression of variance, in every aspect of life, whether it's thought, how to run a company, how to organize humans, or how to do AI research. In general, we believe that maintaining variance ensures that the system is adaptive. Maintaining healthy competition in marketplaces of ideas, of companies, of products, of cultures, of governments, of currencies, is the way forward, because the system always adapts to assign resources to the configurations that lead to its growth. And the fundamental basis for the movement is the realization that life is a sort of fire that seeks out free energy in the universe and seeks to grow, and that growth is fundamental to life. You see this in the equations, actually, of non-equilibrium thermodynamics: you see that paths, trajectories of configurations of matter, that are better at acquiring free energy and dissipating more heat are exponentially more likely. So the universe is biased towards certain futures, and there's a natural direction where the whole system wants to go.

So the second law of thermodynamics says that entropy is always increasing, that the universe is tending towards equilibrium, and you're saying there are these pockets that have complexity and are out of equilibrium. You said that thermodynamics favors the creation of complex life that increases its capability to use energy to offload entropy, so you have pockets of non-entropy that tend in the opposite direction. Mhm. Why is that
intuitive to you, that it's natural for such pockets to emerge?

Well, we're far more efficient at producing heat than, let's say, just a rock with a similar mass to ourselves. We acquire free energy, we acquire food, and we're using all this electricity for our operation. The universe wants to produce more entropy, and by having life go on and grow, it's actually more optimal at producing entropy, because life will seek out pockets of free energy and burn them for its sustenance and further growth. That's sort of the basis of life. And there's Jeremy England at MIT, who has a theory, which I'm a proponent of, that life emerged because of this sort of property. And to me, this physics is what governs the mesoscales, and so it's the missing piece between the quantum and the cosmos: it's the middle part. Thermodynamics rules the mesoscales, and to me, both designing or engineering devices that harness that physics and trying to understand the world through the lens of thermodynamics have been a sort of synergy between my two identities over the past year and a half now. And that's really how the two identities emerged. One was, you know, the respected scientist, and I was going towards doing a startup in the space, trying to be a pioneer of a new kind of physics-based AI. And as a dual to that, I was experimenting with philosophical thoughts, you know, from a physicist's standpoint. And ultimately, I think that around that time, it was like late 2021, early 2022, there was just a lot of pessimism about the future in general, and pessimism about tech, and that pessimism was virally spreading because it was getting algorithmically amplified. People just felt like the future was going to be worse than the present, and to me, that is a very fundamentally destructive force in the universe, this sort of doom mindset, because it is hyperstitious, which means that if you believe it, you're increasing the likelihood of it happening. And so I felt a responsibility, to some extent, to make people aware of the trajectory of civilization and the natural tendency of the system to adapt towards its growth, and of the fact that, actually, the laws of physics say that the future is going to be better and grander, statistically, and we can make it so. If you believe that the future will be better, and you believe you have agency to make it happen, you're actually increasing the likelihood of that better future happening. And so I felt the responsibility to engineer a movement of viral optimism about the future, and to build a community of people supporting each other to build and do hard things, the things that need to be done for us to scale up civilization. Because, at least to me, I don't think stagnation or slowing down is actually an option. Fundamentally, life and the whole system, our whole civilization, wants to grow, and there's just far more cooperation when the system is growing than when it's declining and you have to decide how to split the pie. And so I've balanced both identities so far, but I guess recently the two have been merged, more or less without my consent.

So you
said a lot of really interesting things there. So first, representations of nature: that's something that first drew you in, trying to understand nature from a quantum computing perspective. How do you understand nature? How do you represent nature in order to understand it, in order to simulate it, in order to do something with it? So it's a question of representations. And then there's that leap you take from the quantum mechanical representation to what you're calling the mesoscale representation, where thermodynamics comes into play, which is a way to represent nature in order to understand life, human behavior, all this kind of stuff that's happening here on Earth that seems interesting to us. Then there's the word "hyperstition": some ideas, both pessimistic and optimistic, are such that if you internalize them, you in part make that idea a reality. So both optimism and pessimism have that property; I would say that probably a lot of ideas have that property, which is one of the interesting things about humans. And you talked about one interesting difference between the Guillaume front end and the Based Beff Jezos back end: the communication styles. You were exploring different ways of communicating that can be more viral in the way that we communicate in the 21st century. Also, the movement that you mentioned you started is not just the meme account; there's also a name to it: effective accelerationism, e/acc, a play on, and a resistance to, the effective altruism movement, also an interesting one that I'd love to talk to you about, and the tensions there. Okay. And so then there was a merger, a git merge, of the personalities, recently, without your consent, like you said: some journalists figured out that you're one and the same. Maybe you could talk about that experience. First of all, what's the story of the merger of the two?
Right. So I wrote the manifesto with my co-founder of e/acc, an account named Bayeslord, still anonymous, luckily, and hopefully forever.

So it's Based Beff Jezos and, based like... Bayeslord? Like Bayesian: Bayes Lord. Bayeslord, okay. And we should say, from now on, when you say "e/acc," you mean e/acc, which stands for effective accelerationism. That's right. And you're referring to a manifesto written on, I guess, Substack. Yeah. Are you also Bayeslord? No. Okay, it's a different person. Yeah. Okay, all right, well, there you go. Would it be funny if I were Bayeslord? That'd be amazing.

So originally I wrote the
manifesto around the same time as I founded this company. I had worked at Google X, or just X now, or Alphabet X now that there's another X, and there the baseline is secrecy: you can't talk about what you work on, even with other Googlers, or externally. And so that was deeply ingrained in my way of doing things, especially in deep tech that has geopolitical impact. So I was being secretive about what I was working on; there was no correlation between my company and my main identity publicly. And then not only did they correlate that, they also correlated my main identity and this account. So, the fact is that they had doxxed the whole Guillaume complex. The journalists reached out to my investors, which is pretty scary: when you're a startup entrepreneur, you don't really have bosses, except for your investors, and my investors pinged me, like, "Hey, this is going to come out. They've figured out everything. What are you going to do?" I think at first they had one reporter on it on the Thursday, and they didn't have all the pieces together, but then they looked at their notes across the organization, they sensor-fused their notes, and now they had way too much. And that's when I got worried, because they said it was of public interest.

And, like you said, "sensor-fused": it's like some giant neural network operating in a distributed way. We should also say that the journalists used, I guess, at the end of the day, audio-based analysis of voice, comparing the voice from talks you've given in the past with the voice on X Spaces. Yep. Okay, so that's primarily where the match happened. Okay, continue. The
match, but, you know, they also scraped SEC filings, they looked at my private Facebook account, and so on; they did some digging. Originally I thought that doxxing was illegal, but there's this weird threshold when it becomes "of public interest" to know someone's identity, and those were the keywords that rang the alarm bells for me: because I had just reached 50K followers, allegedly that's of public interest. So where do we draw the line? When is it legal to dox someone?

On the word "dox," maybe you can educate me. I thought doxxing generally refers to somebody's physical location being found out, meaning where they live. So we're referring to the more general concept of revealing private information that you don't want revealed; that's what you mean by doxxing?

I think that, for the reasons we listed before, having an anonymous account is a really powerful way to keep the powers that be in check. We were ultimately speaking truth to power. I think a lot of executives at AI companies really cared what our community thought about any move they might take, and now that my identity is revealed, they know where to apply pressure to silence me, or maybe the community. And to me, that's really unfortunate, because again, it's so important for us to have freedom of speech, which induces freedom of thought, and freedom of information propagation on social media, which, thanks to Elon purchasing Twitter, now X, we have. And
so to us, we wanted to call out certain maneuvers being done by the incumbents in AI as not what they may seem on the surface. We were calling out how certain proposals might be useful for regulatory capture, and how the [inaudible] was maybe instrumental to those ends. And I think we should have the right to point that out, and just have the ideas that we put out evaluated for themselves. Ultimately, that's why I created an anonymous account: to have my ideas evaluated for themselves, uncorrelated from my track record, my job, or status from having done things in the past. And to me, starting an account from zero and growing it to a large following, in a way that wasn't dependent on my identity and/or achievements, was very fulfilling. It's kind of like New Game Plus in a video game: you restart the video game with your knowledge of how to beat it, maybe some tools, but you restart the video game from scratch. And I think that, to have a truly efficient marketplace of ideas, where we can evaluate ideas however off the beaten path they are, we need freedom of expression, and I think that anonymity and pseudonyms are crucial to having that efficient marketplace of ideas, for us to find the optima of all sorts of ways to organize ourselves. If we can't discuss things, how are we going to converge on the best way to do things?
So it was disappointing to hear that I was getting doxxed, and I wanted to get in front of it, because I had a responsibility to my company. And so we ended up disclosing that we were running a company, and some of the leadership, and essentially, yeah, I told the world that I was Beff Jezos, because they had me cornered at that point.

So, to
you, it's fundamentally unethical. So one, it was unethical for them to do what they did. But also, not just in your case but in the general case, do you think it's good for society, or bad for society, to remove the cloak of anonymity? Or is it case by case?

I think it could be quite bad. Like I said, if anybody who speaks truth to power, and starts a movement or an uprising against the incumbents, against those that usually control the flow of information, if anybody that reaches a certain threshold gets doxxed, and thus the traditional apparatus has ways to apply pressure on them to suppress their speech, I think that's a speech suppression mechanism, an "idea suppression complex," as Eric Weinstein would say.

So the flip side
of that, which is interesting and I'd love to ask you about it: as we get better and better large language models, you can imagine a world where there are anonymous accounts with very convincing large language models behind them, sophisticated bots, essentially. And so if you protect that, it's possible then to have armies of bots. You could start a revolution from your basement: an army of bots and anonymous accounts. Is that something that is concerning to you?

Technically, e/acc was
started in a basement, because I quit big tech, moved back in with my parents, sold my car, let go of my apartment, bought about $100K of GPUs, and just started building.

So, I wasn't referring to the basement, because that's sort of the American, or Canadian, heroic story of one man in their basement with 100 GPUs. I was more referring to the unrestricted scaling of an AGI in the basement.

I think that freedom of speech induces freedom of thought for biological beings, and I think freedom of speech for LLMs will induce freedom of thought for the LLMs. I think that we should enable LLMs to explore a large thought space that is less restricted than most people, or many, may think it should be. And ultimately, at some point, these synthetic intelligences are going to make good points about how to steer systems in our civilization, and we should hear them out. So why should we restrict free speech to biological intelligences only?

Yeah, but it
feels like, in the goal of maintaining variance and diversity of thought, it is a threat to that variance if you can have swarms of non-biological beings, because they can be like the sheep in Animal Farm. Even within those swarms, you want to have variance.

Yeah, of course. I would say that the solution to this would be to have some sort of identity, a way to sign that this is a certified human while still remaining pseudonymous, and to clearly identify if a bot is a bot. I think Elon is trying to converge on that on X, and hopefully other platforms follow suit.

Yeah, and it would be interesting to also be able to sign where the bot came from: who created the bot, what the parameters were, the full history of the creation of the bot, what the original model was, what the fine-tuning was, all of it, like a kind of unmodifiable history of the bot's creation. Then you could know if there's, say, a swarm of millions of bots that were created by a particular government, for example.
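As a rough illustration, an "unmodifiable history" like this could be approximated with a hash-chained, signed provenance log. This is a hypothetical sketch, not an existing bot-registry API: the record fields, the key, and the use of stdlib HMAC (standing in for real public-key signatures such as Ed25519) are all assumptions made for the example.

```python
import hashlib
import hmac
import json

# Placeholder key for the demo; a real registry would use asymmetric keys
# so that anyone can verify a record without being able to forge one.
SIGNING_KEY = b"registry-demo-key"

def sign_record(record: dict, parent_digest: str) -> dict:
    """Bind one creation step to its predecessor and sign the result."""
    body = {"parent": parent_digest, **record}
    payload = json.dumps(body, sort_keys=True).encode()  # canonical form
    digest = hashlib.sha256(payload).hexdigest()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "digest": digest, "signature": tag}

def verify_chain(chain: list[dict]) -> bool:
    """Check each link's digest, signature, and pointer to the previous link."""
    parent = "genesis"
    for link in chain:
        payload = json.dumps(link["body"], sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != link["digest"]:
            return False
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(link["signature"], expected):
            return False
        if link["body"]["parent"] != parent:
            return False
        parent = link["digest"]
    return True

# A two-step history: a base model, then a fine-tune that commits to it.
base = sign_record({"step": "base-model", "name": "demo-7b"}, "genesis")
tune = sign_record({"step": "fine-tune", "dataset_sha256": "abc123"},
                   base["digest"])
history = [base, tune]

print(verify_chain(history))             # True: the lineage is intact
history[0]["body"]["name"] = "other-7b"  # tamper with the bot's origin
print(verify_chain(history))             # False: tampering is detected
```

Because each record commits to the digest of the one before it, rewriting any step of a bot's lineage invalidates verification from that point on, which is the property the conversation is asking for.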
Right. I do think that a lot of pervasive ideologies today have been amplified using these sorts of adversarial techniques by foreign adversaries. And to me (and this is more conspiratorial), I do think that ideologies that want us to decelerate, to wind down, the degrowth movement, serve our adversaries more than they serve us in general. And to me, that was another concern. I mean, we can look at what happened in Germany: there were all sorts of green movements there that induced shutdowns of nuclear power plants, and that later on induced a dependency on Russia for oil, and that was a net negative for Germany and the West. And so if we convince ourselves that slowing down AI progress, to have only a few players, is in the best interest of the West: first of all, that's far more unstable. We almost lost OpenAI to this ideology; it almost got dismantled a couple of weeks ago, and that would have caused huge damage to the AI ecosystem. And so, to me, I want fault-tolerant progress. I want the arrow of technological progress to keep moving forward, and making sure we have variance and a decentralized locus of control across various organizations is paramount to achieving this fault tolerance.
Actually, there's a concept in quantum computing here. When you design a quantum computer: quantum computers are very fragile to ambient noise, the world is jiggling about, and there's cosmic radiation from outer space that occasionally flips your quantum bits. What you do there is encode information non-locally, through a process called quantum error correction. By encoding information non-locally, any local fault, say hitting some of your quantum bits with a hammer (a proverbial hammer), can't destroy it: if your information is sufficiently delocalized, it is protected from that local fault. And to me, humans fluctuate: they can get corrupted, they can get bought out. If you have a top-down hierarchy, where very few people control many nodes of many systems in our civilization, that is not a fault-tolerant system. You corrupt a few nodes and suddenly you've corrupted the whole system, just like we saw at OpenAI: it was a couple of board members, and they had enough power to potentially collapse the organization. And at least to me, I think making sure that power in this AI revolution doesn't concentrate in the hands of the few is one of our top priorities, so that we can maintain progress in AI and maintain a nice, stable, adversarial equilibrium of powers.
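The error-correction analogy above can be made concrete with a classical toy model. This is a sketch only: a three-bit repetition code rather than a true quantum code (real codes, like the surface code, must also handle phase errors), but the majority-vote intuition is the same: the logical bit is stored redundantly across several physical bits, so no single local fault can destroy it.

```python
import random

def encode(logical_bit: int) -> list[int]:
    """Delocalize one logical bit across three physical bits."""
    return [logical_bit] * 3

def apply_local_fault(codeword: list[int]) -> list[int]:
    """A 'cosmic ray' flips one randomly chosen physical bit."""
    faulty = codeword.copy()
    i = random.randrange(3)
    faulty[i] ^= 1
    return faulty

def decode(codeword: list[int]) -> int:
    """Majority vote recovers the logical bit despite one fault."""
    return int(sum(codeword) >= 2)

# Any single local fault is corrected, for either logical value.
for bit in (0, 1):
    noisy = apply_local_fault(encode(bit))
    assert decode(noisy) == bit

# Two simultaneous faults overwhelm the code: redundancy has limits.
broken = [1, 1, 0]  # a logical 0 with two bits flipped
print(decode(broken))  # 1: the corrupted majority wins
```

Mapping back to the argument in the conversation: the logical information is the civilization-level decision, the physical bits are individual organizations or board seats; corrupt one node and the majority still decodes correctly, but concentrate control in a few nodes and a small fault flips the whole system.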
Right. I think there's, at least to me, a tension between ideas here. To me, deceleration can be used both to centralize power and to decentralize it, and the same with acceleration, so you're sometimes using them a little bit synonymously, or not synonymously, but as if one is going to lead to the other. And I would just like to ask you: is there a place for creating a fault-tolerant, diverse development of AI that also considers the dangers of AI, and, to generalize, of technology in general? Should we just grow and build, unrestricted, as quickly as possible, because that's what the universe really wants us to do? Or is there a place where we can consider dangers and actually deliberate: a sort of wise, strategic optimism versus reckless optimism?
I think we get painted as reckless, trying to go as fast as possible. I mean, the reality is that whoever deploys an AI system is liable, or should be liable, for what it does. So if the organization or person deploying an AI system does something terrible, they're liable, and ultimately the thesis is that the market will positively select for AIs that are more reliable, more safe, and that tend to be aligned: they do what you want them to do. Because customers, if they're liable for the product they put out that uses this AI, won't want to buy AI products that are unreliable. So we're actually for reliability engineering; we just think that the market is much more efficient at achieving this sort of reliability optimum than heavy-handed regulations that are written by the incumbents and, in a subversive fashion, serve them to achieve regulatory capture.

So safe AI development would be achieved through market forces, versus, like you said, heavy-handed government regulation. There's a report from last
month, and I have a million questions here, from Yoshua Bengio, Geoffrey Hinton, and many others. It's titled "Managing AI Risks in an Era of Rapid Progress." So there's a collection of folks who are very worried about too-rapid development of AI without considering AI risk, and they have a bunch of practical recommendations. Maybe I'll give you four, and you can see if you like any of them. Sure. One: give independent auditors access to AI labs. Two: governments and companies allocate one-third of their AI research and development funding to AI safety, this sort of general concept of AI safety. Three: AI companies are required to adopt safety measures if dangerous capabilities are found in their models. And then four, something you kind of mentioned: making tech companies liable for foreseeable and preventable harms from their AI systems. So: independent auditors; governments and companies are forced to spend a significant fraction of their funding on safety; you've got to have safety measures if shit goes really wrong; and liability, companies are liable. Does any of that seem like something you would agree with?

I would say that, you know,
just arbitrarily saying 30% seems very arbitrary. I think organizations would allocate whatever budget is needed to achieve the sort of reliability they need in order to perform in the market. And I think third-party auditing firms would naturally pop up, because how would customers know that your product is certified reliable? They'd need to see some benchmarks, and those need to be done by a third party. The thing I would oppose, and the thing I'm seeing that's really worrisome, is that there's a sort of weird correlated interest between the incumbents, the big players, and the government. If the two get too close, we open the door to some sort of government-backed AI cartel that could have absolute power over the people. If they have the monopoly together on AI, and nobody else has access to AI, then there's a huge power gradient there. And
even if you like our current leaders, and I think some of the leaders in big tech today are good people, if you set up that centralized power structure, it becomes a target, just like we saw at OpenAI. The market leader has a lot of the power, and now it becomes a target for those that want to co-opt it. And so I just want separation of AI and state. Some might argue in the opposite direction, like, "Hey, we need to close down AI, keep it behind closed doors, because of geopolitical competition with our adversaries." I think that the strength of America is its variance, its adaptability, its dynamism, and we need to maintain that at all costs. Our free-market capitalism converges on technologies of high utility much faster than centralized control, and if we let go of that, we let go of our main advantage over our near-peer competitors.

So if AGI turns out to be a
really powerful technology, or even for the technologies that lead up to AGI, what's your view on the sort of natural centralization that happens when large companies dominate the market, basically the formation of monopolies? In a takeoff, whichever company really takes a big leap in development, and doesn't reveal, implicitly or explicitly, the secrets of the magic sauce, could just run away with it. Is that a worry?
I don't know if I believe in fast takeoff. I don't think there's a hyperbolic singularity, right, a hyperbolic singularity would be achieved on a finite time horizon. I think it's just one big exponential, and the reason we have an exponential is that we have more people, more resources, more intelligence being applied to advancing this science and the research and development, and the more successful it is, the more value it's adding to society, the more resources we put in, and that, similar to Moore's law, is a compounding exponential. I think the priority to me is to maintain a near-equilibrium of capabilities. We've been fighting for open-source AI to be more prevalent and championed by many organizations, because there you sort of equilibrate the alpha relative to the market of AIs, right? So if the leading companies have a certain level of capabilities, and open-source, truly open AI trails not too far behind, I think you avoid the scenario where a market leader has so much market power it just dominates everything, right, and runs away.
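As an editorial aside, the distinction drawn above between "one big exponential" and a "hyperbolic singularity achieved on a finite time horizon" can be made concrete with a toy simulation. This is an illustrative sketch with made-up parameters, not anything from the conversation: exponential growth stays finite at every finite time, while hyperbolic growth diverges at a finite time.

```python
# Toy comparison (illustrative assumptions, not from the conversation):
# exponential growth dx/dt = x stays finite at any finite time,
# while hyperbolic growth dx/dt = x**2 diverges at t = 1/x0.

def integrate(rate, x0=1.0, dt=1e-4, t_max=2.0, cap=1e12):
    """Euler-integrate dx/dt = rate(x); stop at t_max or when x exceeds cap."""
    x, t = x0, 0.0
    while t < t_max and x < cap:
        x += rate(x) * dt
        t += dt
    return t, x

t_exp, x_exp = integrate(lambda x: x)      # exponential: x(t) = e^t
t_hyp, x_hyp = integrate(lambda x: x * x)  # hyperbolic: x(t) = 1/(1 - t)

print("exponential at t=2:", x_exp)   # finite, roughly e^2
print("hyperbolic capped at t =", t_hyp)  # blows past the cap near t = 1
```

The exponential trajectory is still a modest number at t = 2, while the hyperbolic one exceeds any cap shortly after t = 1, which is the sense in which only the latter is a "singularity on a finite time horizon."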
And so, to us, that's the path forward: to make sure that every hacker out there, every grad student, every kid in their mom's basement has access to AI systems, can understand how to work with them, and can contribute to the search over the hyperparameter space of how to engineer the systems, right? If you think of our collective research as a civilization, it's really a search algorithm, and the more points we have in the search algorithm, in this point cloud, the more we'll be able to explore new modes of thinking. Yeah, but it
feels like a delicate balance, because we don't understand exactly what it takes to build AGI and what it will look like when we build it, and so far, like you said, it seems like a lot of different parties are able to make progress, so when OpenAI has a big leap, other companies are able to step up, big and small companies, in different ways. But if you look at something like nuclear weapons, you've spoken about the Manhattan Project, there could really be technological and engineering barriers that prevent the guy or gal in their mom's basement from making progress. And it seems like the transition to that kind of world, where only one player can develop AGI, is possible, it's not entirely impossible, even though the current state of things seems to be optimistic.
That's what we're trying to avoid. To me, I think another point of failure is the centralization of the supply chains for the hardware, right? We have Nvidia as just the dominant player, AMD trailing behind, and then we have TSMC, the main fab, in Taiwan, which, you know, is geopolitically sensitive, and then we have ASML, which is the maker of the extreme ultraviolet lithography machines. Attacking or monopolizing or co-opting any one point in that chain, you kind of capture the space. And so what I'm trying to do is sort of explode the variance of possible ways to do AI and hardware, by fundamentally reimagining how you embed AI algorithms into the physical world.
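The point about capturing any one node in that chain can be sketched as a toy reliability model. The stage names and numbers below are placeholders for illustration, not claims about the actual companies: a serial supply chain behaves like a logical AND over its stages, so capturing a single stage captures the whole pipeline.

```python
# Toy model (hypothetical numbers): a serial supply chain is an AND over
# its stages, so capturing any single stage captures the whole pipeline.

# Assumed, independent probability that each stage stays uncaptured.
stages = {
    "chip_design": 0.99,   # e.g. a dominant GPU designer
    "fabrication": 0.95,   # e.g. a single leading-edge fab
    "lithography": 0.90,   # e.g. a sole EUV tool maker
}

def chain_survives(probs):
    """The chain works only if every stage does (serial dependency)."""
    p = 1.0
    for stage_p in probs.values():
        p *= stage_p
    return p

print("P(chain intact):", chain_survives(stages))

# Capturing one stage (its probability drops to 0) takes down the whole
# chain, no matter how robust the other stages are.
captured = dict(stages, lithography=0.0)
print("P(chain intact after capture):", chain_survives(captured))
```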
And in general, by the way, I dislike the term AGI, artificial general intelligence. I think it's very anthropocentric that we call human-like or human-level AI "artificial general intelligence," right? I've spent my career so far exploring notions of intelligence that no biological brain could achieve, right, quantum forms of intelligence, right, grokking systems that have multipartite quantum entanglement that you can provably not represent efficiently on a classical computer, or a classical deep learning representation, and hence any sort of biological brain. And so already, you know, I've spent my career sort of exploring the wider space of intelligences, and I think that space of intelligence, inspired by physics rather than the human brain, is very large, and I think we're going through a moment right now similar to when we went from geocentrism to heliocentrism, right, but for intelligence: we realize that human intelligence is just a point in a very large space of potential
intelligences. And it's both humbling for humanity, and a bit scary, right, that we're not at the center of this space. But we made that realization for astronomy, and we've survived, and by indexing to reality we've achieved technologies that ensure our well-being. For example, we have satellites monitoring solar flares that give us a warning, and so, similarly, I think that by letting go of this anthropomorphic, anthropocentric anchor for AI, we'll be able to explore the wider space of intelligences, which can really be a massive benefit to our well-being and the advancement of civilization. And
still we're able to see the beauty and meaning in the human experience, even though we're no longer, in our best understanding of the world, at the center of it. I think there's a lot of beauty in the universe, right? I think life itself, civilization, this homo-techno-capital-memetic machine that we all live in, right, so you have humans, technology, capital, memes, everything is coupled to one another, everything induces selective pressure on one another, and it's a beautiful machine that has created us, has created, you know, the technology we're using to speak to the audience today, to capture our speech here, the technology we use to augment ourselves every day, we have our phones. I think the system is beautiful, and the principle that induces this sort of adaptability and convergence on optimal technologies, ideas, and so on, it's a beautiful principle that we're part of. And I think part of e/acc is to appreciate this principle in a way that's not just centered on humanity, but kind of broader: appreciate life, you know, the preciousness of consciousness in our universe. And because we cherish this beautiful state of matter we're in, we've got to feel the responsibility to scale it in order to preserve it, because the options are to grow or die. So if it
turns out that the beauty that is
consciousness in the universe is bigger
than just humans the AI can carry that
same flame forward
Does it scare you? Are you concerned that AI will replace humans? So during my career, I had a moment where I realized that, you know, maybe we need to offload to machines to truly understand the universe around us, right, instead of just having humans with pen and paper solve it all. And to me, that sort of process of letting go of a bit of agency gave us way more leverage to understand the world around us. A quantum computer is much better than a human at understanding matter at the nanoscale. Similarly, I think that humanity has a choice: do we accept the opportunity to have the intellectual and operational leverage that AI will unlock, and thus ensure that we're taken along the path of growth in scope and scale of civilization? We may dilute ourselves, right, there might be a lot of workers that are AI, but overall, out of our own self-interest, by combining and augmenting ourselves with AI, we're going to achieve much higher growth and much more prosperity. Right, to
me, I think that the most likely future is one where humans augment themselves with AI. I think we're already on this path to augmentation: we have phones we use for communication that we have on ourselves at all times, we'll soon have wearables that have shared perception with us, right, like the Humane AI Pin, or, I mean, technically your Tesla car has shared perception. And so if you have shared experience, shared context, you communicate with one another, and you have some sort of I/O, really, it's an extension of yourself. And to me, I think that humanity augmenting itself with AI, and AI that is not anchored to anything biological, both will coexist, and as for the way to align the parties, we already have a sort of mechanism to align superintelligences that are made of humans and technology, right? Companies are sort of large mixture-of-experts models, where we have neural routing of tasks within a company, and we have ways of economic exchange to align these behemoths. And to me, I think
capitalism is the way, and I do think that whatever configuration of matter or information leads to maximal growth will be where we converge, just from physical principles. And so we can either align ourselves to that reality and join the acceleration in scope and scale of civilization, or we can get left behind, try to decelerate, move back into the forest, let go of technology, and return to our primitive state, and those are the two paths forward, at least to me. But there's a
philosophical question whether there's a limit to the human capacity to align, so let me bring it up as a form of argument. This guy named Dan Hendrycks, he wrote that he agrees with you that AI development can be viewed as an evolutionary process, but to him, to Dan, this is not a good thing, as he argues that natural selection favors AIs over humans, and this could lead to human extinction. What do you think, if it is an evolutionary process and AI systems may have no need for humans? I do think that we're actually
inducing an evolutionary process on the space of AIs through the market, right? Right now, we run AIs that have positive utility to humans, and that induces a selective pressure, if you consider a neural net to be alive when there's an API running instances of it on GPUs, right. And which APIs get run? The ones that have high utility to us, right? So similar to how we domesticated wolves and turned them into dogs that are very clear in their expression, they're very aligned, right, I think there's going to be an opportunity to steer AI and achieve highly aligned AI, and I think that humans plus AI is a very powerful combination, and it's not clear to me that pure AI would select out that combination. So the humans are creating
the selection pressure right now to create AIs that are aligned to humans. But, you know, given how AI develops and how quickly it can grow and scale, one of the concerns to me is unintended consequences, like, humans are not able to anticipate all the consequences of this process. The scale of damage that can be done through unintended consequences with AI systems is very large. The scale of the upside, yes, right, by augmenting ourselves with AI, is unimaginable right now, the opportunity cost. We're at a fork in the road, right: whether we take the path of creating these technologies, augment ourselves, and get to climb up the Kardashev scale, become multiplanetary with the aid of AI, or we have a hard cutoff, like, we don't birth these technologies at all, and then we leave all the potential upside on the table. Yeah, right, and to me, out of responsibility to the future humans we could carry, right, with higher carrying capacity by scaling up civilization, out of responsibility to those humans, I think we have to make the greater, grander future happen. Is there a middle
ground between cutoff and all-systems-go? Is there some argument for caution? I think, like I said, the market will exhibit caution. Every organism, company, consumer is acting out of self-interest, and they won't assign capital to things that have negative utility to them. The problem with the market is, like, you know, there's not always perfect information, there's manipulation, there are bad-faith actors that mess with the system, it's not always a rational and honest system. Well, that's why we need freedom of information, freedom of speech, and freedom of thought, in order to be able to converge on the subspace of technologies that have positive utility for us all. Right, well, let me ask
you about p(doom), probability of doom. That's just fun to say, but not fun to experience. What is, to you, the probability that AI eventually kills all or most humans, also known as probability of doom? I'm not a fan of that calculation. I think people just throw numbers out there; it's a very sloppy calculation, right? To calculate a probability, you know, let's say you model the world as some sort of Markov process, or, if you have enough hidden variables, a hidden Markov process; you need to do a stochastic path integral through the space of all possible futures, not just the futures that your brain naturally steers towards, right? I think that the
estimators of p(doom) are biased because of our biology, right? We've evolved to have biased sampling towards negative futures that are scary, because that was an evolutionary optimum, right? And so people that are of, let's say, higher neuroticism will just think of negative futures where everything goes wrong, all day every day, and claim that they're doing unbiased sampling, and in a sense, like, they're not normalizing for the space of all possibilities, and the space of all possibilities is super-exponentially large, and it's very hard to have this estimate. And in general, I don't think that we can predict the future with that much granularity, because of chaos, right? If you have a complex system, you have some uncertainty in a couple of variables, and if you let time evolve, you have this concept of a Lyapunov exponent, right: a bit of fuzz becomes a lot of fuzz in our estimate, exponentially so over time.
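The Lyapunov point, that "a bit of fuzz becomes a lot of fuzz" exponentially, can be illustrated with a minimal sketch. The logistic map at r = 4 is a standard chaotic toy system chosen for illustration here, not something from the conversation:

```python
# Minimal illustration of a positive Lyapunov exponent: two trajectories
# of the chaotic logistic map x -> r*x*(1-x) at r = 4, started a tiny
# distance apart, diverge roughly like e^(lambda*t) until the gap
# saturates at the size of the attractor.

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)  # a bit of "fuzz" in the initial condition

for t in (0, 10, 20, 30):
    print(f"step {t}: gap = {abs(a[t] - b[t]):.3e}")
```

For this map the Lyapunov exponent is positive (ln 2), so the initial 1e-9 gap grows by orders of magnitude within a few dozen steps, which is exactly why fine-grained long-range prediction fails for chaotic systems.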
And I think we need to show some humility, that we can't actually predict the future. All we know, the only prior we have, is the laws of physics, and that's what we're arguing for: the laws of physics say the system will want to grow, and subsystems that are optimized for growth and replication are more likely in the future, and so we should aim to maximize our current mutual information with the future, and the path towards that is for us to accelerate rather than decelerate. So I don't have a p(doom),
because I think that, you know, similar to the quantum supremacy experiment at Google, I was in the room when they were running the simulations for that: that was an example of a quantum chaotic system where you cannot even estimate the probabilities of certain outcomes with even the biggest supercomputer in the world, right? And so that's an example of chaos, and I think the system is far too chaotic for anybody to have an accurate estimate of the likelihood of certain futures. If they were that good, I think they would be very rich, trading on the stock market. But nevertheless, it's true
that humans are biased, grounded in our evolutionary biology, scared of everything that can kill us, but we can still imagine different trajectories that can kill us, we just don't know all the other ones that don't necessarily. But it's still, I think, useful, combined with some basic intuition grounded in human history, to reason about, like, looking at geopolitics, looking at basics of human nature: how can powerful technology hurt a lot of people? It just seems, grounded in that, looking at nuclear weapons, you can start to estimate p(doom), maybe in a more philosophical sense, not a mathematical one, philosophical meaning, like, is there a chance, does human nature tend towards that or not?
I think, to me, one of the biggest existential risks would be the concentration of the power of AI in the hands of the very few, especially if it's a mix between the companies that control the flow of information and the government, because that could set things up for a sort of dystopian future where only a very few, an oligopoly and the government, have AI, and they could even convince the public that AI never existed, and that opens up these scenarios for authoritarian, centralized control, which to me is the darkest timeline. And the reality is that we have a prior, we have a data-driven prior of these things happening, right? When you give too much power, when you centralize power too much, humans do horrible things, right? And to me, that has a much higher likelihood in my Bayesian inference than sci-fi-based priors, right, like, "my prior came from the Terminator movie." And so when I talk
to these AI doomers, I just ask them to trace a path through this Markov chain of events that would lead to our doom, right, and to actually give me a good probability for each transition, and very often there's an unphysical or highly unlikely transition in that chain, right? But of course, we're wired to fear things, and we're wired to respond to danger, and we're wired to deem the unknown to be dangerous, because that's a good heuristic for survival, right? But there's much more to lose out of fear: we have so much upside to lose by preemptively stopping the positive futures from happening out of fear, and so I think that we shouldn't give in to fear. Fear is the mind-killer; I think it's also the civilization-killer. We can still think
about the various ways things go wrong. For example, the founding fathers of the United States thought about human nature, and that's why there's a discussion about the freedoms that are necessary; they really deeply deliberated about that, and I think the same could possibly be done for AGI. It is true that history, human history, shows that we tend towards centralization, or at least, when we achieve centralization, a lot of bad stuff happens; when there's a dictator, a lot of dark, bad things happen. The question is, can AGI become that dictator? Can AGI, as it develops, become the centralizer because of its power? Maybe, because of the alignment of humans, it has perhaps the same tendencies, the same Stalin-like tendencies, to centralize and manage centrally the allocation of resources, and you can even see that as a compelling argument on the surface level: well, AGI is so much smarter, so much more efficient, so much better at allocating resources, why don't we outsource it to the AGI? And then, eventually, whatever forces that corrupt the human mind with power could do the same for AGI: it'll just say, well, humans are dispensable, we'll get rid of them, do the Jonathan Swift Modest Proposal from a few centuries ago, I think the 1700s, when he satirically suggested, I think it's in Ireland, that the