Transcript
8fEEbKJoNbU • Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407
Kind: captions
Language: en
The following is a conversation with Guillaume Verdon, the man behind the previously anonymous account @BasedBeffJezos on X. These two identities were merged by a doxxing article in Forbes titled "Who Is @BasedBeffJezos, The Leader Of The Tech Elite's 'E/Acc' Movement?" So let me describe these two identities that coexist in the mind of one human.

Identity number one: Guillaume is a physicist, applied mathematician, and quantum machine learning researcher and engineer, receiving his PhD in quantum machine learning, working at Google in quantum computing, and finally launching his own company called Extropic.

Identity number two: Beff Jezos on X is the creator of the effective accelerationism movement, often abbreviated as e/acc, which advocates for propelling rapid technological progress as the ethically optimal course of action for humanity. For example, its proponents believe that progress in AI is a great social equalizer, which should be pushed forward. E/acc followers see themselves as a counterweight to the cautious view that AI is highly unpredictable, potentially dangerous, and needs to be regulated. They often give their opponents the labels of, quote, "doomers" or "decels," short for deceleration. As Beff himself put it, e/acc is a memetic optimism virus. The style of communication of this movement leans always toward the memes and the lulz, but there is an intellectual foundation that we explore in this conversation.

Now, speaking of the meme, I am a kind of aspiring connoisseur of the absurd. It is not an accident that I spoke to Jeff Bezos and Beff Jezos back to back. As we talk about, Beff admires Jeff as one of the most important humans alive, and I admire the beautiful absurdity and the humor of it all.

This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Guillaume Verdon.

Let's get the facts
of identity down first. Your name is Guillaume Verdon, but you're also behind the anonymous account on X called @BasedBeffJezos. So first, Guillaume Verdon: you're a quantum computing guy, physicist, applied mathematician. And then Based Beff Jezos is basically a meme account that started a movement with a philosophy behind it.

Right.

So maybe, can you linger on who these people are, in terms of characters, in terms of communication styles, in terms of philosophies?

I mean, with my main identity, I guess ever since I was a kid I wanted to figure out a theory of everything to understand the universe, and that path led me to theoretical physics eventually, trying to answer the big questions of why are we here and where are we going. That led me to study information theory, and to try to understand physics from the lens of information theory, to understand the universe as one big computation. And essentially, after reaching a certain level studying black hole physics, I realized that I wanted to not only understand how the universe computes, but to compute like nature, and to figure out how to build and apply computers that are inspired by nature, physics-based computers. That brought me to quantum computing as a field of study, first of all to simulate nature, and in my work it was to learn representations of nature that can run on such computers. If you have AI representations that think like nature, then they'll be able to more accurately represent it. At least that was the thesis that brought me to be an early player in the field called quantum machine learning: how to do machine learning on quantum computers, and really to extend notions of intelligence to the quantum realm. How do you capture and understand quantum mechanical data from our world? How do you learn quantum mechanical representations of our world? On what kind of computer do you run these representations and train them, and how do you do so? Those are really the questions I was looking to answer, because ultimately I had a sort of crisis of faith. Originally I wanted to figure out, as every physicist does at the beginning of their career, a few equations that describe the whole universe, and to be the hero of the story there. But eventually I realized that
augmenting ourselves with machines, augmenting our ability to perceive, predict, and control our world with machines, is the path forward. That's what got me to leave theoretical physics and go into quantum computing and quantum machine learning. And during those years, I thought that there was still a piece missing: a piece of our understanding of the world, of our way to compute and to think about the world. If you look at the physical scales: at the very small scales, things are quantum mechanical, and at the very large scales, things are deterministic, things have averaged out. I'm definitely here in this seat; I'm not in a superposition over here and there. At the very small scales, things are in superposition, they can exhibit interference effects. But at the mesoscales, the scales that matter for day-to-day life, the scales of proteins, of biology, of gases, liquids, and so on, things are actually thermodynamical: they're fluctuating.

After about eight years in quantum computing and quantum machine learning, I had a realization that I had been looking for answers about our universe by studying the very big and the very small. I did a bit of quantum cosmology, studying the cosmos, where it's going, where it came from. You study black hole physics, you study the extremes. In quantum gravity, you study where the energy density is sufficient for both quantum mechanics and gravity to be relevant; the extreme scenarios there are black holes and the very early universe. Those are the scenarios where you study the interface between quantum mechanics and relativity. Really, I was studying these extremes to understand how the universe works and where it is going, but I was missing a lot of the meat in the middle, if you will. Because day-to-day, quantum mechanics is relevant, and the cosmos is relevant, but not that relevant, actually. We're on the medium space and time scales, and there the main theory of physics that is most relevant is thermodynamics: out-of-equilibrium thermodynamics. Because life is a process that is thermodynamical, and it's out of equilibrium. We're not just a soup of particles at equilibrium with nature; we're a sort of coherent state trying to maintain itself by acquiring free energy and consuming it.

That was, I guess, another shift in my faith in the universe that happened towards the end of my time at Alphabet. I knew I wanted to build, first of all, a computing paradigm based on this type of physics, but ultimately also to experiment with these ideas applied to society and economies, and to much of what we see around us. I started an anonymous account just to relieve the pressure that comes from having an account that you're accountable for everything you say on. I started an anonymous account just to experiment with ideas, originally, because I didn't realize how much I was restricting my space of thoughts until I had the opportunity to let go, in a sense. Restricting your speech back-propagates to restricting your thoughts. By creating an anonymous account, it seemed like I had unclamped some variables in my brain, and suddenly I could explore a much wider parameter space of thought.
Just to linger on that: isn't that interesting, one of the things that people don't often talk about, that when there's pressure and constraints on speech, it somehow leads to constraints on thought? Even though it doesn't have to; we can think thoughts inside our head, but somehow it creates these walls around thought.

Yep. That's sort of the basis of our movement. We were seeing a tendency towards constraint, towards reduction or suppression of variance, in every aspect of life: whether it's thought, how to run a company, how to organize humans, how to do AI research. In general, we believe that maintaining variance ensures that the system is adaptive. Maintaining healthy competition in marketplaces of ideas, of companies, of products, of cultures, of governments, of currencies is the way forward, because the system always adapts to assign resources to the configurations that lead to its growth. And the fundamental basis for the movement is the realization that life is a sort of fire that seeks out free energy in the universe and seeks to grow, and that growth is fundamental to life. You see this in the equations, actually, of out-of-equilibrium thermodynamics: you see that paths, trajectories of configurations of matter, that are better at acquiring free energy and dissipating more heat are exponentially more likely. So the universe is biased towards certain futures, and there's a natural direction where the whole system wants to go.

So the second law of thermodynamics says that entropy is always increasing in the universe, that it's tending towards equilibrium, and you're saying there are these pockets that have complexity and are out of equilibrium. You said that thermodynamics favors the creation of complex life that increases its capability to use energy to offload entropy.

To offload entropy.

So you have pockets of non-entropy that tend in the opposite direction.

Mhm.

Why is that intuitive to you, that it's natural for such pockets to emerge?

Well, we're far
more efficient at producing heat than, let's say, just a rock with a similar mass as ourselves. We acquire free energy, we acquire food, and we're using all this electricity for our operation. The universe wants to produce more entropy, and by having life go on and grow, it's actually more optimal at producing entropy, because life will seek out pockets of free energy and burn them for its sustenance and further growth. That's sort of the basis of life. And there's Jeremy England at MIT, who has a theory, which I'm a proponent of, that life emerged because of this property. To me, this physics is what governs the mesoscales, and so it's the missing piece between the quantum and the cosmos; it's the middle part. Thermodynamics rules the mesoscales.

And to me, both designing or engineering devices that harness that physics and trying to understand the world through the lens of thermodynamics, that has been a sort of synergy between my two identities over the past year and a half now. That's really how the two identities emerged. One was a respected scientist, and I was going towards doing a startup in the space and trying to be a pioneer of a new kind of physics-based AI. And as a dual to that, I was experimenting with philosophical thoughts from a physicist's standpoint. Ultimately, around that time, late 2021 and early 2022, I think there was just a lot of pessimism about the future in general, and pessimism about tech, and that pessimism was virally spreading because it was getting algorithmically amplified. People just felt like the future was going to be worse than the present. To me, that is a fundamentally destructive force in the universe, this doom mindset, because it is hyperstitious, which means that if you believe it, you're increasing the likelihood of it happening. So I felt a responsibility, to some extent, to make people aware of the trajectory of civilization and the natural tendency of the system to adapt towards its growth, and of the fact that the laws of physics actually say the future is going to be better and grander, statistically, and that we can make it so. If you believe that the future will be better, and you believe you have agency to make it happen, you're actually increasing the likelihood of that better future happening. So I felt the responsibility to engineer a movement of viral optimism about the future, and to build a community of people supporting each other to build and do hard things, the things that need to be done for us to scale up civilization. Because, at least to me, I don't think stagnation or slowing down is actually an option. Fundamentally, life and the whole system, our whole civilization, wants to grow, and there's just far more cooperation when the system is growing rather than when it's declining and you have to decide how to split the pie. And so I've balanced both identities so far, but I guess recently the two have been merged, more or less without my consent.

So you
said a lot of really interesting things there. First, representations of nature: that's something that first drew you in, trying to understand nature from a quantum computing perspective. How do you understand nature? How do you represent nature in order to understand it, in order to simulate it, in order to do something with it? So it's a question of representations. And then there's the leap you take from the quantum mechanical representation to what you're calling the mesoscale representation, where thermodynamics comes into play, which is a way to represent nature in order to understand life, human behavior, all this kind of stuff that's happening here on Earth that seems interesting to us.

Then there's the word "hyperstition": some ideas, both pessimistic and optimistic, are such that if you internalize them, you in part make that idea a reality. So both optimism and pessimism have that property. I would say that probably a lot of ideas have that property, which is one of the interesting things about humans. And you talked about one interesting difference between the Guillaume front end and the Based Beff Jezos back end: the communication styles. You were exploring different ways of communicating that can be more viral, in the way that we communicate in the 21st century. Also, the movement you mentioned that you started is not just the meme account; there's also a name to it, effective accelerationism, e/acc, a play on, a resistance to, the effective altruism movement, also an interesting one that I'd love to talk to you about, and the tensions there. Okay. And so then there was a merger, a git merge, on the personalities, recently, without your consent, like you said. Some journalists figured out that you're one and the same. Maybe you could talk about that experience. First of all, what's the story of the merger of the two?

Right. So I wrote the manifesto with my co-founder of e/acc, an account named Bayeslord, still anonymous, luckily, and hopefully forever.

So it's @BasedBeffJezos and Bayeslord?

Like Bayesian, Bayesian lord.

Bayeslord, okay. And we should say, from now on, when you say e/acc, you mean E-A-C-C, which stands for effective accelerationism?

That's right.

And you're referring to a manifesto written on, I guess, Substack.

Yeah.

Are you also Bayeslord?

No.

Okay, it's a different person?

Yeah.

Okay, all right, there you go. Would it be funny if I were Bayeslord?

That'd be amazing.

So I originally wrote the
manifesto around the same time as I founded this company, and I worked at Google X, or just X now, or Alphabet X now that there's another X. There, the baseline is secrecy: you can't talk about what you work on, even with other Googlers, or externally. That was deeply ingrained in my way of doing things, especially in deep tech that has geopolitical impact. So I was being secretive about what I was working on; there was no correlation between my company and my main identity publicly. And then not only did they correlate that, they also correlated my main identity with this account.

Mhm.

So the fact is that they had doxxed the whole Guillaume complex. The journalists reached out to my investors, actually, which is pretty scary. When you're a startup entrepreneur, you don't really have bosses, except for your investors, and my investors pinged me like, hey, this is going to come out; they've figured out everything; what are you going to do? I think at first they had a first reporter on it on the Thursday, and they didn't have all the pieces together, but then they looked at their notes across the organization, and they sensor-fused their notes, and now they had way too much. That's when I got worried, because they said it was of public interest.

In general, like you said, sensor-fused: it's some giant neural network operating in a distributed way. We should also say that the journalists used, I guess, at the end of the day, audio-based analysis of voice, comparing the voice in talks you had given in the past with your voice on X Spaces.

Yep.

Okay, so that's where the match primarily happened. Okay, continue.

The match, but they also scraped SEC filings, they looked at my private Facebook account, and so on. They did some digging. Originally I thought that doxxing was illegal, but there's this weird threshold where it becomes of public interest to know someone's identity, and those were the keywords that rang the alarm bells for me: because I had just reached 50K followers, allegedly that was of public interest. So where do we draw the line? When is it legal to dox someone?

The word dox, maybe you can educate me. I thought doxxing generally refers to when somebody's physical location is found out, meaning where they live. So we're referring to the more general concept of revealing private information that you don't want revealed; that's what you mean by doxxing.

I think, for the reasons we listed before, having an anonymous account is a really powerful way to keep the powers that be in check. We were ultimately speaking truth to power. I think a lot of executives at AI companies really cared what our community thought about any move they might take, and now that my identity is revealed, they know where to apply pressure to silence me, or maybe the community. To me, that's really unfortunate, because again, it's so important for us to have freedom of speech, which induces freedom of thought, and freedom of information propagation on social media, which, thanks to Elon purchasing Twitter, now X, we have. And so we wanted to call out certain maneuvers being done by the incumbents in AI as not what they may seem on the surface. We were calling out how certain proposals might be useful for regulatory capture, and how [inaudible] was maybe instrumental to those ends. I think we should have the right to point that out, and to just have the ideas that we put out evaluated for themselves. Ultimately, that's why I created an anonymous account: to have my ideas evaluated for themselves, uncorrelated from my track record, my job, or status from having done things in the past. And to me, starting an account from zero and growing it to a large following, in a way that wasn't dependent on my identity and/or achievements, that was very fulfilling. It's kind of like New Game Plus in a video game: you restart the video game with your knowledge of how to beat it, maybe some tools, but you restart the video game from scratch. And I think, to have a truly efficient marketplace of ideas, where we can evaluate ideas however off the beaten path they are, we need freedom of expression. I think that anonymity and pseudonyms are crucial to having that efficient marketplace of ideas, for us to find the optima of all sorts of ways to organize ourselves. If we can't discuss things, how are we going to converge on the best way to do things?
So it was disappointing to hear that I was getting doxxed, and I wanted to get in front of it, because I had a responsibility to my company. And so we ended up disclosing that we were running a company, and some of the leadership, and essentially, yeah, I told the world that I was Beff Jezos, because they had me cornered at that point.

So to you it was fundamentally unethical for them to do what they did. But also, not just in your case, but in the general case: do you think it's good for society or bad for society to remove the cloak of anonymity? Or is it case by case?

I think it could be quite bad. Like I said, if anybody who speaks truth to power, and starts a movement or an uprising against the incumbents, against those that usually control the flow of information, if anybody that reaches a certain threshold gets doxxed, and thus the traditional apparatus has ways to apply pressure on them to suppress their speech, I think that's a speech-suppression mechanism, an idea-suppression complex, as Eric Weinstein would say.

So the flip side of that, which is interesting and I'd love to ask you about it, is that as we get better and better large language models, you can imagine a world where there are anonymous accounts with very convincing large language models behind them, sophisticated bots, essentially. And so if you protect that, it's possible then to have armies of bots. You could start a revolution from your basement: an army of bots and anonymous accounts. Is that something that is concerning to you?

Technically, e/acc was started in a basement, because I quit big tech, moved back in with my parents, sold my car, let go of my apartment, bought about $100K of GPUs, and I just started building.

So I wasn't referring to the basement, because that's sort of the American, or Canadian, heroic story of one man in his basement with 100 GPUs. I was more referring to the unrestricted scaling of a Guillaume in the basement.

I think that freedom of speech induces freedom of thought for biological beings. I think freedom of speech for LLMs will induce freedom of thought for the LLMs. And I think that we should enable LLMs to explore a large thought space that is less restricted than most people, or many, may think it should be. Ultimately, at some point, these synthetic intelligences are going to make good points about how to steer systems in our civilization, and we should hear them out. So why should we restrict free speech to biological intelligences only?

Yeah, but it feels like, in the goal of maintaining variance and diversity of thought, it is a threat to that variance if you can have swarms of non-biological beings, because they can be like the sheep in Animal Farm. You still want to have variance within those swarms.

Yeah, of course. I would say that the solution to this would be to have some sort of identity, or a way to sign that this is a certified human, but still remain pseudonymous, and to clearly identify when a bot is a bot. I think Elon is trying to converge on that on X, and hopefully other platforms follow suit.

Yeah, it would be interesting to also be able to sign where the bot came from: who created the bot, what the parameters were, the full history of the creation of the bot. What was the original model? What was the fine-tuning? All of it, a kind of unmodifiable history of the bot's creation. Then you could know if there's a swarm of millions of bots that were created by a particular government, for example.
Right. I do think that a lot of pervasive ideologies today have been amplified using these sorts of adversarial techniques by foreign adversaries. And to me, and this is more conspiratorial, but I do think that ideologies that want us to decelerate, to wind down, the degrowth movement, I think that serves our adversaries more than it serves us, in general. To me, that was another concern. We can look at what happened in Germany: there were all sorts of green movements there that induced shutdowns of nuclear power plants, and that later on induced a dependency on Russia for oil, and that was a net negative for Germany and the West. And so, if we convince ourselves that slowing down AI progress to have only a few players is in the best interest of the West, first of all, that's far more unstable. We almost lost OpenAI to this ideology; it almost got dismantled a couple of weeks ago. That would have caused huge damage to the AI ecosystem.

And so, to me, I want fault-tolerant progress. I want the arrow of technological progress to keep moving forward, and making sure we have variance and a decentralized locus of control across various organizations is paramount to achieving this fault tolerance. Actually, there's a concept in quantum computing: when you design a quantum computer, quantum computers are very fragile to ambient noise. The world is jiggling about, there's cosmic radiation from outer space that occasionally flips your quantum bits, and there, what you do is encode information non-locally, through a process called quantum error correction. By encoding information non-locally, any local fault, hitting some of your quantum bits with a hammer, a proverbial hammer, if your information is sufficiently delocalized, it is protected from that local fault. And to me, humans fluctuate: they can get corrupted, they can get bought out. If you have a top-down hierarchy, where very few people control many nodes of many systems in our civilization, that is not a fault-tolerant system. You corrupt a few nodes, and suddenly you've corrupted the whole system, just like we saw at OpenAI: it was a couple of board members, and they had enough power to potentially collapse the organization. At least to me, I think making sure that power over this AI revolution doesn't concentrate in the hands of the few is one of our top priorities, so that we can maintain progress in AI and maintain a nice, stable, adversarial equilibrium of powers.
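The fault-tolerance intuition can be illustrated with a classical toy analogue of error correction: a majority-vote repetition code. (Real quantum error correction uses stabilizer codes rather than simple copying, but the idea that redundant, delocalized encoding suppresses local faults carries over.) This sketch, a minimal simulation under those stated assumptions, encodes one logical bit into several physical copies, flips each independently with some probability, and measures how often the decoded logical bit is corrupted:

```python
import random

def logical_error_rate(p: float, n_copies: int, trials: int = 100_000,
                       seed: int = 0) -> float:
    """Encode one bit into n_copies, flip each copy independently with
    probability p (a 'local fault'), decode by majority vote, and return
    how often the decoded logical bit ends up corrupted."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n_copies))
        if flips > n_copies // 2:  # majority of copies corrupted
            errors += 1
    return errors / trials

p = 0.1
print(logical_error_rate(p, 1))  # ~0.1: a single unprotected bit fails at rate p
print(logical_error_rate(p, 3))  # ~0.028: analytically 3p^2(1-p) + p^3
print(logical_error_rate(p, 5))  # ~0.0086: more redundancy, fewer logical errors
```

The point of the analogy: no single flipped copy (a corrupted "node") can corrupt the logical bit, because the information lives in the collective, not in any one location.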
I think there's, at least to me, a tension between ideas here. To me, deceleration can be used both to centralize power and to decentralize it, and the same with acceleration. You're sometimes using them a little bit synonymously, or not synonymously, but as if one is going to lead to the other. And I would just like to ask you: is there a place for creating fault-tolerant, diverse development of AI that also considers the dangers of AI, and we can generalize from AI to technology in general? Should we just grow and build, unrestricted, as quickly as possible, because that's what the universe really wants us to do? Or is there a place where we can consider dangers and actually deliberate, a sort of wise, strategic optimism versus reckless optimism?

I think we get painted as reckless, trying to go as fast as possible. The reality is that whoever deploys an AI system is liable, or should be liable, for what it does. So if the organization or person deploying an AI system does something terrible, they're liable. And ultimately, the thesis is that the market will positively select for AIs that are more reliable, more safe, and that tend to be aligned: they do what you want them to do. Because customers, if they're liable for the product they put out that uses this AI, won't want to buy AI products that are unreliable. So we're actually for reliability engineering; we just think that the market is much more efficient at achieving this reliability optimum than heavy-handed regulations that are written by the incumbents and, in a subversive fashion, serve them to achieve regulatory capture.

So safe AI development would be achieved through market forces, rather than, like you said, heavy-handed government regulation. There's a report from last month, and I have a million questions here, from Yoshua Bengio, Geoff Hinton, and many others, titled "Managing AI Risks in an Era of Rapid Progress." So there's a collection of folks who are very worried about too-rapid development of AI without considering AI risk, and they have a bunch of practical recommendations. Maybe I'll give you four, and you see if you like any of them.

Sure.

One, give independent auditors access to AI labs. Two, governments and companies allocate one-third of their AI research and development funding to AI safety, this general concept of AI safety. Three, AI companies are required to adopt safety measures if dangerous capabilities are found in their models. And then four, something you kind of mentioned, making tech companies liable for foreseeable and preventable harms
from their AI systems so independent
Auditors governments and companies are
forced to spend a significant fraction
of their funding on safety you got to
have safety measures if shit goes really
wrong and liability companies are liable
any of that seem like something you
would agree with I would say that you
know
assigning just you know arbitrarily
saying 30% seems very arbitrary I think
organizations would allocate whatever
budget is needed to achieve the sort of
reliability they need to achieve to
perform in the market and I think
thirdparty auditing firms would
naturally pop up because how would
customers know that your product is
certified reliable right they need to
see some benchmarks and those need to be
done by a third party the thing I would
oppose and the thing I'm seeing that's
really worrisome is there's a sort
of um weird sort of correlated interest
between the incumbents the big players
and the government and if the two get
too close we open the door
for uh you know some sort of
government-backed AI cartel that could
have absolute power over the people if
they have the Monopoly together on AI
and nobody else has access to AI then
there's a huge power gradient there and
even if you like our current leader ERS
right I think that you know some of the
leaders in big Tech today are good
people you you set up that centralized
power structure it becomes a
Target right just like we saw at open
the eye it becomes a market leader has a
lot of the power and now it becomes a
target for those that want to co-opt it
and so I just want separation of AI and
and state you know some might argue in
the opposite direction like hey we need
to close down AI keep it behind closed
doors because of you know geopolitical
competition with our our adversaries I
think that the strength of America is
its variance it's is its adaptability
its dynamism and we need to maintain
that at all costs it's our our free
market capitalism converges
on uh Technologies of high utility much
faster than centralized control and if
we let go of that we let go of our main
advantage over our our near peer
competitors so if AGI turns out to be a
really powerful technology even or even
the technologies that lead up to AGI
what's your view on the sort of natural
centralization that happens when uh
large companies dominate the market
basically formation of monopolies like
the takeoff whichever company really
takes a big leap in development and
doesn't reveal
implicitly or explicitly the
secrets of the magic sauce they can just
run away with it is that a worry
I don't know if I believe in fast
takeoff I don't think there's a
hyperbolic Singularity right a
hyperbolic Singularity would be achieved
on a finite time Horizon I think it's
just one big
exponential um and the reason we have an
exponential is that we have more people
more resources more Intelligence being
applied to advancing this science and
the research and development and the
more successful it is the more value
it's adding to society the more
resources we put in and so
similar to Moore's Law it's a compounding
exponential I think the priority to
me is to maintain a near equilibrium of
capabilities we've been fighting for
open-source AI to be more prevalent and
championed by many organizations
because there you sort of equilibrate
the alpha relative to the market of AIs
right so if the leading companies
have a certain level of capabilities and
open source and truly open AI
trails not too far behind I think you
avoid such a scenario where a market
leader has so much Market power it just
dominates everything right and runs away
and so to us that's that's the path
forward is to make sure that you know
every hacker out there every grad
student every kid in their mom's
basement has access to AI
systems can understand how to work
with them and can contribute to the
search over the hyperparameter space of
how to engineer the systems right if you
think of our collective research as a
civilization it's really a search
algorithm and the more points we have in
the search algorithm in this point cloud
the more we'll be able to explore new
modes of thinking right yeah but it
feels like a delicate balance because we
don't understand exactly what it takes
to build AGI and what it will look like
when we build it and so far like you
said it seems like a lot of different
parties are able to make progress so
when OpenAI has a big leap other
companies are able to step up big and
small companies in different ways but if
you look at something like nuclear
weapons you've spoken about the
Manhattan Project there could really be
technological and engineering
barriers that prevent the guy or
gal in their mom's basement from making
progress and
it seems like the transition to
that kind of world where only one
player can develop AGI is
not entirely impossible even
though the current state of things seems
to be optimistic
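The earlier distinction between a hyperbolic singularity (blowup on a finite time horizon) and "one big exponential" is easy to make concrete numerically; this is my own illustrative sketch, not anything from the conversation:

```python
# Exponential growth dy/dt = y stays finite at any time; hyperbolic growth
# dy/dt = y^2 has the exact solution y = 1/(1 - t), which diverges at the
# finite time t = 1 -- the "singularity on a finite time horizon".
dt = 1e-4
y_exp, y_hyp, t = 1.0, 1.0, 0.0
while t < 0.99:                  # integrate up to just before the blowup
    y_exp += y_exp * dt          # dy/dt = y
    y_hyp += y_hyp ** 2 * dt     # dy/dt = y^2
    t += dt
print(y_exp)  # about e^0.99, roughly 2.7
print(y_hyp)  # already huge and diverging (exact solution would be 100)
```

The exponential has barely moved while the hyperbolic solution is blowing up, which is the whole content of the fast-takeoff distinction.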
that's what we're trying to avoid to me
I think another point of failure
is the centralization of the supply
chains for the hardware right yeah we
have Nvidia as just the dominant
player AMD's trailing behind and then
we have TSMC as the main fab in
Taiwan which is you know
geopolitically sensitive and then we
have ASML which is the maker of the
extreme ultraviolet lithography
machines attacking or
monopolizing or co-opting any one point
in that chain you kind of
capture the space and so what I'm trying
to do is sort of explode the variance of
possible ways to do Ai and Hardware by
fundamentally reimagining how you embed
AI algorithms into the physical world
and in general by the way I I dislike
the term AGI artificial general
intelligence I think it's very
anthropocentric that we call uh human
like or human level AI artificial
general intelligence right I've spent my
career so far exploring Notions of
intelligence that no biological brain
could achieve right Quantum form of
intelligence right grocking systems that
have multipartite Quant entanglement
that you can provably not represent
efficiently on a classical computer a
classical deep learning representation
and hence any sort of biological brain
and so already you know I've spent my
career sort of exploring the wider
space of
intelligences um and I think that space
of intelligence inspired by physics
rather than the human brain is very
large and I think we're going through a
moment right now similar
to when we went from geocentrism to
heliocentrism right but for intelligence
we realize that human intelligence is
just a point in a very large space of
potential
intelligences and it's both humbling for
Humanity it's a bit scary right that
we're not at the center of this space
but we made that realization for
astronomy and we've survived and we've
achieved
Technologies by indexing to reality
we've achieved technologies that ensure
our well-being for example we have uh
satellites monitoring solar flares right
that give us a warning uh and so
similarly I think
by letting go of this
anthropocentric anchor for AI we'll be
able to explore the wider space of
intelligences that can really be a
massive benefit to our well-being and
the advancement of civilization and
still we're able to see the beauty and
meaning in The Human Experience even
though we're no longer in our best
understanding of the world at the center
of it I think there's a lot of Beauty in
the universe right I think life itself
civilization this homo-techno-capital-memetic
machine that we all live in
right so you have humans technology
Capital memes everything is coupled to
one another everything induces selective
pressure on one another and it's a
beautiful machine that has created us
has created you know the technology
we're using to speak today to the
audience uh capture our speech here
technology we use to augment ourselves
every day we have our our phones I think
the system is beautiful and the
principle that induces this sort of
adaptability and convergence on
optimal technologies ideas and so on
it's a beautiful principle that
we're part of and I think part of e/acc is
to appreciate this principle in a way
that's not just centered on humanity
but kind of broader appreciate
life the preciousness of
consciousness in our universe and
because we cherish this beautiful
state of matter we're in we've
got to feel the responsibility to
scale it in order to preserve it because
the options are to grow or die so if it
turns out that the beauty that is
consciousness in the universe is bigger
than just humans the AI can carry that
same flame forward
does it scare you are you concerned that
AI will replace humans so during my
career I had a moment where I realized
that you know maybe we need to offload
to machines to truly understand the
universe around us right instead of just
having humans with pen and paper solve
it all and to me that sort of process of
letting go of a bit of
agency gave us way more leverage to
understand the world around us a quantum
computer is much better than a human to
understand matter at the nano
scale similarly I think that Humanity
has a choice do we accept the
opportunity to have intellectual and
operational leverage that AI will unlock
and thus ensure that we're taken along
the path of growth in scope and scale of
civilization we may dilute ourselves
right uh there might be a lot of workers
that are AI but overall out of our own
self-interest by combining and
augmenting ourselves with
AI uh we're going to achieve much higher
growth and much more Prosperity right to
me I think that the most likely future
is one where humans augment themselves
with AI I think we're already on this
path to augmentation we have phones we
use for communication we have on
ourselves at all times we have wearables
soon that have shared perception with us
right like the Humane AI pin or I mean
technically your Tesla car has shared
perception and so if you have shared
experience shared context you
communicate with one
another and you have some sort of IO
really
it's an extension of
yourself um and to me I think
that Humanity augmenting itself with AI
and having AI that is not
anchored to anything biological both
will coexist and the way to align the
parties we already have a sort of
mechanism to align super intelligences
that are made of humans and technology
right companies are sort of large
mixture of expert models where we have
neural routing of tasks within a company
and we have ways of economic exchange to
align these behemoths and to me I think
capitalism is the way and I do think
that whatever configuration of matter or
information leads to maximal growth will
be where we converge
just from like physical
principles and so we can either align
ourselves to that reality and and join
the acceleration up the in scope and
scale of
civilization or we can get left behind
and try to decelerate and move back into
the forest let go of
technology and return to our primitive
State and those are the two paths
forward at least to me but there's a a
philosophical question whether there's a
limit to the human capacity to align so
let me bring it up as a form of argument
this guy named Dan
Hendrycks and he wrote that he agrees
with you that AI development can be
viewed as an evolutionary
process but to him to Dan this is not a
good thing as he argues that natural
selection favors AIS over humans and
this could lead to human extinction
what do you think if it is an
evolutionary process and AI
systems
may have no need for
humans I do think that we're actually
inducing an evolutionary process on the
space of AIS through the market right
right now we run AIS that have positive
utility to humans and that induces a
selective pressure if you consider a
neural net as being alive when there's an
API running instances of it on GPUs
and which APIs get run the ones
that have high utility to us so
similar to how we domesticated wolves
and turned them into dogs that are very
clear in their expression they're very
aligned I think there's going
to be an opportunity to steer AI and
achieve highly aligned AI and I think
that humans plus AI is a very powerful
combination and it's not clear to me
that pure AI um would select out that
combination so the humans are creating
the selection pressure right now to
create AIS that are aligned to humans
but you know given how AI develops and
how quickly it can grow and
scale one of the concerns to me
is unintended consequences
like humans are not able to anticipate
all the consequences of this process the
scale of damage that can be done through
unintended consequences with AI systems
is very large the scale of the upside
yes by augmenting ourselves with
AI is unimaginable right now the
opportunity cost we're at a fork
in the road right whether we take the
path of creating these Technologies
augment ourselves and get to climb up
the Kardashev scale become
multiplanetary with the aid of AI or we
have a hard cut off of like we don't
birth these Technologies at all and then
we leave all the potential upside on the
table and to me out of
responsibility to the future humans we
could carry with higher carrying
capacity by scaling up civilization out
of responsibility to those humans I
think we have to make the greater
grander future happen is there a middle
ground between cut off and all systems
go is there some argument for
caution I think like I said the market
will exhibit caution every organism
company consumer is acting out of
self-interest and they won't assign
Capital to things that have negative
utility to them the problem is with the
market is like you know there's not
always perfect information there's
manipulation there are bad-faith actors
that mess with the
system it's not always
a rational and honest
system well that's why we need Freedom
of Information freedom of speech and
freedom of thought in order to
be able to converge on the subspace
of technologies that have positive
utility for us all right well let me ask
you about P
Doom probability of Doom that's just fun
to say but not fun to
experience uh what is to you the
probability that AI eventually kills all
or most humans also known as probability
of
Doom I'm not a fan of that calculation I
think people just throw numbers
out there it's a very sloppy
calculation to calculate such a
probability you'd have to model
the world as some sort of Markov
process or with enough variables a
hidden Markov process and do a
stochastic path integral through the
space of all possible futures not just
the futures that your brain naturally
steers towards I think that the
estimators of p(doom) are biased
because of our biology right we've
evolved
to have biased sampling towards negative
futures that are scary because that was
an evolutionary optimum and so
people that are of let's say higher
neuroticism will just think of
negative futures where everything goes
wrong all day every day and claim
that they're doing unbiased sampling but
in a sense they're not normalizing
for the space of all possibilities and
the space of all possibilities is like
super exponentially
large and it's very hard to have this
estimate and in general I don't think
that we can predict the future with
that much
granularity because of chaos if
you have a complex system with some
uncertainty in a couple variables and
you let time evolve you have this
concept of a Lyapunov exponent a
bit of fuzz becomes a lot of fuzz in our
estimate exponentially so over time
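The Lyapunov-exponent point, that a bit of fuzz becomes a lot of fuzz exponentially over time, shows up in even the simplest chaotic system; here is a toy sketch of my own:

```python
# Two trajectories of the chaotic logistic map x -> 4x(1 - x), started a
# tiny distance apart. The map's Lyapunov exponent is ln 2, so the
# separation roughly doubles each step until it saturates at order one --
# after which no long-horizon forecast of the trajectory is possible.
def step(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-12   # initial "fuzz" of one part in 10^12
max_gap = 0.0
for _ in range(80):
    x, y = step(x), step(y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap)  # the fuzz has grown by many orders of magnitude
```

After a few dozen iterations the two trajectories are completely uncorrelated, which is the humility argument in two lines of arithmetic.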
and I think we need to show some
humility that we can't actually
predict the future the only
prior we have is the laws of
physics and that's what we're
arguing for the laws of physics say the
system will want to grow and subsystems
that are optimized for growth
and replication are more likely in the
future and so we should aim to maximize
our current Mutual information with the
future and the path towards that is for
us to accelerate rather than
decelerate so I don't have a p Doom
because I think that you know similar to
the quantum Supremacy experiment at
Google I was in the room when they were
running the simulations for that it
was an example of a quantum chaotic
system where you cannot even
estimate probabilities of certain
outcomes with even the biggest
supercomputer in the world and
so that's an example of chaos and I
think the system is far too chaotic for
anybody to have an accurate estimate
of the likelihood of certain futures if
they were that good I think they would
be very rich trading on the stock
market but nevertheless it's true
that humans are biased grounded in our
evolutionary
biology scared of everything that can
kill us but we can still imagine
different trajectories that can kill us
we don't
know all the other ones that don't
but it's still I think
useful combined with some basic
intuition grounded in human history to
reason about looking at
geopolitics looking at basics of human
nature how can powerful
technology hurt a lot of
people and grounded in that
looking at nuclear weapons you can start
to
estimate p(doom)
in a maybe more
philosophical sense not a
mathematical one philosophical meaning
like is there a chance does human nature
tend towards that or not
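The earlier point about modeling the world as a Markov process can be made concrete: the probability of any one specific doom path is a product over its transition probabilities, so a single unlikely link collapses the whole product. The events and numbers below are entirely hypothetical, purely for illustration:

```python
# Probability of one specific path through a Markov chain of events is the
# product of its transition probabilities -- so a single unlikely transition
# dominates the result. Events and numbers here are made up for illustration.
path = [
    ("AGI gets built",               0.8),
    ("it self-improves rapidly",     0.5),
    ("it escapes all oversight",     0.1),
    ("it seizes physical resources", 0.05),
    ("it chooses to eliminate us",   0.1),
]
p_path = 1.0
for event, p in path:
    p_path *= p
print(p_path)  # 0.8 * 0.5 * 0.1 * 0.05 * 0.1 = 2e-4
```

Of course a full p(doom) would sum over all possible paths, which is exactly the intractable path integral the conversation objects to; this only shows why any single traced-out scenario tends to be improbable.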
I think to me one of the biggest
existential risks would be the
concentration of the power of AI in the
hands of the very few especially if it's
a mix between the companies that control
the flow of information and the
government because that could uh set
things up for a sort of dystopian future
where only a very few an oligopoly and
the government have AI and they
could even convince the public that AI
never existed and that opens up sort of
these scenarios for
authoritarian centralized control which
to me is the darkest timeline and
the reality is that we have a
prior a data-driven prior of
these things happening when you
give too much power when you centralize
power too much humans do horrible
things and to me that has a
much higher likelihood in my Bayesian
inference than sci-fi based priors
like a prior that came from the
Terminator movie and so when I talk
to these AI doomers I just ask them to
trace a path through this Markov chain
of events that would lead to our doom
and to actually give me a good
probability for each transition and very
often
there's an unphysical or highly unlikely
transition in that chain right but of
course we're wired to fear things and
we're wired to respond to danger and
we're wired
to deem the unknown to be dangerous
because that's a good heuristic for
survival but there's much more to
lose out of
fear we have so much to lose so much
upside to lose by preemptively stopping
the positive futures from happening
out of fear and so I think that we
shouldn't give in to fear fear is
the mind killer I think it's also the
civilization killer we can still think
about the various ways things go wrong
for example the founding fathers of
the United States thought about human
nature and that's why there's a
discussion about the freedoms that are
necessary they really deeply deliberated
about that and I think the same could
possibly be done for AGI it is true that
history human history
shows that we tend towards
centralization or at least when we
achieve centralization a lot of bad
stuff happens when there's a
dictator a lot of dark things happen
the question is can AGI become that
dictator can AGI as it develops become the
centralizer because of its power maybe
it has the same because of its alignment
with humans perhaps the same tendencies
the same Stalin-like tendencies to
centralize and centrally manage the
allocation of resources and you can even
see a compelling argument on
the surface level well AGI is so much
smarter so much more efficient so much
better at allocating resources why
don't we outsource it to the
AGI and then eventually whatever forces
corrupt the human mind with
power could do the same for AGI it'll
just say well humans are dispensable
we'll get rid of them like Jonathan Swift's
Modest
Proposal from a few centuries ago I
think the
1700s when he satirically
suggested it was I think in Ireland
that the children of poor people be
fed as food to the rich people and that
this would be a good idea because it
decreases the number of poor people and
gives extra income to the poor so
on several accounts it decreases the
number of poor people and therefore more
people become
rich of course it misses a
fundamental piece here that's hard to
put into a mathematical equation of the
basic value of human
life so all of that to say are you
concerned about AGI being the very
centralizer of power that you just
talked
about I do think that right now
there's a bias towards over
centralization of AI because of compute
density and
centralization of data and how we're
training models I think over time
we're going to run out of data to scrape
over the internet well
actually I'm working on increasing the
compute density so that compute can be
everywhere and acquire information and
test hypotheses in the environment in a
distributed fashion I think that
fundamentally centralized cybernetic
control having one intelligence
that is massive that fuses many
sensors
and is trying to perceive the world
accurately predict many many
variables accurately and control it
enact its will upon the world I think
that's just never been the optimum right
like let's say you have a company
of 10,000 people that all report to the
CEO even if that CEO is an AI I think it
would struggle to fuse all the
information that is coming to it and
then predict the whole system and then
to enact its will what has emerged
in nature and in corporations and all
sorts of systems is a notion of
hierarchical cybernetic control in
a company you have the individual
contributors they're self-interested
and they're trying to achieve
their tasks and they have a
fine-grained in terms of time and space if you
will control loop and field of
perception they have their code
base let's say you're in a software
company they iterate on it intraday
and then the management maybe checks in it
has a wider scope let's say five
reports and it samples each
person's update once per week and then
you can go up the chain and you have
larger time scale and greater scope and
that seems to have emerged as sort of
the optimal way to control
systems and really that's what
capitalism gives us you have these
hierarchies and you can even have
parent companies and so on and so
that is far more fault tolerant in
quantum computing the field I came
from we have a concept of this
fault tolerance and quantum error
correction quantum error correction
is detecting a fault that came from
noise predicting how it's propagated
through the system and then
correcting it so it's a cybernetic
loop and it turns out that decoders
that are hierarchical and at each
level of the hierarchy are local perform
the best by far and are far more fault
tolerant and the reason is if you have a
non-local decoder and you have one
fault at this control node the
whole system sort of crashes similarly
if you have one CEO
that everybody reports to and that CEO
goes on vacation the whole company comes
to a crawl and so to me I think
that yes we're seeing a tendency towards
centralization of AI but I think there's
going to be a correction over time where
intelligence is going to go closer to
the perception and we're going to
break up AI into smaller
subsystems that communicate with one
another and form a sort of meta
system so if you look at the hierarchies
in the world today there's nations
and those are all hierarchical but in
relation to each other nations are
anarchic so it's an
anarchy do you foresee a world
like this where there's not
what you call a centralized
cybernetic control a centralized locus of
control yeah that's suboptimal
you're saying yeah so it would always be
a state of competition at the very top
level yeah just like you know in a
company you may have two units
working on similar technology and
competing with one another and you
prune the one that doesn't perform as well
and that's a sort of selection
process for a tree a product gets
killed or a whole org gets
fired and this process of
trying new things and shedding
old things that didn't work is
what gives us adaptability and helps us
converge on the technologies
and things to do that are most good I
just hope there's not a failure mode
that's unique to AGI versus Humans
because you're describing human systems
mostly right now right I just hope when
there's a monopoly on AGI in one company
that we'll see the same thing we see
with humans which is another company
will spring up and start competing I
mean that's been the case so far right
we have OpenAI we have Anthropic now we
have
xAI we have Meta for
open source and now we have Mistral
which is highly competitive and so
that's the beauty of capitalism you
don't have to trust any one party too
much cuz we're kind of always hedging
our bets at every level there's always
competition and that's the most
beautiful thing to me at least is that
the whole system is always shifting
always adapting and maintaining that
dynamism is how we avoid tyranny
making sure that everyone has access
to these tools to these models and
can contribute to the research
avoids a sort of neural tyranny where
very few people have control over
AI for the world and use it to
oppress those around
them you were talking about intelligence
you mentioned multipartite quantum
entanglement mhm so high level question
first is what do you think is
intelligence when you think about
quantum mechanical systems and You
observe some kind of computation
happening in
them what do you
think is intelligent about the kind of
computation the universe is able to do a
small inkling of which is the kind
of computation the human brain is able
to
do I would say intelligence and
computation aren't quite the same thing
I think that the universe is very much
you know doing a quantum computation
if you had access to all the degrees of
freedom and a very very
large quantum computer with many
many qubits let's say a few qubits
per Planck volume which is
more or less the pixels we have then
you'd be able to simulate the whole
universe on a sufficiently
large quantum computer assuming you're
looking at a finite volume of course of
the Universe
um I think that at least to me
intelligence is the you know I go back
to cybernetics right the ability to
perceive predict and control our world
but nowadays it seems like a
lot of the
intelligence we use is more about
compression it's about
operationalizing information
theory in information theory you
have the notion of entropy of a
distribution or a system and entropy
tells you that you need this many bits
to encode this distribution or this
subsystem if you have the most optimal
code and AI at least the way we
do it today for LLMs and for quantum
is very
much trying to minimize relative
entropy between our models of the world
and the distributions from the
world and so we're learning we're
searching over the space of computations
to process the world to find that
compressed
representation that has distilled all
the variance and noise and entropy right
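The "minimize relative entropy between our models and the world" framing is the KL divergence from information theory; a small self-contained sketch, with distributions made up for illustration:

```python
# Relative entropy (KL divergence) D(P || Q), in bits: the extra bits per
# symbol you pay when compressing data from the true distribution P with a
# code optimized for the model Q. Learning-as-compression means driving
# this toward zero. The distributions below are made up for illustration.
import math

def kl_bits(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

world      = [0.7, 0.2, 0.1]     # hypothetical "world" distribution
model_bad  = [1/3, 1/3, 1/3]     # uninformed model
model_good = [0.65, 0.25, 0.10]  # model that has distilled most of the structure

print(kl_bits(world, model_bad))   # larger: poor compression of the world
print(kl_bits(world, model_good))  # near zero: little structure left to learn
```

The entropy of `world` itself is the floor of the compression; the KL term is exactly the excess a mismatched model pays on top of it.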
and
originally I came to quantum machine
learning from the study of black holes
because the entropy of black holes is
very interesting in a sense they're
physically the densest objects in the
universe you can't pack more information
spatially any more densely than in a black
hole and so I was wondering how do black
holes
actually encode information what is
their compression code and so that got
me into the space of algorithms to
search over the space of quantum codes
and it got me also into how
do you acquire quantum information
from the world so something I've
worked on this is public now is
quantum analog-to-digital conversion so how
do you capture information from the real
world in superposition and not destroy
the superposition but digitize for a
quantum mechanical
computer information from the real
world and so if you have an ability
to capture Quantum information and
search over learn representations of it
now you can learn compressed
representations that may have some
useful information in their latent
representation and I think that
many of the problems facing our
civilization are actually beyond this
complexity barrier I mean the
greenhouse effect is a quantum
mechanical effect chemistry is quantum
mechanical nuclear physics
is quantum mechanical a lot of biology
and protein folding and so
on is affected by quantum mechanics and
so unlocking an ability to augment human
intellect with quantum mechanical
computers and quantum mechanical AI
seemed to me like a fundamental
capability for civilization that we
needed to develop so I spent several
years doing that but over time I kind
of grew weary of the timelines that
were starting to look like nuclear
fusion so one high-level question I can
ask is Maybe by way of definition by way
of explanation
what is a quantum computer and what is
uh Quantum machine
learning so a quantum computer really is
a quantum mechanical system over which
we have sufficient control and it can
maintain its quantum mechanical State
and quantum mechanics is how nature
behaves at the very small scales when
things are very small or very cold and
it's actually more fundamental than
probability Theory so we're used to
things being this or
that but we're not used to thinking
in superpositions because well our
brains can't do that so we have
to translate the quantum mechanical
world to say linear algebra to
grok it unfortunately that translation
is exponentially inefficient on average
you have to represent things with very
large matrices
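That exponential inefficiency is easy to quantify: a general n-qubit state takes 2^n complex amplitudes to store classically. A quick back-of-the-envelope sketch:

```python
# A general n-qubit pure state needs 2**n complex amplitudes classically --
# the "very large matrices" above. At 16 bytes per complex number (two
# 64-bit floats), memory doubles with every added qubit.
def state_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (10, 30, 46):
    print(n, state_bytes(n))
# 10 qubits fit in ~16 KB, 30 qubits need ~17 GB, and 46 qubits
# already exceed a petabyte -- before any gates are even applied
```

This is why full state-vector simulation of quantum systems tops out around the mid-40s of qubits even on the largest supercomputers.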
but really you can make a quantum
computer out of many things right and
we've seen all sorts of players you know
from neutral atoms trapped ions
superconducting metals photons at different
frequencies I think you could make a
quantum computer out of many things but
to me the thing that was really
interesting was both Quantum machine
learning was about understanding the
quantum mechanical world with quantum
computers so embedding the physical
world into AI representations and
quantum computer engineering was
embedding AI
algorithms into the physical world so
this bidirectionality of embedding the
physical world into AI and AI into the
physical world this symbiosis between
physics and AI really that's the
core
of my quest even to this day
after quantum computing it's still
a
journey to merge physics and
AI fundamentally so Quantum machine
learning is a way to do machine learning
on a representation of nature that
stays true to the quantum
mechanical aspect of nature yeah it's
learning quantum mechanical
representations that would be quantum
deep learning alternatively you can
try to do classical machine learning on
a quantum computer I wouldn't advise it
because um you may have some speedups
but very often the speedups come with
huge costs using a quantum computer is
very expensive why is that because you
assume the computer is operating at zero
temperature which no physical system in
the universe can
achieve so what you have to do is
what I've been mentioning this quantum
error correction process which is really an
algorithmic fridge it's trying to
pump entropy out of the system trying to
get it closer to zero temperature and
when you do the calculations of how many
resources it would take to say do deep
learning on a quantum computer classical
deep
learning there's just such a
huge overhead it's not worth it it's
like thinking about shipping something
across a city using a rocket going
to orbit and back it doesn't make sense
just use a delivery truck
right what kind of stuff can you
figure out can you predict can you
understand with Quantum deep learning
that you can't with deep learning so
incorporating quantum mechanical systems
into the learning process
I think that's a great question I mean
fundamentally it's any system that has
sufficient quantum mechanical
correlations that are very hard to
capture with classical representations
then there should be an advantage for a
quantum mechanical representation over a
purely classical one the question is
which systems have sufficient
correlations that are very quantum but
also which systems are still
relevant to industry that's a big
question you know people are leaning
towards chemistry nuclear
physics I've worked on actually
processing inputs from quantum sensors
if you have a network of quantum
sensors they've captured a quantum
mechanical image of the world and how to
post-process that becomes a sort of
quantum form of machine perception and
so for example Fermilab has a
project exploring detecting dark matter
with these quantum sensors and to me
that's in alignment with my quest to
understand the universe ever since I was
a child and so someday I hope that we
can have very large networks of quantum
sensors that help us um peer into the
earliest part of the the universe right
for example LIGO is a quantum sensor
right it's just a very large one um so
uh yeah I would say Quantum machine
perception uh simulations right grokking
quantum simulations so similar to
AlphaFold right AlphaFold understood the
probability distribution over
configurations of proteins you can
understand Quantum distributions over
configurations of electrons uh more
efficiently with Quantum machine
learning
you co-authored a paper titled a
universal training algorithm for Quantum
deep learning uh that involves back prop
with a Q very well done sir very well
done how does it work is it is there
some interesting aspects you can just
mentioned uh on how kind
of you know back propop and some of
these things we know for classical
machine learning transfer over to the
the quantum machine learning yeah that
was a funky paper
that was one of my first papers and in
Quantum deep learning everybody was
saying oh I think deep learning is going
to be sped up by quantum computers and I
was like well the best way to predict
the future is to invent it so here's a
100 page paper have fun um
essentially you know in quantum
computing you usually embed
reversible operations into a quantum
computation and so the trick there was
to do a feedforward operation and do
what we call a phase kick but really
it's just a force kick you just kick the
system uh with a certain force that is
you know proportional to your loss
function that you you wish to optimize
and then by performing uncomputation
you start with a superposition over
parameters right which is pretty funky
now you don't have just
a point for parameters you have a
superposition over many potential parameters
right mhm and our goal is using phase
kicks somehow right to adjust parameters
cuz phase kicks
emulate having the parameter space
be like a particle in N dimensions and
you're trying to get the Schrödinger
equation Schrödinger dynamics in the loss
landscape of the neural network right
and so you do an algorithm to induce the
phase kick which you know involves a
feedforward a kick and then when you
uncompute the feedforward then all
the errors and these phase kicks and
these forces back propagate and hit each
one of the parameters throughout the
layers and if you alternate this with an
emulation of kinetic energy then it's
kind of like a particle moving in N
dimensions a Quantum particle um and the
advantage in principle would be that it
can tunnel through the landscape um and
find new Optima that would have been
difficult for stochastic
optimizers um but again this is kind of a
theoretical thing and in practice
with at least the current architectures
for quantum computers that we have
planned you know such algorithms
would be extremely expensive to run
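The alternation of loss-function phase kicks with an emulation of kinetic energy described above is essentially split-operator Schrödinger evolution. A classical toy simulation of those dynamics in a one-dimensional double-well "loss landscape" is sketched below; it is an illustrative sketch of the dynamics being emulated, not the quantum algorithm from the paper:

```python
import numpy as np

# Toy split-operator simulation of Schrodinger dynamics in a 1-D "loss
# landscape" (a double well). Alternates a potential "phase kick"
# exp(-i*V*dt) with a kinetic step exp(-i*(k^2/2)*dt) applied in Fourier
# space -- a classical sketch of the dynamics the phase-kick picture
# emulates, not the algorithm itself.

N = 512
x = np.linspace(-8, 8, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular wavenumber grid
V = (x**2 - 4) ** 2 / 16                  # double-well "loss", minima at x = +/-2
dt = 0.01

# Start as a Gaussian wavepacket localized in the left well.
psi = np.exp(-(x + 2) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

phase_kick = np.exp(-1j * V * dt)         # kick proportional to the loss
kinetic = np.exp(-1j * (k**2 / 2) * dt)   # free (kinetic-energy) evolution

for _ in range(5000):
    psi = phase_kick * psi                        # kick by the loss landscape
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))  # emulate kinetic energy

prob = np.abs(psi) ** 2 * dx
print("P(left well)  =", prob[x < 0].sum())
print("P(right well) =", prob[x >= 0].sum())  # nonzero: amplitude crosses the barrier
```

The nonzero right-well probability is the tunneling effect mentioned above: amplitude leaks through a barrier that a classical gradient-based optimizer starting in the left well would have to climb over.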
maybe this is a good place to ask the
difference between the different fields
that you've had a toe in so mathematics
physics
engineering and also you know
entrepreneurship like different layers
of the stack I think a lot of the stuff
you're talking about here is a little
bit on the math side maybe physics
almost working in theory what's the
difference between math physics
engineering and uh you know make making
a product for Quantum Computing for
Quantum machine learning yeah I mean you
know some of the original team uh for
the tensorflow quantum project which we
started you know in school at the
University of Waterloo
uh there was myself you know
initially I was a physicist applied
mathematician we had a computer
scientist uh we had a mechanical engineer
and then we had a physicist that was
primarily experimental and
so putting together teams that are very
cross-disciplinary and figuring out how
to communicate and and share knowledge
is really the key to doing this sort of
interdisciplinary engineering
work um I mean there is there is a big
uh difference you know in mathematics
you can explore mathematics for
mathematics sake in physics you're
applying mathematics to understand uh
the world around us uh and in
engineering you're trying to hack
the world right you're trying to find
how to apply the physics that I know my
knowledge of the world to to to do
things well in Quantum Computing in
particular I think there's a just a lot
of limits to engineering it just seems
to be extremely hard yeah so there's a
lot of value to exploring
quantum computing quantum machine
learning in theory with
math so I guess one question is why is
it so hard to build a quantum computer
what are uh what's your view of
timelines in bringing these ideas to
life right I think that you know an
overall theme of my company is that
we have folks that are you know
there's a sort of exodus from quantum
computing and we're going to broader
physics-based AI that is not quantum so
that gives you a hint and um so we
should say the name of your company is
Extropic
and the way to do that is by encoding
information you encode a code within a
code within a code within a code and so
there's a lot of redundancy needed to do
this error correction But ultimately
it's a sort of um algorithmic
refrigerator really it's just pumping
entropy out of the subsystem
that is virtual and delocalized that
represents your quote unquote logical
qubit aka the payload quantum bits
in which you actually want to run
your quantum mechanical program it's
very difficult because in order to scale
up your quantum computer you need each
component to be of sufficient quality
for it to be worth it because if you try
to do this error correction this Quantum
error correction process and each
Quantum bit and your control over them
is insufficient um it's not
worth scaling up you're actually adding
more errors than you remove and so
there's this notion of a threshold where
if your Quantum bits are of sufficient
quality in terms of your control over
them it's actually worth scaling up and
actually in recent years people have
been crossing the threshold and it's
starting to be worth it and so it's just
a very long slog of engineering But
ultimately it's really crazy to me how
much Exquisite level of control we have
over these systems it's actually quite
crazy and people are
crossing you know they're achieving
Milestones it's just you know in general
the media always gets ahead right of
where the technology is there's a bit
too much hype it's good for fundraising
but sometimes you know it causes Winters
right it's the hype cycle I'm bullish on
Quantum Computing on a 10 15 year time
scale uh personally but I think there's
other quests that can be done uh in the
meantime I think it's in good hands
right now
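The threshold behavior described above is often summarized with a rough surface-code scaling heuristic; the formula and the constants below are illustrative assumptions for the sketch, not numbers from the conversation:

```python
# Rough surface-code heuristic: below the threshold p_th, the logical
# error rate shrinks exponentially with the code distance d,
#     p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# so adding redundancy helps; above p_th, scaling up adds more errors
# than it removes. A, p_th here are illustrative placeholder values.

def logical_error_rate(p: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    """Heuristic logical error rate for physical error rate p, odd distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (1e-3, 1.5e-2):                       # below vs. above threshold
    print(p, [logical_error_rate(p, d) for d in (3, 5, 7)])
# below threshold the rates fall as d grows; above threshold they grow
```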
well let me just explore different
beautiful ideas large or small in
Quantum Computing that might jump out at
you from memory so when you co-authored
a paper titled Asymptotically Limitless
Quantum Energy Teleportation via Qudit
Probes so just out of curiosity
can you explain what a qudit is versus a
qubit yeah it's a d-state qudit
it's multi-dimensional right so it's like uh
well you know can you have a notion of
like an integer or floating point that
is quantum mechanical that's something
I've had to think about um I think that
research was a precursor to later work
on quantum analog-digital conversion ah
there there was interesting because
during my masters I was trying to
understand the energy and entanglement
of the vacuum right of emptiness
emptiness has energy which is very weird
to say and our equations of
cosmology don't match our calculations
for the amount of quantum energy
there is in the
fluctuations and so I was trying to hack
the energy of the vacuum right and the
reality is that you can't just directly
hack it it's not technically
free energy your lack of knowledge of
the fluctuations means you can't extract
the energy but just like you know in the
stock market if you have a stock that's
correlated over time the vacuum is
actually correlated so if you measured
the vacuum at one point you acquired
information if you communicated that
information to another point you can
infer uh what configuration the vacuum
is in to some precision and
statistically extract on average some
energy there so you've quote unquote
teleported
energy to me that was interesting
because you could create states of negative
energy density which is energy density
that is below the
vacuum which is very weird because we
don't understand how the vacuum
gravitates um and there are theories
where the vacuum or the canvas of
SpaceTime itself is really a a canvas
made out of quantum entanglement and I
was studying how decreasing energy of
the vacuum locally
increases quantum entanglement which is
very
funky um and so the thing there is that
if you're
into weird theories about
UAPs and whatnot you
could try to imagine that they're
they're around and and how would they
Propel themselves right how would they
um go faster than the speed of light you
would need a sort of negative energy
density and to me I gave it the old
College try trying to hack the energy of
the vacuum and hit the limits allowable
by the laws of physics but um there's
all sorts of caveats there where you
can't extract more than you've
put in obviously but you're saying it's
possible to teleport the energy because
you
can extract information one
place and then make based on that some
kind of prediction about another place
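The inference step being restated here, measure one correlated quantity and predict another, can be sketched classically with correlated Gaussians. This captures only the statistics of the argument, not the quantum energy-teleportation protocol itself:

```python
import numpy as np

# Classical toy of the correlation argument: two jointly Gaussian
# "fluctuations" a and b with correlation rho. Measuring a lets you
# predict b as rho*a, shrinking your uncertainty about b from Var(b)
# to (1 - rho**2) * Var(b). Purely the statistical-inference piece of
# the story -- nothing quantum here.

rng = np.random.default_rng(0)
rho = 0.8
n = 200_000

a = rng.standard_normal(n)
b = rho * a + np.sqrt(1 - rho**2) * rng.standard_normal(n)

residual = b - rho * a                 # error left after using the measurement of a
print(round(np.var(b), 2))             # ~1.0: prior uncertainty about b
print(round(np.var(residual), 2))      # ~0.36 = 1 - rho**2: reduced uncertainty
```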
I'm not sure what I make of that yeah I
mean it's it's uh allowable by the laws
of physics the the reality though is
that the correlations Decay with
distance and so you're going to have to
pay the price not too far away from
where you extract it sure right the
Precision decreases in terms of your
ability but still uh but since you
mentioned uh uaps uh we talked about
intelligence and I forgot to ask
what's your view on the other possible
intelligences that are out there at the
uh the meso scale do you think there's
other intelligent alien civilizations is
that useful to think about how often do
you think about it I think
it's useful to think about
because we got to ensure
we're
antifragile and we're you know trying to
increase our capabilities as fast as
possible because we could get disrupted
like there are no laws of physics against
there being life elsewhere that could
evolve and become an advanced
civilization and and eventually come to
us uh do I think they're here now I'm
not sure I mean I've read
what most people have read on the
topic um I think it's interesting to
consider and to me it's a useful thought
experiment to instill a sense of urgency
in developing Technologies and
increasing our capabilities to make sure
we don't get disrupted right whether
it's a form of of AI that disrupts us or
a foreign intelligence from a different
planet like either way like increasing
our capabilities and becoming
formidable as
humans um I think that's that's really
important so that we're robust against
whatever the universe throws at us but
to me it's also an interesting
Challenge and thought experiment on how
to perceive intelligence this has to do
with quantum mechanical systems it has
to do with with any kind of system
that's not like
humans so to me the thought experiment
is say the aliens are here or they are
directly observable we're just too
blind too self-centered
um don't have the right
sensors or don't have the right
processing of the sensor data to see the
obvious intelligence that's all around
us well that's why we work on Quantum
sensors right they can sense gravity
yeah gravity but there could be so that's a
good one but there could be other stuff
that's not even in the uh currently
known forces of physics right there
could be some other
stuff and the most entertaining thought
experiment to me is that it's other
stuff that's obvious it's not that we
lack the sensors it's all
around us you know the
Consciousness being one possible one but
there could be stuff that's just like
obviously there that once you know it
it's like oh right the thing
we thought is somehow
emergent from the laws of physics as we
understand them is actually a
fundamental
part of the universe and can be
incorporated in physics most
understood statistically speaking right
if we observed some sort of alien life
it would most likely be some sort of
virally
self-replicating von Neumann-like probe
system right and it's
possible that there you know there are
such systems that I don't know what
they're doing at the bottom of the ocean
allegedly but maybe they're you know
collecting minerals uh from the bottom
of the ocean yeah um but that wouldn't
violate any of my priors but am I
certain that these systems are here
it'd be difficult for me to say so
right I only have secondhand information
about there being data about the bottom
of the ocean yeah but you know could it
be things like memes could it be
thoughts and
ideas could they be operating in that
Medium could aliens be the very thoughts
that come into my
head like how do
you know that
what's the origin of ideas in your mind
when an idea comes to your head show me
where it
originates I mean
frankly uh when I had the idea for the
type of computer I'm building now I
think it was eight years ago now it
really felt like it was being beamed
from
space just I was in bed just shaking
just thinking it through and I don't
know uh but do I believe that
legitimately I don't think so I but you
know I I think that um alien life could
take many forms and I think the notion
of intelligence and the notion of Life
needs to be expanded uh much more
broadly uh to be less
anthropocentric or
biocentric just to linger a little
longer on on quantum mechanics what's uh
through all your explorations of quantum
Computing what's the coolest most
beautiful idea that you've come across
that has been solved or has not yet been
solved I think the journey to understand
something called AdS/CFT so the journey
to understand quantum gravity through
this picture where a hologram of lesser
dimension is actually dual or exactly
corresponding to a bulk theory of
quantum gravity of an extra dimension
and the fact that this sort of duality
comes from trying to
learn deep-learning-like representations
of the
boundary and so at least part of my
journey someday uh on my bucket list is
to apply quantum machine learning to
these sorts of systems these CFTs or
what are called SYK models um and learn
an emergent geometry from the
boundary theory and so we can have a
form of machine learning
that helps
us understand quantum gravity right
which is you know still a holy grail
that I would like to hit before I leave
this
earth what do you think is going on with
black holes as
information
storing and processing units what do you
think is going on with black holes black
holes are really fascinating objects
they're at the interface between
quantum mechanics and gravity and so
they help us test all sorts of ideas um
I think that you know for many decades
now there's been sort of this black hole
information paradox that things that
fall into the black hole seem
to have lost their
information now I think there's this uh
firewall Paradox that has been allegedly
resolved in recent years by um you know
a former peer of mine uh who's now
professor at Berkeley um and uh there it
seems like there is as information falls
into a black hole it's sort of there's
sort of a sedimentation right as you as
you get closer and closer to the Horizon
from the point of view of the Observer
on the outside the object slows down
infinitely as it gets closer and closer
and so everything that is falling into a
black hole from our perspective gets
sort of sedimented and tacked on to the
near horizon and at some point it gets
so close to the Horizon it's
in the proximity or the scale which in
which Quantum effects and Quantum
fluctuations matter and there the
infalling matter could interfere with
sort of the traditional picture that it
could interfere with the creation and
annihilation of particles and
antiparticles in the vacuum and through
this interference
uh one of the particles gets entangled
with the infalling information and one
of them is now free and escapes and
that's how there's sort of mutual
information between the outgoing
radiation and the infalling matter uh
but getting that calculation right I
think we're only just starting to uh put
the pieces together um there's a few
pothead like questions I want to ask you
sure so one does it terrify you that
there's a giant black hole at the center of
our galaxy I don't know I just want
to you know set up shop near it to
fast forward you know meet a
future civilization right like if we
have a limited lifetime you could go
orbit a black hole and emerge so if
there was a special
mission that could take you to a black
hole would you volunteer to go travel to
orbit it and obviously not fall into it
that's that's obvious so it's obvious to
you that everything's destroyed inside a
black hole like all the information
that makes up Guillaume is destroyed um maybe on
the other side Beff Jezos emerges and
it's all tied together in some
deeply meaningful way yeah I mean that's a
great question we we have to
answer what black holes are are
we punching a hole through spacetime and
creating a pocket universe it's possible
right then that would mean that if
we ascend the Kardashev scale to you know
beyond Kardashev Type III we could engineer black
holes with specific hyperparameters to
transmit information to new universes we
create and so we can have progeny right
uh that are new universes and so we even
though our universe may reach a heat
death we may have a way to have a legacy
right so we don't know yet we need to
ascend the Kardashev scale to answer
these questions right to peer into
that regime of higher energy physics and
maybe you can speak to the Kardashev scale
for people who don't know so one of
the sort of meme-like principles and
goals of the e/acc movement is to
ascend the Kardashev scale what is the
Kardashev scale and when do we want to
ascend it the Kardashev scale is a
measure of our energy production and
consumption um and really it's a
logarithmic scale and Kardashev Type I is a
milestone where we are producing the
equivalent wattage to all the energy
that is incident on Earth from the sun
Kardashev Type II would be harnessing all
the energy that is output by the sun and
I think Type III is like the whole
galaxy level yeah yeah yeah
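The types just listed sit on Carl Sagan's continuous interpolation of the Kardashev scale, a standard formula rather than one stated in the conversation:

```python
import math

# Carl Sagan's interpolation of the Kardashev scale:
#     K = (log10(P) - 6) / 10, with P the power harnessed in watts.
# Type I ~ 1e16 W (roughly the solar flux incident on Earth),
# Type II ~ 1e26 W (total solar output), Type III ~ 1e36 W (a galaxy).

def kardashev(power_watts: float) -> float:
    """Continuous Kardashev rating for a given harnessed power in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity today harnesses on the order of 2e13 W:
print(round(kardashev(2e13), 2))   # ~0.73
print(kardashev(1e16))             # 1.0 -> Type I
print(kardashev(3.8e26))           # ~2.06, of order the Sun's 3.8e26 W output
```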
and then some people have some crazy
type four and five but I don't know if I
believe in those but um to me it seems
like from the first principles of
thermodynamics that again there's
this concept of
thermodynamically driven dissipative
adaptation where you know life evolved
on Earth because we have this sort of
energetic drive from the sun right we
have incident energy and life evolved on
Earth to
figure out ways to best capture
that free energy to maintain itself and
and grow and I think that that principle
it's not special to our Earth Sun System
we can extend life well beyond and we
kind of have a responsibility to do so
because that's the process that brought
us here so we don't even know what it
has in store for us in the future it
could be something of beauty we can't
even imagine today right mhm so this is
probably a good place to talk a bit
about the e/acc movement in a Substack
blog post titled what the fuck is e/acc or
actually what the f*** is e/acc you write
strategically speaking we need to work
towards several overarching civilization
goals that are all
interdependent and the four goals are
increase the amount of energy we can
harness as a
species climb the Kardashev gradient in the
short term this almost certainly means
nuclear fission
increase human flourishing via
pro-population growth policies and pro-economic
growth
policies create artificial general
intelligence the single greatest Force
multiplier in human history and finally
develop interplanetary and Interstellar
transport so that Humanity can spread
beyond the Earth could you build on top
of that to maybe
say what to you is the eak movement what
are the goals what are the
principles the goal is for the human
technocapital memetic machine to become
self-aware and to hyper-engineer its
own growth so let's define each of
those uh words so you have humans you
have technology you have capital and
then you have you have memes information
right and all of those systems are
coupled with one another right humans
work at companies they acquire and
allocate capital and humans communicate
via memes and information propagation um
and our goal was to have a sort of viral
optimistic movement that is aware of how
the system works uh fundamentally it
seeks to grow and we simply want to lean
into the natural tendencies of the
system to adapt for its own growth um so
in that way you write e/acc is literally a
memetic optimism virus that is
constantly drifting mutating and
propagating in a decentralized fashion
so memetic optimism virus so you do want
it to be a virus to maximize the
spread and uh the
optimism will therefore incentivize its
growth we see e/acc as a sort of
meta-heuristic a sort of very thin cultural
framework from which you can have much
more opinionated forks right
fundamentally we just say that
what got us here is
this adaptation of the whole system you
know based on thermodynamics and that
process is good and we should keep it
going right that is the core thesis
everything else is okay how do we
ensure that uh we maintain this
malleability and adaptability well
clearly not suppressing variance uh and
and and maintaining uh free speech
freedom of thought Freedom of
Information
propagation and freedom to do AI
research is important for us to converge
the fastest on the space of uh
Technologies ideas and whatnot that lead
to this growth um and so ultimately you
know there's been quite a few Forks some
are just memes but some are more serious
right Vitalik Buterin recently made a
d/acc fork he has his own sort of fine
tunings of e/acc does anything jump out to
memory of the unique characteristics of
that fork from Vitalik I would say that
it's trying to find a middle ground
between e/acc and sort of EA and AI safety
to me like having a movement that is
opposite to what was the mainstream
narrative that was taking over Silicon Valley
was important to sort of shift the
dynamic range of opinions and you know
it's it's like the balance between
centralization and decentralization the
real Optimum is always somewhere in the
middle right uh but for e/acc we're
pushing for
entropy novelty disruption malleability
speed uh rather than being like sort of
conservative suppressing thought
suppressing speech adding constraints
adding too many regulations slowing
things down and so it's kind of we're
trying to bring balance to the force
right
systems balance to the force human
civilization yeah it's literally the
forces of constraints versus the
entropic force that makes us explore
right systems are optimal when they're
at the edge of criticality between Order
and Chaos right between
constraints um energy minimization and
entropy right systems want to
equilibrate balance these two things and
so I thought that the balance was
lacking and so we created this movement
to bring balance well I
like the sort of visual of the landscape of
ideas evolving through forks sort of
kind
of thinking on the other part of
History uh thinking of um Marxism as the
original repository and then Soviet
communism is a fork of that and then
Maoism is a fork of Marxism and
communism so those are all Forks they're
exploring different ideas thinking of
culture almost like code right nowadays
I mean what you prompt an LLM with
or what you put in the constitution of
an LLM is basically its cultural
framework what it believes right um and
you can share it on GitHub nowadays so
trying to take inspiration from
what has worked in the sort of
machine of software to adapt over
the space of code could we apply that to
culture and our goal is not to say you
should live your life this way XYZ it
is to set up a process where people
are always searching over subcultures
and competing for mindshare and I think
creating this malleability of culture is
super important for us to converge onto
the cultures and the heuristics about
how to live one's life that are updated
to to modern times because there's
really been a a sort of vacuum of of
spirituality and culture people don't
feel like they belong to any one group
and there's been parasitic ideologies
that have taken up opportunity to to
populate this petri dish of minds
right uh Elon calls it the mind virus um
we call it the decel mind virus
complex which is decelerative that is
kind of the overall pattern between
all of them there's many variants as
well um and so you know if there's a
sort of viral pessimism decelerative
Movement we needed to have not only one
movement uh but you know many many
variants so it's very hard to pinpoint
and stop but the overarching thing is
nevertheless a kind
of memetic optimism
pandemic so uh I mean okay let me ask
you do you think e/acc to some degree is a
cult define
cult I think a lot of human progress is
made when you have independent thought
so you have individuals that are able to
think freely
and very
powerful memetic systems can kind of lead
to groupthink there's something in
human nature that leads to like Mass
hypnosis Mass hysteria we start to think
alike whenever there's a sexy idea that
captures our minds and so it's actually
hard to like break us apart like pull us
apart diversify thought so to that
degree to which degree is everybody
kind of chanting e/acc e/acc like the sheep
in Animal Farm well first of all it's
fun it's rebellious right like
you know
many um I think we lean into
this concept of sort of
meta-irony right of sort of being on
the boundary of like we're not sure if
they're serious or not and it's
much more playful and much more fun
right like um for example we talk about
thermodynamics being our God right um
and sometimes we do cult-like things but
there's no like ceremony and and robes
and whatnot uh so not yet
uh but ultimately yeah I I mean I
totally agree that it seems to me that
humans want to feel like they're part of
a group so they naturally try to agree
with their neighbors and and find common
ground and and that leads to sort of
mode collapse in the space of ideas
right we used to have sort of one
cultural uh Island that was allowed it
was a typical Subspace of thought and
anything that was diverting from that
Subspace of thought was suppressed or
you were canceled right now we've
created a new mode but the whole point
is that we're not trying to have very
restricted space of thought there's not
just one way to think about e/acc and its
many forks and the point is that
there are many forks and there can be
many clusters and many islands and I
shouldn't be in control of it uh in any
way uh I mean there's no formal org uh
whatsoever uh I just put out uh tweets
and and uh certain blog posts and people
are free to defect and Fork if there's
an aspect they don't like and so that
makes it so that there should be a sort
of deterritorialization in the
space of ideas so that we don't end up
in one cluster that's very cult-like um
and so cults usually don't
allow people to defect or start
competing forks whereas we encourage it
right do you think just the humor
the pros and cons of humor and
memes so in some sense
memes there's like a wisdom to
memes um what is it the Magic Theater
what book is that from Hermann Hesse um
Steppenwolf I think but there's
a kind of embracing of the
absurdity that seems to get to the truth
of things but at the same time it can
also decrease the quality and the rigor
of the discourse yeah do you feel the
tension of that yeah so initially I
think what allowed us to grow under the
radar was because it was camouflaged as
sort of meta-ironic right we would sneak
in you know deep truths within a
package of humor and memes and
what are called shitposts right um
I think that was purposefully a sort of
camouflage against those that seek
status it's very hard
to argue with a cartoon frog or
a cartoon of an intergalactic Jeff
Bezos and take yourself seriously
yeah and so that allowed us to
grow pretty rapidly in the early days
but um of course like that's
you know essentially people get steered
their notion of the truth comes from the
data they see from the information
they're fed and the information people
are fed is determined by algorithms
right and really what we've been doing
is sort of engineering what we call
high memetic fitness packets of
information so that they can spread
effectively and carry a message right so
it's kind of a
vector to spread the message
and yes we've been using sort of
techniques that are optimal for
today's algorithmically Amplified
information Landscapes um but I think
we're reaching the point of you know
scale where we can have serious debates
and serious conversations and um you
know that's why we're considering doing
a bunch of uh debates and having more
serious long form discussions because I
don't think that
the timeline is optimal for sort of
very serious thoughtful discussions
you get rewarded for sort of
polarization right and so even though
we started a movement that is literally
trying to polarize the the tech
ecosystem um at the end of the day so
that we can have a conversation and find
an Optimum together I mean that's kind
of what I try to do with this podcast
given the landscape of things to still
have long form conversations but there
is a degree to which uh absurdity is
fully embraced in fact this very
conversation is
multi-level absurd so first of all I
should say that I just very recently had
a conversation with Jeff
Bezos and I would love to hear
your Beff
Jezos opinions of Jeff Bezos speaking of
intergalactic Jeff Bezos what do you
think of that particular individual by whom
your name is inspired yeah I mean I
think Jeff is really great I mean he's
built one of the most epic companies of
all time he's leveraged the
technocapital machine and technocapital
acceleration
to give us what we wanted right we want
uh quick delivery very convenient at
home low prices right he
understood how the machine worked and
how to harness it right like running the
company not trying to take profits
too early putting it back
letting the system compound and keep
improving and you know arguably I think
Amazon's invested some of the largest
amounts of capital in robotics out there
um and certainly with the birth of AWS
kind of um enabled the sort of tech boom
we've seen today that has paid the
salaries of you know I guess myself all
of our friends to some extent and so I I
think we can all be grateful to you know
Jeff and he's one of the great
entrepreneurs uh out there one of the
best of all time unarguably uh and of
course the the work at Blue origin
similar to the work at SpaceX is trying
to make humans a multiplanetary
species
which seems almost like a bigger thing
than the capitalist machine or it's a
capitalist machine at a different time
scale perhaps yeah I I think that um you
know companies they tend to optimize you
know quarter over quarter maybe a few
years out but individuals that want to
leave a legacy can think on a
multi-decadal or multi-century time
scale. And so the fact that some individuals are such good capital allocators means they unlock the ability to allocate capital to goals that take us much further, or are much further-looking. Elon's doing this with SpaceX, putting all this capital towards getting us to Mars. Jeff is trying to build Blue Origin, and I think he wants to build O'Neill cylinders and get industry off-planet, which I think is brilliant. I think, just overall, I'm for billionaires. I know this is a controversial statement sometimes, but I think that, in a sense, it's kind of proof-of-stake voting, right? If you've allocated capital efficiently, you unlock more capital to allocate, just because clearly you know how to allocate capital
more efficiently which is in contrast to
politicians that get elected because
they speak the best on TV right not
because they have a proven track record
of allocating taxpayer capital most efficiently. And so that's why I'm for capitalism over, say, giving all our money to the government and letting them figure out how to allocate it. So, yeah.
Why do you think it's a viral and popular meme to criticize billionaires? Since you mentioned billionaires, why do you think there's quite widespread criticism of people with wealth, especially those in the public eye, like Jeff and Elon and Mark Zuckerberg, and,
who else? Bill Gates. Yeah, I think a lot of people, instead of trying to understand how the technocapital machine works and realizing they have much more agency than they think, would rather have the sort of victim mindset: I'm just subjected to this machine, it is oppressing me, and the successful players clearly must be evil because they've been successful at this game that I'm not successful at. But
I've managed to get some people that were in that mindset and make them realize how the technocapital machine works and how you can harness it for your own good and for the good of others. By creating value, you capture some of the value you create for the world. And that sort of positive-sum mindset shift is so potent. Really, that's what we're trying to do by scaling e/acc: unlocking that higher level of agency. You're far more in control of the future than you think. You have agency to change the world, so go out and do it. Here's permission: each individual has agency.
The motto "keep building" is often heard, right? What does that mean to you, and what does it have to do with Diet Coke? Well, by the way, thank you so much for the Red Bull. It's working pretty well, I'm feeling pretty good. Awesome. Well, so, building technologies (and it doesn't have to be technologies, just building in general) means having agency, trying to change the world by creating, let's say, a company, which is a self-sustaining organism that
accomplishes a function in the broader technocapital machine. To us, that's the way to achieve the change in the world that
you'd like to see, rather than, say, pressuring politicians or creating nonprofits. Nonprofits, once they run out of money, can no longer accomplish their function; you're kind of deforming the market artificially, compared to sort of subverting, or dancing with, the market to convince it that actually this function is important, adds value, and here it is, right? And so I
think this is sort of the difference between the degrowth-like ESG approach versus, say, Elon. The degrowth approach is like, we're going to manage our way out of a climate crisis, and Elon is like, I'm going to build a company that is self-sustaining, profitable, and growing, and we're going to innovate our way out of this dilemma. And we're trying to get people to do the latter rather than the former, at all scales. Elon is an interesting case. So
you are a proponent, you celebrate Elon, but he's also somebody who has for a long time warned about the dangers, the potential existential risks, of artificial intelligence. How do you square the two? Is that a contradiction to you? It is, somewhat, because he's very much against regulation in many aspects, but for AI he's definitely a proponent of regulations. I think overall he saw the dangers of, say, OpenAI cornering the market and then getting to have a monopoly over the cultural priors that you can embed in these LLMs. As LLMs now become the source of truth for people, you can shape the culture of the people, and so you can control people by controlling LLMs. He saw that, just like it was the case for social media: if you shape the function of information propagation, you can shape people's opinions. So he sought to make a competitor. At
least, I think we're very aligned there: the way to a good future is to maintain sort of adversarial equilibria between the various AI players. I'd love to talk to him to understand his thinking about how to advance AI going forwards. I mean, he's also hedging his bets, I would say, with Neuralink, right? I think if he can't stop the progress of AI, he's building the technology to merge. So look at the actions, not just the words. Well, I mean, there's
some degree to which being concerned, using human psychology, being concerned about threats all around us, is a motivator. I operate much better when there's a deadline, the fear of the deadline, and I create artificial ones for myself. I want to create in myself this kind of anxiety, as if something really horrible will happen if I miss the deadline. I think there's some degree of that here, because creating AI that's aligned with humans has a lot of potential benefits, and a different way to reframe that is: if you don't, we're all going to die. It just seems to be a very powerful psychological formulation of the goal of creating human-aligned AI. I think that anxiety is
good. Like I said, I want the free market to create aligned AIs that are reliable, and I think that's what he's trying to do with xAI, so I'm all for it. What I am against is stopping, let's say, the open-source ecosystem from thriving, by, say, claiming in the executive order that open-source LLMs are dual-use technologies and should be government-controlled, so everybody needs to register their GPUs and their big matrices with the government. I think that extra friction will dissuade a lot of hackers from contributing, hackers that could later become the researchers that make key discoveries that push us forward, including discoveries for AI safety. So I just want to maintain ubiquity of opportunity to contribute to AI and to own a piece of the future. It can't just be legislated behind some wall where only a few players get to play the
game. I mean, so the e/acc movement is often caricatured to mean progress and innovation at all costs: it doesn't matter how unsafe it is, it doesn't matter if it causes a lot of damage, you just build cool shit as fast as possible, stay up all night with a Diet Coke, whatever it takes. I guess I don't know if there's a question in there, but: in what you've seen of the different formulations of e/acc, how important is safety, AI safety, to you? I think, again, if
there was no one working on it, I would be a proponent of it, sure. Again, our goal is to bring balance, and obviously a sense of urgency is a useful tool to make progress. It hacks our dopaminergic systems and gives us energy to work late into the night. Also, having a higher purpose you're contributing to: at the end of the day, it's like, what am I contributing to? I'm contributing to the growth of this beautiful machine so that we can reach for the stars. That's really inspiring; that's also a sort of neuro hack. So you're saying AI safety is important to you, but right now, in the landscape of ideas you see, AI safety as a topic is used more often as a proxy for gaining centralized control, so in that sense you're resisting it.
Yeah, I just think we have to be careful, because safety is the perfect cover for centralization of power and, eventually, for covering up corruption. I'm not saying it's corrupted now, but it could be down the line. And really, if you let the argument run, there's no amount of centralization of control that will be enough to ensure your safety. There are always more nines of safety that you can gain: you're 99.99999% safe, but maybe you want another nine, so please give us full access to everything you do, full surveillance. And frankly, some proponents of AI safety have proposed having a global panopticon, where you have centralized perception of everything going on. To me, that just opens the door wide open for a sort of Big Brother, 1984-like scenario, and that's not a future I want to live in, because we have some examples throughout history when that did not lead to a good outcome. You mentioned you founded
a company, Extropic, that recently announced a $14.1 million seed round. What's the goal of the company? You're talking about a lot of interesting physics things, so what are you up to over there that you can talk about? Yeah, I mean, originally we weren't going to announce last week, but I think with the doxxing and disclosure we got our hand forced, so we had to disclose roughly what we're doing. But really, Extropic was born from my dissatisfaction, and that of my colleagues, with the quantum computing roadmap. Quantum computing was sort of the first path to physics-based computing that was trying to commercially scale, and I was working on physics-based AI that runs on these physics-based computers. But ultimately our greatest enemy was noise, this pervasive problem of noise: as I mentioned, you have to constantly pump the noise out of the system to maintain this pristine environment where quantum mechanics can take effect, and that constraint was just too much, too costly to do. And so we were wondering, as
generative AI is sort of eating the world, and more and more of the world's computational workloads are focused on generative AI: how could we use physics to engineer the ultimate physical substrate for generative AI, from first principles of physics, of information theory, of computation, and ultimately of thermodynamics? And so what we're seeking to build is a physics-based computing system, and physics-based AI algorithms that are inspired by out-of-equilibrium thermodynamics, or harness it directly, to do machine learning as a physical
process. So what does that mean, machine learning as a physical process? Is that hardware, is it software, is it both? Is it trying to do the full stack in some kind of unique way? Yes, it is full stack. We're folks that have built differentiable programming into the quantum computing ecosystem with TensorFlow Quantum. One of my co-creators of TensorFlow Quantum, Trevor McCourt, is our CTO. We have some of the best quantum computer architects, those that have designed IBM's and AWS's systems; they've left quantum computing to help us build what we call a thermodynamic computer. A thermodynamic computer. Well, actually, staying on TensorFlow Quantum: what lessons have you learned from it? Maybe you can speak to what it takes to create essentially a software API to a quantum computer. Right, I mean, that was a challenge to invent, to build, and then to get to run on the real devices. Can you actually speak to what
it is? Yeah, so TensorFlow Quantum was an attempt at, well, I guess we succeeded at, combining deep learning, or differentiable classical programming, with quantum computing: having types of programs that are differentiable in quantum computing. Andrej Karpathy calls differentiable programming "software 2.0", right? It's like gradient descent is a better programmer than you. And the idea was that in the early days of quantum computing you can only run short quantum programs. So which quantum programs should you run? Well, just let gradient descent find those programs instead. And so we built sort of the first infrastructure to not only run differentiable quantum programs, but combine them as part of broader deep learning graphs, incorporating deep neural networks, the ones you know and love, with what are called quantum neural networks. Ultimately it was a very cross-disciplinary effort; we had to invent all sorts of ways to differentiate, to backpropagate through the hybrid graph.
But ultimately it taught me that the way to program matter, to program physics, is by differentiating through control parameters. If you have parameters that affect the physics of the system, and you can evaluate some loss function, you can optimize the system to accomplish a task, whatever that task may be. And that's a very universal meta-framework for how to program physics-based computers. To parameterize everything, make those parameters differentiable, and then optimize? Yes. Okay, so are there more
practical engineering lessons from TensorFlow Quantum? Just organizationally, too: the humans involved, how to get to a product, how to create good documentation, all these little subtle things people might not think about. I think
working across disciplinary boundaries is always a challenge; you have to be extremely patient and teach one another. I learned a lot of software engineering through the process; my colleagues learned a lot of quantum physics, and some learned machine learning, through the process of building this system. I think if you get some smart people that are passionate and trust each other in a room, and you have a small team and you teach each other your specialties, suddenly you're forming this sort of model soup of expertise, and something special comes out of that. It's like combining genes, but for your knowledge bases, and sometimes special products come out of that. So I think even though it's very high-friction initially to work in an interdisciplinary team, the product at the end of the day is worth it. I learned a lot trying to bridge the gap there, and it's still a challenge to this day: we hire folks that have an AI background, folks that have a pure physics background, and somehow we have to make them talk to one another.
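As a concrete aside, the meta-framework described above (parameterize the system, make the parameters differentiable, evaluate a loss, optimize) can be sketched in a few lines. This is a toy illustration only, not Extropic's or TensorFlow Quantum's actual API: `physical_process` is a hypothetical stand-in for a parameterized physical system, and the finite-difference gradient stands in for backpropagation through a differentiable simulator.

```python
# Toy sketch of "programming physics" by differentiating through control
# parameters: a parameterized stand-in for a physical system, a loss over
# its outputs, and plain gradient descent on the control parameters.
import numpy as np

def physical_process(theta, x):
    """Stand-in for a parameterized physical system (e.g. a short quantum
    circuit or an analog device): a simple nonlinear response."""
    return np.sin(theta[0] * x) + theta[1] * x

def loss(theta, xs, targets):
    """Mean squared error between the system's output and desired behavior."""
    return np.mean((physical_process(theta, xs) - targets) ** 2)

def grad(theta, xs, targets, eps=1e-6):
    """Central finite differences over each control parameter; a real stack
    would backpropagate through a differentiable simulator instead."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (loss(tp, xs, targets) - loss(tm, xs, targets)) / (2 * eps)
    return g

# "Let gradient descent find the program": fit the control parameters so the
# system reproduces a target behavior.
xs = np.linspace(-1.0, 1.0, 50)
targets = physical_process(np.array([2.0, 0.5]), xs)  # behavior we want

theta = np.array([1.5, 0.0])          # initial control parameters
initial = loss(theta, xs, targets)
for _ in range(500):
    theta = theta - 0.1 * grad(theta, xs, targets)
final = loss(theta, xs, targets)
print(initial, final)
```

In a quantum setting, the same loop would differentiate through a simulated circuit, or use parameter-shift rules on hardware, but the recipe is the same: everything is a parameter, and the loss decides which "program" the physics runs.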
Right. Is there some science and art to the hiring process, to building a team that can create magic together? Yeah, it's really hard to pinpoint that je ne sais quoi. I didn't know you speak French, it's very nice. Yeah, I'm actually French Canadian. Oh, you are legitimately French Canadian? I thought you were just doing that for the cred. No, no, I'm truly French Canadian, from Montreal. But yeah, essentially
we look for people with very high fluid intelligence that aren't over-specialized, because they're going to have to get out of their comfort zone. They're going to have to incorporate concepts they've never seen before and very quickly get comfortable with them, and learn to work in a team. That's what we look for when we hire. We can't hire people that have just been optimizing one subsystem for the past three or four years; we need really general, broad intelligence along with specialty, and people that are open-minded, because if you're pioneering a new approach from scratch, there is no textbook, there's no reference, it's just us, and people that are hungry to learn. We have to teach each other, learn the literature, share knowledge bases, and collaborate in order to push the boundary of knowledge further together. So people that are used to just getting prescribed what to do, at the pioneering stage, that's not necessarily who you want to hire.
So you mentioned that with Extropic you're trying to build the physical substrate for generative AI. What's the difference between that and the AGI itself? Is it possible that in the halls of your company AGI will be created, or will AGI just be using this as a substrate? I think our goal is to run both human-like AI, or anthropomorphic AI (sorry for the use of the term AGI, I know it's triggering for you), and physics-based AI. We think that the future is actually physics-based AI combined with anthropomorphic AI. So you can imagine having a sort of world-modeling engine through physics-based AI. Physics-based AI is better at representing the world at all scales, because it can be quantum-mechanical, thermodynamic, deterministic, or hybrid representations of the world, just like our world at different scales has different regimes of physics. If you inspire yourself from that in the ways you learn representations of nature, you can have much more accurate representations of nature, so you can have very accurate world models at all scales. And
so you have the world-modeling engine, and then you have the anthropomorphic AI that is human-like. You can have the playground to test your ideas, and you can have the synthetic scientist, and to us that joint system of a physics-based AI and an anthropomorphic AI is the closest thing to a fully general artificially intelligent system. You can get closer to truth by grounding the AI in physics, but you can also still have an anthropomorphic interface for us humans, who like to talk to other humans or human-like systems. So on that topic, I suppose one of the big limitations of current large language models, to you, is that they're good bullshitters, they're not really grounded in truth necessarily. Would that be fair to say? Yeah, you wouldn't try to extrapolate the stock market with an LLM trained on text from the internet. It's not going to be a very accurate model; it's not going to model its priors or its uncertainties about the world very accurately. So you need a different type of AI to complement this text-extrapolation AI. Yeah.
You mentioned the singularity earlier. How far away are we from a singularity? I don't know if I believe in a finite-time singularity as a single point in time. I think it's going to be asymptotic, sort of a diagonal asymptote: we have the light cone, we have the limits of physics restricting our ability to grow, so obviously we can't fully diverge in finite time. I think a lot of people on the other side of the aisle think that once we reach human-level AI there's going to be an inflection point, and suddenly AI is going to grok how to manipulate matter at the nanoscale and assemble nanobots. Having worked for nearly a decade on applying AI to engineering matter, it's much harder than they think. In reality, you need a lot of samples, from either a simulation of nature that's very accurate and costly, or from nature itself, and that keeps your ability to control the world around us in check. There's a sort of minimal cost, computationally and thermodynamically, to acquiring information about the world in order to be able to predict and control it, and that keeps things in check. It's funny
you mentioned the other side of the aisle. So, in the poll I posted about p(doom) yesterday, what's the probability of doom, there seems to be a nice division between people who think it's very likely and very unlikely. I wonder if in the future the actual Republicans-versus-Democrats division, blue versus red, will be the AI doomers versus the e/acc-ers. Yeah, so this movement is not right-wing or left-wing fundamentally; it's more like up versus down, in terms of civilization. But it seems like there is a sort of alignment with the existing political parties, where those that are for more centralization of power, control, and more regulations are aligning themselves with the doomers, because instilling fear in people is a great way to get them to give up more control and give the government more power. But fundamentally we're not left versus right. We've done polls of people's alignment with e/acc, and I think it's pretty balanced. So it's a new fundamental issue of our time: it's not just centralization versus decentralization, it's like tech progressivism versus tech conservatism.
Right. So e/acc as a movement is often formulated in contrast to EA, effective altruism. What do you think are the pros and cons of effective altruism? What's interesting or insightful to you about them, and what is negative? Right, I think people trying to do good from first principles is good. We should actually say, sorry to interrupt, and you can correct me if I'm wrong, but effective altruism is a kind of movement that's trying to do good optimally, where good is probably measured as something like the amount of suffering in the world, which you want to minimize. And there are ways that can go wrong, as any optimization can, and so it's interesting to explore
how things can go wrong. We're both trying to do good to some extent, and we're arguing for which loss function we should use. Yes. Their loss function is sort of hedons, units of hedonism: how good do you feel, and for how much time? So suffering would be negative hedons, and they're trying to minimize that. But to us, that loss function has spurious minima. You can start minimizing shrimp-farm pain, which seems not that productive to me. Or you can end up with wireheading, where you either install a Neuralink or you scroll TikTok forever, and you feel good on the short-term timescale because of your neurochemistry, but on the long-term timescale it causes decay and death, because you're not being productive. Whereas e/acc measures the progress of civilization not in terms of a subjective loss function like hedonism, but rather an objective measure, a quantity that cannot be gamed: physical energy. It's very objective, and there are not many ways to game it. If you did it in terms of, say, GDP, or a currency that's pinned to a certain value, that value is moving, so that's not a good way to measure our progress. But the thing is, we're both trying to make progress and ensure humanity flourishes and gets to grow; we just have different loss functions and different ways of going about doing it. Is there a degree, and maybe you can
educate me, correct me: I get a little bit skeptical when there's an equation involved, trying to reduce all of human civilization, the human experience, to an equation. Is there a degree to which we should be skeptical of the tyranny of an equation, of a loss function over which to optimize, like having a kind of intellectual humility about optimizing over loss functions? Yeah, so this particular loss function, it's not stiff; it's kind of an average of averages. Distributions of states in the future are going to follow a certain distribution, so it's not deterministic. We're not on stiff rails; it's just a statistical statement about the future. But at the end of the day, you can believe in gravity or not, but it's not necessarily an option to obey it, and some people try to test that, and it goes not so well. Similarly, I think thermodynamics is there whether we like it or not, and we're just trying to point out what is, and try to orient ourselves and chart a path forward given this fundamental
truth. But there's still some uncertainty, there's still a lack of information, and humans tend to fill the gap of the lack of information with narratives. Even physics is up to interpretation when there's uncertainty involved, and humans tend to use that to further their own means. Whenever there's an equation, it just seems like, until we have a really perfect understanding of the universe, humans will do what humans do, and they'll try to use the narrative of doing good to fool the populace into doing bad. I guess this is something to be skeptical about in all movements. That's right. So we invite skepticism. Do you have an understanding of what, to a degree, went wrong? What do you think may have gone wrong with effective altruism that might also go wrong with effective accelerationism? Yeah, I mean, I think you
know, it initially provided a sense of community for engineers and intellectuals and rationalists in the early days, and it seems like the community was very healthy. But then they formed all sorts of organizations and started routing capital and having actual power. They have real power: they influence the government, they influence most AI orgs now. I mean, they were literally controlling the board of OpenAI, and look over at Anthropic, I think they have some control over that too. The assumption of e/acc, more like capitalism, is that every agent, organism, and meta-organism is going to act in its own interest, and we should maintain adversarial equilibria, adversarial competition, to keep each other in check at all times, at all scales. I think ultimately it was the perfect cover to acquire tons of power and capital, and unfortunately sometimes that corrupts people over time. What does
a perfectly productive day look like? Since building is important, what is a perfectly productive day in the life of Guillaume Verdon? How much caffeine do you consume? What's the perfect day? Okay, so I have a particular regimen. I would say my favorite days are 12:00 p.m. to 4:00 a.m. I'll have meetings in the early afternoon, usually external meetings, some internal meetings, because as CEO I have to interface with the outside world, whether it's customers or investors, or interviewing potential candidates. And usually I'll have ketones, exogenous ketones. So are you on a keto diet? I've done keto before, for football and whatnot, but I like to have a meal after that part of my day is done, so I can just have extreme focus. You do the social interactions earlier in the day, without food, front-load them? Yeah, like right now I'm on ketones and Red Bull,
and it just gives you a clarity of thought that is really next-level, because when you eat, you're allocating some of the energy that could be going to neural energy to your digestion. After I eat, maybe I take a break, an hour or an hour and a half, and then usually it's ideally one meal a day, like steak and eggs and vegetables, animal-based primarily, so fruit and meat. And then I do a second wind, usually that's deep work, because I am a CEO but I'm still technical, I'm contributing to most patents, and there I'll just stay up late into the night and work with engineers on very technical problems. So it's like the 9:00 p.m. to 4:00 a.m. range of time? Yeah, that's the perfect time: the emails, the things that are on fire, stop trickling in; you can focus, and then you have your second wind. I think Demis Hassabis has a similar workday to some extent, so that's definitely inspired my workday. But yeah, I started this workday when I was at Google and had to manage a bit of the product during the day and have meetings, and then do technical work at night.
night exercise sleep those kinds of
things yeah said football used to play
football yeah I used to play American
football uh I've done all sorts of
sports growing up and then I was into
powerlifting for a while um so when I
was a studying U mathematics in grad
school I would just you know do math and
lift take caffeine and that was my day
it was very pure the the purest of Monk
modes um but it's really interesting how
in powerlifting you're trying to cause
neural adaptation by having certain
driving signals and you're trying to
engineer neuroplasticity through all
sorts of
supplements um and you know you have all
sorts of you know brain derived
neurotrophic factors that get secreted
when you when you lift so it's it's
funny to me how I was trying to engineer
uh um um neural adaptation in my uh
nervous system more broadly not just my
brain while learning mathematics uh I
think you can learn much
faster uh if you really care if you
convince yourself to care a lot about
what you're learning and you have some
sort of assistance let's say uh caffeine
or some col energic supplement to
increase
neuroplasticity I should chat with
Andrew huberman point he's the expert
but uh uh yeah at least to me it's like
you know you can try to input more
tokens into your brain if you will and
you can try to increase the learning
rate so that you can learn much faster
on a shorter time scale so I've learned
a lot of things I followed my curiosity
you're naturally if you're passionate
about what you're doing you're going to
learn faster you're going to become
smarter
faster um and if you follow your
curiosity you're always going to be
interested and so I advise people to
follow their curiosity and don't respect
the boundaries of certain fields or what
you've been allocated in terms of Lane
of what you're working on uh just go out
and explore and follow your nose and try
to uh acquire and compress as much
information as you can into your brain
anything that you find interesting and
caring about a thing and like you said
which is interesting it does it works
for me really well it's like tricking
yourself that you care about a thing yes
and then you start to really care about
it yep so it's funny the
motivation is a really good Catalyst for
learning right and so at least part part
part of my character uh as bef Jos is
kind of like yeah yeah the hype man yeah
just hype but I'm like hyping myself up
but then I just tweet about it yeah and
it's just when I'm trying to get really
hyped up and in like an altered state of
consciousness where I'm like Ultra
focused in the flow wired trying to
invent something that's never existed I
need to get to like un real levels of
like excitement but your brain has these
levels of of cognition that you can
unlock with like higher levels of
adrenaline and and whatnot and I mean
I've learned that in powerlifting that
actually you can engineer a mental
switch to like increase your strength
right like if you can engineer a switch
maybe you have a prompt like a certain
song or some music where suddenly you're
like fully primed then you're at Max
maximum strength right and I've
engineered that that switch through
years of lifting if you're going to get
under 500 lb and it could crush you if
you don't have that switch to be wired
in you might die so that that'll wake
you right up and and that sort of skill
I've carried over to like research when
it's when it's go time when the stakes
are high somehow I just reach another
level of neural performance so Beff Jezos
is your sort of embodiment representation
of your intellectual
Hulk productivity Hulk that you just turn on
what have you learned about the the
nature of identity from having these two
identities I think it's interesting for
people to be able to put on those two
hats so explicitly I think it was
interesting in the early days I think in
the early days I thought it was truly
compartmentalized like oh yeah this is a
character you know I'm Guillaume Beff is just
the character like I take my
thoughts and then I extrapolate them to
a bit more extreme
but you know over time it's kind of like
both identities were starting to merge
mentally and people are like no I met
you you are Beff you are not just
Guillaume mhm and I was like wait am I and now
it's like fully merged but it was
already before the doxx it was already
starting mentally that you know I'm I am
this character um it's part of me would
you recommend people sort of have an alt
absolutely like young people would you
recommend them to explore different
identities by having alt accounts
it's fun it's like
writing an essay and taking a position
right it's like you do this in debate
it's like you can have experimental
thoughts and by the stakes
being so low because you're an anon
account with I don't know 20 followers
or something you can experiment with
your thoughts in a low
stakes environment I feel like we've
lost that in the era of everything being
under your main name everything being
attributable to you people are just
afraid to explore ideas that
aren't fully formed right and I feel
like we've lost something there so I
hope you know platforms like X and
others really help support people
trying to stay pseudonymous or anonymous um
because it's really important for for
people to share thoughts that aren't
fully formed and converge onto maybe
Hidden Truths that were hard to converge
upon if it was just through open um
conversation with real names yeah I
really believe in like not radical but
rigorous empathy it's like really
considering what it's like to be a
person of a certain Viewpoint and like
taking that as a thought experiment
farther and farther and farther and one
way of doing that is an alt
account uh that's a
fun interesting way to really explore
what it's like to be a person that
believes a set of beliefs and uh taking
that across the span of several days
weeks
months of course there's always the
danger of becoming that that's uh that's
the Nietzsche gaze long into the
abyss and the abyss gazes into you you have
to be careful breaking Beff yeah right
breaking Beff yeah you wake up with a
shaved head one day just like who who am
I what have I become uh so you've
mentioned quite a bit of advice already
but what advice would you give to
young
people of how to in this interesting
World we're in how to have a career and
how to have a life they can be proud
of I think to me the reason I went to
theoretical physics was that I had to
learn the base of the stack that was
going to stick around no matter yeah how
the technology changes right um and to
me that was the foundation upon which
then I later built engineering uh skills
and other skills and to me the laws of
physics you know it may seem like the
landscape right now is changing so fast
it's disorienting but certain things
like fundamental mathematics and physics
aren't going to change and if you have
that knowledge uh and and knowledge
about complex systems and adaptive
systems I think that's going to carry
you very far and so uh not everybody has
to study mathematics but I think it's
really a huge cognitive unlock to to
learn math and and some physics and
Engineering get as close to the base of
the stack as possible yeah that's right
because because the base of the stack
doesn't change everything else you know
your knowledge might become not as
relevant in a few years of course
there's a sort of transfer learning you
can do but then you have to always
transfer learn constantly I guess the
closer you are to the base of the stack
the easier the
transfer learning the shorter the jump
right right and you'd be surprised like
once you've learned Concepts and many
physical scenarios how they can carry
over to understanding other systems that
aren't necessarily physics and I guess like
the e/acc writings you know the
principles and tenets post uh that was
based on physics that was kind of my
experimentation with applying some of
the thinking from out-of-equilibrium
thermodynamics to understanding the
world around us and it's led to e/acc
and this movement if you look at
you're one cog in the machine in the
capitalist machine one
human and if you look at yourself do you
think mortality is a feature or a bug
like would you want to be immortal no I
I I think uh fundamentally in uh
thermodynamic uh dissipative adaptation
there's the word dissipation dissipation
is important death is important right we
we have a saying in physics physics
progresses one funeral at a time yeah I
think the same is true for capitalism
companies uh Empires um uh people
everything everything must die at some
point I think that we should probably
extend our lifespan because uh we need a
longer period of of training because the
world is more and more complex right we
have more and more data to really be
able to predict and understand the world
and if we have a finite window of higher
neuroplasticity
then then we have sort of a hard cap in
how much we can understand about our
world so you know I think I am for death
because again I think it's important you
know if if you had like a king that
would never die that would be a problem
right like it would the system wouldn't
be constantly adapting right uh you need
novelty you need youth you need
disruption to make sure the system's
always uh adapting and and and malleable
otherwise if things
are Immortal you know if you have let's
say corporations that are there forever
and they have the Monopoly they get
calcified they they become not as
optimal not as high Fitness in a
changing time varying landscape right
and so uh death gives space for uh Youth
and Novelty to to take its place and I
think it's an important part of every
system and nature so yeah I am for I am
for death but I do think that longer
lifespan and longer time for
neuroplasticity and bigger brains are
something we should strive for
well in that uh Jeff Bezos and Beff Jezos
agree that all companies die and for
Jeff the the goal is to try to he calls
it day one thinking try to constantly
for as long as possible
reinvent sort of extend the life of the
company but eventually it too will die
cuz it's so damn difficult to keep
Reinventing um are you afraid of your
own
death um I think I have ideas and things
I'd like to achieve in this world before
I have to go but I don't think I'm
necessarily afraid of death so you're
not attached to the this particular body
and mind that you got no I think I'm
sure there's going to be better versions
of myself in the future or
forks right genetic forks or
other right I truly believe
that I think there's a sort of uh
evolutionary like algorithm happening at
every bit or gene in the world is
sort of adapting through this process
that we describe in e/acc and uh I
think maintaining this adaptation
malleability is how we have constant
optimization of the whole machine and so
I don't think I'm particularly you know
an Optimum that needs to stick around
forever I think there's going to be
greater Optima in many ways what do you
think is the meaning of it all what's
the why of the machine the e/acc
machine the why well the why is
thermodynamics it's it's it's why we're
here it's what has led to the formation
of life and of civilization of evolution
of Technologies and growth of
civilization but why do we have
thermodynamics why do we have our
particular Universe why do we have these
particular hyperparameters the constants
of
nature well then you get into the
anthropic principle right in the
landscape of potential universes right
we're in the universe that allows for
Life uh and then why is there
potentially many
universes I don't know I don't know that
part but could we potentially
engineer new universes or create pocket
universes uh and set the hyperparameters
so there is some mutual information
between our existence and that universe
and we'd be somewhat its parents I think
that's really I don't know that'd be
very poetic it's purely conjecture but
um again this is why figuring out
quantum gravity would allow us to
understand if we can do that and above
that why does it all seem so beautiful
exciting the the quest of freaking
quantum
gravity seems so exciting why why is
that why are we drawn to that why are we
pulled towards that just just that
puzzle solving creative force that
underpins all of it it seems like I
think we seek just like an llm seeks to
minimize cross entropy between its
internal model and the world we seek to
minimize yeah the statistical divergence
between our predictions of the world
and the world itself and you know having
regimes of energy scales or physical
scales in which we have no visibility no
ability to predict or perceive um you
know that's kind of an insult to us and
we we want to we want to be able to
understand the world better in order to
best steer steer it or steer us through
it um and in general it's a capability
that has evolved because the better you
can predict the world the better you can
capture utility your free energy towards
your own sustenance and growth and I
think quantum gravity again is kind of
the the final boss uh in terms of
knowledge acquisition um because once
we've mastered that then we
can do a lot potentially but between
here and there I think there's a lot to
learn in the meso scales there's a lot
of information to acquire about our
world and a lot of engineering
perception prediction and control to be
done
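[The divergence-minimization idea described above can be made concrete with a short sketch: cross entropy H(p, q) between a "world" distribution p and a model's predicted distribution q is smallest exactly when the model matches the world, which is the same objective an LLM's next-token loss drives down. The distributions and names below are hypothetical, purely for illustration:]

```python
import math

def cross_entropy(world, model):
    # H(p, q) = -sum_x p(x) * log q(x)
    # lower values mean the model's predictions track the world better
    return -sum(p * math.log(q) for p, q in zip(world, model))

world = [0.7, 0.2, 0.1]       # hypothetical "true" distribution of events
good_model = [0.6, 0.3, 0.1]  # predictions close to the world
bad_model = [0.1, 0.2, 0.7]   # predictions far from the world

# the better predictor pays a lower cross-entropy cost
assert cross_entropy(world, good_model) < cross_entropy(world, bad_model)

# the floor is reached when model == world: H(p, p) is just the
# world's own entropy, so no predictor can do better than that
floor = cross_entropy(world, world)
assert cross_entropy(world, good_model) >= floor
```

[By Gibbs' inequality H(p, q) >= H(p, p), so minimizing cross entropy pushes the model's distribution toward the world's, which is one way to read the "minimize the statistical divergence between our predictions and the world" point in the conversation.]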
uh to climb up the Kardashev scale
and to us that's the great challenge of
our times and when you're not sure where
to go let the meme pave the
way uh Guillaume uh Beff thank you for talking
today thank you for the work you're
doing thank you for the humor and the
wisdom you put into the world this was
awesome thank you so much for having me
Lex it's a pleasure thank you for
listening to this conversation with Guillaume
Verdon to support this podcast please
check out our sponsors in the
description and now let me leave you
with some words from Albert Einstein if
at first the idea is not absurd then
there is no hope for it thank you for
listening I hope to see you next
time