Transcript
nXJBccSwtB8 • Biggest World Threat In 2024: AI Warfare, Media Companies, Ukraine & China Conflict | Ian Bremmer
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/TomBilyeu/.shards/text-0001.zst#text/0984_nXJBccSwtB8.txt
Kind: captions
Language: en
You said these are dangerous times; the world order is shifting before our eyes. We also both know that with hyper-disruptive technologies like AI on the horizon, a good outcome is not guaranteed. Why do you think big tech will become the third superpower, and what are the dangers and opportunities if it does?

Big tech is essentially sovereign over the digital world. The fact that former president Trump was de-platformed from Facebook and from Twitter when he was president, you know, the most powerful political figure on the planet, and he's just taken off of those networks, and as a consequence hundreds of millions of people that would be regularly engaging with him in real time suddenly can't see it: that wasn't a decision that was made by a government. It wasn't a decision made by a judge, or by a regulatory authority, or even by a multinational organization like, you know, the UN. It was made by individuals that own tech companies.

The same thing is true of the decision to help Ukraine in the war. In the early days, the US didn't provide much military support. Most of the military capacity, the cyber defenses, the ability to communicate on the ground, was stood up by some tech companies. They're not allies of NATO; they're under no obligation to do that. They've got shareholders, right? But they still decided to do it.
I think that whether we're talking about society or the economy or even national security, if it touches the digital space, technology companies basically act with dominion. And that didn't matter much when the internet was first founded, because the importance of the internet for those things was pretty small. But as the importance of the digital world drives a bigger and bigger piece of the global economy, a bigger and bigger piece of civil society, a bigger and bigger piece of national security, and even increasingly defines who we are as people, how we interact with other human beings, what we see, what we decide, what we feel, how we emote, that is an astonishing amount of power in the hands of these tech companies. And yes, there are some efforts to rein them in, to break them up, to regulate them, but when I look at artificial intelligence in particular, I see these technology companies and their technologies vastly outstripping the capacity of governments to regulate in that space.

So does that mean that suddenly you're not going to be citizens of the US, you're going to be citizens of a tech company? No, I'm not going that far. But certainly in terms of who wields the most power over us as human beings, increasingly you would put those companies in that category. And none of us, even five years ago, were thinking about this seriously. Certainly when I was studying as a political scientist, and this is my entire career, you know, the geopolitical space is determined by governments. Like them or hate them: some of them are powerful, some of them are weak, some of them are rich, some are poor, some are open, some are closed, some are dictatorships, some are democracies, some are functional, some are dysfunctional, but they're in charge.
And that increasingly is not true.

As you look at that potential, or not potential, as you look at that growing reality, how does that play out? The one thing, when I look at that, that I really start getting paranoid about is that AI, and especially quantum computing, which I'm maybe less familiar with but which sort of lingers in the back of my mind, become one of two things. Either they become weapons used by governments, even if it's not against their own people, though especially with authoritarian governments I get very paranoid about that, but even if they're just used in warfare against other countries, that sort of quiet, invisible battle freaks me out. And then also I worry very much about this becoming the new battlefield for a cold war between the US and China specifically. Do you see us as moving towards that, because the tech will make it increasingly easy to fight an invisible war?

I do think, of course, that all of these technologies are both enabling and destructive, and it all depends on the intention of the user, and in some cases, you know, it's someone who's just a tinkerer that makes a mistake, or that's playing around and, you know, it explodes. I'm not particularly worried that the robots are going to take over. I'm not particularly worried that we're on the cusp of developing a superhuman intelligence and that we're suddenly irrelevant or, you know, held hostage to it. In other words, and I know that you love The Matrix, we talked about that a little bit before the show, this is not my five-to-ten-year concern.
But the idea that this technology is going to proliferate explosively, I mean, vastly beyond anything we ever were concerned about with nuclear weapons. We're 80 years on, and it's still just a handful of countries, and no corporations, no terrorist groups, no individuals have access to those nukes. No, AI, with both its productive and destructive capacities, will not just be in the hands of rogue states but will also be in the hands of people, and terrorists, and corporations, and they'll have cutting-edge access to it. So I mean, it would be easier to deal with if it was just about the United States and China, and we can talk about the United States and China, and how they think about that technology differently, and how we're fighting over it, and how it has become a technology cold war. I think we can say that exists right now, not a cold war overall, but a technology cold war, I think that exists. But I think the dangers of AI are far greater than that. It is precisely the fact that non-governments will act as principals in determining the future of the digital world, and of society and national security as a consequence. And governments right now still seem to think that they're going to be the ones that will drive all this regulation, and in the most recent days the United States has taken just a few baby steps to show that maybe they recognize that that's not the case. But ultimately, either we're going to have to govern in new institutions with technology companies as partners, as signatories, or they're not going to be regulated. And I think that that reality is not yet appreciated by citizens; it's not yet appreciated by governments.

Okay, so tell me more about that. What does the world look like where this technology is proliferating like that and is not regulated?
Well, if it's not regulated at all, that means that everyone has access to it. So let's look at the good side first. Let's be positive and optimistic, because I am a believer in this technology. I think it does all sorts of incredible things, and I'm not just talking about ChatGPT. I'm talking about the ability to take any proprietary data set and be maximally efficient in extracting value from it; about helping workers become AI-adjacent in ways that will make them more productive and effective. I look at my own firm, Eurasia Group. We've got about 250 employees, and we did a town hall with them the other day, we do one every quarter, and we were talking about AI. And I said, I don't think there's anyone in any of these offices globally that will be displaced by AI in the next three to five years, not one of my knowledge workers. But I said, all of you will be AI-adjacent, and if you're not learning how to use AI to dramatically improve your work, whether you are an analyst, or whether you're on the business side, or you're in finance, or you're, you know, on the IT help desk, or you're a graphics person, an editor, whatever it is, you will become much less productive than other employees that are doing that, and that will be a problem for you. So we need to get you the tools, and you need to learn.

And I think that that's true in almost every industry imaginable. It's true in education, it's true in health care and for new pharma and vaccines, it's true for new energy and critical infrastructure. And here's what's so amazing about it. One of the reasons why it's taking us so long to respond to climate change, even now that we all agree that it's happening, we all agree there's 420 parts per million of carbon in the atmosphere, we all agree there's 1.2 degrees centigrade of warming, like, that's no longer in dispute, and yet it's really taking us a long time to get to the point that we can reduce our carbon emissions, the reason for that is that you need to change the critical infrastructure, right? You need to move from one entire supply chain oriented around carbon to another one oriented around something new, whether that's solar or, you know, green hydrogen or you name it.
When you're talking about AI, you're talking first and foremost about creating efficiencies using your existing critical infrastructure, which means you have no vested corporations saying, we don't want that. No, every corporation is saying, how can we invest in that to create greater profitability? Every oil company is going to use AI, just like every post-fossil-fuel company is going to use it. Every bank is going to use it. Every pharmaceutical company, whether they're in mRNA or in traditional, you know, vaccines that have been developed as we have over decades now. I think that we truly underestimate the impact that will have in unlocking wealth, in unlocking human capital. And it's going to happen fast. It's not decades, as it took with globalization to open markets and get goods and services to move across the world. It's years; in some cases it's months. And that, to me, is very, very exciting. So that's the positive side, and frankly, that's what the positive side looks like without regulation too. Because, I mean, look, there are trillions of dollars being spent on this rollout, and it's being spent by a lot of people who are hyper-smart. They are hyper-competitive, they want to get there first, before other companies that are in that space, and they don't need any further incentive to ensure that they can roll that out as fast as possible. So you and I can say whatever we want, but, you know, further subsidies are not required, right? Like, that is just going to happen. That is going to happen.
But what they're not doing, and I'm sure what you want to spend more time on with me, is not the everything's-going-to-be-great story, or, you know, what they call e/acc, the, you know, sort of exponential accelerationists who just believe that if we put all this money in, then we're all going to become a greater species, and it's just going to happen. There are going to be a lot of negative externalities.

And we know this from globalization. I mean, the miracle of your and my lifetimes thus far, before AI, the miracle was that we managed to unlock access to the global marketplace for, now, 8 billion people: trade in goods, and capital, and investment, and the labor force, the workforce. And that created dislocations. It meant that there were a whole bunch of people that were more expensive in the West that lost their jobs as inexpensive labor that was very talented in China and India gained jobs. But that led to unprecedented growth for 50 years.

There were also negative externalities, and those negative externalities played out over many decades. It's when you take all of this inexpensive coal and oil and gas out of the ground, and you don't realize that you're actually using a limited resource and you're affecting the climate, and so decades later we all figure out, oh, wait a second, this is a really huge cost on humanity and on all of these other species, many of which are already extinct, and no one's bothered to pay for them. Well, with AI, the negative externalities will happen basically simultaneously with all the positive stuff I just talked about.
And just like with climate, none of the people that are driving AI are spending their time or resources figuring out how to deal with those problems. They're spending all their time trying to figure out how to save humanity, how to accelerate this technology. So if we don't talk about those negative externalities, they're just going to happen, and they won't be mitigated, they won't be regulated, and there are a lot of them, and, you know, we can talk through what they are. But just to put it in everyone's head here: it's kind of like climate change, right? We all wanted globalization, I'm a huge fan of globalization. We all hate climate change, we wish it hadn't happened. You cannot have one without the other. And, you know, we were so focused on growth, and all of the powerful forces were: let's have more stuff, let's get more GDP, let's extend our lifespans, let's improve our education, let's take people out of abject poverty, all of which are, you know, laudable goals, some more, some less, but things that we all like. But there were consequences that no one wanted, no one dealt with, no one cared as much about, because they're not as directly relevant to us as the shiny apple that's right in front of us. And that is what is about to happen, in an exponential fashion, with artificial intelligence.

All right, so we've got the shiny-object syndrome, myself included.
shiny object syndrome myself included I
am I am deploying AI in my company as
fast as I can but at the same time I am
very worried about how this plays out uh
you've already touched on job loss
you're not super worried about that in
the three to five year time Horizon I
may be a little more worried about that
than you but I gave a same uh a similar
speech to my company which is I have
literally zero intention to get rid of
anybody uh but I do have the expectation
that all of you are going to be learning
how to use Ai and I know that that is is
going to mean I'm going to get
efficiencies out of my current Workforce
which means I won't be hiring additional
people so while the people I have are
safe yep uh it certainly creates
instability in people uh in terms of
looking for a new job the the kind of
Mobility I don't think people are going
to be scaling as quickly as possible but
my real question for you is given that
you have a Global Perspective which
which I've come to late in the game and
for long time viewers of mine I will
just say the reason I become so obsessed
with this you and I were talking about
this before we started rolling I come at
everything from the perspective of the
individual and I think that that culture
and all these knock-on effects are all
Downstream of the individual and if we
want a good Society we have to be good
individuals but we have to take the time
to say what is that like what are we
aiming towards what's our North Star
what are we trying to get out of this so
for me the punch line is human
flourishing I don't spend time in this
interview defining what that means
certainly my listeners have heard me
talk about that before but what do you
think about I I assume you will roughly
given the the talk that you just gave
will roughly say something similar we
want good things we want to pull people
out of poverty we want to clean up the
environment there's going to be a lot a
lot of things we want to do that I think
more or less are about human flourishing
what then is the Collision of a new
technology like AI becoming so
ubiquitous in an unregulated fashion
that gives you pause is it us China is
it a rogue actor making bio weapons like
what's the thing that when you look near
term we'll say the three to five year
time Horizon
what gives you pause
So, there are a few things. Even though I said I don't think I'm going to fire anyone because of AI, I do worry that the same populist trends that we have experienced in the developed world in particular over the last 20 years can grow faster. If you are, you know, living in a rural area, or you're undereducated, and, you know, you're not going to become AI-adjacent in the next five years, ten years, in the United States, in Europe, then you will be left farther behind by the knowledge workers that have that opportunity. And so I'm not saying there's going to be massive unemployment, but I worry about that.

What do you think about, like, picking fruit and stuff like that with robots? Does that make your radar for anything near-term? Again, not so much. So again, I would say no, and let me tell you why I say no about that. Because when I think about what CEOs do with their workforces, generally they take those productivity gains and they pocket them.
You know, they pay out good bonuses to themselves, to their shareholders. Maybe they invest more in growth. But as long as growth is moving, they're not getting rid of a whole bunch of people. They like the people that they have. They're always thinking the trees are going to grow, you know, sort of to the heavens. And then, when they face a sudden contraction, a recession, or, even worse, a depression, then suddenly they look at everything around them and say, okay, where can we cut costs? And if a lot of those workers aren't as efficient as they used to be, and you get new technologies, suddenly it's not like you're incrementally getting rid of people every year; it's that you've taken a huge swath out of the workplace. So I don't think that's going to happen suddenly in the next few years, because we're coming out of a mild, narrow slowdown right now, and the next few years should look better.
I think more about what happens the next time we're in a major cyclical downturn, combining that with where we've gotten to with the AI productivity buildup at that point. But I still think that in the interim you're going to have people that aren't gaining the productivity benefits from AI inside Western economies, and those are the same people that have been hit by the fentanyl crisis, the same people that haven't had good investments in their educational systems. And then, around the world, the digital have-nots, the people that aren't even online, so they won't be able to use these new AI tools to improve their knowledge, to have access to better doctors. They'll be left behind by this new turbocharged globalization, and that's a lot of sub-Saharan Africa first and foremost. So I do think that there are two groups of people that, even in the next five years, will suffer comparatively, and will be angry politically, and will create social discontent. So I didn't mean to imply that I didn't care about that, or that I thought it was off the screen. It was more that, as a firm of literally 250 people, like, we're tiny, and if you tell me that we're going to have a lot more efficiency, I wouldn't actually hire less; I'd hire more, because I want to get to 500 people faster. Like, there are just more things that I want to do without taking any outside investment. But that's a tiny, tiny issue compared to the other stuff we're talking about.

The things that I'm probably most worried about in the near term, three years, let's say, I'd say fall into three buckets.
The first is the disinformation bucket: the fact that inside democracies, increasingly, and especially with AI, we as citizens cannot agree on what is true. We can't agree on facts, and that delegitimizes the media, it delegitimizes our leaders and both political parties, or the many political parties that exist in other developed countries. It delegitimizes our judicial system, rule of law. It even delegitimizes our scientists. And you can't really have an effective democracy if there is no longer a fact base. I mean, we're seeing it right now in a tiny way with all of these indictments of Trump. It doesn't matter what the indictments are, doesn't matter how many there are, it doesn't matter what he's being indicted for. What matters more to the political outcome is whether or not you favor Trump politically. If you do, then this is politicized, it's a witch hunt, and, you know, Biden should be indicted. And if you don't, then Trump is unfit, and every indictment, doesn't matter what it is, before you even get a result of it, then, you know, he's guilty. And that, with AI, becomes turbocharged.
I want to get into why that happens. So my first question on that: it's definitely pre-AI, because I think this started breaking down with social media. Great. So how, prior to social media, do you think we were able to come to a consensus on truth?

Well, a couple of reasons. One is that a lot of people got their media from either the same source or from overlapping and adjacent sources, so you had more commonality to talk about politics, to the extent that you talked about politics. Second, it was mostly long-form. You would read a newspaper article, you would listen to a radio show, you would watch a television show. You weren't just getting the headline. Because today, if you go on CNN or Fox News on their websites and don't look at the headlines, just look at the pieces, the pieces actually overlap a fair amount. If you look at the headlines, and then if you look at which headlines you're being filtered toward, then the news that you're getting is completely different. So I think that's a reason too.
And of course, the fact that people are spending so much more time intermediated by algorithms means they're spending less time randomly just meeting their fellow other. And that's even true with the rise of things like dating apps, right? I mean, as opposed to just happening to date someone you were in high school with, or in college with, or, you know, someone you meet at a bar. If you're meeting that person through a dating app, you're already being sorted in ways that will reduce the randomness of the views that you're exposed to. So in all sorts of tiny ways that add up, and that are mostly technologically driven, we've become much more sorted (sordid, not sorted, though probably that too) as a population.
And then you put AI into this, and suddenly this is being maximized. So let me give you another example. You'll remember, I think it was David Ogilvy, the great advertising entrepreneur, who once said that we know that 50 percent of advertising dollars are, you know, useful and 50 percent are useless; we just don't know which 50 percent. And of course, now we know how to micro-target. Now we know that when we're spending money, we are spending it to get the eyeballs of the people who are going to be affected by our message. They will be angered by it, they will be titillated by it, they will be engaged by it, they will spend money, they will become more addicted to it, all of those things. And when you do that, you more effectively sort the population, as opposed to throwing a message at the wall where everybody gets the message. And so it is not the intention to destroy democracy. It is not the intention to rip apart civil society. It is merely an unintended secondary effect of the fact that we have become so good at micro-targeting and sorting that people no longer are together as a nation or as a community. And AI perfects that. AI allows you to take large language models and predict with uncanny capacity what the next thing is, and the next thing for an advertising company is how I can effectively target and reach that person, and not the other person who doesn't care about my message. Yeah, and keep them engaged.

So let me give you my thesis on this. This, I think, is one of the most important things for us all to wrap our heads around. I've thought a lot about why there is a sudden breakdown in truth, and the more I thought about, okay, what is true, how can we go about proving it, the reality is that so much of what we perceive to be true is merely your interpretation of something. So you're going to get a perspective on something built around what I call your frame of reference. Your frame of reference is basically your beliefs and your values that you've cobbled together sort of unknowingly throughout the course of your life. It becomes a lens through which you view everything, but it is a very distorted lens that is not making an effort to give you what is true; it's making an effort to conform to the things you already believe are, or ought to be. And so when people confuse that for objective reality, then you have a problem. So when you introduce AI, well, first, when you introduce algorithms, you get massive fragmentation. Now I can serve you just the things you're interested in. Like, if you go to my feed, you're going to niche down into really weird things around video game creation, which is something that I'm very passionate about, that somebody else isn't going to see. So you already get that fragmentation. You layer that on top of your perspective, which you're coming to with those pre-distortions; then you layer that on top of the fact that the algorithm has an agenda that may not match your agenda; and now, all of a sudden, you get into these echo chambers that are feeding your same perspective back to you. They're eliminating nuance.
By giving you, like you were talking about with headlines earlier, by giving you "this is the talking point," everything becomes predictable. If I know you're on the left, I know, on a basket of concepts, where you're going to fall. If you're on the right, same basket of concepts, I know where you're going to fall. And once you get rid of that nuance, now, all of a sudden, again, we're not optimized for truth; we're optimized for party line. And that then feeds into a sense of tribe, and "I belong," and ease of thought, quite frankly, which is one of the things that scares me the most. It's like, oh, I don't have to think through that issue myself; I just need to know what my party line is. Cool, got it. And as we get more and more fragmented, now it becomes, okay, I know what my party line is in my very deep fragment here, but I don't know what's true, and I no longer even know how to assess what's true. In fact, I probably think, again because that distortion reads to me as objective reality, that it is true. And so now you have all these people who are like, this is true, there's nothing you could tell me that will make me think any different, because I believe this to be true. And so now the question becomes, if I'm right that truth is perspective and interpretation, and you're soaked in the perspective and interpretation of others, so they reinforce each other, then it becomes perspective, interpretation, and reinforcement, and that becomes quote-unquote truth.
Outside of science, for lack of... No, because even with science we run into the same problem. So what do we do? You know, the same problem with science, yes. So in a world like that, the only way I can think of to get to the other side of this quagmire is to say: I want to achieve this thing, and I'm going to state, this is my desired outcome, this is the metric by which I will determine whether I have achieved said outcome, and then, instead of asking what's true, I just ask what moved me closer to my goal. Is there any other way around that that you see, or is this just a one-way street to fragmented catastrophe?

No, there are lots of ways out of it; we're just not heading towards any of them. I mean,
look at your Twitter feed, or your X feed. You've got the people you're following, and if you're willing to spend the time, you can curate a Following feed that has people of all sorts of different backgrounds and inclinations from all over the world, and I do that. But it takes a lot of time and effort, and you need expertise to be able to do it. You have to be able to research and figure out who those people are; you have to know some people in the field. Most people don't do that. And of course, the For You feed is much more titillating. The For You feed is very entertaining. It engages you, it angers you, and it soothes you at the same time. You want more of that, and that, of course, is driving you exactly in the direction you just suggested. Now, a lot of people will say, well, okay, you watch CNN all the time, you should watch some Fox as well. No, that's not the answer. The answer is not watching Fox, because you will just hate-watch Fox, because you've already been programmed to believe that everything the people on the other side are saying is false, and so they're all evil, and so all that's doing is validating your existing truth. No, what you really need to, and I tell young people this all the time: if you really want to understand and get outside what's happening in the United States ecosystem, watch the CBC, or Al Jazeera, or Deutsche Welle, or NHK in Japan. Just watch their English-language news once a week, for half an hour, an hour. It's not very exciting, but it's a completely external view of what the hell is going on in the United States and the rest of the world. And that forces you: first of all, it's long-form, right? It's not the headlines beating you down. And secondly, you don't have your anchor of all the things that are stirring you up. They're not even playing with that. They're just kind of reporting, as best they can tell, on what the hell is going on, and then they're occasionally talking to people that are locals and whatnot, but from every side. That's very valuable.

But here's the thing that worries me about AI.
I don't believe that AI is becoming much more like human beings, that it's faking us out by being able to replicate me. I think what's actually happening is that technology companies are teaching us, more effectively, how to engage like computers. I mean, you and I, in person, in a conversation, in a relationship, a work relationship, a friend relationship, a sexual relationship, whatever it is, there's nothing a computer can do that can tear us away from that. But if we spend our time increasingly in the digital world, where all of our inputs are algorithmic, well, computers can replicate that very easily. And so if they can only make us more like computers, then no, it's not like The Matrix, where they want to feed off us as fuel. It's much more that we're very valuable in driving the economy if you give us all of your attention and data. And that is the way that you create, right, a maximal AI economy. It also happens to be completely dehumanized. Because we all know that human beings are social animals. We know that if you stick us in a room, or you stick us on a desert island, we're going to engage with each other, talk to each other, figure out things about each other. Doesn't matter what color we are, what sexual orientation we are, we will figure it out if we're stuck, if we have no choice. But if you take us, and you use our most base, most reptilian impulses, and you monetize those so that we're the product, oh no, then you lose everything we built as human beings: all the governance, all the community, all the social organizations, the churches, the family, the things that matter to us. That's what we're losing: the things that make us rooted, and make us sane, and make us care, and make us love. I mean, flourishing starts right here. It starts at home. It doesn't start online. Those are tools that we need to use to create wealth, but you can't flourish if you don't have real relationships. That strips away the essence of who we are as people, and yet we are all running headlong away from flourishing.
yeah so
the only thing I'll take exception with
there is the sense that we're
running away from it I think we're
being pulled away from it exactly that
feels more right to me
that's a better term I agree
one of the things that I feel like is
really falling apart and this is the
thing I don't have a good solution for
this is shared narratives so
Yuval Noah Harari talked
about this very eloquently and he said
you know look there are other species
that can coordinate in massive groups
um as big if not bigger than the way
that humans can do but we're the only
ones that can coordinate in these huge
groups flexibly and he said the way that
we create that flexibility is through
shared narratives now they have
historically come most compellingly
through religion and as religion changes
I resonate with the language that you
know God is dead Nietzsche's sort of
interpretation of that that can raise
some people's hackles so I'll just say
that the tenor of it has changed given
that in a world where I think a lot of
people have alternate belief systems or
things they gravitate towards or not
even necessarily thinking about religion
I think there's a god-shaped hole in all
of us and and I am not a Believer as my
longtime listeners will know but I
acknowledge that I have a god-shaped
hole in me that I need to fill with
meaning and purpose and
as we fragment so going back to this
idea as we fragment this gets very scary
because we don't have shared narratives
anymore and so now we're not necessarily
cooperating in as large groups where at
least before we would have the
narrative of the nation and so we had
something that we could Galvanize around
um but obviously with the rise of
populism cyclically throughout history
it's not like just now
um but whenever that rears its ugly head
then some very dark things can happen
but I'll
say that's like a hyper
shared narrative right an
injustice has been done to me and the
other person did it and we need to rise
up against them okay cool shared narrative
can get dark but you can also have on
the other side where there is no shared
narrative you are now to your point
about you're being pulled in a direction
that doesn't unite us but only fragments
us further
and I'll plug into that the reason that
I don't look at that and go oh we just
need to then come up with a shared
narrative in fact I'm going to put this
in the the framing of your book you open
your book The Power of crisis with the
story of Reagan and Gorbachev and Reagan
says to Gorbachev hey
this is like at the height of the Cold
War if the U.S were invaded by aliens
would you help us and Gorbachev said yes
absolutely and that idea of okay there
are things that we could rally around
that take us out of our smaller
narrative into a larger narrative hence
the the title of the book The Power of
Crisis there is a thing that that can
bring us together and give us that
shared narrative but what scares me is
if you plug AI bias into this
equation now I'm
like whoa one who gets to decide
what the ai's value system is what the
ai's belief system is how the AI
interprets truth what the AI reinforces
and then if there are a lot of AIs
which is probably the thing that
protects us from an authoritarian answer
but at the same time then you have all
this competing reinforcement that again
just brings us back to fragmentation so
as you look at that Suite of uh
unnerving potential problems
what do you see as our path to the other
side of this to doing it well
yeah
um so President Biden just uh two weeks
ago
had a group of seven uh AI Founders
slash CEOs the most powerful companies
in this space as of right now that will
not be true in a year or two there'll be
vastly more some of them are
hyperscalers some of them are a large
language model uh creators and some are
both
um and uh it was very interesting
because those seven companies basically
agreed on a set of voluntary principles
that included things like
watermarks on AI content
reporting on
vulnerabilities
sharing best practices on
testing the models all of this stuff and
the stuff that if you looked at it
carefully you'd say those are all things
we want those are things that will help
protect us from the worst excesses of
AI proliferation now on the one
hand not only were they
voluntary but they were super undefined
in ways that every company that was
there could already say we're doing all
of those things we don't need to spend
any more money on them
um but
um I am told those seven companies are
planning on creating an institution that
will meet together
and will work on
advancing those standards and defining
them more clearly uh we'll see uh where
that goes but also I mean as more
companies get in the space you're
creating an expectation in the media in
the government in the population that
these are things that they're committing
to and so increasingly other companies
will also want to show that they're
doing that and maybe there will be some
some backlash if they're not effective
at doing so but you know what was
interesting to me about that initial
meeting is the White House convened it
but they didn't actually set the agenda
really at all because they don't have
the expertise they don't have the
technology they don't know what these
tools do I mean they're trying to get
up to speed and hire people as fast as
they can but they they're not going to
be anywhere close to these companies and
what I think needs to happen in short
order
is that you're going to need to create
an approach that marries these things
you'll need the tech companies to have
these institutions that they are
you know involved in standing up but the
governments are going to need to work
with them
and and they're going to need to have
carrots and sticks they'll need to be
licensing regimes like we see for
financial institutions
there's going to need to be
deterrence penalties they need to be
responsible for what's on their
platforms and if they're used in
nefarious ways there's going to have to
be penalties that could include shutting
them down
um and uh you know there's also some
carrots that they should have as this
becomes a field of thousands and
thousands of companies there's
proprietary data sets that the US
government and American universities
have access to that you can drive
massive wealth with AI and maybe those
will become public data sets that any AI
company that's licensed can potentially
use I mean all of this needs to be
created
but we are nowhere on this right now and
the AI that we've been
hearing about for 40 years is suddenly
exponential and exponential is not
like Moore's Law exponential it's not
like a doubling every 18 months it's
like 10x in terms of the size and the
impact of the data sets every year
so we don't have years on this
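to put those two growth rates side by side here is a rough back-of-the-envelope sketch in Python (purely illustrative arithmetic based on the figures quoted above, not measured data):

```python
# compare two exponential regimes over the same 5-year window:
# Moore's-Law-style doubling every 18 months versus the
# "10x per year" data-set scaling the speaker describes
years = 5

moores_law_factor = 2 ** (years / 1.5)  # doubling every 1.5 years, ~10x after 5 years
ai_factor = 10 ** years                 # 10x every year, 100,000x after 5 years

print(f"Moore's Law after {years} years: ~{moores_law_factor:.0f}x")
print(f"10x-per-year after {years} years: {ai_factor:,}x")
```

same word exponential but a roughly 10,000-fold difference in where you land after five years, which is the sense in which "we don't have years on this"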
um and that that's why the urgency
that's why I mean I've completely
retooled you know our knowledge set to
focus on what's the impact of AI on
geopolitics I mean in the last year uh
because I've never seen anything that's
had so much dramatic impact on how I
think about the world and how
geopolitics actually plays out and so
far you and I have only talked about the
disinformation piece and a little bit of
the job piece we haven't talked about
what's probably the most dangerous piece
which is the proliferation piece of
things like hackers and you know
developing bio weapons and you know
viruses that can kill I'm
sure you've heard this I've heard from
friends of mine that are coders
that in recent weeks they cannot
imagine coding without using the most
advanced AI tools right now because it's
just like it's just a world changer for
them and how much they can do I I don't
know any hackers
um but I'm sure that criminal malware
developers are saying I can't imagine
developing criminal malware or
spear-phishing without using these new AI
tools because I mean it's just going to
allow them to Target in such an
extraordinary and pinpoint way and also
to send out so much more you know sort
of capable malware that will elicit so
much more engagement and therefore you
know bring so much more money to them or
shut down so many more servers and give
them so much more illicit data and so
much of the illicit data that they've
already collected from the hacks on you
know all of these companies that you've
heard about Target for example other
firms I mean so much of that so far is
just oh we're just selling that for
people that want to like use the credit
cards no now you're going to sell it to
people that are empowered with AI that
can generate malware against that data
and again that's like
we're going to develop all these new
vaccines and new Pharmaceuticals that'll
deal with uh Alzheimer's and deal with
Cancers and it's going to be an
incredible time for medicine but we'll
also be able to develop new bio weapons
that will kill people
um and that's not going to be just in
the hands of North Koreans or Russians
in the lab it's going to be in the hands
of a small number of people that
intelligence agencies are not yet
prepared to effectively track right
there's a reason why we don't have
nuclear weapons everywhere it's because
it's expensive it's dangerous it's
really hard I mean imagine the
biohackers thinking back to the days
when oh my God you know how hard it was
like you know you'd have to actually mix
this stuff in a lab you could
die yourself I mean now we can do all
this on the computer the quaint old days
you know so yeah I I worry deeply about
the the proliferation of these
incredible tools used in dangerous ways
and we are not going to be able to
allow the slippage
that we have had
around cyber tools that we have had
around terrorism and their
capabilities our filter is
going to have to be incredibly
robust
do you have a sense of how we pull that
filter off
well
um part of it is as I say a hybrid
organization
um so there have been some people that
have spoken about an International
Atomic Energy Agency model so it'd be an
international AI agency model
um I think that won't work because
that implies a state agency with
inspectors that have a small number of
targets that they're engaging in those
inspections on I don't think that works
I think what you're going to need is an
agency that involves the tech companies
themselves and so you know if you're
developing an AI
um capacity in your garage if you want
to use that anywhere it's going to have
to be licensed
if you've got software that's going to
run AI it's going to have to be licensed
and and the tech companies that are
running these models are going to have
to police that in conjunction with
governments so I think this is a
new governance model I don't think it
will work with the governments by
themselves because they won't have the
ability to understand what the
capabilities of these algorithms are how
fast they can
proliferate what they can do how they
can be used dangerously
um but the governments are the ones that
are going to be able to impose penalties
they will have the effective deterrent
measure I mean Microsoft Google
Meta you know these companies
what are they going to do they'll
throw you off their platform no no that
can't be the penalty for developing
um you know a bio weapon
um you're going to need to be working
together around this and and together
not just in the company hands over the
information to the government the
agencies are going to need to be much
more integrated so here's one thing that
I've been thinking a lot about be very
curious to get your feedback on this so
um I am definitely somebody who is a big
believer in um Bitcoin and what's going
on in cryptocurrency
but as I look at it the
thing that makes me believe in Bitcoin
specifically is that it's the closest
thing to a digital Recreation of an
exploding star so for people that
understand
how gold became valuable across a bunch of
cultures throughout time it's
because it doesn't mold it doesn't
rot and it could only be generated
from an exploding star so there's no way
to fake it there's no way to make more I
see and so you have this
thing
that's very good at carrying
wealth across time and space it isn't
that it's inherently valuable
like people say oh but you
can make jewelry and stuff yeah but if
we don't care about jewelry then that
never becomes a thing and there's no
reason that we should care about gold
jewelry yeah industrial uses of gold are
utterly marginal to its utility as a
currency I agree exactly so Along Comes
Bitcoin which same idea there is a
finite amount of it you can never make
more it's the sort of computer
equivalent of the exploding star and
it's better about going across space so
maybe it's equal to gold in terms of
across time but it's certainly much
easier in terms of going across space so
I'm like okay cool I really believe in
that but as you create that you now have
alternatives to government Fiat
currencies right and that is this slight
weakening of their power they're gonna
obviously push back on that and so we'll
see how that sort of plays out from a
regulatory perspective whether they just
get in on it and start buying it or
whether they get very anti it I
think that's yet to be determined
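as an aside the "finite amount you can never make more" property mentioned above falls out of Bitcoin's halving schedule, which can be sketched in a few lines (a simplified model assuming the standard consensus parameters of a 50 BTC initial subsidy halving every 210,000 blocks; it ignores lost coins):

```python
# why Bitcoin's supply is capped at ~21 million coins:
# the block subsidy starts at 50 BTC and is cut in half
# every 210,000 blocks until it rounds down to zero
subsidy_sats = 50 * 100_000_000  # subsidy in satoshis (1 BTC = 1e8 sats)
total_sats = 0
while subsidy_sats > 0:
    total_sats += 210_000 * subsidy_sats  # blocks in this era times subsidy
    subsidy_sats //= 2                    # integer halving, as in the protocol

print(total_sats / 100_000_000)  # just under 21,000,000 BTC
```

the geometric series converges, so unlike a fiat currency there is no lever anyone can pull to issue more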
um but when I think about the the things
that will weaken the government's hold
on things the next thing that comes into
the picture is just the government's
absolute inability to stay on top of AI
and so now you've got oh we're already
having to lean on these these companies
and so if it becomes the most powerful
tool the most dangerous tool and it's
not controllable by governments in the
way that nuclear weapons are that's
another weakening of the power and so
now you start getting into these two
paths before you you get Balaji's I
don't know if you know Balaji Srinivasan
but you get his idea of the network state
where it's a
non-geographically bound grouping so
going back to that idea of shared
narratives so people share narratives
from all over the world they come
together they have digital currency they
can sort of make their own rules and
laws and then the other one is the
authoritarian version where it's like we
just grab a hold of all of this it is
top down and you're going to adhere or
life is going to be brutal obviously
that would be China's take but both of
those
aren't ideal for me as a child of the
80s where it's just like oh this is so
stable and wonderful so
um one do you think that
are those the sort of two
most likely poles or is there something
in the middle that's more likely yeah um
so I I agree with you that um you know
Bitcoin and crypto represent a similar
kind of proliferated decentralized
threat to governments as AI having said
that the amount of crypto in
existence and being used
compared to
fiat currencies is de minimis
and I do not think that there is any
plausible uh threat of scale against
Fiat currencies in the next say five
years
um and I do believe that if it became
a threat of scale every government in
the world that matters would do
everything they could to ensure that
they continue to have a regulatory
environment that maintains fiat currency
is dominant and they'll lean into stable
coins they'll lean into the technology
but they will want to have
control over it China obviously I mean
you've got WeChat and lots of
digital currencies that
work but you have to use
the digital RMB
um you know that they they refuse to
have currency that they don't have
control over because they want the
information set they want the political
stability in the United States it's also
the importance of having the dominant
Reserve currency globally which matters
immensely to America's ability to
project power
um to maintain you know our level of
indebtedness uh all of these things so
to weaponize finance to
declare sanctions and tariffs to get
other countries to do what we want
uh to align with us so given that I
think the timeline for AI being
fundamentally transformative in
governance is minimum two to three years
maximum five to ten I only see one thing
here I mean even climate change which is
huge and in front of us and trillions
and trillions of dollars of impact and
changing the way everybody thinks about
spending money and governance and where
they live and all of that uh climate
change in many ways is slower moving and
slower impact than what we're going to
see from AI like I think AI is going to
have much more geopolitical impact in
the next five to ten years than even
climate will and that was you know what
was one of the things that when I wrote
the book The Power of Crisis and that
was before AI really took off for me
each of the crises I was talking about
were becoming larger and more
existential and I started with the
pandemic because I was writing kind of
in the middle of it and then I moved to
climate and then I moved to disruptive
Technologies and Ai and people were
saying how could you not put climate you
know as the big one I'm like well
because climate first of all
it's not existential like we are
actually on a path to responding to
climate it's just going to cause a lot
of damage
um and we're going to end up at like 2.5
degrees 2.7 degrees warming and it's
also going to happen like over the next
75 years and we'll probably be at peak
carbon energy use at around 2045
and then a majority of the world's
energy starts coming from renewable
sources and
that's a that that's an exciting place
to be where with AI like we don't have
50 years for AI we don't have 30 years
for AI but you know we have five ten
years to figure out if we're going to be
able to regulate this or not and if it's
going to look more techno utopian or if
we're not here anymore I
mean honestly I haven't really
said this publicly but we're having a
broad enough discussion how old
are you
47. okay I'm 53.
um I think that knock on wood I don't
think that either of us are likely to
die of natural causes
um I think at our age we are probably
either going to blow ourselves up uh you
know as as humans or we're going to have
such extraordinary technological
advances
that we will be able to
dramatically extend lifespans in
ways that I mean you know dealing
with with cell death and and molecular
destruction and genetic engineering and
I mean just looking at what is ahead of
us over the next 10 20 years this does
not feel remotely sustainable but that
doesn't mean it's horrible that means
it's one of two tail risks and I just
can't tell if it's the great one or the
bad one but to the extent that I have
any role on this planet I'd like to
nudge us as I know you would too in the
better Direction and that means getting
a handle on this technology and and
working to to help it work for Humanity
with Humanity as opposed to
not against it but kind of
irrelevant to it we don't want
technology that does not consider human
beings as relevant on the planet
no I agree with that the thing that I
think that we're going to have to
contend with though is what is a
governmental response going to be to the
potential of their weakened power so we
know how China is dealing
with it
um so it was really amazing to watch
China open up the capital markets and
really just explode and in your book you
talk about this and I found it a
really interesting insight that
forced me to reorient my thinking about
what China did and so
um you know if you've read Mao The
Untold Story it's just
devastating to see how much death and
destruction came out of an authoritarian
government and then at the same time
you're like I don't know that America's
approach is always the right the most
optimal answer to every problem I forget
the exact words you used and what you
pointed out with China when they opened
up like just the growth rate was pure
insanity and it's really really pretty
breathtaking but they learned from the
collapse of Russia exactly what not to
do and now they're clamping back down
now as somebody that grew up in the U.S
man I look at that and I'm just like
dude that I don't like that that freaks
me out the thought of always being on
that Razor's Edge of like the individual
doesn't matter and we can just
completely obliterate you but then I
watch not even the government
necessarily in the U.S but the people in
the U.S giving up on Free Speech which
as I think about what what's like the
one thing that you just can't let go of
if you want the individual to matter and
I think if you want to get to the quote
unquote right answer uh you have to have
free speech like even in my own company
where it would be very tempting to run
my company in an authoritarian way I
just know I have too many blind spots so
I'm constantly like trying to get the
team to be like hey say whatever you
need whatever you believe to be true if
what you believe to be true is that I'm
an and I do not know what I'm
doing you need to be able to say that
now I'm going to push you to articulate
why I don't want some emotional
statement I want like give me going back
to truth right what is our goal what's
the metric by which we determine whether
we're getting towards our goal what can
you show me in the math that shows that
I'm doing this the wrong way and then
you know what's your take and why do you
think it's going to work better but
when I look at just the the instability
of that on both sides so you have
authoritarian rule where we just
obliterate it now as soon as they don't
feel like the government's in control they
kidnap those are my words Jack Ma re-educate
him and then put him back out there
terrifying or on our side where it's
like no if you say something I don't
like you 100 should be canceled going
back to what you said about Trump so
how do we as two people that want to
nudge this in the right direction
what's the right pressure point is it is
it the government is it the individual
is it the algorithms is it making sure
that AI has
um the right biases like what what's the
the right pressure point I don't I don't
know that the right biases are the issue
um I mean you know again there's a lot
of whack-a-mole going on tweaking these
models as you roll them out
um I I think it is more in trying to
ensure that you have Clarity and
transparency in what these models are
doing
um and then the data that's being
collected as it's being collected that
has to be shared because these are
experiments that are being run real time
on human beings
um and we wouldn't do that
um with a vaccine even in an emergency
we would have a lot more testing we
wouldn't do that
um on on a new GMO food uh because we'd
be concerned about you know sort of
disease cancer you name it but we're
doing that with these algorithms that's
very interesting to me and a little
chilling that the Chinese who have done
everything they can in the last 20 years
to catch up to and in some areas surpass
the Americans in new technology areas
they look at Ai and large language
models and they've said okay we're
going to have control over these we want
full censorship over these we're not
going to give them data sets they can
run on in the public because they think
it's too dangerous and that means that
the LLMs that the Chinese are running
right now are crap
they're nowhere near as good as what the
Americans presently have and that's
because the Chinese are willing to
accept the economic
disadvantage to ensure they have the
political stability
um and I I think that the United States
again we're not going to be able to
Simply stop this progress the progress
is going to happen there's too much
money it's too fast we don't know what
we're doing as a government in response
and also there are too many things we're
focused on yes you're focused on
proliferation but what I say is fake
news and what I say is disinformation
someone else is saying you're trying to
politicize it right and then you'll have
a whole bunch of people saying we can't
slow down our companies because we need
to beat the Chinese who are going to be
the largest economy in the world just
like you know Zuckerberg did with
Facebook you know 10 years ago
um and for all of so for all of those
reasons I don't think you can slow this
I don't think you can stop it I think
what we need is a partnership between
the technology companies and the
governments and that is going to have to
be regulated at the national level it's
going to have to be regulated at the
global level by the way the financial
Marketplace is not so radically
different from this but you have
algorithms trading algorithms that run
and they need to be regulated because
you want to know that certain types of
trading are not allowed and other types
of trading are and you know the 2008
financial crisis when it hit even though
it started in a small part of the
economy we were all worried oh my God
this could explode the whole economy
what happened all the banking CEOs and
the chairman of the
Fed and the Treasury Secretary got
together and said okay what are we going
to do to ensure the system can stay
stable and in place and that happened in
real time and one of the reasons it
works relatively well in the financial
space is because the Central Bank
Governors are technocratic and somewhat
independent from government like they
know that you want to avoid a bad
depression a market collapse they know
that you have monetary and fiscal tools
that you can use to respond we're going
to need to create something like that in
the technology space we're going to have
to create Regulators who are in
government but are working directly with
the tech companies as partners to avoid
contagion to respond immediately to
crises when they occur and they won't
just lead to Market collapse they could
lead to National Security destruction
they could lead to lots of people
getting killed but it's going to be the
same basic kind of model
um and and we got to start working on
that now
all right so let's talk about then the
central thesis of your book
um so using my words the book kind of
wants for a crisis hence the title The
Power of Crisis you call it
the Goldilocks crisis something
that is uh devastating enough that
people stop and pay attention but not so
devastating that we can't respond well
to it
um
is that the only way to get people to
act to uh cooperate in the way that we
would need to cooperate and does it like
when you think about the ideal state of
the world is it globalized or sensibly
deglobalized
um first of all it's a great question
and it's not like you can never make
progress outside of Crisis progress
happens all the time outside of Crisis
we see new legislation that gets passed
um we see you know new companies that
are started we see all sorts of
good works by people for other people on
the street you know
um but you know it's one thing to
say
can't we get the
progress we need
um in a family you can in a community
you can when you're working together
well within an alliance you frequently
can
but in what I call a G-Zero world
where there's not functional
global leadership where countries aren't
working together well they don't trust
each other they don't have you know the
institutions that align with the balance
of powers today so it's not a G7 or a
G20 it's really an absence of Global
Leadership I think in an environment
like that by far the
most likely way to get an effective
response just like with the Soviets
versus the Americans Reagan versus
Gorbachev in the opening of my book is
if you have a crisis if the aliens come
down and you know it turned out that the
pandemic wasn't a big enough crisis
didn't kill young people it wasn't I
mean you know look at what
happened the Americans pull out of the
World Health Organization the Chinese
lie to everybody
about it not being transmitted
human to human the relationship got
worse between the two countries the
Americans we didn't provide vaccines to
the poor countries around the world even
though we had people in the United
States that didn't need them that had
already taken them and
were waiting on boosters like it
was a complete pardon my
French
um and it's because it didn't feel like
an existential crisis it wasn't big
enough to force us
um to cooperate to a greater degree
January 6th in the United States
I mean maybe if Pence had been hanged
I mean God forbid
maybe if you know members of the
house or senate had been killed or
injured or kidnapped for a period of
time but as it stood that evening a
majority of Republicans in the house
voted not to certify the outcome why not
because they're focused on their jobs
and because they knew it wasn't a
constitutional crisis they knew it
wasn't a coup so I
in a dysfunctional governance
environment where people don't trust
each other at the highest levels that
are in power where we don't have
institutions that are proven to
work to respond to the crises in front
of us yeah we need a crisis and the good
news is that climate is clearly not only
a big enough crisis but also one that
Humanity I think is up for and so that
is forcing us
every year we are
radically exceeding
what the International Energy
Agency predicts in renewable energy
production and cost reduction
for decades now we've been exceeding that
and that's because this crisis has been
big enough and it's affecting everyone
to mobilize our asses into action and
the question is is AI a crisis that we
can actually effectively respond to
there's no question the size is suitably
great that it should motivate us and
when I talk to government leaders around
the world today they are focused they
are focused on it they're focused on it
because of the size of the crisis but
also it's very interesting so the U.S
government it's not that they're
suddenly all experts in AI it's also
because the three things that they are
most concerned about are national
security priorities which are
confrontation with China
war between Russia and Ukraine and proxy
war with the Russians and threat to the
U.S democracy they think and they're
right that all of these are dramatically
transformed by AI developments so not
only is AI coming as a big new thing but
also all the things they're already
worried about spending a lot of time and
money on and blood
um are things that are they better
figure this out or they're in trouble so
I do think the motivation
to get this right is going to be
there I just I hope we're up for it and
uh you know again I'm I'm an optimist
I'm I'm hopeful I mean at the end of the
day I mean the fact that we're here and
we're talking about it uh means that
we're capable of doing so my only fear
is that with global warming you can't
win global warming and get a leg up over
China or Russia uh but you can win Ai
and get a leg up and be better and I
think that that one thing that people
aren't talking about enough for sure is
that AI is going to be an adversarial
system meaning bad guys are going to
have ai and they're going to try to do
things to hurt me with that Ai and then
others are going to build AI that is
protective and try to stop the bad guys
and so you will have just like with
normal hacking you'll have an Ever
escalating arms race of AI and so even
if only with the best of intentions we
will end up getting to AI super
intelligence because we're trying to
stop somebody from doing a bad thing and
it's this is go ahead I was gonna say
that's a really good point and I've
given a lot of thought to that because
look we don't trust the Chinese at all
they don't trust us they've invested
billions and tens of billions of dollars
into next Generation nuclear wind uh
solar electric vehicles and the supply
chains for all of that now there are a
lot of people around the country that
are not particularly focused on climate
but they're focused on China and they're
saying hey we cannot let those guys
become the energy superpower post carbon
we've got to invest in it so that we're
going to be the energy superpower but
the good thing about that is hey that's
virtuous competition like if we end up
investing more so that we're the
dominant superpower that just means
cheaper post-carbon energy faster for
everybody but in the AI space it is
absolutely unclear that there is a
virtuous cycle of competition if we are
not working together the proliferation
risk is much much greater I couldn't
agree with you more on that point
yeah so now the question becomes when
when you look at what we get on the
other side of the crisis the cooperation
the banding together to focus on one
problem
um does does that lead us back to
globalization so we opened this up
with globalization amazing we were
lifting something ungodly like 160,000
people out of poverty every day for
like nine years was absolutely crazy the
number of people that we pulled out of
poverty uh but you get the Rust Belt
pushback rise of populism it's not good
for everybody and so needing to really
be honest about that but
in this world let's say that we get the
right crisis what are we steering
towards is it re-globalization or is it
what I'm calling thoughtful
de-globalization
I think we are trying uh to move back
towards uh globalization but thoughtful
globalization
um where you are using the resources you
have to more effectively take care of
the people that are uh Left Behind uh
that you are constantly retooling your
institutions and reforming them because
the Technologies are changing that fast
and that's something governments by
themselves won't be able to do again
they'll have to do in concert with these
new technology companies or governments
will have to change what they are
they'll have to integrate technology
companies into them and that's that
scares you that's more of an
authoritarian model frankly
um but I I do think
um that uh one of the reasons you've
steered me a couple times now in a
direction that historically I'd be very
easily steered which is to talk about us
versus China
and I've resisted it and the reason I've
resisted it even though U.S China is in
a horrible place right now and the
relationship is getting worse it's not
getting better but I I think it is more
likely within three five years that AI
companies Cutting Edge
in all sorts of fields will actually be
all over the world I don't I think this
is going to be a proliferating
technology for good and for bad so I'm
more concerned about individuals Rogue
States terrorist organizations doing
crazy things as opposed to the US versus
China that ultimately wants stability in
the system right but I'm also hopeful
that it's not going to be a small number
of dominant companies in the United
States and China that control all of the
Next Generation AI actually if you're at
a position where you can run a
near-cutting edge AI on your own laptop
or on your smartphone and millions and
millions of people have access to that
intelligence and they can do things with
it I don't think that a small number of
Mega Tech corporations are going to
control it I mean they may have
platforms that they'll be able to charge
taxes on basically tariffs on but I
think so much of both the value the
upside and the danger will be
distributed all over the world and
that's again very different than the way
we think about geopolitics today so I
don't think the uh I don't on the AI
front I don't think the U.S China fight
is the principal concern to worry about
in the next five to ten years
oh okay well so this is very interesting
one of the things you talked about in
the book is that when
Russia invaded the Ukraine one of the
things that they did to try to appease
the west and keep them calm was like hey
we know you're really worried about
hackers we're gonna go round them up
arrest them
um and what happens to the ability to
use political means to get these Bad
actors in line if they are proliferated
everywhere and we have varying degrees
of ability to influence
yeah uh it it's one of the reasons why I
think you don't have an Interpol model
or an iaea model it's why I think it's
going to be it's going to have to be
much more inclusive with the technology
companies I keep coming back to this I
don't think that the US government by
itself or the Russian government would
be able to make that kind of a promise
as easily Russians are a little bit
different here right if you're an
authoritarian State and you have real
control of the information space you
know maybe the vast majority of people
working on hacking are under your
Authority maybe but if AI really becomes
as explosive and as decentralized as I
believe it will then the governments by
themselves and you know are going to
have a hard time even maintaining
control of the AI space I'm not sure the
Chinese model on this is going to work I
mean in five and ten years time remember
they gave up on the great Chinese
firewall because it was too
porous and instead what they did was
they used the surveillance mechanisms
and they had a whole bunch of people
that were online that were basically
nudging Chinese citizens towards better
behavior and towards certain things they
should say and again
certain things they shouldn't say and that
turned out to be more effective
um AI I think is going to become if it
becomes a much more decentralized space
it's going to be much much harder for an
authoritarian state to do that but
certainly it'll be impossible for a
democratic state to do it now the
question you haven't asked me is does
that mean that democracy is sustainable
I mean the U.S government feels
immediate national security threat from
all these tech companies and they can't
regulate it you know might the Americans
start finding the Chinese model on AI
much more attractive I don't think so
and I don't think so because I think our
system because our system is so
entrenched it's so slow moving it's so
receptive to money the companies are so
wealthy they have the ability to capture
the regulatory environment like again I
mean never say never it can happen here
if things are incredibly dangerous yes I
mean you know you can take Desperate
Measures but short of the worst
scenarios I think that the United States
is closer to kleptocracy than it is to
authoritarian regime if there's a way
that the Americans are going to move
away from democracy it's probably not a
Chinese model
right well that's horrifying uh
my hope it's funny my brain tried to
fill in what you were going to say and
your answer is probably more true than
what I was hoping you were going to say
but what I was hoping you were going to
say was that we have such a strong
shared narrative around Freedom that we
wouldn't make those he laughs ladies and
gentlemen he laughs uh yeah oh my God
that used to be true when my dad was
alive and after World War II I just
don't see it anymore I mean not unless
everyone's lying to the pollsters all
the time I it just doesn't feel that way
yeah I don't think we agree in the
United States what our country stands
for I don't think we do I don't think we
know what our country stands for there's
such incredible cynicism among young
people that they're just being lied to
that it's performative from their
governments from their corporations from
everybody from the media and some of it
is very
understandable
um you know it's painful but
like our economy is doing so well
our technology is doing so well we have
the reserve currency it's not being
threatened we're in a great
geography it's very safe it's very
stable there are so many things that are
great I saw that Jamie Dimon gave a few
minutes that everyone was talking about
standing up for America but he
didn't talk about our political system
and our political system is
deteriorating and people don't believe
in it the way they used to and
I've not seen any pushback
against that in the last 20 years it got
worse under Obama it got worse under
Trump it's gotten worse under Biden it's
clearly not just about those people it's
structural there are a lot of things
driving it
um and uh that that I don't see a I mean
God forbid it we had a 911 right now
I mean I was here I was in New York on
9/11 I saw the second tower go down I saw
the way that New York City rallied I saw
the way the country rallied there was 92
percent approval for Bush within a
month young people will not understand
how crazy that is uh and and I don't
think that could happen today
I don't think I I don't think it could
happen even with someone who is as much
of a unifier as Biden has been
historically and it certainly couldn't
happen under Trump
um and and that's that's really sad
that's really sad
do you have a sense of how we unwind
that this is the one thing my thesis has
been on this that until there is enough
pain and suffering which unfortunately
historically Means War
um you don't get the the country won't
come back together right so we've
obviously been more divided than we are
now because we've been an open Civil War
in the past but I don't see how you
unwind these increasingly Divergent
narratives of left and right without
real suffering
well I mean there was this great book
that was written by a Princeton
historian about the three great
levelers and it talked about
how in societies whatever the governance
mechanism historically they tend to get
um more unequal and people with access
to power get closer access to power over
time uh unless one of three big things
happen uh famine uh Revolution or War
um and you know that's a little
depressing because that implies that you
have to have that kind of great
serious crackdown crash before you
you know come out and
create more opportunities for people but
I also am seeing
um I mean coming out of the pandemic
there was an enormous amount of money
that was spent
um on on poor people it wasn't just like
after 2008 when you bailed out AIG and
Lehman Brothers and the bankers this
time around I mean you bailed out
everybody you bailed out working mothers
you bailed out small and medium
Enterprises and it made a
difference and inflation has hit hard
but now finally working-class wages are
actually growing faster
um than you know than inflation and then
the average wage um and that wasn't true
for decades so maybe there is a bit of a
lesson in that maybe there is a bit of a
lesson when people are seeing that you
know it's the wealthiest with their
legacy capabilities
um that are getting uh accepted to the
major universities the best universities
and not others
um and there's a backlash against that
and maybe that forces greater
transparency maybe it turns out that AI
becomes with all the wealth it can
generate uh becomes more of a leveler
um for people in the United States that
will have access to Opportunities they
hadn't had before maybe it allows
globalization to pick up again and not
everybody's boat will rise at the same
speed but at least everyone's boat will
be rising for a while but coming out of
the pandemic we had 50 years if we
look at Humanity as you know this little
ball of eight billion people we had 50
years where overall we had extraordinary
growth and if you watched Steven Pinker
and Hans Rosling and all of these Pro
globalization folks it is true we
created not just very very wealthy
people but also a global middle class
and anyone looking at the globe you know
without a nationality
just like you're an average person you
don't know where you're going to be born
you don't know what family would you
want to be born in the last 50 years
yes you would and hopefully you win the
lottery and you're in the United States
like you and me but you know anywhere if
you you that's the time you'd pick but
the last three years you wouldn't
because the last three years suddenly
human development indicators have gone
down more people are you know forced
migrants more people are you know born
into extreme poverty and and people are
getting angrier as a consequence of that
well I mean I think there's a good
chance that with AI we will have a new
globalization that will create far more
opportunities but we need to be very
careful about those negative
externalities and so far it's very early
days but we're not addressing them yet
so given all of that paint a picture for
me of the near term let's call it the
next 10 years the world is Shifting
and changing what does the world order
look like
um as we look out into the future and
I'll contextualize that with you've got
things we've talked about here you've
got the war in Ukraine you've got a
dynamic between the US and China being
radically upended by the proliferation
of AI creating potentially powerful or
at least destructive entities anywhere
which make it harder for us to yank
levers of political persuasion
with all of the unique cocktail that's
Brewing now
um how does one
begin to conceptualize where the world
is heading over the next 10 years
well I can't imagine wanting to be alive
at any other time
I mean we talk about the anthropocene
where human beings
um first time in in history we have the
ability to actually shape the future of
humanity and uh our role on the planet
that we're on that's pretty
extraordinary
um and you know what does that mean uh I
think that means that governments and
governance will look radically different
than anything that we have lived with
for all of our lives for 50
years you and I on average now we've
lived in a fairly stable system the
Soviet Union collapsed U.S was in charge
China's had an extraordinary rise but
generally speaking the global order
today still looks more or less like the
global order you had 50 years ago Henry
Kissinger recognizes it right he was 50
now he's a hundred but it feels like
geopolitics still function the way they
used to you've got heads of state you've
got governance you still have the U.N
you know you've got the IMF you know
you've got the World Trade Organization
you've got these big things that that
more or less I mean are just at the
security Council security councils kind
of the same Security Council we had
before from the 70s but you know
whatever the rules the UN Charter
it's all there you know you could
have been born a long time ago
in 10 years time I think we'll still
recognize the the tectonics on the
planet I think the demographics we can
talk about we can talk about how Japan
will be smaller and how China's peaked
out now India is growing and we have a
pretty good sense of that in climate we've
got a pretty good sense of what
climate's going to look like and extreme
storms and the rest but but government
how government works how the geopolitics
work how the world is ordered ruled
I think it's going to look radically
different in 10 years I really do
certainly in 20 but probably in 10. I
think that a big piece of the power that
determines
who we are and how we interact with
people will be driven by a very small
number of human beings that control
these tech companies that may or may not
know what they're doing and
um they may not be with intentionality
and we don't really know what their
goals are and those goals can change
right I mean I I talked a little bit in
my TED Talk which I haven't really
talked much about which is kind of good
um uh is uh I talked a little bit about
how you know when you and I were raised
it was nature and nurture
and and that determined who we were and
that now for the first time in humanity
we are being raised by algorithm and
that we have a whole generation of kids
whose principal
understanding of how to interact with
Society will be intermediated by
programmed algorithms that have no
interest in the education of that child
that is a subsidiary impact of
what those algorithms are trying to do
um and and a lot of the interactions
that will take place with those kids
will be AI interactions not just
intermediated but the actual
relationship will be with AI
which by the way if I could wave a
magic wand and do one regulation in the
world today I would say anyone under 16
cannot interact with an AI directly
as if it were a human being unless it's
under direct human
supervision
because I just don't want people to be
raised by anything other than people
until we understand what that means
I mean the level of education
but again I want that to be
directly supervised by a
person so yes I think education I think
a doctor I'd love to have ai being used
you know for medical you know on medical
apps for kids but I'm saying if you're
having a relationship with something
including with a teacher I don't want
kids to have a relationship with an AI
educator unless it's unless it's
overseen by an adult until we know what
it does to the kids
you know we just don't know we just
don't know and I I worry about that a
lot I wouldn't want I mean I don't have
kids if I had them I'd worry about that
I know my mom wouldn't have allowed that
and thank God for it so yeah I think
that um I think that we're going to be
different as human beings I mean you
know you talked about Yuval
Noah Harari recently who I find very
inspirational as a thinker
and you know this Homo Deus concept
that he comes up with I think that young
people today are already something a
little different from Homo sapiens and I
don't know exactly what that is none of
us do because we're running the
experiments on them now
um I'm not comfortable with that
it's a a good summary Ian this has been
incredible where can people follow you
uh they can follow me on Twitter at Ian
Bremmer or LinkedIn at Ian Bremmer or even
Threads you know the few people that
are on that but it's kind of fun Ian
Bremmer uh what else I mean you know
GZEROMedia.com GZERO all one word
media.com uh where we have a
little digital media company that we
reach out to people all over the world
and they can get our stuff for free uh
which hopefully it's uh it's engaging
and useful just like I really enjoyed
this last hour so this was uh this was a
lot of fun same man all right everybody
if you haven't already be sure to
subscribe and until next time my friends
be legendary take care peace if you want
to learn more about this topic check out
this interview
I actually want to start with a quote of
yours So for anybody that doesn't know
you're a former CIA legitimate spy which
is crazy and the reason I find that
interesting is because you would have to
be a master of psychology your own and
others