Biggest World Threat In 2024: AI Warfare, Media Companies, Ukraine & China Conflict | Ian Bremmer
nXJBccSwtB8 • 2023-08-01
Kind: captions
Language: en
You said these are dangerous times, the world order is shifting before our eyes. We also both know that with hyper-disruptive technologies like AI on the horizon, a good outcome is not guaranteed. Why do you think Big Tech will become the third superpower, and what are the dangers and opportunities if it does?

Big Tech is essentially sovereign over the digital world. Consider the fact that former President Trump was de-platformed from Facebook and from Twitter when he was president. You know, the most powerful political figure on the planet, and he's just taken off of those networks, and as a consequence hundreds of millions of people that would be regularly engaging with him in real time suddenly can't see it. That wasn't a decision that was made by a government. It wasn't a decision made by a judge, or by a regulatory authority, or even by a multinational organization like, you know, the U.N. It was made by individuals that own tech companies.

The same thing is true in the decision to help Ukraine in the war. In the early days the U.S. didn't provide much military support. Most of the military capacity, the cyber defenses, the ability to communicate on the ground, was stood up by some tech companies. They're not allies of NATO, they're under no obligation to do that, they've got shareholders, right? But they still decided to do it.
I think that whether we're talking about society or the economy or even national security, if it touches the digital space, technology companies basically act with dominion. And that didn't matter much when the internet was first founded, because the importance of the internet for those things was pretty small. But as the importance of the digital world drives a bigger and bigger piece of the global economy, a bigger and bigger piece of civil society, a bigger and bigger piece of national security, and even increasingly defines who we are as people, how we interact with other human beings, what we see, what we decide, what we feel, how we emote: that is an astonishing amount of power in the hands of these tech companies. And yes, there are some efforts to rein them in, to break them up, to regulate them, but when I look at artificial intelligence in particular, I see these technology companies and their technologies vastly outstripping the capacity of governments to regulate in that space.

So does that mean that suddenly you're not going to be citizens of the U.S., you're going to be citizens of a tech company? No, I'm not going that far. But certainly in terms of who wields the most power over us as human beings, increasingly you would put those companies in that category. And none of us even five years ago were thinking about this seriously. Certainly when I was studying as a political scientist (this is my entire career), you know, the geopolitical space is determined by governments. Like them or hate them: some of them are powerful, some of them are weak, some of them are rich, some are poor, some are open, some are closed, some are dictatorships, some are democracies, some are functional, some are dysfunctional, but they're in charge. And that increasingly is not true.

As you look at that potential, or not potential, as you look at that growing reality, how does that play out? The one thing, when I look at that, that I really start getting paranoid about is that AI, and especially quantum computing (which I'm maybe less familiar with, but it sort of lingers in the back of my mind), becomes one of two things. Either weapons used by governments, even if it's not against their own people (though especially with authoritarian governments I get very paranoid about that); even if they're just used in warfare against other countries, that sort of quiet, invisible battle freaks me out. And then I also worry very much about this becoming the new battlefield for a cold war between the U.S. and China specifically. Do you see us as moving towards that, because the tech will make it increasingly easy to fight an invisible war?

I do think, of course, that all of these technologies are both enabling and destructive, and it all depends on the intention of the user. And in some cases, you know, it's someone who's just a tinkerer that makes a mistake, or that's playing around, and, you know, it explodes. I'm not particularly worried that the robots are going to take over. I'm not particularly worried that we're on the cusp of developing a superhuman intelligence and that we're suddenly irrelevant, or we're, you know, held hostage to it. In other words (and I know that you love The Matrix; we talked about that a little bit before the show), this is not my five-to-ten-year concern.
But the idea that this technology is going to proliferate explosively, I mean, vastly beyond anything we were ever concerned about with nuclear weapons. We're 80 years on and it's still just a handful of countries; no corporations, no terrorist groups, no individuals have access to those nukes. No: AI, with both its productive and destructive capacities, will not just be in the hands of rogue states but will also be in the hands of people, and terrorists, and corporations, and they'll have cutting-edge access to it. So I mean, it would be easier to deal with if it was just about the United States and China, and we can talk about the United States and China, how they think about that technology differently, how we're fighting over it, and how it has become a technology cold war. I think we can say that that exists right now: not a cold war overall, but a technology cold war. I think that exists. But I think the dangers of AI are far greater than that. It is precisely the fact that non-governments will act as principals in determining the future of the digital world, and of society and national security as a consequence. And governments right now still seem to think that they're going to be the ones that will drive all this regulation, and in the most recent days the United States is taking just a few baby steps to show that maybe they're recognizing that that's not the case. But ultimately, either we're going to have to govern in new institutions with technology companies as partners, as signatories, or they're not going to be regulated. And I think that that reality is not yet appreciated by citizens; it's not yet appreciated by governments.

Okay, so tell me more about that. What does the world look like where this technology is proliferating like that and is not regulated?
Well, if it's not regulated at all, that means that everyone has access to it. So let's look at the good side first; let's be positive and optimistic, because I am a believer in this technology. I think it does all sorts of incredible things, and I'm not just talking about ChatGPT. I'm talking about the ability to take any proprietary data set and be maximally efficient in extracting value from it; allowing workers to become AI-adjacent in ways that will make them more productive and effective. I look at my own firm, Eurasia Group. We've got about 250 employees, and we did a town hall with them the other day (we do one every quarter), and we were talking about AI, and I said: I don't think there's anyone in any of these offices globally that will be displaced by AI in the next three to five years, not one of my knowledge workers. But I said, all of you will be AI-adjacent, and if you're not learning how to use AI to dramatically improve your work, whether you are an analyst, or you're on the business side, or you're in finance, or you're on the IT help desk, or you're a graphics person, an editor, whatever it is, you will become much less productive than other employees that are doing that, and that will be a problem for you. So we need to get you the tools, and you need to learn.

And I think that's true in almost every industry imaginable. It's true in education; it's true in health care, and for new pharma and vaccines; it's true for new energy and critical infrastructure. And here's what's so amazing about it. One of the reasons it's taking us so long to respond to climate change, even now that we all agree that it's happening (we all agree there's 420 parts per million of carbon in the atmosphere, we all agree there's 1.2 degrees centigrade of warming; like, that's no longer in dispute), and yet it's really taking us a long time to get to the point that we can reduce our carbon emissions. The reason for that is because you need to change the critical infrastructure, right? You need to move from one entire supply chain oriented around carbon to another one oriented around something new, whether that's solar or, you know, green hydrogen, or you name it.
When you're talking about AI, you're talking first and foremost about creating efficiencies using your existing critical infrastructure, which means you have no vested corporations saying "we don't want that." No, every corporation is saying, how can we invest in that to create greater profitability? Every oil company is going to use AI, just like every post-fossil-fuel company is going to use it. Every bank is going to use it. Every pharmaceutical company, whether they're in mRNA or they're in traditional, you know, vaccines as we've developed them over decades now. I think that we truly underestimate the impact that will have in unlocking wealth, in unlocking human capital. And it's going to happen fast. It's not decades, as it took with globalization to open markets and get goods and services to move across the world. It's years; in some cases it's months. And that, to me, is very, very exciting. So that's the positive side, and, frankly, that's what the positive side looks like without regulation too. Because, I mean, look: there are trillions of dollars being spent on this rollout, and it's being spent by a lot of people who are hyper-smart. They are hyper-competitive, they want to get there first, before other companies that are in that space, and they don't need any further incentive to ensure that they can roll that out as fast as possible. So you and I can say whatever we want, but, you know, further subsidies are not required, right? Like, that is just going to happen. That is going to happen.
But what they're not doing, and I'm sure what you want to spend more time on with me, is not the "everything's going to be great," or, you know, what they call e/acc, the sort of exponential accelerationists who just believe that if we put all this money in, then we're all going to become a greater species and it's just going to happen. There are going to be a lot of negative externalities, and we know this from globalization. I mean, the miracle of your and my lifetimes thus far, before AI, the miracle was that we managed to unlock access to the global marketplace for, now, 8 billion people: trade in goods and capital and investment, and the labor force, the workforce. And that created dislocations. It meant that there were a whole bunch of people that were more expensive in the West that lost their jobs, as inexpensive labor that was very talented in China and India gained jobs. But that led to unprecedented growth for 50 years.
There were also negative externalities, and those negative externalities played out over many decades. You take all of this inexpensive coal and oil and gas out of the ground, and you don't realize that you're actually using a limited resource and you're affecting the climate, and so decades later we all figure out: oh, wait a second, this is a really huge cost on humanity and on all of these other species, many of which are already extinct, and no one's bothered to pay for them. Well, with AI, the negative externalities will happen basically simultaneously with all the positive stuff I just talked about. And, just like with climate, none of the people that are driving AI are spending their time or resources figuring out how to deal with those problems. They're spending all their time trying to figure out how to save humanity by accelerating this technology. So if we don't talk about those negative externalities, they're just going to happen, and they won't be mitigated, they won't be regulated. And there are a lot of them, and we can talk through what they are. But just to put this in everyone's head: it's kind of like climate change, right? We all wanted globalization (I'm a huge fan of globalization), and we all hate climate change, we wish it hadn't happened. You cannot have one without the other.
And, you know, the fact that we were so focused on growth, and that all of the powerful forces were "let's have more stuff, let's get more GDP, let's extend our lifespans, let's improve our education, let's take people out of abject poverty," all of which are, you know, laudable goals, some more, some less, but things that we all like. But there were consequences that no one wanted, no one dealt with, no one cared as much about, because they're not as directly relevant to us as the shiny apple that's right in front of us. And that is what is about to happen, in an exponential fashion, with artificial intelligence.

All right, so we've got the shiny-object syndrome, myself included. I am deploying AI in my company as fast as I can, but at the same time I am very worried about how this plays out. You've already touched on job loss; you're not super worried about that in the three-to-five-year time horizon. I may be a little more worried about that than you, but I gave a similar speech to my company, which is: I have literally zero intention to get rid of anybody, but I do have the expectation that all of you are going to be learning how to use AI. And I know that is going to mean I'm going to get efficiencies out of my current workforce, which means I won't be hiring additional people. So while the people I have are safe, it certainly creates instability for people in terms of looking for a new job, the kind of mobility; I don't think people are going to be scaling as quickly as possible.

But my real question for you is this, given that you have a global perspective, which I've come to late in the game. And for longtime viewers of mine, I will just say the reason I've become so obsessed with this (you and I were talking about this before we started rolling): I come at everything from the perspective of the individual, and I think that culture and all these knock-on effects are all downstream of the individual. If we want a good society, we have to be good individuals, but we have to take the time to say: what is that? Like, what are we aiming towards? What's our North Star? What are we trying to get out of this? For me, the punch line is human flourishing. I won't spend time in this interview defining what that means; certainly my listeners have heard me talk about that before. But, and I assume, given the talk you just gave, that you will roughly say something similar: we want good things, we want to pull people out of poverty, we want to clean up the environment. There are going to be a lot of things we want to do that I think, more or less, are about human flourishing. What, then, is the collision of a new technology like AI, becoming so ubiquitous in an unregulated fashion, that gives you pause? Is it U.S.-China? Is it a rogue actor making bioweapons? Like, what's the thing when you look near term, we'll say the three-to-five-year time horizon?

What gives you pause?
So, there are a few things. Even though I said I don't think I'm going to fire anyone because of AI, I do worry that the same populist trends that we have experienced in the developed world in particular over the last 20 years can grow faster. If you are, you know, living in a rural area, or you're undereducated, and you're not going to become AI-adjacent in the next five years, ten years, in the United States, in Europe, then those people will be left farther behind by the knowledge workers that have that opportunity. So I'm not saying they're going to have massive unemployment, but I worry about that.

What do you think about, like, picking fruit and stuff like that with robots? Does that make your radar for anything near-term?

Again, not so much. So again, I would say no, and let me tell you why I say no about that. When I think about what CEOs do with their workforces, generally they take those productivity gains and they pocket them.
You know, they pay out good bonuses to themselves, to their shareholders; maybe they invest more in growth. But as long as growth is moving, they're not getting rid of a whole bunch of people. They like the people that they have. They're always thinking the trees are going to grow, you know, sort of to the heavens. And then, when they face a sudden contraction, a recession, or even worse, a depression, then suddenly they look at everything around them and say, okay, where can we cut costs? And if a lot of those workers aren't as efficient as they used to be, and you get new technologies, suddenly it's not like you're incrementally getting rid of people every year; it's that you've taken a huge swath out of the workplace. So I don't think that's going to happen suddenly in the next few years, because we're coming out of a mild, narrow slowdown right now, and the next few years should look better.
I think more about what happens the next time we're in a major cyclical downturn, combining that with where we've gotten to with the AI productivity build-up at that point. But I still think that, in the interim, you're going to have people that aren't gaining the productivity benefits from AI inside Western economies, and those are the same people that have been hit by the fentanyl crisis, the same people that haven't had good investments in their educational systems. And then, around the world, the digital have-nots, the people that aren't even online, so they won't be able to use these new AI tools to improve their knowledge, to have access to better doctors. They'll be left behind by this new turbocharged globalization, and that's a lot of sub-Saharan Africa first and foremost. So I do think that there are two groups of people, even in the next five years, that will suffer comparatively, and will be angry politically, and that will create social discontent.

So I didn't mean to imply that I didn't care about that, or that I thought it was off the screen. It was more that I don't see that at a firm of literally 250 people; like, we're tiny, and if you tell me that we're going to have a lot more efficiency, I wouldn't actually hire less, I'd hire more, because I want to get to 500 people faster. Like, there are just more things that I want to do without taking any outside investment. But that's a tiny, tiny issue compared to the other stuff we're talking about.

The things that I'm probably most worried about in the near term, three years let's say, I'd say there are three buckets.
The first is the disinformation bucket: the fact that inside democracies, increasingly, especially with AI, we as citizens cannot agree on what is true. We can't agree on facts, and that delegitimizes the media; it delegitimizes our leaders and both political parties, or the many political parties that exist in other developed countries; it delegitimizes our judicial system, rule of law; it even delegitimizes our scientists. And you can't really have an effective democracy if there is no longer a fact base. I mean, we're seeing it right now in a tiny way with all of these indictments of Trump. It doesn't matter what the indictments are, it doesn't matter how many there are, it doesn't matter what he's being indicted for; what matters more to the political outcome is whether or not you favor Trump politically. If you do, then this is politicized, it's a witch hunt, and, you know, Biden should be indicted. And if you don't, then Trump is unfit, and with every indictment, no matter what it is, before you even get a result of it, then, you know, he's guilty. And that, with AI, becomes turbocharged.
I want to get into why that happens. So my first question on that is pre... it's definitely pre-AI, because I think this started breaking down with social media.

Great.

How, prior to social media, do you think we were able to come to a consensus on truth?

Well, a couple of reasons. One is that a lot of people got their media from either the same source or from overlapping and adjacent sources, so you had more commonality to talk about politics, to the extent that you talked about politics. Second, it was mostly long form. You would read a newspaper article, you would listen to a radio show, you would watch a television show; you weren't just getting the headline. Because today, if you go on CNN or Fox News on their websites, and don't look at the headlines, just look at the pieces, the pieces actually overlap a fair amount. If you look at the headlines, and then if you look at which headlines you're being filtered to, then the news that you're getting is completely different. So I think that's a reason too.
And, of course, the fact that people are spending so much more time intermediated by algorithms means they're spending less time randomly just meeting their fellow other. And that's even true with the rise of things like dating apps, right? I mean, as opposed to just happening to date someone you were in high school with, or in college with, or, you know, someone you meet at a bar: if you're meeting that person through a dating app, you're already being sorted in ways that will reduce the randomness of the views that you're exposed to. So, in all sorts of tiny ways that add up, that are mostly technologically driven, we become much more sorted ("sordid," not "sorted"? probably that too) as a population.
And then you put AI into this, and suddenly this is being maximized. So let me give you another example. You'll remember, I think it was David Ogilvy, the great advertising entrepreneur, who once said that we know that 50 percent of advertising dollars are useful and 50 percent are useless; we just don't know which 50 percent. And of course, now we know how to micro-target. Now we know that when we're spending money, we are spending it to get the eyeballs of the people who are going to be affected by our message: they will be angered by it, they will be titillated by it, they will be engaged by it, they will spend money, they will become more addicted by it, all of those things. And when you do that, you more effectively sort the population, as opposed to throwing a message at the wall where everybody gets the message. And so it is not the intention to destroy democracy; it is not the intention to rip apart civil society. It is merely an unintended secondary effect of the fact that we've become so good at micro-targeting and sorting that people no longer are together as a nation or as a community. And AI perfects that. AI allows you to take large language models and predict with uncanny capacity what the next thing is, and the next thing for an advertising company is how I can effectively target and reach that person, and not the other person who doesn't care about mine, and keep them engaged.

So let me give you my thesis on this. This, I think, is one of the most important things for us all to wrap our heads around. I thought a lot about: why is there a sudden breakdown in truth?
And the more I thought about, okay, what is true, how can we go about proving it, the reality is that so much of what we perceive to be true is merely your interpretation of something. So you're going to get a perspective on something built around what I call your frame of reference. Your frame of reference is basically your beliefs and your values that you've cobbled together, sort of unknowingly, throughout the course of your life. It becomes a lens through which you view everything, but it is a very distorted lens that is not making an effort to give you what is true; it's making an effort to conform to the things you already believe are, or ought to be. And so when people confuse that for objective reality, then you have a problem.

So when you introduce AI... well, one, when you introduce algorithms, you get massive fragmentation, so now I can serve you just the things you're interested in. Like, if you go to my feed, you're going to niche down into really weird things around video-game creation, which is something that I'm very passionate about, that somebody else isn't going to see. So you already get that fragmentation. You layer that on top of your perspective, which you're coming to with those pre-distortions; then you layer that on top of the fact that the algorithm has an agenda that may not match your agenda, and now, all of a sudden, you get into these echo chambers that are feeding back to you your same perspective. They're eliminating nuance by giving you, like you were talking about headlines earlier, by giving you "this is the talking point." And so now everything becomes predictable. If I know you're on the left, I know where you're going to fall on, you know, a basket of concepts; if you're on the right, same basket of concepts, I know where you're going to fall. And so once you get rid of that nuance, now, all of a sudden, again, we're not optimized for truth; we're optimized for party line. And that then feeds into a sense of tribe, and "I belong," and ease of thought, quite frankly, which is one of the things that scares me the most. It's like: oh, I don't have to think through that issue myself, I just need to know what my party line is; cool, got it, and now I go. And as we get more and more fragmented, now it becomes: okay, I know what my party line is in my very deep fragment here, but I don't know what's true, and I no longer even know how to assess what's true. In fact, I probably think, again, because that distortion reads to me as objective reality, that it is true. And so now you have all these people who are like, "this is true"; like, there's nothing you could tell me that will make me think any different, because I believe this to be true. And so now the question becomes, if I'm right that truth is perspective and interpretation, and you're soaked in the perspective and interpretation of others, so they reinforce, so it becomes perspective, interpretation, and reinforcement, and that becomes quote-unquote truth.
Outside of science, for lack...

No, because even with science we run into the same problem.

So what do we do? You know, the same problem with science, yes. So, in a world where... the only way I can think to get to the other side of this quagmire is to go: I want to achieve this thing, and I'm going to state this is my desired outcome, this is the metric by which I will determine whether I have achieved said outcome, and then, instead of asking what's true, I just ask what moved me closer to my goal. Is there any other way around that that you see, or is this just a one-way street to fragmented catastrophe?

No, there are lots of ways out of it; we're just not heading towards any of them. I mean, you look at your Twitter feed, or your X feed, and you've got the people you're following, and if you're willing to spend the time, you can curate a following feed that has people of all sorts of different backgrounds and inclinations from all over the world. And I do that.
But it takes a lot of time and effort, and you need expertise to be able to do that. You have to be able to research and figure out who those people are; you have to know some people in the field. Most people don't do that. And of course the For You feed is much more titillating. The For You feed is very entertaining; it engages you, it angers you, and it soothes you at the same time. You want more of that, and that, of course, is driving you exactly in the direction you just suggested. Now, a lot of people will say, well, okay, you watch CNN all the time, you should watch some Fox as well. No, that's not the answer. The answer is not watching Fox, because you will just hate-watch Fox; you've already been programmed to believe that everything the people on the other side are saying is false, and so they're all evil, and so all that's doing is validating your existing truth. No, what you really need (I tell young people this all the time): if you really want to understand and get outside of what's happening in the United States ecosystem, watch the CBC, or Al Jazeera, or Deutsche Welle, or NHK in Japan. Just watch their English-language news once a week for half an hour, an hour. It's not very exciting, but it's a completely external view of what the hell is going on in the United States and the rest of the world. First of all, it's long form, right? It's not the headlines beating you down. And secondly, you don't actually have your anchor of all of the things that are stirring you up; they're not even playing with that. They're just kind of reporting, the best they can tell, on what the hell is going on, and then they're occasionally talking to people that are locals and whatnot, but from every side. That's very valuable.

But here's the thing that worries me about AI. I don't believe that AI is becoming much more like human beings, that it's faking us out by being able to replicate me. I think what's actually happening is that technology companies are teaching us to engage more like computers. I mean, you and I, in person, in a conversation, in a relationship (a work relationship, a friend relationship, a sexual relationship, whatever it is): there's nothing a computer can do that can tear us away from that. But if we spend our time increasingly in the digital world, where all of our inputs are algorithmic, well, computers can replicate that very easily. And so if they can only make us more like computers, then no, it's not like The Matrix, where they want to feed off us as fuel; it's much more that we're very valuable in driving the economy, if you give us all of your attention and data.
And that is the way that you create, right, a maximal AI economy. It also happens to be completely dehumanized, because we all know that human beings are social animals. We know that if you stick us in a room, or you stick us on a desert island, we're going to engage with each other, talk to each other, figure out things about each other. It doesn't matter what color we are, what sexual orientation we are; we will figure it out if we're stuck, if we have no choice. But if you take us, and you use our most base, most reptilian impulses, and you monetize those so that we're the product: oh no, then you lose everything we built as human beings. All the governance, all the community, all the social organizations, the churches, the family, the things that matter to us: that's what we're losing. We're losing the things that make us rooted, and make us sane, and make us care, and make us love. I mean, flourishing: flourishing starts right here. It starts at home; it doesn't start online. Those are tools that we need to use to create wealth, but you can't flourish if you don't have real relationships. That strips away the essence of who we are as people, and yet we are all running headlong away from flourishing.
yeah, so the only thing I'll take
exception with there is the sense that
we're running away from it. I think we're
being pulled. exactly, that
feels more right to me,
that's a better term. I agree.
one of the things that I feel is
really falling apart, and this is the
thing I don't have a good solution for,
is shared narratives. so
um Yuval Noah Harari talked
about this very eloquently, and he said
you know, look, there are other species
that can coordinate in massive groups,
as big as if not bigger than
humans can, but we're the only
ones that can coordinate in these huge
groups flexibly, and he said the way that
we create that flexibility is through
shared narratives. now those have
historically come most compellingly
through religion, and as religion changes,
I resonate with the language that
God is dead, Nietzsche's sort of
interpretation of that. that can raise
hackles with some people, so I'll just say
that the tenor of it has changed
in a world where I think a lot of
people have alternate belief systems or
things they gravitate towards, or aren't
even necessarily thinking about religion
I think there's a god-shaped hole in all
of us, and I am not a believer, as my
longtime listeners will know, but I
acknowledge that I have a god-shaped
hole in me that I need to fill with
meaning and purpose, and
so going back to this
idea, as we fragment this gets very scary,
because we don't have shared narratives
anymore, and so now we're not necessarily
cooperating in groups as large. at
least before we would have the
narrative of the nation, and so we had
something that we could galvanize around
um but obviously with the rise of
populism cyclically throughout history
it's not like just now
um but whenever that rears its ugly head,
some very dark things can happen.
and so I'll say that's like a hyper
shared narrative, right: an
injustice has been done to me and the
other person did it and we need to rise
up against them. okay, cool, shared narrative
can get dark. but you can also have
the other side where there is no shared
narrative, and you are now, to your point,
being pulled in a direction
that doesn't unite us but only fragments
us further
and I'll plug into that the reason that
I don't look at that and go oh we just
need to then come up with a shared
narrative. in fact I'm going to put this
in the framing of your book. you open
your book The Power of Crisis with the
story of Reagan and Gorbachev. Reagan
asks Gorbachev, this is at the height
of the Cold War, if the U.S
were invaded by aliens,
would you help us, and Gorbachev said yes
absolutely and that idea of okay there
are things that we could rally around
that take us out of our smaller
narrative into a larger narrative, hence
the title of the book The Power of
Crisis. there is a thing that can
bring us together and give us that
shared narrative. but what scares me is
if you plug AI bias into this
equation. now I'm
like, whoa, one, who gets to decide
what the ai's value system is what the
ai's belief system is how the AI
interprets truth what the AI reinforces
and then if there are a lot of AIs,
which is probably the thing that
protects us from an authoritarian answer
but at the same time then you have all
this competing reinforcement that again
just brings us back to fragmentation so
as you look at that Suite of uh
unnerving potential problems
what do you see as our path to the other
side of this to doing it well
yeah
um so President Biden, just two weeks
ago,
had a group of seven AI founders
slash CEOs, the most powerful companies
in this space as of right now. that will
not be true in a year or two, there will
be vastly more. some of them are
hyperscalers, some of them are large
language model creators, and some are
both
um and uh it was very interesting
because those seven companies basically
agreed on a set of voluntary principles
that included things like
watermarks on AI-generated content,
reporting on
vulnerabilities,
sharing best practices on
testing the models, all of this stuff, and
the stuff that if you looked at it
carefully you'd say those are all things
we want, those are things that will help
protect us from the worst excesses of
AI proliferation. now on the other
hand, not only were they
voluntary, but they were super undefined,
in ways that every company that was
there could already say, we're doing all
of those things, we don't need to spend
any more money on them
um but I am told those seven companies are
planning on creating an institution that
will meet together
um and will work on advancing those
standards and defining
them more clearly. we'll see where
that goes but also I mean as more
companies get in the space you're
creating an expectation in the media in
the government in the population that
these are things that they're committing
to and so increasingly other companies
will also want to show that they're
doing that, and maybe there will be some
backlash if they're not effective
at doing so but but you know what was
interesting to me about that initial
meeting is the White House convened it
but they didn't actually set the agenda
really at all because they don't have
the expertise they don't have the
technology they don't know what these
tools do. I mean, they're trying to get
up to speed and hire people as fast as
they can, but they're not going to
be anywhere close to these companies. and
what I think needs to happen in short
order
is that you're going to need to create
an approach that marries these things
you'll need the tech companies to have
these institutions that they are
involved in standing up, but the
governments are going to need to work
with them
and and they're going to need to have
carrots and sticks they'll need to be
licensing regimes like we see for
financial institutions
um there are going to need to be
deterrents: companies will need to be
responsible for what's on their
platforms, and if those are used in
nefarious ways there will have to
be penalties that could include shutting
them down
um and uh you know there's also some
carrots that they should have as this
becomes a field of thousands and
thousands of companies there's
proprietary data sets that the US
government and American universities
have access to that can you can drive
massive wealth with AI and maybe those
will become public data sets that any AI
company that's licensed can potentially
use I mean all of this needs to be
created
but we are nowhere on this right now.
and the AI that we've been
hearing about for 40 years is suddenly
exponential, and exponential here is not
like Moore's Law exponential, it's not
a doubling every 18 months, it's
like 10x in the size and the
impact of the data sets every year
so we don't have years on this
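To make that comparison concrete, here's a quick back-of-the-envelope sketch; the five-year horizon and the exact arithmetic are my own illustration, not figures from the conversation:

```python
# Compare Moore's Law-style growth (a doubling every 18 months) with the
# "10x per year" pace described for AI data-set size and impact.
# Illustrative numbers only.
horizon_years = 5

# Doubling every 18 months: 2^(months / 18)
moore_factor = 2 ** (horizon_years * 12 / 18)

# 10x every year: 10^years
ai_factor = 10 ** horizon_years

print(f"Moore's Law over {horizon_years} years: ~{moore_factor:.0f}x")
print(f"10x-per-year over {horizon_years} years: {ai_factor:,}x")
```

Over five years the gap is roughly 10x versus 100,000x, which is the point being made: this pace leaves no time for slow-moving governance.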
um and that that's why the urgency
that's why I mean I've completely
retooled you know our knowledge set to
focus on what's the impact of AI on
geopolitics I mean in the last year uh
because I've never seen anything that's
had so much dramatic impact on how I
think about the world and how
geopolitics actually plays out and so
far you and I have only talked about the
disinformation piece and a little bit of
the job piece we haven't talked about
what's probably the most dangerous piece
which is the proliferation piece of
things like hackers and you know
developing bio weapons and you know
viruses that can kill. I'm
sure you've heard this, I've heard from
friends of mine that are coders
that in recent weeks they cannot
imagine coding without using the most
advanced AI tools right now because it's
just like it's just a world changer for
them and how much they can do I I don't
know any hackers
um but I'm sure that criminal malware
developers are saying, I can't imagine
developing criminal malware or
spear phishing without using these new AI
tools because I mean it's just going to
allow them to Target in such an
extraordinary and pinpoint way and also
to send out so much more you know sort
of capable malware that will elicit so
much more engagement and therefore you
know bring so much more money to them or
shut down so many more servers and give
them so much more illicit data and so
much of the illicit data that they've
already collected from the hacks on you
know all of these companies that you've
heard about Target for example other
firms. I mean, so much of that so far is
just being sold to
people that want to use the credit
cards. now you're going to sell it to
people that are empowered with AI, that
can generate malware against that data
and again, it's like
we're going to develop all these new
vaccines and new pharmaceuticals that
deal with Alzheimer's and
cancers, and it's going to be an
incredible time for medicine, but we'll
also be able to develop new bio weapons
that will kill people
um and that's not going to be just in
the hands of North Koreans or Russians
in a lab, it's going to be in the hands
of a small number of people that
intelligence agencies are not yet
prepared to effectively track. right,
there's a reason why we don't have
nuclear weapons everywhere it's because
it's expensive it's dangerous it's
really hard. I mean, imagine the
biohackers thinking back to the days
when, oh my God, it was so hard,
you'd have to actually mix
this stuff in a lab, you could
die yourself. now they can do all of
this on a computer, the quaint old days
you know. so yeah, I worry deeply about
the proliferation of these
incredible tools used in dangerous ways,
and we are not going to be able to
allow the slippage
that we have had
around cyber tools, that we have had
around terrorism and their
capabilities. our net, our filter, is
going to have to be incredibly
robust
do you have a sense of how we pull that
filter off
well
um part of it is as I say a hybrid
organization
um so there have been some people that
have spoken about an International
Atomic Energy Agency model, so it'd be an
international AI agency model.
I think that won't work, because
that implies a state agency with
inspectors that have a small number of
targets that they're engaging in those
inspections on. I don't think that works.
I think what you're going to need is an
agency that involves the tech companies
themselves and so you know if you're
developing an AI
um capacity in your garage if you want
to use that anywhere it's going to have
to be licensed
if you've got software that's going to
run AI it's going to have to be licensed
and and the tech companies that are
running these models are going to have
to police that in conjunction with
governments so this is I think this is a
new governance model I don't think it
will work with the governments by
themselves because they won't have the
ability to understand what the
capabilities of these algorithms are,
how fast they can proliferate,
what they can do, how they
can be used dangerously
um but the governments are the ones that
are going to be able to impose penalties
they will have the effective deterrent
measure. I mean Microsoft, Google, Meta,
these companies,
what are they going to do, throw
you off their platform? no, that
can't be the penalty for developing
um you know a bio weapon
um you're going to need to be working
together around this, and together
not just in the sense that the company
hands over information to the
government; the agencies are going to
need to be much more integrated. so
here's one thing that
I've been thinking a lot about be very
curious to get your feedback on this so
um I am definitely somebody who is a big
believer in um Bitcoin and what's going
on in cryptocurrency
but as I look at it, the
thing that makes me believe in Bitcoin
specifically is that it's the closest
thing to a digital recreation of an
exploding star. for people that
understand how gold became money
across a bunch of
cultures throughout time, the reason is
that it doesn't mold, it doesn't
rot, and it could only be generated
by an exploding star, so there's no way
to fake it, there's no way to make more. I
see and so yeah so you you have this
thing
um that's very good at carrying
wealth across time and space. it isn't
that it's
inherently valuable; people say, oh, but you
can make jewelry and stuff, yeah, but if
we didn't care about jewelry then that
never becomes a thing, and there's no
reason we should care about gold
jewelry. yeah, industrial uses of gold are
utterly marginal to its utility as a
currency. I agree. exactly, so along comes
Bitcoin which same idea there is a
finite amount of it you can never make
more. it's the sort of computer
equivalent of the exploding star, and
it's better at moving across space, so
maybe it's equal to gold in terms of
carrying wealth across time, but it's
certainly much easier across space. so
I'm like okay cool I really believe in
that but as you create that you now have
alternatives to government Fiat
currencies, right, and that is a slight
weakening of their power. they're
obviously going to push back on that, and
so we'll see how that plays out from a
regulatory perspective, whether they just
get in on it and start buying it or
whether they get very anti it. I
think that's yet to be determined
um but when I think about the things
that will weaken governments' hold
on things, the next thing that comes into
the picture is governments'
absolute inability to stay on top of AI.
and so now you've got, oh, we're already
having to lean on these companies
and so if it becomes the most powerful
tool, the most dangerous tool, and it's
not controllable by governments the
way nuclear weapons are, that's
another weakening of their power. and so
now you start getting two
paths before you. you get Balaji's, I
don't know if you know Balaji Srinivasan,
but you get his idea of the network state
where it's a
non-geographically bound grouping so
going back to that idea of shared
narratives so people share narratives
from all over the world they come
together they have digital currency they
can sort of make their own rules and
laws and then the other one is the
authoritarian version where it's like we
just grab a hold of all of this it is
top down and you're going to adhere or
life is going to be brutal obviously
that would be China's take but both of
those
aren't ideal for me as a child of the
80s where it's just like oh this is so
stable and wonderful so
um one do you think that
are those the two
most likely poles, or is there something
in the middle that's more likely? yeah, um
so I agree with you that, you know,
Bitcoin and crypto represent a similar
kind of proliferated decentralized
threat to governments as AI. having said
that, the amount of crypto
in existence and being used
compared to
fiat currencies is de minimis,
and I do not think that there is any
plausible threat of scale against
fiat currencies in the next, say, five
years
um and I do believe that if it became
a threat of scale, every government in
the world that matters would do
everything it could to ensure
a regulatory
environment that keeps fiat currency
dominant. they'll lean into
stablecoins, they'll lean into the
technology, but they will want to have
control over it. China is obvious, I mean,
you've got WeChat and lots of
digital payments that
work, but you have to use
the digital RMB
um you know, they refuse to
have currency that they don't have
control over, because they want the
information set, they want the political
stability. in the United States it's also
the importance of having the dominant
reserve currency globally, which matters
immensely to America's ability to
project power,
to maintain our level of
indebtedness, all of these things,
to weaponize finance, to
declare sanctions and tariffs, to get
other countries to do what we want,
to align with us. so given that, I
think the timeline for AI being
fundamentally transformative in
governance is minimum two to three years
maximum five to ten. even climate change,
which is
huge and in front of us, with trillions
and trillions of dollars of impact,
changing the way everybody thinks about
spending money and governance and where
they live, climate
change in many ways is slower moving and
slower in impact than what we're going to
see from AI. I think AI is going to
have much more geopolitical impact in
the next five to ten years than even
climate will. and that
was one of the things: when I wrote
the book The Power of Crisis and that
was before AI really took off for me
each of the crises I was talking about
were becoming larger and more
existential and I started with the
pandemic because I was writing kind of
in the middle of it and then I moved to
climate and then I moved to disruptive
Technologies and Ai and people were
saying, how could you not put climate
as the big one? I'm like, well,
because climate, first of all,
is not existential. we are
actually on a path to responding to
climate, it's just going to cause a lot
of damage
um and we're going to end up at like 2.5
degrees 2.7 degrees warming and it's
also going to happen like over the next
75 years, and we will probably be at peak
carbon in the atmosphere at around 2045,
and then a majority of the world's
ene