Transcript
OpSmCKe27WE • Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/lexfridman/.shards/text-0001.zst#text/0399_OpSmCKe27WE.txt
Kind: captions
Language: en
The following is a conversation with Ben Goertzel, one of the most interesting minds in the artificial intelligence community. He's the founder of SingularityNET, designer of the OpenCog AI framework, formerly a director of research at the Machine Intelligence Research Institute, and chief scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including in his organizing of and contributing to the Conference on Artificial General Intelligence, the 2020 version of which is actually happening this week, Wednesday, Thursday, and Friday. It's virtual and free. I encourage you to check out the talks, including by Joscha Bach from episode 101 of this podcast.

Quick summary of the ads: two sponsors, the Jordan Harbinger Show and MasterClass. Please consider supporting this podcast by going to jordanharbinger.com/lex and masterclass.com/lex. Click the links, buy all the stuff. It's the best way to support this podcast and the journey I'm on in my research and startup.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter at lexfridman, spelled without the E, just F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the
conversation.

This episode is supported by the Jordan Harbinger Show. Go to jordanharbinger.com/lex. It's how he knows I sent you. On that page, there are links to subscribe to it on Apple Podcasts, Spotify, and everywhere else. I've been binging on his podcast. Jordan is great. He gets the best out of his guests, dives deep, calls them out when it's needed, and makes the whole thing fun to listen to. He's interviewed Kobe Bryant, Mark Cuban, Neil deGrasse Tyson, Garry Kasparov, and many more. His conversation with Kobe is a reminder of how much focus and hard work is required for greatness in sport, business, and life. I highly recommend the episode if you want to be inspired. Again, go to jordanharbinger.com/lex. It's how Jordan knows I sent you.

This show is sponsored by MasterClass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about MasterClass, I thought it was too good to be true. For 180 bucks a year, you get an all-access pass to watch courses from, to list some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of the greatest city-building game ever, SimCity, and The Sims, on game design, Carlos Santana on guitar, Garry Kasparov, the greatest chess player ever, on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. Once again, sign up at masterclass.com/lex to get a discount and to support this podcast.

And now, here's my conversation with Ben Goertzel.

What books, authors, ideas had a lot of impact on you in your life in the early
days?

You know, what got me into AI and science fiction and such in the first place wasn't a book, but the original Star Trek TV show, which my dad watched with me, like, in its first run. It would have been 1968, '69 or something, and that was incredible, because every show they visited a different alien civilization, with different culture and weird mechanisms. But that got me into science fiction, and there wasn't that much science fiction to watch on TV at that stage, so that got me into reading the whole literature of science fiction, you know, from the beginning of the previous century until that time.

And, I mean, there were so many science fiction writers who were inspirational to me. I'd say if I had to pick two, it would have been Stanisław Lem, the Polish writer, yeah, Solaris, and then he had a bunch of more obscure writings on superhuman AIs that were engineered; Solaris was sort of a superhuman naturally occurring intelligence. And then Philip K. Dick, who, you know, ultimately my fandom for Philip K. Dick is one of the things that brought me together with David Hanson, my collaborator on robotics projects.

So, you know, Stanisław Lem was very much an intellectual, right? So he had a very broad view of intelligence going beyond the human and into what I would call, you know, open-ended superintelligence. The Solaris superintelligent ocean was intelligent, in some ways more generally intelligent than people, but in a complex and confusing way, so that human beings could never quite connect to it, but it was still palpably very, very smart. And then the GOLEM XIV supercomputer in one of Lem's books, this was engineered by people, but eventually it became very intelligent in a different direction than humans, and decided that humans were kind of trivial and not that interesting. So it put some impenetrable shield around itself, shut itself off from humanity, and then issued some philosophical screed about the pathetic and hopeless nature of humanity and all human thought, and then disappeared.

Now, Philip K. Dick, he was a bit different. He was human-focused, right? His main thing was, you know, human compassion and the human heart and soul are going to be the constant that will keep us going through whatever aliens we discover, or telepathy machines, or super AIs, or whatever it might be. So he didn't believe in reality; like, the reality that we see may be a simulation, or a dream, or something else we can't even comprehend, but he believed in love and compassion as something persistent through the various simulated realities. So those two science fiction writers had a huge impact on me. Then, a little older than that, I got into Dostoevsky and Friedrich Nietzsche and Rimbaud and a bunch of more literary-type writing.

We'll talk about some of those things. So on
the Solaris side, Stanisław Lem, this kind of idea of there being intelligences out there that are different than our own. Do you think there are intelligences maybe all around us that we're not able to even detect? So this kind of idea of, maybe you can comment also on Stephen Wolfram thinking that there are computations all around us and we're just not smart enough to kind of detect their intelligence, or appreciate their intelligence.

Yeah, so my friend Hugo de Garis, who I've been talking to about these things for many decades, since the early '90s, he had an idea he called SIPI, the Search for Intra-Particulate Intelligence. So the concept there was, as AIs get smarter and smarter and smarter, you know, assuming the laws of physics as we know them now are still what these superintelligences perceive to hold and are bound by, as they get smarter and smarter, they're going to shrink themselves littler and littler, because special relativity makes it hard to communicate between two spatially distant points. So they're going to get smaller and smaller. But then, ultimately, what does that mean? The minds of the super-super-superintelligences are going to be packed into the interaction of elementary particles, or quarks, or the partons inside quarks, or whatever it is. So what we perceive as random fluctuations on the quantum or sub-quantum level may actually be the thoughts of the micro-micro-micro-miniaturized superintelligences, because there's no way we can tell random from structure with an algorithmic information content more complex than our brains', right? We can't tell the difference. So what we think is random could be the thought processes of some really tiny superminds, and if so, there's not a damn thing we can do about it, except, you know, try to upgrade our intelligences and expand our minds so that we can perceive more of what's around us.

But if those random fluctuations, like, even if we go to, like, quantum mechanics, if that's actually superintelligent systems, aren't we then part of the soup of superintelligence? Aren't we just, like, a finger of the entirety of the body of the superintelligence system?

We could be. I mean, a finger is a strange metaphor.

A finger is dumb, is what I mean.

But a finger is also useful and is controlled with intent by the brain, where we may be much less than that, right? I mean, yeah, we may be just some random epiphenomenon that they don't care about too much. Like, think about the shape of the crowd emanating from a sports stadium or something, right? There's some emergent shape to the crowd. It's there, you could take a picture of it, it's kind of cool, but it's irrelevant to the main point of the sports event, or where the people are going, or what's on the minds of the people making that shape in the crowd, right? So we may just be some semi-arbitrary higher-level pattern popping out of a lower-level hyper-intelligent self-organization. And, I mean, so be it, right? I mean, that's one thing.

That's still a fun ride.

Yeah. I mean, the older I've
gotten, the more respect I've achieved for our fundamental ignorance, mine and everybody else's. I look at my two dogs, two beautiful little toy poodles, and, you know, they watch me sitting at the computer typing. They just think I'm sitting there wiggling my fingers to exercise them, maybe, or guarding the monitor on the desk. They have no idea that I'm communicating with other people halfway around the world, let alone, you know, creating complex algorithms running in RAM on some computer server in St. Petersburg or something, even though they're right there in the room with me. So what things are there right around us that we're just too stupid or close-minded to comprehend? Probably quite a lot.

Your very poodle could also be communicating across multiple dimensions with other beings, and you're too unintelligent to understand the kind of communication mechanism they're going through.

There have been various TV shows and science fiction novels positing that cats, dolphins, mice, and whatnot are actually superintelligences here to observe us. I would guess, as one of the quantum physics founders said, those theories are not crazy enough to be true. The reality is probably crazier than that.

Beautifully put. So on the human side, with Philip K. Dick and in general, where do you fall on this idea that love and just the basic spirit of human nature persists throughout these multiple realities? Are you on the side of, like, the thing that inspires you about artificial intelligence, is it the human side of somehow persisting through all of the different systems we engineer, or does AI inspire you to create something that's greater than human, that's beyond human, that's almost
nonhuman?

I would say my motivation to create AGI comes from both of those directions, actually. So when I first became passionate about AGI, when I was, it would have been two or three years old, after watching robots on Star Trek, I mean, then it was really a combination of intellectual curiosity, like, can a machine really think, how would you do that, and, yeah, just ambition to create something much better than all the clearly limited and fundamentally defective humans I saw around me. Then as I got older and got more enmeshed in the human world, and, you know, got married, had children, saw my parents begin to age, I started to realize, well, not only will AGI let you go far beyond the limitations of the human, but it could also, like, stop us from dying and suffering and feeling pain and tormenting ourselves mentally. So you can see AGI has amazing capability to do good for humans, as humans, alongside its capability to go far beyond the human level. So, I mean, both aspects are there, which makes it even more exciting and important.

So you mentioned Dostoevsky and Nietzsche. What did you pick up from those guys?

I mean, that would probably go beyond the scope of a brief interview, certainly, because both of those are amazing thinkers who one will necessarily have
a complex relationship with, right? So, I mean, Dostoevsky, on the minus side, he's kind of a religious fanatic, and he sort of helped squash the Russian nihilist movement, which was very interesting, because what nihilism meant originally, in that period of the mid-to-late 1800s in Russia, was not taking anything fully 100% for granted. It was really more like what we'd call Bayesianism now, where you don't want to adopt anything as a dogmatic certitude and always leave your mind open. And how Dostoevsky parodied nihilism was a bit different, right? He parodied people who believe absolutely nothing, so they must assign an equal probability weight to every proposition, which doesn't really work. So on the one hand, I didn't really agree with Dostoevsky on his sort of religious point of view. On the other hand, if you look at his understanding of human nature, and sort of the human mind and heart and soul, it's really unparalleled. And he had an amazing view of how human beings, you know, construct a world for themselves based on their own understanding and their own mental predisposition. And I think if you look at The Brothers Karamazov in particular, the Russian literary theorist Mikhail Bakhtin wrote about this as a polyphonic mode of fiction, which means it's not third person, but it's not first person from any one person, really. There are many different characters in the novel, and each of them is sort of telling part of the story from their own point of view, so the reality of the whole story is an intersection, like, synergetically, of the many different characters' worldviews. And that, really, it's a beautiful metaphor, and even a reflection, I think, of how all of us socially create our reality. Like, each of us sees the world in a certain way. Each of us, in a sense, is making the world as we see it, based on our own minds and understanding. But it's polyphony, like in music, where multiple instruments are coming together to create the sound. The ultimate reality that's created comes out of each of our subjective understandings, you know, intersecting with each other. And that was one of the many beautiful things in Dostoevsky.

So, maybe a little bit to mention, you
have a connection to Russia and the Soviet culture. I mean, I'm not sure exactly what the nature of the connection is, but at least the spirit of your thinking is there.

Well, my ancestry is three-quarters Eastern European Jewish. So, I mean, three of my great-grandparents immigrated to New York from Lithuania and sort of border regions of Poland, which were in and out of Poland, around the time of World War I. And they were socialists and communists, as well as Jews, mostly Menshevik, not Bolshevik, and they sort of fled at just the right time to the US, for their own personal reasons. And then almost all, or maybe all, of my extended family that remained in Eastern Europe was killed, either by Hitler's or Stalin's minions, at some point. So the branch of the family that immigrated to the US was pretty much the only one.

So how much of the spirit of the people is in your blood still? Like, when you look in the mirror, do you see, what do you see?

Meat? I see a bag of meat that I want to transcend by uploading into some sort of superior reality. But, yeah, I mean, very clearly, I'm not religious in a traditional sense, but clearly the Eastern European Jewish tradition was what I was raised in. There was my grandfather, Leo, well, he was a physical chemist who worked with Linus Pauling and a bunch of the other early greats in quantum mechanics. I mean, he was into x-ray diffraction; he was on the material science side, an experimentalist rather than a theorist. His sister was also a physicist, and my father's father, Victor Goertzel, was a PhD in psychology who had the unenviable job of giving psychotherapy to the Japanese-Americans in internment camps in the US in World War II, like, to counsel them why they shouldn't kill themselves, even though they'd had all their stuff taken away and been imprisoned for no good reason. So, I mean, yeah, there's a lot of Eastern European Jewishness in my background. One of my great uncles was, I guess, conductor of the San Francisco Orchestra, so there's a lot of musicality and a bunch of music in there also. And clearly this culture was all about learning and understanding the world, and also not quite taking yourself too seriously while you do it, right? There's a lot of Yiddish humor in there. So I do appreciate that culture, although the whole idea that, like, the Jews are the chosen people of God never resonated
with me too much.

The graph of the Goertzel family, I mean, just the people I've encountered, just doing some research and just knowing your work through the decades, it's kind of fascinating. Just the number of PhDs.

Yeah, yeah. I mean, my dad is a sociology professor who recently retired from Rutgers University, but clearly that gave me a head start in life. I mean, my grandfather gave me all his quantum mechanics books when I was, like, seven or eight years old. I remember going through them, and it was all the old quantum mechanics, like Rutherford atoms and stuff. So I got to the part on wave functions, which I didn't understand, although I was a very bright kid, and I realized he didn't quite understand it either. But at least, like, he pointed me to some professor he knew at UPenn nearby who understood these things, right? So that's an unusual opportunity for a kid to have, right? And my dad, he was programming Fortran when I was 10 or 11 years old, on, like, HP 3000 mainframes at Rutgers University, so I got to do linear regression in Fortran on punch cards when I was in middle school, right? Because he was doing, I guess, analysis of demographic and sociology data. So, yes, certainly that gave me a head start and a push towards science beyond what would have been the case
with many different situations.

When did you first fall in love with AI? Is it the programming side of Fortran? Is it maybe the sociology, psychology that you picked up from your dad?

When I was probably three years old, when I saw a robot on Star Trek. It was turning around in a circle going "error, error, error, error," because Spock and Kirk had tricked it into a mechanical breakdown by presenting it with a logical paradox. And I was just like, well, this makes no sense. This AI is very, very smart; it's been traveling all around the universe, but these people could trick it with a simple logical paradox. Like, if the human brain can get beyond that paradox, why can't this AI? So I felt the screenwriters of Star Trek had misunderstood the nature of intelligence, and I complained to my dad about it, and he wasn't going to say anything one way or the other. But, you know, before I was born, when my dad was at Antioch College in the middle of the US, he led a protest movement called SLAM, the Student League Against Mortality. They were protesting against death, wandering across the campus. So he was into some futuristic things even back then, but whether AI could confront logical paradoxes or not, he didn't know. But, you know, 10 years after that or something, I discovered Douglas Hofstadter's book Gödel, Escher, Bach, and that was sort of to the same point of AI and paradox and logic, right? Because he went over and over Gödel's incompleteness theorem, and can an AI really fully model itself reflexively, or does that lead you into some paradox? Can the human mind truly model itself reflexively, or does that lead you into some paradox? So, I think that book, Gödel, Escher, Bach, which I think I read when it first came out, I would have been 12 years old or something. I remember it was like a 16-hour day; I read it cover to cover, and then I reread it after that, because there were a lot of weird things with little formal systems in there that were hard for me at the time. But that was the first book I read that
gave me a feeling for AI as, like, a practical academic or engineering discipline that people were working in. Because before I read Gödel, Escher, Bach, I was into AI from the point of view of a science fiction fan, and I had the idea, well, it may be a long time before we can achieve immortality and superhuman AGI, so I should figure out how to build a spacecraft traveling close to the speed of light, go far away, then come back to the Earth in a million years, when technology is more advanced and we can build these things. Reading Gödel, Escher, Bach, well, it didn't all ring true to me, but a lot of it did, and I could see, like, there are smart people right now at various universities around me who are actually trying to work on building what I would now call AGI, although Hofstadter didn't call it that. So really, it was when I read that book, which would have been probably middle school, that I started to think, well, this is something that I could practically work on, as opposed to flying away and waiting it out.

You can actually be one of the people that actually...

Exactly. And if you think about it, I was interested in what we'd now call nanotechnology, and in human immortality and time travel, all the same cool things as every other science-fiction-loving kid. But AI seemed like, if Hofstadter was right, you just figure out the right program, sit there and type it. Like, you don't need to spin stars into weird configurations, or get government approval to cut people up and fiddle with their DNA or something, right? It's just programming, and then, of course, that can achieve anything else. There's another book from back then, which was by Gerald Feinberg, who was a physicist at Princeton, and that was The Prometheus Project. And this book was written in the late 1960s, though I encountered it in the mid-'70s.
But what this book said is, in the next few decades, humanity is going to create superhuman thinking machines, molecular nanotechnology, and human immortality, and then the challenge we'll have is what to do with it. Do we use it to expand human consciousness in a positive direction, or do we use it just to further vapid consumerism? And what he proposed was that the UN should do a survey on this. The UN should send people out to every little village in remotest Africa or South America and explain to everyone what technology was going to bring in the next few decades, and the choice that we had about how to use it, and let everyone on the whole planet vote about whether we should develop, you know, super AI, nanotechnology, and immortality for expanded consciousness, or for rampant consumerism. And needless to say, that didn't quite happen. And I think this guy died in the mid-'80s, so he didn't even see his ideas start to become more mainstream. But it's interesting, many of the themes I'm engaged with now, from AGI and immortality, even to trying to democratize technology, as I've been pushing for with SingularityNET and my work in the blockchain world, many of these themes were there in, you know, Feinberg's book in the late '60s even. And of course, Valentin Turchin, a Russian writer, and a great Russian physicist, who I got to know when we both lived in New York in the late '90s and early aughts, I mean, he had a book in the late '60s in Russia, which was The Phenomenon of Science, which laid out all these same things as well. And Turchin died in, I don't remember, 2004, '05 or something, of Parkinson's. So, yeah, it's easy for people to lose track now of the fact that the futurist and Singularitarian ideas are almost mainstream and they're on TV all the time. I mean, these are not that new, right? They're sort of new in the history of the human species, but, I mean, these were all around in fairly mature form in the middle of the last century, and were written about quite articulately by fairly mainstream people who were professors at top universities. It's just that until the enabling technologies got to a certain point, you
couldn't make it real. And even in the '70s, I was sort of seeing that and living through it, right? From Star Trek to Douglas Hofstadter, things were getting very practical from the late '60s to the late '70s. And, you know, the first computer I bought, you could only program with hexadecimal machine code, and you had to solder it together. And then, like, a few years later, there were punch cards, and a few years later you could get, like, an Atari 400 and a Commodore VIC-20, and you could type on the keyboard and program in higher-level languages alongside the assembly language. So these ideas have been building up a while, and I guess my generation got to feel them build up, which is different than people coming into the field now, for whom these things have just been part of the ambiance of culture for their whole career, or even their whole life.

Well, it's fascinating to think about, you know, there being all of these ideas kind of swimming, you know, almost with a noise, all around the world, all the different generations, and then some kind of nonlinear thing happens where they percolate up and capture the imagination of the mainstream. And that seems to be what's happening with AI now.

I mean, Nietzsche, who you mentioned, had the idea of the Superman, right? But he didn't understand enough about technology to think you could physically engineer a Superman by piecing together molecules in a certain way. He was a bit vague about how the Superman would appear, but he was quite deep at thinking about what the state of consciousness and the mode of cognition of a Superman would be. He was a very astute analyst of, you know, how the human mind constructs the illusion of a self, how it constructs the illusion of free will, how it constructs values like good and evil out of its own, you know, desire to maintain and advance its own organism. He understood a lot about how human minds work. Then he understood a lot about how posthuman minds would work.
I mean, this Superman was supposed to be a mind that would basically have complete root access to its own brain and consciousness, and be able to architect its own value system, and inspect and fine-tune all of its own biases. So that's a lot of powerful thinking there, which then fed into and sort of seeded all of postmodern Continental philosophy, and all sorts of things that have been very valuable in the development of culture, and indirectly even of technology. But of course, without the technology there, it was all some quite abstract thinking. So now we're at a time in history when a lot of these ideas can be made real, which is amazing and scary, right?

It's kind of interesting to think, what do you think Nietzsche would, if he was born a century later, or transported through time, what do you think he would say about AI?

I mean, well, those are quite different, if he's born a century later or transported through time. Well, he'd be, like, on TikTok and Instagram, and he would never write the great works he's written.

So let's transport him through time.

Maybe Thus Spoke Zarathustra would be a music video, right? I mean, who knows?

Yeah. But if he was transported through time, do you think... It'd be interesting, actually. You just made me realize that it's possible to go back and read Nietzsche with an eye of, is there some thinking about artificial beings? I'm sure he had inklings. I mean, with Frankenstein before him, I'm sure he had inklings of artificial beings somewhere in the text. It'd be interesting to try to read his work to see if the Superman was actually an AGI system, like, if he had inklings of that kind of thinking. Didn't he?
No, I would say not. I mean, he had a lot of inklings of modern cognitive science, which are very interesting. If you look in, like, the third part of the collection that's been titled The Will to Power, I mean, in book three, there's very deep analysis of thinking processes. But he wasn't so much of a physical-tinkerer-type guy, right? He was very abstract.

And what do you think about the will to power? What do you think drives humans? Is it...

Oh, an unholy mix of things. I don't think there's one pure, simple, and elegant objective function driving humans, by any means.

Well, do you think, if we look at, I know it's hard to look at humans in an aggregate, but do you think overall humans are good? Or do we have both good and evil within us that, depending on the circumstances, depending on whatever, can percolate to the top?

Good and evil are very ambiguous, complicated, and in some ways silly concepts, but we could dig into your question from a couple of directions. So I think if you look at evolution, humanity is shaped both by individual selection and what biologists would call group selection, like tribe-level selection, right? So individual selection has driven us in a selfish-DNA sort of way, so that each of us does, to a certain approximation, what will help us propagate our DNA to future generations. I mean, that's why I've got four kids so far,
and probably that's not the last one.

Yeah, I like the ambition.

On the other hand, tribal, like, group selection means humans, in a way, will do what will advocate for the persistence of the DNA of their whole tribe, or their social group. And in biology, you have both of these, right? Like, if you look at, say, an ant colony or a beehive, there's a lot of group selection in the evolution of those social animals. On the other hand, say, a big cat or some very solitary animal, it's a lot more biased toward individual selection. Humans are an interesting balance, and I think this reflects itself in what we would view as selfishness versus altruism, to some extent. So we just have both of those objective functions contributing to the makeup of our brains. And then, as Nietzsche analyzed in his own way, and others have analyzed in different ways, I mean, we abstract this as, well, we have both good and evil within us, right? Because a lot of what we view as evil is really just selfishness, and a lot of what we view as good is altruism, which means doing what's good for the tribe. And on that level, we have both of those just baked into us, and that's how it is. Of course, there are psychopaths and sociopaths and people who, you know, get gratified by the suffering of others, and that's a different thing.

Yeah, those are exceptions.

But on the whole, I think at core we're not purely selfish, we're not purely altruistic. We are a mix, and that's the nature of it. And we also have a complex constellation of values that are just very specific to our evolutionary history. Like, you know, we love waterways and mountains, and the ideal place to put a house is on a mountain overlooking the water, right? And, you know, we care a lot about our kids, and we care a little less about our cousins, and even less about our fifth cousins. I mean, there are many particularities to human values, which, whether they're good or evil, depends on your
perspective really see I I I spent a lot
of time in Ethiopia in Adis Ababa where
we have one of our AI development
offices for my Singularity net project
and when I walk through the streets in
Otis you know there's so there's people
Lying by the side of the road like just
living there by the side of the road
dying probably of curable diseases
without enough food or medicine and when
I walk by them you know I feel terrible
I give them money when I come back home
to the developed
world they're not on my mind that much I
I do donate some but I mean I I also
spend some of the limited money I have
enjoying myself in frivolous ways rather
than donating it to those people who are
right now like starving dying and and
suffering on on the roads side so does
that make me evil I mean it makes me
somewhat selfish and somewhat altruistic
and we each we each balance that in in
in our own way right so
whether that will be
true of all possible
AGIs is a subtler
question. So that's how humans
are. So you have a sense — you kind of
mentioned that there's a selfish... I'm not
going to bring up the whole Ayn Rand idea
of selfishness being the core virtue;
that's a whole interesting kind of
tangent that I think will just distract
us. I have to make one amusing
comment — a comment that has amused me,
anyway. So, yeah, I have
extraordinarily negative respect for
Ayn Rand. Negative? What's negative
respect? But when I worked with a company
called
Genescient, which was evolving flies to have
extraordinarily long lives, in
Southern California — so we had flies
that were evolved by artificial
selection to have five times the lifespan
of normal fruit flies, but the population
of super-long-lived flies was physically
sitting in a spare room at an Ayn Rand
elementary school in Southern California.
So that was just like, wow — if I saw
this in a movie, I wouldn't believe it,
right? Well, yeah, the universe has a sense
of humor in that kind of way; humor fits
in somehow into this
whole absurd existence. But you
mentioned the balance between
selfishness and altruism as kind of
being innate. Do you think it's possible
that's kind of an
emergent phenomenon — those
peculiarities of our value system? How
much of it is innate, how much of it is
something we collectively, kind of like a
Dostoevsky novel, bring to life together as a
civilization? I mean, the answer to
nature versus nurture is usually both —
and of course it's nature versus nurture
versus
self-organization, as you mentioned. So,
clearly there are evolutionary roots to
individual and group selection leading
to a mix of selfishness and altruism. On
the other hand, different cultures
manifest that in different ways,
while we all have basically the same
biology. If you look
at, say, pre-civilized cultures —
the Yanomami in Venezuela, whose
culture is focused on
killing other tribes, while you
have other Stone Age tribes that are
mostly peaceable and have big taboos
against violence — you can
certainly have a big difference in
how culture manifests these innate
biological characteristics. But still, you
know, there are probably limits that are
given by biology. I used to argue this
with my great-grandparents, who were
Marxists, actually, because they
believed in the withering away of the
state. They believed that, you
know, as you move from capitalism to
socialism to communism, people would just
become more social-minded, so that a
state would be unnecessary, and
everyone would just give
everyone else what they needed. Now,
setting aside that that's not what the
various Marxist experiments on the
planet seem to be
heading toward in practice — just as
a theoretical point, I was very dubious
that human nature could go there.
At that time, when my
great-grandparents were alive, I was just,
you know, a cynical teenager: I
thought humans are just jerks, the
state is not going to wither away, and if
you don't have some structure keeping
people from screwing each other over,
they're going to do it. Now I
actually don't quite see things that way.
I mean, I think
my feeling now, subjectively, is that the
culture aspect is more significant than
I thought it was when I was a
teenager, and I think you could have a
human society that was dialed
dramatically further toward, you know,
self-awareness, other-awareness,
compassion, and sharing than our current
society. And of course, greater material
abundance helps — but to some extent
material abundance is a subjective
perception also, because many Stone Age
cultures perceived themselves as living
in great material abundance: they
had all the food and water they wanted,
they lived in a beautiful place, they
had sex lives, they had
children. I mean, they had
abundance without any factories, right? So
I think humanity probably would be
capable of a fundamentally more positive
and joy-filled mode of social
existence than what we have now.
Clearly, Marx didn't quite have the right
idea about how to get there —
I mean, he missed a number
of key aspects of human society
and its evolution — and if we look at
where we are in society
now, how to get there is a
quite different question, because there are
very powerful forces pushing people in
different directions than a positive,
joyous,
compassionate existence, right? So, if we
were to try to... you know, Elon Musk
dreams of colonizing Mars at the
moment, so maybe he'll have a chance
to start a new civilization with a
new governmental system. And certainly
there's quite a bit of chaos — we're
sitting here now, I don't know what the date
is, but this is June, and there's quite a
bit of chaos, in all different forms,
going on in the United States and all
over the world. So there's a hunger for
new types of governments, new types of
leadership, new types of systems. What
are the forces at play,
and how do we move forward? Yeah — I mean,
colonizing Mars, first of all, is a
super cool thing to do; we should
be doing it. So you love
the idea? Yeah. I mean, it's more important
than making
chocolatier chocolates and sexier
lingerie and many of the things that
we spend a lot more resources on as
a species, right? So we certainly
should do it. But I think that
the possible futures in which a Mars
colony makes a critical difference for
humanity are very few. I mean,
assuming we make
a Mars colony and people go live there
in a couple of decades, their
supplies are going to come from Earth,
the money to make the colony came from
Earth, and whatever powers are supplying
the goods there from Earth
are going to, in effect, be in control
of that Mars colony. Of course,
there are outlier situations where, you
know, Earth gets nuked into oblivion and
somehow Mars has been made self-
sustaining by that point, and then
Mars is what allows humanity to persist —
but I think those are very, very,
very unlikely. You don't think it could
be a first step on a long journey? Of
course it's a first step on a long
journey, which is awesome.
I'm guessing the colonization of the
rest of the physical universe will
probably be done
by AGIs that are better designed to live
in space than the meat machines
that we are. But who
knows — we may cryopreserve ourselves in
some superior way to what we know now
and, like, shoot ourselves out to Alpha
Centauri and beyond. I mean, that's all cool;
it's very interesting, and it's much more
valuable than most things humanity is
spending its resources on. On the other
hand, with AGI we can get to a
singularity before the Mars colony
becomes self-sustaining — for sure, possibly
before it's even operational. So your
intuition is that's the problem: if
we really invest resources, we can
get to AGI faster than a legitimate, fully
self-sustaining colonization of
Mars? Yeah, and it's very clear to me that
we will, because there's so much
economic value in getting from narrow AI
toward AGI, whereas with the Mars
colony there's less economic value until
you get quite far out into
the future. So I think it's very
interesting; I just think it's
somewhat off to the side. I mean,
just as I think, say, art and
music are very interesting, and
I want to see resources go into amazing
art and music being created —
I'd rather see that than a lot of the
garbage that society spends its money
on — on the other hand, I don't think Mars
colonization, or inventing amazing new
genres of music, is one of the
things most likely to make a
critical difference in the evolution of
human or
non-human life in this part of
the universe over the next decades.
You think AGI — really, AI — is by far
the most important thing that's on the
horizon?
And then
technologies that have a direct ability to
enable AGI, or to accelerate AGI, are also
very important. For example, say,
quantum computing: I don't think that's
critical to achieve AGI, but certainly
you could see how the right quantum
computing architecture could massively
accelerate AGI — similarly, other types
of nanotechnology. Right now, the quest
to cure aging and end disease, while not,
in the big picture, as important as
AGI — of course, it's important to all
of us as individual humans — and if
someone made a super-longevity pill and
distributed it tomorrow, I mean, that
would be huge, and a much larger impact
than a Mars colony is going to
have for quite some time. But perhaps not
as much as an AGI system? No — because if
you can make a benevolent AGI,
then all the other problems are solved.
I mean, once the AGI
is as generally intelligent as humans,
it can rapidly become massively more
generally intelligent than humans,
and then that AGI should be able to
solve science and engineering problems
much better than human
beings — as long as it is, in fact,
motivated to do so. That's why I said a
benevolent AGI; there could be other
kinds. Maybe it's good to step back a
little bit. I mean, we've been using that
term,
AGI. People often cite you as the creator,
or at least the popularizer, of the term
AGI — artificial general intelligence. Can
you tell the origin story of the term?
Sure. So, yeah, I would say I launched
the term AGI upon the world, for
what it's worth, without ever fully
being in love with the term.
What happened is, I was editing a book —
this process started around 2001 or
2002; I think the book finally came out in 2005 —
a book which I
provisionally was titling Real
AI, and the goal was to gather
together fairly serious academic papers
on the topic of making thinking
machines that could really think, in the
sense that people can, or even more
broadly than people can. So then I
was reaching out to other folks I
had encountered here or there who were
interested in that, which
included some other folks
who I knew from the transhumanist
and singularitarian communities — including, I believe, Shane Legg.
He may have just
started doing his PhD with Marcus
Hutter, who at that time hadn't yet
published his book Universal AI, which
sort of gives a mathematical foundation
for artificial general intelligence. So I
reached out to Shane and Marcus, and to
Peter Voss, and to Pei Wang, who was
another former employee of mine who had
been Douglas Hofstadter's PhD student and
had his own approach to AGI, and a bunch
of Russian
folks. I reached out to these guys, and they
contributed papers for the book. That
was my provisional title, but I never
loved it, because in the end, you know, I
was doing some of what we would now call
narrow AI as well — like applying machine
learning to genomics data, or chat data
for sentiment analysis — and that
work is real, and in a sense
it's really AI; it's just a
different kind of AI. Ray Kurzweil
wrote about narrow AI versus strong
AI, but that seemed weird to me because,
first of all, narrow and strong are not
antonyms. That's right. But secondly,
strong AI was used in the cognitive
science literature to mean the
hypothesis that digital computer AIs
could have true consciousness, like
human beings — so there was already a
meaning to strong AI, which was complexly
different but related, right? So
we were tossing around, on an email
list, what the title should
be, and we talked about narrow AI,
broad AI, wide AI, general AI —
and I think it was either Shane Legg
or Peter Voss, on the private email
discussion we had, who said, why don't
we go with AGI — artificial general
intelligence? And Pei Wang wanted to do
GAI, general artificial intelligence, because
in Chinese it goes in that order, right?
But we figured "gai" wouldn't work in
US culture at that time. So
we went with AGI; we used it
for the title of that book. And
part of Peter and Shane's reasoning was,
you have the g factor in psychology,
which is IQ, general intelligence, right?
So you have a meaning of GI — general
intelligence — in psychology, so then
you're looking at, like, artificial GI.
Oh, that makes a lot of sense. We used
that for the title
of the book. And so I think maybe both
Shane and Peter think they invented the
term — but then, later, after the book
was published, this guy Mark Gubrud came up
to me, and he's like, well, I published an
essay with the term AGI in it in, like,
1997 or something. And so I'm just
waiting for some Russian to come out and
say they published that in 1953, right? I
mean, that term is not
dramatically
innovative or anything; it's one of these
obvious-in-hindsight things — which is
also annoying, in a way,
because, you know, Joscha Bach, who you
interviewed, is a close friend of mine; he
likes the term synthetic intelligence,
which I like much better, but it
hasn't actually caught on, right?
Because, I mean,
"artificial" is a bit off to me, because an
artifice is like a tool or something, but
not all AGIs are going to be tools — I
mean, they may be now, but we're aiming
toward making them agents rather
than tools. And in a way, I don't like the
distinction between artificial and
natural, because we're part
of nature also, and machines
are part of nature. I mean, you can
look at evolved versus engineered, but
that's a
different distinction. Then it should be
engineered general intelligence, right?
And then "general" — well, if you look at
Marcus Hutter's book Universal AI, what he
argues there is, you know, within the
domain of computation theory — which is
limited but interesting — if you assume
computable environments and
computable reward functions, then he articulates
what would be a truly general
intelligence: a system called AIXI, which
is quite beautiful. And that's
the middle name of my latest child,
actually. Is it? What's the first name?
The first name is Qorxi — Q-O-R-X-I — which my
wife came up with; that's an
acronym for quantum organized
expanding intelligence. And his middle
name is Aixi, actually,
after the formal
principle underlying AIXI. But in any
case, you're giving Elon Musk
a run for his money. I did it first; he
copied me with this new freakish
name. But now, if I have another baby, I'm
going to have to outdo him. It's become an
arms race of weird, geeky baby names. Yeah,
we'll see what the babies think about it,
right? But I mean, my oldest son,
Zarathustra, loves his name, and my
daughter, Scheherazade, loves her name — so
far, basically, if you give your kids
weird names, they live up to it. Well,
you're obliged to make the kids weird
enough that they like the names, right? It
directs their upbringing in a certain
way. But yeah, anyway — what Marcus
showed in that book is that a truly
general intelligence theoretically is
possible, but would take infinite
computing power. So then the "artificial"
is a little off, and the "general" is not
really achievable within physics
as we know it — and physics as we
know it may be limited, but that's what
we have to work with. Now, "intelligence"...
Infinitely general, you mean, like, from an
information-processing perspective? Yeah.
information processing perspective yeah
yeah in intelligence is not very well-
defined either right I mean what what
what what does it mean I mean in AI now
it's fashionable to look at it as
maximizing and expected reward over the
future but that's that sort of
definition is path olical in various
ways and my my friend David wein bomb
AKA Weaver he had a beautiful PhD thesis
on open-ended intelligence trying to
conceive intelligence in a without a
reward without yeah he's just looking at
it differently he's looking at complex
self-organizing systems and looking at
an intelligent system as being one that
you know revises and grows and improves
itself in conjunction with with its with
its environment without necessarily
there being one objective function it's
trying to maximize although over certain
intervals of time it may act as if it's
optimizing a certain objective function
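The "maximizing expected reward" framing being contrasted here is, in standard reinforcement learning notation (the symbols are the usual textbook ones, not from the conversation), the discounted-return objective:

```latex
% A policy \pi is scored by its expected discounted return from state s;
% "intelligence," under this framing, reduces to finding \pi^*.
V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t} r_t \;\middle|\; s_0 = s \right],
\qquad
\pi^{*} \;=\; \arg\max_{\pi} V^{\pi}
```

Weaver's open-ended-intelligence view drops the assumption that any single such objective characterizes the system across all timescales.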
Very much Solaris, from Stanisław Lem's novels,
right? So, yeah, the point is, "artificial,"
"general," and "intelligence" don't work;
they're all bad. On the other hand,
everyone knows what AI is, and
AGI seems immediately comprehensible to
people with a technical background,
so I think the term has served a
sociological function — and now
it's out there everywhere, which
baffles me. It's like KFC. I mean,
that's it: we're stuck with
AGI, probably for a very long time — until
AGI systems take over and rename
themselves. Yeah. And, I mean,
we're stuck with GPUs too, which
mostly have nothing to do with graphics
anymore, right? I wonder what the AGI
systems will call us humans. Maybe
Grandpa?
GPs, yeah — Grandpa Processing Units.
Biological grandpa processing units.
Okay, so, maybe also just a comment
on AGI representing — before even the term
existed — a kind of community.
Now, you've talked about this in the past:
sort of, AI has come in waves, but there's
always been this community of
people who dream about
creating general human-level super-
intelligent
systems. Can you maybe give your sense
of the history of this community, as it
exists today, as it existed before this
deep learning revolution, all
throughout the winters and the summers
of AI? Sure. First, I would say, as a
side point, the winters and summers of AI
are greatly exaggerated by Americans.
Yeah. If you look at the
publication record of the artificial
intelligence community since, say, the
1950s, you would find pretty steady
growth and advance of ideas and
papers, and what's thought of as an AI
winter or summer was sort of, how much
money is the US military pumping into AI —
which was meaningful. On the other
hand, there was AI going on in Germany, the UK,
Japan, and Russia, all over the
place, while the US military got more and
less enthused about AI. So what —
I mean, that happened to be — just for people
who don't know, the US military happened
to be the main source of funding for AI
research. So another way to phrase that
is, it's the up and down of funding for
artificial intelligence research. True,
and I would say the correlation between
funding and intellectual advance was not
100%, right? Because, in Russia,
as an example, or in Germany, there was
less dollar funding than in the US, but
many foundational ideas were laid
out — though it was more theory than
implementation, right? And the US really
excelled at sort of breaking through
from theoretical papers to working
implementations, which did go up
and down somewhat with US military
funding. But still, I mean, you can look in
the 1980s: Ernst Dickmanns in Germany had
self-driving cars on the Autobahn, right?
And, I mean, it was a little early
with regard to the car industry, so it
didn't catch on such as has happened
now — but that whole advancement of
self-driving car technology in Germany
was pretty much independent of AI
military summers and winters in the
US. So there's been more going on
in AI globally than not only most people
on the planet realize, but than most new
AI PhDs realize, because they've come up
within a certain subfield of AI
and haven't had to look much
beyond that. But I would say,
when I got my PhD, in 1989, in
mathematics, I was interested in AI
already. In Philadelphia? Yeah — I started
at NYU, then I transferred to
Philadelphia, to Temple University. Good
old North Philly. North Philly, yeah —
the pearl of the US, right?
You never stopped at a red light
then, because you were afraid, if you
stopped at the red light, somebody would
carjack you — so you'd drive through every red light.
Yeah; every day, driving or
bicycling to Temple from my house
was like a new adventure, right? But
yeah — the reason I didn't do a PhD
in AI was, what people were doing in the
academic AI field then was just
astoundingly boring and seemed
wrong-headed to me. It was really
rule-based expert systems and production
systems, and I actually loved
mathematical logic — I had nothing against
logic as the cognitive engine for an AI —
but the idea that you could type in the
knowledge that the AI would need to think
seemed just completely stupid and
wrong-headed to me. I mean, you
can use logic if you want, but somehow
the system has got to do automated
learning, right? It should be learning
from experience, and the AI field then
was not interested in learning from
experience. I mean, some researchers
certainly were — I remember, in the
certainly were I mean I I remember in
mid 80s I discovered a book by John
Andreas which was it was about uh
reinforcement learning system called
perus p rr- p USS which was an acronym
that I can't even remember what it was
for but purpose anyway but he I mean
that was a system that was supposed to
be an AGI and basically by some sort of
fancy like Markoff decision process
learning it was supposed to learn
everything just from the bits coming
into it and learn to maximize its reward
and become become intelligent right so
that was there in Academia back then but
it was like isolated scattered weird
people but all these scattered weird
people in that period I mean they they
laid the intellectual grounds for what
happened later so you look at John
Andreas at University of Canterbury with
his purpose reinforcement learning marov
system he was the PG supervisor for John
clar in in in New Zealand now John clear
worked with me when I was at wado
University in in 1993 in in in New
Zealand and he worked with Ian Whitten
there and they launched WCA which was
the first open- Source machine learning
toolkit which was launched in I guess
'93 or '94 when I was at wada University
written in Java unfortunately written in
Java which was a cool language back then
though, right? I guess it's still — well,
it's not cool anymore, but it's powerful.
Like most programmers now, I find
Java
unnecessarily bloated, but back then it
was Java or C++, basically, and Java
was object-oriented, so Java was easier for
students. Yeah. Amusingly, a lot of the
work on WEKA, when we were in New Zealand,
was funded by a — sorry, a New Zealand —
government grant to use machine learning
to predict the menstrual cycles of cows.
So, in the US, all the grant funding for
AI was about how to kill
people or spy on people; in New Zealand,
it's all about cows or kiwi fruit, right?
Yeah. So, anyway — John
Andreae had his probability-theory-based
reinforcement learning proto-AGI; John
Cleary was trying to do much more
ambitious probabilistic AGI systems. Now, John
Cleary helped do WEKA, which was the first
open-source machine learning toolkit —
so, the predecessor of TensorFlow
and Torch and all these things. Also,
Shane Legg was at Waikato, working
with John Cleary and
Ian Witten and this whole group, and
then working with my own company —
Webmind, an AI company I had in
the late '90s — with a team there at Waikato
University, which is how Shane got his
head full of AGI, which led him to go
on and, with Demis Hassabis, found DeepMind. So
what you can see through that lineage is,
you know, in the '70s and '80s, John Andreae
was trying to build probabilistic
reinforcement-learning AGI systems. The technology —
the computers — just weren't there to
support it; his ideas were very
similar to what people are doing now. But,
you know, although he's
long since passed away and didn't become
that famous outside of Canterbury,
the lineage of ideas passed on from him
to his students, to their students; you
can trace it directly from there to me
and to DeepMind, right? So, there
was a lot going on in AGI that
did ultimately lay the groundwork for
what we have today, but there
wasn't a community, right? And so, when I
started trying to pull together an AGI
community, it was in, I guess, the
early aughts, when I was living in
Washington, DC, and making a living doing
AI consulting for various US
government
agencies. I organized the first
AGI workshop in
2006 — and, I mean, it wasn't like
it was literally in my basement or
something; it was in the
conference room at the Marriott in
Bethesda. Not that edgy
or underground, unfortunately. But still —
how many people attended? About 60 or
something. That's not bad. I mean, DC has a
lot of AI going on — probably, until the
last five or ten years, much more than
Silicon Valley — although it's just quiet,
because of the nature of what
happens in DC: their business
isn't driven by PR. Mostly, when something
starts to work really well, it's taken
"black" and becomes even more quiet.
Right, yeah. But the thing is, that
really had the feeling of a group of
starry-eyed mavericks,
like, huddled in a basement, plotting
how to overthrow the narrow AI
establishment — and, you know, for the first
time, in some cases, coming together with
others who shared their passion for AGI
and their technical seriousness about
working on it. And
that's very different from
what we have today. I mean, now it's a
little bit different: we have an AGI
conference every year, and there are
several hundred people, rather than
50. Now it's more like, this is the
main gathering of people who want to
achieve AGI and think that large-
scale nonlinear regression is
not the golden path to AGI. So, I mean —
it's AKA neural networks? Yeah, yeah —
well, certain architectures
for learning using neural networks. So,
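To make the "large-scale nonlinear regression" characterization concrete, here is a minimal sketch — the toy data and the 1-16-1 architecture are illustrative choices, not anyone's actual system — showing that training a small neural network by gradient descent is, mechanically, fitting a parameterized nonlinear function to data:

```python
# Fit y = sin(x) with a tiny tanh network: nonlinear regression by
# gradient descent on mean squared error. All names and sizes here are
# illustrative, not from any real framework.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X)                        # the nonlinear target to regress onto

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)    # output layer

lr = 0.1
for step in range(3000):
    H = np.tanh(X @ W1 + b1)         # hidden activations
    pred = H @ W2 + b2               # network output
    err = pred - y
    loss = float(np.mean(err ** 2))  # a plain regression loss (MSE)
    # backpropagation is just the chain rule for the regression gradient
    dpred = 2 * err / len(X)
    dW2 = H.T @ dpred; db2 = dpred.sum(0)
    dH = (dpred @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH; db1 = dH.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g                  # gradient descent step, in place

print("final MSE:", round(loss, 4))
```

Whether such curve-fitting scales to general intelligence is exactly the point of disagreement being described here.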
yeah, the AGI conferences are sort of now
the main concentration of people not
obsessed with deep neural nets and deep
reinforcement learning, but still
interested in AGI. Not
the only ones — there are
other little conferences and
groupings interested in human-level
AI and cognitive
architectures and so forth — but it's
been a big shift. Like, back
then, it would be
very, very edgy to give a university
department seminar
that mentioned AGI or human-level AI; it
was more like, you had to talk about
something more short-term and
immediately practical, and then, you know, in
the bar after the seminar, you could talk
about AGI in the same breath as
time travel or the simulation
hypothesis or something. Whereas
now, AGI is not only in the academic
seminar room — like, you have Vladimir
Putin knowing what AGI is, and he's like,
Russia needs to become the leader in AGI,
right? So,
national leaders
and CEOs of large corporations. I mean,
the CTO of Intel, Justin Rattner — this was
years ago, at the Singularity Summit conference,
2008 or something — he's like, we believe
Ray Kurzweil: the singularity will happen in
2045, and it will have Intel inside. So
it's gone from being
something which is the pursuit of, like,
crazed mavericks, crackpots, and
science fiction fanatics, to being,
you know, a marketing term for large
corporations and national
leaders — which is
an astounding transition. But in the
course of this transition, I think
a bunch of sub-communities have formed,
and the community around the AGI
conference
series is certainly one of them. It
hasn't grown as big as I might have
liked it to; on the other hand, you
know, sometimes a modest-sized community
can be better for making intellectual
progress. Like, you go to a Society
for Neuroscience conference, you have 35
or 40,000 neuroscientists: on the one
hand, it's amazing; on the other hand,
you're not going to talk to the leaders
of the field there if
you're an outsider. Yeah — in the same
sense, the AAAI, the
main kind of generic artificial intelligence
conference, is too big, too
amorphous; it doesn't make it. And
NeurIPS has become a company
advertising outlet now, right? So —
yeah, to comment on the
role of AGI in the research community: I'd
still say, if you look at NeurIPS, if you look
at CVPR, if you look at
ICLR, you know, AGI is still seen as
the outcast, I would say, in
these main machine learning, these
main artificial intelligence
conferences; amongst the researchers, I
don't know if it's an accepted term yet.
What I've seen, bravely — you mentioned
Shane Legg — is that DeepMind, and then OpenAI,
are the two places that are, I would say,
unapologetic about it. So far — I think it's
actually changing, unfortunately — but so
far, they've been pushing the idea that
the goal is to create an AGI. Well, they
have billions of dollars behind them, so,
in the public mind,
that certainly carries some oomph,
right? But they also have
really strong researchers, right? They
do — they're great teams, DeepMind
in particular. Yeah — and, I mean,
DeepMind has Marcus Hutter walking
around; there are all these folks
whose full-time position basically
involves dreaming about creating AGI.
Yeah — I mean, Google Brain has a lot of
amazing AGI-oriented people also. So I'd say, from
a public marketing view, DeepMind and
OpenAI are the two large, well-funded
organizations that have put the term and
concept of AGI out there, sort of as part of
their public image. But there
are certainly other
groups doing research that
seems just as AGI-ish to me —
including a bunch of groups in
Google's main Mountain View office.
So, yeah, it's true: AGI is
somewhat away from the mainstream now.
But if you compare it to where it was,
you know, 15 years ago, there's been an amazing
mainstreaming. You could say the same
thing about super-longevity research,
which is one of my application areas
that I'm excited about. I mean, I've been
talking about this since the '90s, and
working on it since 2001, and back then,
really, to say you're trying to create
therapies to allow people to live
hundreds or thousands of years, you
were way, way out of the
industry and academic mainstream. But now, you
know, Google had Project Calico, Craig
Venter had Human Longevity, Incorporated —
and once the suits come marching in,
right — once there's big money
in it — then people are forced to take it
seriously, because that's the way
modern society works. So, it's still
not as mainstream as cancer research,
just as AGI is not as mainstream as
automated driving or something, but the
degree of mainstreaming that's happened
in the last ten to 15 years
is astounding to those of us who've
been at it for a while. Yeah — but there's
a marketing aspect to the term; but, in
terms of actual full-force research
that's going on under the header of AGI,
it's currently, I would say — maybe
you can disagree — dominated by
neural network research, the
nonlinear regression, as you mentioned.
Like, what's your sense, with
OpenCog, with your work in
general? Logic-based systems and
expert systems, for me, always seemed
to capture a deep element of
intelligence that needs to be there —
like you said, it needs to learn, it needs to
be automated somehow, but that seems to
be missing from a lot of the
research currently. So, what's your
sense — I guess one way to ask this
question — what's your sense of what kinds
of things will an AGI system need to
have yeah that that's a very interesting
Yeah, that's a very interesting topic that I've thought about for a long time, and I think there are many different approaches that can work for getting to human-level AI. I don't think there's one golden algorithm, one golden design that can work. Flying machines are the much-used analogy here. You have airplanes, you have helicopters, you have balloons, you have stealth bombers that don't look like regular airplanes, you've got blimps...

Birds, too.

Birds, yeah, and bugs. There are certainly many kinds of flying machines. And there's the catapult, where you can just launch things...

There are bicycle-powered flying machines.

Nice. Yeah. So now, these are all analyzable by a basic theory of aerodynamics.
One issue with AGI is that we don't yet have the analog of the theory of aerodynamics, and that's what Marcus Hutter was trying to build with AIXI and his general theory of general intelligence. But that theory, in its most clearly articulated parts, really only works for either infinitely powerful machines or insanely, impractically powerful machines. So if you were going to take a theory-based approach to AGI, what you would do is say: well, let's take what's called AIXItl, which is Hutter's AIXI machine that can work on merely insanely much processing power rather than infinitely much.

What does 'tl' stand for?

Time and length.

Okay, so it's constrained somehow.

Yeah. How AIXI works, basically, is that for each action it wants to take, before taking that action, it looks at all its history, and then it looks at all possible programs that it could use to make a decision, and it decides which decision program would have let it make the best decisions, according to its reward function, over its history; and it uses that decision program to make the next decision. It's not afraid of infinite resources; it's searching through the space of all possible computer programs in between each action and the next action. Now, AIXItl searches through all possible computer programs that have runtime less than t and length less than l, which is still an impracticably humongous space.
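The brute-force shape of that idea fits in a few lines. The sketch below is my own toy rendering, not Hutter's actual formalism: candidate "programs" are reduced to bounded-length lookup-table policies, and the observation/reward scheme is an arbitrary illustrative choice.

```python
import itertools

# Toy sketch of the AIXItl search pattern (hypothetical, not Hutter's
# formalism): before each action, enumerate every candidate decision
# program up to a length bound, score each by the reward it *would* have
# earned replaying the recorded history, and act using the best one.

ACTIONS = [0, 1]

def enumerate_policies(history_len, max_length):
    """Yield lookup-table policies (tuples) of bounded description length,
    standing in for an enumeration of real program code."""
    for table in itertools.product(ACTIONS, repeat=min(history_len + 1, max_length)):
        yield table

def hindsight_reward(policy, history):
    """Reward the policy would have earned over the recorded history."""
    total = 0.0
    for t, (obs, reward_if_match) in enumerate(history):
        action = policy[t % len(policy)]
        if action == obs:          # toy reward: acting to match the observation
            total += reward_if_match
    return total

def next_action(history, max_length=4):
    # Exhaustive search between actions -- the part that makes the real
    # thing impracticably expensive, and tolerable only at toy scale here.
    best = max(enumerate_policies(len(history), max_length),
               key=lambda p: hindsight_reward(p, history))
    return best[len(history) % len(best)]

history = [(1, 1.0), (0, 1.0), (1, 1.0)]
print(next_action(history))
```

Even with two actions and length-four tables the search space doubles per unit of allowed length, which is the point: the bound tl makes the search finite, not feasible.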
So what you would like to do to make an AGI, and what will probably be done 50 years from now to make an AGI, is to say: okay, well, we have some constraints. We have these processing-power constraints, we have space and time constraints on the program, we have energy-utilization constraints, and we have this particular class of environments that we care about, which may be, say, manipulating physical objects on the surface of the Earth, communicating in human language, whatever our particular requirements happen to be.

Not annihilating humanity.

Whatever our particular requirements happen to be. If you formalize those requirements in some formal specification language, you should then be able to run an automated program specializer on AIXItl, specialize it to the computing-resource constraints and the particular environment and goals, and then it will spit out the specialized version of AIXItl for your resource restrictions and your environment, which will be your AGI. And that, I think, is how our super-AGIs will create new AGI systems.

Seems really inefficient.

That's a very Russian approach, by the way; the whole field of program specialization came out of Russia.

Can you backtrack: what is program specialization?

Basically, take sorting, for example. You can have a generic program for sorting lists, but what if all the lists you care about are of length 10,000 or less?

Got it.

You can run an automated program specializer on your sorting algorithm, and it will come up with the algorithm that's optimal for sorting lists of length 10,000 or less.
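A toy illustration of the idea, assuming the length bound is known in advance (the length-3 bound and function names are mine, and a real specializer would derive the specialized code automatically rather than have it hand-written):

```python
# Toy illustration of program specialization: a generic sort versus a
# version specialized to the known constraint len(xs) <= 3, unrolled into
# a fixed compare/swap network with no loops at all. Assumes finite inputs.

def generic_sort(xs):
    return sorted(xs)              # general-purpose, works for any length

def sort_upto3(xs):
    """Specialized sort: assumes len(xs) <= 3."""
    xs = list(xs) + [float("inf")] * (3 - len(xs))   # pad to length 3
    a, b, c = xs
    # Unrolled bubble sort for exactly three elements:
    if a > b: a, b = b, a
    if b > c: b, c = c, b
    if a > b: a, b = b, a
    return [v for v in (a, b, c) if v != float("inf")]

print(sort_upto3([3, 1, 2]))   # [1, 2, 3]
print(sort_upto3([2, 1]))      # [1, 2]
```

The specialized version trades generality for a branch-predictable, loop-free body; feed it a list of length four and it silently drops an element, which is exactly the bargain specialization makes.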
Isn't that kind of the process of evolution? It's a program specializer to the environment, so you're kind of evolving human beings, or living systems.

Exactly. Your Russian heritage is showing there. Without Alexander [inaudible] and Pyotr Anokhin and so on, there's a long history of thinking about evolution that way. My point is that, for what we're thinking of as human-level general intelligence: if you start from narrow AIs, like the ones being used in the commercial AI field now, then you're thinking, okay, how do we make them more and more general? On the other hand, if you start from AIXI or Schmidhuber's Gödel machine, these infinitely powerful but practically infeasible AIs, then getting to a human-level AGI is a matter of specialization. It's: how do you take these maximally general learning processes and specialize them so that they can operate within the resource constraints that you have, but will achieve the particular things that you care about? Because we humans are not maximally general intelligences. If I ask you to run a maze in 750 dimensions, you'll probably be very slow, whereas in two dimensions you're probably way better. We're specialized, because our hippocampus has a two-dimensional map in it; it does not have a 750-dimensional map in it. So we are a peculiar mix of generality and specialization.
specialization right we'll probably
start quite General at Birth uh not
obviously still narrow but like more
General than we are at age uh 20 and 30
and 40 and 50 and 60 I don't think that
I I I think it's more complex than that
because I mean the young in in some
sense a young
child is less biased and the brain has
yet to sort of crystallize into
appropriate structures for processing
aspects of the physical and social world
on on on the other hand the young child
is very tied to their sensorium whereas
we can we can deal with abstract
mathematics like 750 dimensions and the
young child cannot because they they
haven't they haven't grown what pette
called the the formal capabilities they
they haven't learned to abstract yet
right and and the ability to abstract
gives you a different kind of generality
than than what than what a baby has so
there there's both more specialization
and more generalization that comes with
with the development process actually I
mean I guess just the the trajectories
of the specialization are most
controllable at the young age I guess is
uh as one way to put it do you have do
you have kids no they're not as
controllable as you think so you think
it's uh interesting I I I I I think
honestly I think a human adult is much
more generally intelligent that than a
human baby babies are very stupid
I mean I mean I mean they're cute
they're cute which is which is why we
put up with their repetitiveness and and
stupidity but and they have what the Zen
guys would call a a beginner's mind
which is a beautiful thing but that
doesn't necessarily correlate with with
a high level of of in of Intelligence on
the plot of like cuteness and stupidity
there there's a the there's a process
that allows us to put up with their
stupidity they get become more by the
time you're an ugly old man like me you
got to get really really smart to
compensate
Okay, cool. But going back to your original question: the way I look at human-level AGI is, how do you specialize unrealistically inefficient, superhuman, brute-force learning processes to the specific goals that humans need to achieve and the specific resources that we have? Both of these, the goals and the resources and the environments, all of this is important. And on the resources side, it's important that the hardware resources we're bringing to bear are very different from the human brain. So the way I would want to implement AGI on, say, a bunch of neurons in a vat that I could rewire arbitrarily is quite different from the way I would want to create AGI on a modern server farm of CPUs and GPUs, which in turn may be quite different from the way I would want to implement AGI on whatever quantum computer we'll have in ten years, supposing someone makes a robust quantum Turing machine or something. So I think there's been co-evolution of the patterns of organization in the human brain and the physiological particulars of the human brain over time. And when you look at neural networks, that is one powerful class of learning algorithms, but it's also a class of learning algorithms that evolved to exploit the particulars of the human brain as a computational substrate. If you're looking at the computational substrate of a modern server farm, you won't necessarily want the same algorithms that you'd want on the human brain. From the right level of abstraction, you could look at maybe the best algorithms on a brain and the best algorithms on a modern computer network as implementing the same abstract learning and representation processes, but finding that level of abstraction is its own AI research project.
So that's about the hardware side, and the software side that follows from it. Then, regarding what the requirements are: I wrote a paper years ago on what I called the embodied communication prior, which was quite similar in intent to Yoshua Bengio's recent paper on the consciousness prior, except I didn't want to wrap consciousness up in it. To me, the qualia problem and subjective experience is a very interesting issue in itself, which we can chat about, but I would rather keep that philosophical debate distinct from the debate about what kind of biases you want to put into a general intelligence to give it human-like general intelligence, and I'm not sure Yoshua Bengio is really addressing that; he's just using the term. I love Yoshua to pieces; he's by far my favorite of the lions of deep learning. He's such a good-hearted guy.

For sure.

But I'm not sure he has plumbed the depths of the philosophy of consciousness.

No, he's using it as a sexy term.

Yeah. So what I called it was the embodied communication prior.

Can you maybe explain it a little bit?
Yeah. What I meant was: what are we humans evolved for? You can say 'being human,' but that's very abstract. Our minds control individual bodies, which are autonomous agents moving around in a world that's composed largely of solid objects, and we've also evolved to communicate via language with other solid-object agents that are going around doing things collectively with us in a world of solid objects. These things are very obvious, but if you compare them to the scope of all possible intelligences, or even all possible intelligences that are physically realizable, they actually constrain things a lot. So if you start to look at how you would realize some specialized or constrained version of universal general intelligence in a system that has limited memory and limited speed of processing, but whose general intelligence will be biased toward controlling a solid-object agent, which is mobile in a solid-object world, toward manipulating solid objects and communicating via language with other similar agents in that same world; then, starting from that, you're starting to get a requirements analysis for human-level general intelligence. And then that leads you into cognitive science, and you can look at, say, the different types of memory that the human mind and brain have, and this understanding has matured over the last decades.
I got into this a lot. After getting my PhD in math, I was an academic for eight years; I was in departments of mathematics, computer science, and psychology. When I was in the psychology department at the University of Western Australia, I was focused on the cognitive science of memory and perception. Actually, I was teaching neural nets, and the deep neural nets then were multilayer perceptrons.

In psychology?

Yeah. In cognitive science; it was cross-disciplinary among engineering, math, psychology, philosophy, linguistics, and computer science. But yeah, we were teaching psychology students to try to model the data from human-cognition experiments using multilayer perceptrons, which were the early version of a deep neural network. Recurrent backprop was very, very slow to train back then.
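A multilayer perceptron trained with backprop, of the kind being taught then, fits in a few lines of NumPy today; the toy task below is my own stand-in for cognition-experiment data, not anything from those courses.

```python
import numpy as np

# Tiny multilayer perceptron trained with full-batch backpropagation on an
# XOR-like toy task (illustrative data, arbitrary hyperparameters).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    p = sigmoid(h @ W2 + b2)
    grad_out = (p - y) / len(X)           # d(cross-entropy)/d(output logits)
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(0)
    W1 -= lr * (X.T @ grad_h);  b1 -= lr * grad_h.sum(0)

accuracy = float(((p > 0.5) == y).mean())
print(round(accuracy, 2))
```

The slowness he mentions was real: this full-batch loop runs in well under a second now, but the same arithmetic on late-eighties hardware, especially for recurrent variants, could take days.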
So this is the study of these constrained systems that are supposed to deal with physical objects.

Yeah. If you look at cognitive psychology, you can see there are multiple types of memory, which are to some extent represented by different subsystems in the human brain. We have episodic memory, which takes into account our life history and everything that's happened to us. We have declarative or semantic memory, which is facts and beliefs abstracted from the particular situations in which they occurred. There's sensory memory, which to some extent is sense-modality-specific, and then to some extent is unified across sense modalities. There's procedural memory, memory of how to do stuff, like how to swing the tennis racket; it's motor memory, but it's also a little more abstract than motor memory, and it involves cerebellum and cortex working together. Then there's memory's linkage with emotion, which has to do with linkages between cortex and the limbic system. There are the specifics of spatial and temporal modeling connected with memory, which have to do with the hippocampus and thalamus connecting to cortex, and the basal ganglia, which influence goals; so we have specific memory of which goals, subgoals, and sub-subgoals we wanted to pursue in which contexts in the past. The human brain has substantially different subsystems for these different types of memory, and substantially differently tuned learning, like differently tuned modes of long-term potentiation, to do with the types of neurons and neurotransmitters in the different parts of the brain corresponding to these different types of knowledge. And these different types of memory and learning in the human brain can all be traced back to embodied communication, to controlling agents in worlds of solid objects.

Now, if you look at building an AGI system, one way to do it, which starts more from cognitive science than neuroscience, is to say: okay, what are the types of memory that are necessary for this kind of world?

Yeah, necessary for this sort of intelligence.

What types of learning work well with these different types of memory, and then how do you connect all these things together?
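The division of labor among those memory types can be sketched as a data structure. This is my own minimal framing of the taxonomy just described, not OpenCog's design, and every class and method name here is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Toy agent memory split along the lines discussed above: each store
    has a different shape and a different access pattern."""
    episodic: list = field(default_factory=list)      # time-stamped life history
    semantic: dict = field(default_factory=dict)      # facts abstracted from episodes
    procedural: dict = field(default_factory=dict)    # named skills: how to do things

    def record_episode(self, t, event):
        self.episodic.append((t, event))              # ordered, context-bound

    def abstract_fact(self, key, value):
        # declarative/semantic memory: facts detached from when they occurred
        self.semantic[key] = value

    def learn_skill(self, name, fn):
        self.procedural[name] = fn                    # executable, not declarative

m = Memory()
m.record_episode(0, "saw a tennis ball")
m.abstract_fact("tennis_ball", {"shape": "sphere", "bounces": True})
m.learn_skill("swing", lambda speed: f"swung racket at {speed} m/s")
print(m.procedural["swing"](20))
```

The point of the sketch is the interface difference: episodic memory is queried by time and context, semantic memory by key, procedural memory by invocation; the "how do you connect them" question is exactly what a cognitive architecture has to answer.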
And of course, the human brain did it incrementally through evolution, because each of the subnetworks of the brain (it's not really the lobes of the brain, it's the subnetworks, each of which is widely distributed) co-evolved with the other subnetworks of the brain, both in terms of its patterns of organization and the particulars of the neurophysiology. So they all grew up communicating with and adapting to each other. It's not like they were separate black boxes that were then glommed together, whereas as engineers we would tend to say: let's make the declarative-memory box here, and the procedural-memory box here, and the perception box here, and wire them together. When you can do that, it's interesting; I mean, that's how a car is built. But on the other hand, that's clearly not how biological systems are made; the parts co-evolve so as to adapt and work together.

That's, by the way, how every human-engineered system that flies, which we were using as an analogy before, is built as well. So do you find this at all appealing? There's been a lot of really exciting work, which I find it strange is ignored, in cognitive architectures, for example, throughout the last few decades.
Yeah, I had a lot to do with that community, and Paul Rosenbloom, who was one of them, and John Laird, who built the Soar architecture, are friends of mine; I learned Soar quite well, and ACT-R, and these different cognitive architectures. How I was looking at the AI world about ten years ago, before this whole commercial deep-learning explosion, was: on the one hand, you had these cognitive-architecture guys, who were working closely with psychologists and cognitive scientists who had thought a lot about how the different parts of a human-like mind should work together; on the other hand, you had these learning-theory guys, who didn't care at all about the architecture but were just thinking about how you recognize patterns in large amounts of data. And in some sense, what you needed to do was take the learning that the learning-theory guys were doing and put it together with the architecture that the cognitive-architecture guys were doing, and then you would have what you needed. But unfortunately, when you look at the details, you can't just do that without totally rebuilding what is happening on both the cognitive-architecture side and the learning side.
I mean, they tried to do that in Soar, but what they ultimately did is take a deep neural net or something for perception and include it as one of the black boxes.

It becomes one of the boxes.

Yeah, the learning mechanism becomes one of the boxes, as opposed to being fundamental, and that doesn't quite work. Or you could look at some of the stuff DeepMind has done, like the differentiable neural computer, which has a neural net for deep-learning perception, and it has another neural net which is like a memory matrix that stores, say, the map of the London subway or something. Probably Demis Hassabis was thinking of those as, like, part of cortex and part of hippocampus, because the hippocampus has a spatial map, and when he was a neuroscientist he was doing a bunch of work on cortex-hippocampus interconnection. So there, the DNC would be an example of folks from the deep-neural-net world trying to take a step in the cognitive-architecture direction, by having two neural modules that correspond roughly to two different parts of the human brain that deal with different kinds of memory and learning. But on the other hand, it's super, super crude from the cognitive-architecture view, just as what John Laird and Soar did with neural nets was super crude from a learning point of view, because the learning was off to the side, not affecting the core representations. You weren't learning the representation; you were learning abstractions of perceptual data to feed into a representation that was not learned.
So this was clear to me a while ago, and one of my hopes with the AGI community was to sort of bring people from those two directions together. That didn't happen much, in terms of...

Not yet.

What I was going to say is that it didn't happen in terms of bringing the lions of cognitive architecture together with the lions of deep learning. It did work in the sense that a bunch of younger researchers have had their heads filled with both of those ideas. This comes back to a saying my dad, who was a university professor, often quoted to me: science advances one funeral at a time. I'm trying to avoid that. I'm 53 years old, and I'm trying to invent amazing, weird-ass new things that nobody ever thought about, which we'll talk about in a few minutes. But there is that aspect: for the people who've been at AI a long time and have made their careers developing one aspect, like a cognitive architecture or a deep-learning approach, once you're old and have made your career doing one thing, it can be hard to mentally shift gears. I try quite hard to remain flexible-minded.

Have you been successful, somewhat, in changing? Maybe, have you changed your mind on some aspects of what it takes to build an AGI, like technical things?
The hard part is that the world doesn't want you to.

The world, or your own brain?

Well, that one point is that your own brain doesn't want to; the other part is that the world doesn't want you to. The people who have followed your ideas get mad at you if you change your mind, and the media wants to pigeonhole you as the avatar of a certain idea. But yeah, I've changed my mind on a bunch of things. When I started my career, I really thought quantum computing would be necessary for AGI, and I doubt it's necessary now, although I think it will be a super-major enhancement. But I'm also now in the middle of embarking on a complete rethink and rewrite from scratch of our OpenCog AGI system, together with Alexey Potapov and his team in St. Petersburg, who are working with me in SingularityNET. So now we're trying to go back to basics: take everything we learned from working with the current OpenCog system, take everything everybody else has learned from working with their proto-AGI systems, and design the best framework for the next stage. And I do think there's a lot to be learned from the recent successes with deep neural nets and deep reinforcement systems. People made these essentially trivial systems work much better than I thought they would, and there's a lot to be learned from that, and I want to incorporate that knowledge appropriately in our OpenCog 2.0 system. On the other hand, I also think current deep-neural-net architectures, as such, will never get you anywhere near AGI. So I think you have to avoid the pathology of throwing the baby out with the bathwater, of saying these things are garbage because foolish journalists overblow them as being the path to AGI, and a few researchers overblow them as well. There's a lot of interesting stuff to be learned there, even though those are not the golden path.
So maybe this is a good chance to step back. You mentioned OpenCog 2.0, but go back to OpenCog 0.0, which exists...

Yeah.

...and maybe talk through the history of OpenCog and your thinking about these ideas.

Absolutely. I would say OpenCog 2.0 is a term we're throwing around sort of tongue-in-cheek, because the existing OpenCog system that we're working on now is not remotely close to what we'd consider a 1.0. It's been around for what, thirteen years or something, but it's still an early-stage research system. And actually, we are going back to the beginning in terms of theory and implementation, because we feel like that's the right thing to do, but I'm sure what we end up with will have a huge amount in common with the current system; we all still like the general approach.

So first of all, what is OpenCog?
Sure. OpenCog is an open-source software project that I launched together with several others in 2008. Probably the first code written toward it was written in 2001 or 2002 or something; it was developed as a proprietary code base within my AI company Novamente LLC, and then we decided to open-source it in 2008, cleaned up the code, threw out some things, and added some new things.

What language is it written in?

It's C++, primarily. There's a bunch of Scheme as well, but most of it is C++.

And it's separate from something we'll also talk about, SingularityNET. So it was born as a non-networked thing?

Correct, correct. Well, there are many levels of networks involved here.

No connectivity to the internet, at birth.

Yeah. SingularityNET is a separate project and a separate body of code, and you can use SingularityNET as part of the infrastructure for a distributed OpenCog system, but they are different layers.

Got it.
So OpenCog, on the one hand, as a software framework, could be used to implement a variety of different AI architectures and algorithms. But in practice, there's been a group of developers, which I've been leading together with Linas Vepstas, Nil Geisweiller, and a few others, which has been using the OpenCog platform and infrastructure to implement certain ideas about how to make an AGI. So there's been a little bit of ambiguity about OpenCog the software platform versus OpenCog the AGI design, because in theory you could use that software to do other things; you could use it to make a neural net, you could use it to make a lot of different things.

What kind of stuff does the software platform provide, in terms of utilities and tools?

Yeah, let me first tell you about OpenCog as a software platform, and then I'll tell you about the specific AGI R&D we've been building on top of it.

Yep.

So the core component of OpenCog as a software platform is what we call the AtomSpace, which is a weighted, labeled hypergraph.

Atom space?

AtomSpace, yeah. Not Adam like Adam and Eve, although that would be cool too.
So you have a hypergraph. A graph, in this sense, is a bunch of nodes with links between them; a hypergraph is like a graph, but links can go between more than two nodes, so you can have a link among three nodes. And in fact, OpenCog's AtomSpace would properly be called a metagraph, because you can have links pointing to links, or you can have links pointing to whole subgraphs. So it's an extended hypergraph, or a metagraph.

And is 'metagraph' a technical term?

It is now a technical term.

Interesting.

But I don't think it was yet a technical term when we started calling this a generalized hypergraph. In any case, it's a weighted, labeled, generalized hypergraph, or weighted labeled metagraph. The weights and labels mean that the nodes and links can have numbers and symbols attached to them: they can have types on them, and they can have numbers on them that represent, say, a truth value or an importance value for a certain purpose.

And of course, like with all things, you can reduce that to a hypergraph, and then the hypergraph can be reduced to a graph, and you could reduce a graph to an adjacency matrix; so there are always multiple representations. But there's a layer of representation that seems to work well here.

Got it.
Right. And similarly, you could have a link to a whole graph, because a whole graph could represent, say, a body of information, and I might want to say, 'I reject this body of information.' One way to do that is to make the link go to the whole subgraph representing that body of information. There are many alternative representations, but anyway, what we have in OpenCog is the AtomSpace, which is this weighted, labeled, generalized hypergraph knowledge store. It lives in RAM; there's also a way to back it up to disk, and there are ways to spread it among multiple different machines. Then there are various utilities for dealing with it. There's a pattern matcher, which lets you specify a sort of abstract pattern and then search through a whole AtomSpace, the weighted labeled hypergraph, to see which sub-hypergraphs may match that pattern, for example. And then there's something called the CogServer in OpenCog, which lets you run a bunch of different agents or processes on a scheduler, and each of these agents basically reads stuff from the AtomSpace and writes stuff to the AtomSpace. So this is sort of the basic operational model.

That's the software framework.

Right. And of course, there's a lot there just from a scalable software-engineering standpoint.
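The structure just described, a weighted labeled metagraph plus a pattern matcher, can be sketched compactly. This is my own toy reconstruction from the conversation, not actual OpenCog code; the class, type names, and `match` filter are all illustrative, and the real pattern matcher does full structural unification rather than attribute filtering.

```python
import itertools

class AtomSpace:
    """Toy weighted, labeled metagraph store: links may point at nodes
    *or* at other links, and every atom carries a type and a truth value."""
    def __init__(self):
        self._ids = itertools.count()
        self.atoms = {}                      # atom id -> attribute dict

    def add_node(self, type_, name, tv=1.0):
        i = next(self._ids)
        self.atoms[i] = {"type": type_, "name": name, "out": (), "tv": tv}
        return i

    def add_link(self, type_, outgoing, tv=1.0):
        # 'outgoing' may contain ids of nodes or of other links (metagraph)
        i = next(self._ids)
        self.atoms[i] = {"type": type_, "name": None,
                         "out": tuple(outgoing), "tv": tv}
        return i

    def match(self, type_=None, min_tv=0.0):
        """Crude stand-in for the pattern matcher: filter by type and weight."""
        return [i for i, a in self.atoms.items()
                if (type_ is None or a["type"] == type_) and a["tv"] >= min_tv]

space = AtomSpace()
cat = space.add_node("Concept", "cat")
animal = space.add_node("Concept", "animal")
inh = space.add_link("Inheritance", [cat, animal], tv=0.9)
# A link pointing at another link: asserting something about the
# inheritance statement itself, which a plain graph cannot express.
doubt = space.add_link("Doubt", [inh], tv=0.3)

print(space.match("Inheritance"))
print(space.match(min_tv=0.8))
```

A CogServer-style agent in this sketch would just be a loop that calls `match` and then `add_link`, with the shared store playing the blackboard role described next.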
Have you looked into Stephen Wolfram's physics project recently, with the hypergraphs and stuff? Could you theoretically use this software framework to play with it?

You certainly could, although Wolfram would rather die than use anything but Mathematica for his work.

Well, yeah, but there's a big community of people who would love integration, and like you said, the young minds love the idea of connecting, of integrating.

That's right. And I would add, on that note, that the idea of using hypergraph-type models in physics is not very new. Like, if you look at...

The Russians did it first?

Well, I'm sure they did. And a guy named Ben Dribus, who's a mathematician, a professor in Louisiana or somewhere, had a beautiful book on quantum sets and hypergraphs and algebraic topology for discrete models of physics, and carried it much farther than Wolfram has. But he's not rich and famous, so it didn't get in the headlines.
But yeah, Wolfram aside, certainly, that's a good way to put it. The whole OpenCog framework: you could use it to model biological networks and simulate biological processes, you could use it to model physics on discrete graph models of physics. You could use it to do, say, biologically realistic neural networks, for example.

So that's the framework. What do the agents and processes do? Do they grow the graph? What kinds of computations, just to get a sense, are they supposed to do?

In theory, they could do anything they want to do; they're just C++ processes. On the other hand, the computational framework is sort of designed for agents where most of their processing time is taken up with reads and writes to the AtomSpace, and that's a very different processing model than, say, the matrix-multiplication-based model that underlies most deep-learning systems.
Right. So you could create an agent that just factored numbers for a billion years; it would run within the OpenCog platform, but it would be pointless. The point of doing OpenCog is that you want to make agents that cooperate via reading and writing into this weighted labeled hypergraph. And that has cognitive-architecture importance, because then this hypergraph is being used as a sort of shared memory among different cognitive processes, but it also has software and hardware implementation implications, because current GPU architectures are not so useful for OpenCog, whereas a graph chip would be incredibly useful. Graphcore has those now; they're not ideally suited for this, but I think in the next, let's say, three to five years, we're going to see new chips where a graph is put on the chip, and the back-and-forth between multiple processes acting SIMD and MIMD on that graph is going to be fast. And then that may do for OpenCog-type architectures what GPUs did for deep neural architectures.
As a small tangent, can you comment on your thoughts about neuromorphic computing, hardware implementations of all these different kinds of systems? Are you interested? Are you excited by that possibility?

I'm excited by graph processors, because I think they can massively speed up OpenCog, which is the class of architectures that I'm working on. In principle, neuromorphic computing should be amazing, but I haven't yet been fully sold on any of the systems that are out there. Memristors should be amazing too. A lot of these things have obvious potential, but I haven't yet put my hands on a system that seemed to manifest it.

Memristors should be amazing, but the current systems have not been great.

Right. I mean, look, for example: if you wanted to make a biologically realistic hardware neural network, making a circuit in hardware that emulated the Hodgkin-Huxley equations or the Izhikevich equations, the differential equations for a biologically realistic neuron, and putting that in hardware on the chip, that would seem to make it more feasible to build a large-scale, truly biologically realistic neural network. What's been done so far is not like that.
as a researcher — I mean, I've done a bunch of work in computational neuroscience, where I did some work with IARPA, the Intelligence Advanced Research Projects Activity, in D.C. We were looking at: how do you make a biologically realistic simulation of seven different parts of the brain cooperating with each other, using realistic nonlinear dynamical models of neurons, and how do you get that to simulate what's going on in the mind of a GEOINT intelligence analyst while they're trying to find terrorists on a map? So if you want to do something like that, having neuromorphic hardware that really lets you simulate a realistic model of the neuron would be amazing. But that's sort of with my computational neuroscience hat on. With my AGI hat on, I'm just more interested in these hypergraph knowledge-representation-based architectures, which would benefit more from various types of graph processors, because the main processing bottleneck is reading and writing to RAM — it's reading and writing to the graph in RAM. The main processing bottleneck for this kind of proto-AGI architecture is not multiplying matrices, and for that reason GPUs, which are really good at multiplying matrices, don't apply as well. There are frameworks like Gunrock and others that try to boil down graph processing to matrix operations, and they're cool, but you're still putting a square peg into a round hole in a certain way.
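The "square peg" idea — expressing graph traversal as matrix-style operations, as Gunrock-like frameworks do — can be illustrated with a toy breadth-first search whose expansion step is structured like a boolean vector-matrix product (illustrative sketch only, not Gunrock's actual API):

```python
# Sketch of expressing graph traversal as matrix algebra so it can run
# on GPU-style hardware: each BFS expansion step is structured like a
# boolean vector-matrix product, frontier' = frontier x adjacency.

def bfs_by_matmul(adj, start):
    """adj[i][j] = 1 if there is an edge i -> j; returns hop distances."""
    n = len(adj)
    dist = [-1] * n
    frontier = [0] * n                # 0/1 vector marking current frontier
    frontier[start] = 1
    dist[start] = 0
    hop = 0
    while any(frontier):
        hop += 1
        nxt = [0] * n
        for i in range(n):            # one expansion step: frontier x adj,
            if frontier[i]:           # masked to unvisited nodes
                for j in range(n):
                    if adj[i][j] and dist[j] == -1:
                        nxt[j] = 1
                        dist[j] = hop
        frontier = nxt
    return dist

#      0 -> 1 -> 2
#      0 -> 3
adj = [[0, 1, 0, 1],
       [0, 0, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(bfs_by_matmul(adj, 0))  # [0, 1, 2, 1]
```

The awkwardness shows up at scale: real graphs are sparse and irregular, so forcing them into dense matrix form wastes most of the hardware's arithmetic — the round hole the square peg goes into.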
The same is true — I mean, current quantum machine learning, which is very cool, is also all about how to get matrix and vector operations in quantum mechanics, and I see why that's natural to do. I mean, quantum mechanics is all unitary matrices and vectors, right? On the other hand, you could also try to make graph-centric quantum computers, which I think is where things will go, and then we can take the OpenCog implementation layer and implement it in an uncollapsed state inside a quantum computer. But that may be the Singularity squared, right? I'm not sure we need that to get to human level — that's already beyond the first Singularity.

But, uh, can we just — yeah, let's go back to OpenCog and the hypergraph.

Yeah, that's the software framework, right? So the next thing is, our cognitive architecture tells us particular algorithms to put there.

Got it. Can we
backtrack on the kind of... is this graph designed — is it, in general, supposed to be sparse, and the operations constantly grow and change the graph?

Yeah, the graph is sparse.

But is it constantly adding links and so on?

It is a self-modifying hypergraph. So the write and read operations you're referring to — this isn't just a fixed graph whose weights you change; it's a constantly growing graph.

Yeah, that's true. So it is a different model than, say, current deep neural nets, which have a fixed neural architecture and you're updating the weights — although there have been cascade-correlation neural architectures that grow new nodes and links, the most common neural architectures now have a fixed architecture and you're updating the weights. In OpenCog you can update the weights, and that certainly happens a lot, but adding new nodes, adding new links, removing nodes and links is an equally critical part of the system's operations.

Got it. So
now, when you start to add these cognitive algorithms on top of this OpenCog architecture, what does that look like?

Yeah, so within this framework, creating a cognitive architecture is basically two things: it's choosing what type system you want to put on the nodes and links in the hypergraph — what types of nodes and links you want — and then it's choosing what collection of agents, what collection of AI algorithms or processes, are going to run to operate on this hypergraph. And of course those two decisions are closely connected to each other. So in terms of the type system, there are some links that are more neural-net-like: they just have weights that get updated by Hebbian learning, and activation spreads along them. There are other links and nodes that are more logic-like: so you could have a variable node, and you can have a node representing a universal or existential quantifier, as in predicate logic or term logic. So you can have logic-like nodes and links, or you can have neural-like nodes and links. You can also have procedure-like nodes and links, as in, say, combinatory logic or lambda calculus, representing programs. So you can have nodes and links representing many different types of semantics, which means you could make a horrible ugly mess, or you could make a system where these different types of knowledge all interpenetrate and synergize with each other beautifully.
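The idea of differently-typed links coexisting in one graph — logic-like inheritance links alongside neural-like Hebbian links — can be sketched as a toy, with a crude term-logic deduction pass running over only the logic-like links. The type names and the product-of-strengths heuristic here are invented for the sketch; they are not OpenCog's or PLN's actual formulas:

```python
# Toy sketch: one hypergraph whose links carry different semantics --
# logic-like inheritance links with truth values, neural-like links
# with Hebbian weights -- over the same nodes. Not OpenCog's real types.
from dataclasses import dataclass

@dataclass
class InheritanceLink:              # logic-like: "A inherits from B"
    source: str
    target: str
    strength: float                 # probabilistic truth value

@dataclass
class HebbianLink:                  # neural-like: attention-spreading weight
    source: str
    target: str
    weight: float

class Hypergraph:
    def __init__(self):
        self.links = []

    def add(self, link):
        self.links.append(link)

    def of_type(self, link_type):
        return [l for l in self.links if isinstance(l, link_type)]

g = Hypergraph()
g.add(InheritanceLink("cat", "mammal", strength=0.95))
g.add(InheritanceLink("mammal", "animal", strength=0.98))
g.add(HebbianLink("cat", "purr", weight=0.7))

# A logic-like process (term-logic deduction: A->B, B->C gives A->C)
# runs over inheritance links while ignoring the neural-like ones.
for ab in g.of_type(InheritanceLink):
    for bc in g.of_type(InheritanceLink):
        if ab.target == bc.source:
            g.add(InheritanceLink(ab.source, bc.target,
                                  strength=ab.strength * bc.strength))

print(len(g.of_type(InheritanceLink)))  # 3: the two originals plus cat->animal
```

The deduction pass and a Hebbian-update pass could each run over the same node set without stepping on each other, which is the "interpenetrate rather than make a mess" point.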
Right. So the hypergraph can contain programs?

Yeah, it can contain programs, although in the current version it is a very inefficient way to guide the execution of programs, which is one thing that we are aiming to resolve with our rewrite of the system now.

So what, to you, is the most beautiful aspect of OpenCog — just you personally, some aspect that captivates your imagination, from beauty or power?

Yeah — what fascinates me is finding a common representation that underlies abstract declarative knowledge, and sensory knowledge, and movement knowledge, and procedural knowledge, and episodic knowledge: finding the right level of representation where all these types of knowledge are stored in a sort of universal and interconvertible, yet practically manipulable, way. To me that's the core, because once you've done that, then the different learning algorithms can help each other out. Like, what you want is: if you have a logic engine that helps with declarative knowledge, and you have a deep neural net that gathers perceptual knowledge, and you have, say, an evolutionary learning system that learns procedures, you want these to not only interact on the level of sharing results and passing inputs and outputs to each other — you want the logic engine, when it gets stuck, to be able to share its intermediate state with the neural net and with the evolutionary learning algorithm, so that they can help each other out of bottlenecks and help each other solve combinatorial explosions by intervening inside each other's cognitive processes. But that can only be done if the intermediate states of a logic engine, an evolutionary learning engine, and a deep neural net are represented in the same form, and that's what we figured out how to do by putting the right type system on top of this weighted labeled hypergraph. So is
there — can you maybe elaborate on what are the different characteristics of a type system that can coexist amongst all these different kinds of knowledge that need to be represented? And, I mean, is it hierarchical? Just any kind of insights you can give on that kind of type system?

Yeah, so this gets very nitty-gritty and mathematical, of course, but one key part is switching from predicate logic to term logic.

What is predicate logic and what is term logic?

So term logic was invented by Aristotle, or at least that's the oldest recollection we have of it. But term logic breaks down basic logic into simple links between nodes, like an inheritance link between node A and node B. So in term logic, the basic deduction operation is: A implies B, B implies C, therefore A implies C. Whereas in predicate logic, the basic operation is modus ponens: A, A implies B, therefore B. So it's a slightly different way of breaking down logic, but by breaking down logic into term logic you get a nice way of breaking logic down into nodes and links. So your concepts can become nodes, the logical relations become links, and so then inference is: if this link is A implies B, and this link is B implies C, then deduction builds a link A implies C, and your probabilistic algorithm can assign a certain weight there. Now, you may also have, like, a Hebbian neural link from A to B, which is the degree to which A being the focus of attention should make B the focus of attention, right? So you could have a neural link, and you could have a symbolic, logical inheritance link in your term logic, and they have separate meanings, but
they could be used to guide each other as well. Like, if there's a large amount of neural weight on the link between A and B, that may direct your logic engine to think about: well, what is the relation? Is there an inheritance relation? Are they similar in some context? On the other hand, if there's a logical relation between A and B, that may direct your neural component to think: well, when I'm thinking about A, should I be directing some attention to B also, because there's a logical relation? So in terms of logic, there's a lot of thought that went into how you break down logic relations — including basic propositional logic relations, as Aristotelian term logic deals with, and then quantifier logic relations also — how do you break those down elegantly into a hypergraph? Because, I mean, you can boil logic expressions into a graph in many different ways, and many of them are very ugly. We tried to find elegant ways of sort of hierarchically breaking down complex logic expressions into nodes and links, so that if you have, say, different nodes representing, you know, Ben, AI, Lex, interview, or whatever, the logic relations between those things are compact in the node-and-link representation — so that when you have a neural net acting on those same nodes and links, the neural net and the logic engine can sort of interoperate with each other.

And also
interpretable by humans — is that an important...?

That's tough. In simple cases it's interpretable by humans, but honestly, you know, I would say logic systems give more potential for transparency and comprehensibility than neural net systems, but you still have to work at it. Because, I mean, if I show you a predicate logic proposition with, like, 500 nested universal and existential quantifiers and 217 variables, that's no more comprehensible than the weight matrix of a neural network, right? So I'd say the logic expressions that an AI learns from its experience are mostly totally opaque to human beings, and maybe even harder to understand than a neural net, because when you have multiple nested quantifier bindings, it's a very high level of abstraction. There is a difference, though, in that within logic it's a little more straightforward to pose the problem of, like: normalize this, and boil this down to a certain form. I mean, you can do that in neural nets too — like, you can distill a neural net to a simpler form — but that's more often done to make a neural net that'll run on an embedded device or something.
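The point that logic lends itself to "normalize this, boil this down" can be illustrated with a tiny truth-table check that a bloated nested expression is equivalent to a compact one — a stand-in for real normalization machinery like conversion to a canonical normal form:

```python
# Toy illustration of "boil this down to a certain form": verify by
# truth table that a bloated nested expression equals a compact,
# normalized one -- the kind of simplification that is much more
# straightforward to pose for logic than for a trained neural net.
from itertools import product

def equivalent(f, g, n_vars):
    """True if f and g agree on every assignment of n_vars booleans."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n_vars))

# Bloated expression: ((a and b) or (a and not b)) or (a and a)
bloated = lambda a, b: ((a and b) or (a and not b)) or (a and a)
# Normalized form: just a
normalized = lambda a, b: a

print(equivalent(bloated, normalized, 2))  # True
```

An exhaustive truth table is exponential in the number of variables, of course — the point is only that for logic the question "are these two forms the same?" is even well-posed, which is what makes normalization tractable to attempt.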
It's harder to distill a net to a comprehensible form than it is to simplify a logic expression to a comprehensible form, but it doesn't come for free. Like, what's in the AI's mind is incomprehensible to a human unless you do some special work to make it comprehensible. So on the procedural side, there's some different and sort of interesting voodoo there. I mean, if you're familiar with computer science, there's something called the Curry-Howard correspondence, which is a one-to-one mapping between proofs and programs: every program can be mapped into a proof, every proof can be mapped into a program. You can model this using category theory and a bunch of nice math, but we want to make that practical. So that if you have an executable program that, like, moves a robot's arm, or figures out in what order to say things in a dialogue, that's a procedure represented in OpenCog's hypergraph. But if you want to reason on how to improve that procedure, you need to map that procedure into logic using the Curry-Howard isomorphism, so that the logic engine can reason about how to improve that procedure, and then map that back into the procedural representation that is efficient for execution. So again, that comes down to: not just can you make your procedure into a bunch of nodes and links — because, I mean, that can be done trivially; a C++ compiler has nodes and links inside it — but can you boil down your procedure into a bunch of nodes and links in a way that's hierarchically decomposed and simple enough that you can reason about it? Yeah, yeah —
that, given the resource constraints at hand, you can map it back and forth to your term logic fast enough, and without having a bloated logic expression. So there are a lot of nitty-gritty particulars there, but by the same token, if you ask a chip designer, like, how do you make the Intel i7 chip so good, there's a long list of technical answers, which will take a while to go through, right? And this has been decades of work. I mean, the first AI system of this nature I tried to build was called Webmind, in the mid-1990s, and we had a big graph operating in RAM, implemented with Java 1.1, which was a terrible implementation idea. And then each node had its own processing — so, like, the core loop looped through all nodes in the network and let each node enact whatever its little thing was doing. And we had logic and neural nets in there, and evolutionary learning, but we hadn't done enough of the math to get them to operate together very cleanly, so it was quite a
horrible mess. So as well as shifting to an implementation where the graph is its own object and the agents are separately scheduled, we've also done a lot of work on how you represent programs, how you represent procedures, how you represent genotypes for evolution, in a way that the interoperability between the different types of learning associated with these different types of knowledge actually works. And that's been quite difficult. It's taken decades, and it's totally off to the side of what the commercial mainstream of the AI field is doing, which isn't thinking about representation at all, really. Although you could see, like, in the DNC they had to think a little bit about how you make a representation of a map in this memory matrix work together with the representation needed for, say, visual pattern recognition in the hierarchical neural network. But I would say we have taken that direction of: take the types of knowledge you need for different types of learning — like declarative, procedural, attentional — and how do you make these types of knowledge representable in a way that allows cross-learning across these different types of memory. We've been prototyping and experimenting with this within OpenCog, and before that Webmind, since the mid-1990s. Now, disappointingly to all of us,
this has not yet been cashed out in an AGI system, right? I mean, we've used this system within our consulting business — so we've built natural language processing and robot control and financial analysis; we've built a bunch of vertical-market-specific proprietary AI projects that use OpenCog on the back end — but that's not the AGI goal. It's interesting, but it's not the AGI goal. So now, what we're looking at with our rebuild of the system — 2.0 — yeah, we're also calling it True AGI, so we're not quite sure what the name is yet. We made a website for it, but we haven't put anything on there yet, so we may come up with an even better name.

It's kind of like the 'Real AI' starting point for your, uh —

Yeah, but I like 'true' better, because you can be true-hearted, right? You can be true to your girlfriend. So 'true' has a number of meanings, and it also has logic in it, right? Because logic is a key. So yeah, with the
True AGI system, we're sticking with the same basic architecture, but we're trying to build on what we've learned. One thing we've learned is that we need type checking among dependent types to be much faster, and among probabilistic dependent types to be much faster. As it is now, you can have complex types on the nodes and links, but if you want types to be first-class citizens — so that types can be variables, and then you do type checking among complex higher-order types — you can do that in the system now, but it's very slow. This is stuff that's done in cutting-edge programming languages like Agda or something, these obscure research languages. On the other hand, we've been doing a lot tying together deep neural nets with symbolic learning. So we did a project for Cisco, for example, which was on street scene analysis: they had deep neural models for a bunch of cameras watching street scenes, but they trained a different model for each camera, because they couldn't get the transfer learning to work between camera A and camera B. So we took what came out of all the deep models for the different cameras, we fed it into an OpenCog symbolic representation, then we did some pattern mining and some reasoning on what came out of all the different cameras within the symbolic graph, and that worked well for that application. I mean, Hugo Latapie from Cisco gave a talk touching on that at last year's AGI conference — it was in Shenzhen. On the other hand, we learned
from there that it was kind of clunky to get the deep neural models to work well with the symbolic system, because we were using Torch, and Torch keeps a sort of state computation graph, but you needed, like, real-time access to that computation graph within our hypergraph. And we certainly did it — Alexey Potapov, who leads our St. Petersburg team, wrote a great paper on cognitive modules in OpenCog, explaining sort of how you deal with the Torch compute graph inside OpenCog — but in the end we realized that just hadn't been one of our design thoughts when we built OpenCog. So between wanting really fast dependent type checking, and wanting much more efficient interoperation between the computation graphs of deep neural net frameworks and OpenCog's hypergraph, and, adding on top of that, wanting to more effectively run an OpenCog hypergraph distributed across RAM in 10,000 machines — we're doing dozens of machines now, but we just didn't architect it with that sort of modern scalability in mind — these performance requirements are what have driven us to want to re-architect, compared with the current infrastructure that was, you know, built in the phase 2001 to 2008, which is hardly shocking, right?

So, well, I mean, the three
things you mentioned are really interesting. So, in terms of interoperability — communicating with the computational graph of neural networks — what do you think about the representations that neural networks form?

They're bad, but there are many ways that you could deal with that. So I've been wrestling with this a lot in some work on unsupervised grammar induction, and I have a simple paper on that that I'll give at the next AGI conference, the online portion of which is next week, actually.

What is grammar induction?

So this isn't AGI either, but it's sort of on the verge between narrow AI and AGI or something. Unsupervised grammar induction is the problem: throw your AI system a huge body of text, and have it learn the grammar of the language that produced that text. So you're not giving it labeled examples — you're not giving it, like, a thousand sentences where the parses were marked up by graduate students — it's just got to infer the grammar from the text. It's like the Rosetta Stone, but worse, right? Because you only have the one language, and you have to figure out what is the grammar. So that's not really AGI, because, I mean, the way a human learns language is not that, right? I mean, we
learn from language that's used in context, so it's a social, embodied thing; we see how it's grounded in observation.

There's an interactive element, I guess.

Yeah, yeah. On the other hand — so I'm more interested in making an AGI system learn language from its social and embodied experience. On the other hand, that's also more of a pain to do, and that would lead us into Hanson Robotics and the robotics work I've done, which we'll talk about in a few minutes. But just as an intellectual exercise, as a learning exercise, trying to learn grammar from a corpus is very interesting, right? And that's been a field in AI for a long time; no one can do it very well. So we've been looking at Transformer neural networks and tree Transformers, which are amazing.
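The core operation inside those Transformer networks — scaled dot-product attention, from the "Attention Is All You Need" paper — can be sketched in a few lines of plain Python:

```python
# Minimal sketch of scaled dot-product attention, the building block of
# Transformer networks: each output is a softmax-weighted mix of value
# vectors, weighted by how well the query matches each key.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(queries[0])                       # key/query dimension
    out = []
    for q in queries:
        # Dot-product match scores, scaled by sqrt(d) for stability.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output = weights-blended combination of the value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
print(attention(q, k, v))   # output leans toward the first value row
```

Real Transformers stack many of these attention layers (with learned projection matrices and multiple heads), but the match-then-blend pattern is the same.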
These came out of Google Brain, actually, and on that team was Lukasz Kaiser, who used to work for me in the period 2005 through '08 or something. So it's been fun to see my former sort of AGI employees disperse and do all these amazing things — way too many got sucked into Google, actually.

Well, yeah. Anyway, we'll talk about that too.

Lukasz Kaiser and a bunch of these guys created Transformer networks — that classic paper, 'Attention Is All You Need,' and all these things following on from that. So we're looking at Transformer networks, and, like — I mean, this is what underlies GPT-2 and GPT-3 and so on, which are very, very cool and have absolutely no cognitive understanding of any of the texts they're dealing with. Like, they're very intelligent idiots.

Sorry to interrupt with a small tangent — but do you think GPT-3 understands...

No, not at all. It understands nothing. It's a complete idiot. But a brilliant idiot.

You don't think GPT-20 will understand language?

No, no. Size is not going to buy you understanding, any more than a faster car is going to get you to Mars.

Yeah, okay.

It's a completely different kind of thing. I mean,
these networks are very cool, and as an entrepreneur I can see many highly valuable uses for them, and as an artist, I love them, right? So, I mean, we're using our own neural model, which is along those lines, to control the Philip K. Dick robot now, and it's amazing to, like, train a neural model on the robot Philip K. Dick and see it come up with, like, crazed, stoned-philosopher pronouncements, very much like what Philip K. Dick might have said, right? So these models are super cool, and I'm working with Hanson Robotics now on using a similar but more sophisticated one for Sophia, which we haven't launched yet. So I think it's cool, but no — it's not understanding. These are recognizing a large number of shallow patterns; they're not forming an abstract representation. And that's the point I was coming to: when we're looking at grammar induction, we tried to mine patterns out of the structure of the Transformer network, and you can, but the patterns aren't what you want — they're nasty. So, I mean, if you do supervised learning — if you look at sentences where you know the correct parse of a sentence — you can learn a matrix that maps between the internal representation of the Transformer and the parse of the sentence, and so then you can actually train something that will output the sentence parse from the Transformer network's internal state. And we did this — I think Christopher Manning and some others have now done this also. But what you get is that the representation is horribly ugly, and is scattered all over the network, and doesn't look like the rules of grammar that you know are the right rules of grammar, right? It's kind of ugly. So what we're actually doing is: we're using a symbolic grammar learning algorithm, but we're using the Transformer neural network as a sentence probability oracle.
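The oracle loop can be sketched as follows. Here a tiny add-one-smoothed bigram model stands in for the Transformer language model, and the corpus, candidate rule, and function names are all invented for the example:

```python
# Sketch of using a language model as a "sentence probability oracle"
# to vet candidate grammar rules. A toy bigram model stands in for the
# Transformer; in the real setup this would be a large pretrained net.
import math

CORPUS = [
    "the cat sat", "the dog sat", "the cat ran", "the dog ran",
    "a cat sat", "a dog ran",
]

def train_bigrams(corpus):
    counts = {}
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        for a, b in zip(toks, toks[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

BIGRAMS = train_bigrams(CORPUS)

def sentence_logprob(sentence):
    """Log-probability oracle (add-one smoothed bigram model)."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    vocab = {t for pair in BIGRAMS for t in pair}
    total = sum(BIGRAMS.values())
    lp = 0.0
    for a, b in zip(toks, toks[1:]):
        lp += math.log((BIGRAMS.get((a, b), 0) + 1) / (total + len(vocab)))
    return lp

def rule_supported(obeying, violating):
    """Keep a candidate grammar rule only if sentences obeying it score
    higher, on average, than minimal pairs violating it."""
    def avg(ss):
        return sum(sentence_logprob(s) for s in ss) / len(ss)
    return avg(obeying) > avg(violating)

# Candidate rule: determiner precedes noun ("the cat", not "cat the").
obeying = ["the cat sat", "the dog ran"]
violating = ["cat the sat", "dog the ran"]
print(rule_supported(obeying, violating))  # True: keep the rule
```

The symbolic learner proposes rules; the neural model only answers "which of these strings is more probable?" — so the grammar ends up in a comprehensible symbolic form rather than smeared across network weights.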
So, like, if you have a rule of grammar and you aren't sure whether it's a correct rule of grammar or not, you can generate a bunch of sentences using that rule of grammar, and a bunch of sentences violating that rule of grammar, and you can see whether the Transformer model thinks the sentences obeying the rule of grammar are more probable than the sentences disobeying the rule of grammar. So in that way you can use the neural model as a sentence probability oracle to guide a symbolic grammar learning process. And that seems to work better than trying to milk the grammar out of the neural network — which doesn't have it in there. So I think the thing is, these neural nets are not getting a semantically meaningful representation internally, by and large. So one line of research is to try to get them to do that, and InfoGAN was trying to do that. So, like, if you look back two years ago, there were all these papers on, like, Edward, this probabilistic programming neural net framework that Google had, which came out of InfoGAN. So the idea there was: you could train an InfoGAN neural net model — which is a generative adversarial network — to recognize and generate faces, and the model would automatically learn a variable for how long the nose is, and automatically learn a variable for how wide the eyes are, or how big the lips are, or something, right? So it
automatically learned these variables, which have a semantic meaning. So that was a rare case where a neural net trained with a fairly standard GAN method was able to actually learn a semantic representation. So for many years, many of us tried to take the next step and get a GAN-type neural network that would have not just a list of semantic latent variables, but would have, say, a Bayes net of semantic latent variables with dependencies between them. The whole programming framework Edward was made for that. I mean, no one got it to work.

And do you think it's possible?

I don't know. It might be that backpropagation just won't work for it, because the gradients are too screwed up. Maybe you could get it to work using CMA-ES or some, like, floating-point evolutionary algorithm. We tried; we didn't get it to work. Eventually we just paused that — rather than gave it up, we paused it and said, well, okay, let's try more innovative ways to learn what are the representations implicit in that network, without trying to make them grow inside that network. And I described how we're doing that in language; you can do similar things in vision.

Right — so use it as an oracle.

Yeah,
yeah. So that's one way: you use a structure learning algorithm which is symbolic, and then you use the deep neural net as an oracle to guide the structure learning algorithm. The other way to do it is like InfoGAN was trying to do: try to tweak the neural network to have the symbolic representation inside it. I tend to think what the brain is doing is more like using the deep-neural-net-type thing as an oracle. Like, I think the visual cortex or the cerebellum are probably learning a non-semantically-meaningful, opaque, tangled representation, and then when they interface with the more cognitive parts of the cortex, the cortex is sort of using those as an oracle and learning the abstract representation. So if you do sports — say, take for example serving in tennis. I mean, my tennis serve is okay, not great, but I learned it by trial and error, right? And I learned music by trial and error too; I just sit down and play. But then, if you're an athlete — which I'm not; I'm not a good athlete — then you'll watch videos of yourself serving, and your coach will help you think about what you're doing, and you'll then form a declarative representation; but your cerebellum maybe didn't have a declarative representation. Same way with music: like, I will hear something in my head, I'll sit down and play the thing like I heard it, and then I will try to study what my fingers did, to see, like: what did you just play? How did you do that? Because if you're composing, you may want to see how you did it, and then declaratively morph that in some way that your fingers wouldn't think of, right? But the physiological movement may come out of some opaque, cerebellar, reinforcement-learned thing, right? And so
that's, I think — trying to milk the structure of a neural net by treating it as an oracle may be more like how your declarative mind post-processes what your visual or motor cortex is doing. I mean, in vision it's the same way: like, you can recognize beautiful art much better than you can say why you think that piece of art is beautiful. But if you're trained as an art critic, you do learn to say why — and some of it's... but some of it isn't, right? Some of it is learning to map sensory knowledge into declarative and linguistic knowledge, yet without necessarily making the sensory system itself use a transparent and easily communicable representation.

Yeah, that's fascinating — to think of neural networks as, like, dumb question-answerers that you can just milk to build up a knowledge base. And there could be multiple networks, I suppose, from different...

Yeah, yeah. So I think, if a group like DeepMind or OpenAI were to build AGI — and I think DeepMind is, like, a thousand times more likely, from what I can tell, because they've hired a lot of people
with broad Minds in many different
approaches and and angles on on AGI
worse open AI is also awesome but I see
them as more of like a pure deep
reinforcement learning shop time I got
you there's a lot of there you're right
there's um I mean there's so much
interdisciplinary work at Deep Mind like
neuroscience together with Google brain
which granted they're not working that
closely together now but you know my
oldest son zarra is doing his PhD in
machine learning applied to automated
theorem proving in in Prague under
Joseph Urban so the the first paper deep
math which applied deep neural Nets to
guide theor improving without of Google
brain I mean by now by now the the
automated theorem proving Community is
gone way way way beyond anything go
Google was doing but still yeah that but
anyway, if that community was going to make an AGI, probably one way they would do it is, you know, take 25 different neural modules, architected in different ways, maybe resembling different parts of the brain, like a basal ganglia model, a cerebellum model, a thalamus model, a few hippocampus models, a number of different models representing parts of the cortex, right? Take all of these and then wire them together to co-train and learn them together. That would be an approach to creating an AGI. One could implement something like that efficiently on top of our true AGI, like the OpenCog 2.0 system, once it exists,
although obviously Google has their own highly efficient implementation architecture. So I think that's a decent way to build AGI. I was very interested in that in the mid-'90s, but, I mean, the knowledge about how the brain works sort of pissed me off; like, it wasn't there yet. Like, you know, in the hippocampus you have these concept neurons, like the so-called grandmother neuron, which everyone laughed at; it's actually there. Like, I have some Lex Fridman neurons that fire differentially when I see you and not when I see any other person, right?
Yeah. So how do these Lex Fridman neurons, how do they coordinate with the distributed representation of Lex Fridman I have in my cortex, right? There's some back and forth between cortex and hippocampus that lets these discrete symbolic representations in hippocampus correlate and cooperate with the distributed representations in cortex. This probably has to do with how the brain does its version of abstraction and quantifier logic, right? Like, you can have a single neuron in the hippocampus that activates a whole distributed activation pattern in cortex. Well, this may be how the brain does symbolization and abstraction, as in functional programming or something, but we can't measure it. Like, we don't have enough electrodes stuck between the cortex and the hippocampus in any known experiment to measure it. So I
got frustrated with that direction, not because it's impossible, but because we just don't understand enough yet. Of course it's a valid research direction, and you can try to understand more and more, and we are measuring more and more about what happens in the brain now than ever before, so it's quite interesting. On the other hand, I sort of got more of an engineering mindset about AGI. I'm like, well, okay, we don't know how the brain works that well. We don't know how birds fly that well yet either; we have no idea how a hummingbird flies in terms of the aerodynamics of it. On the other hand, we know basic principles of, like, flapping and pushing the air down, and we know the basic principles of how the different parts of the brain work. So let's take those basic principles and engineer something that embodies those basic principles but, you know, is well designed for the hardware that we have on hand right now. So do you think we can
create AGI before we understand how the brain works? I think that's probably what will happen, and maybe the AGI will help us do better brain imaging that will then let us build artificial humans, which is very interesting to us because we are humans, right? I mean, building artificial humans is super worthwhile; I just think it's probably not the shortest path to AGI. So it's a fascinating idea, that we would build AGI to help us understand ourselves. You know, a lot of people ask me, you know, young people interested in doing artificial intelligence, they look at, sort of, you know, doing graduate-level, even undergrad, but graduate-level research, and they see where the artificial intelligence community stands now; it's
not really AGI-type research for the most part. Yeah. So the natural question they ask is: what advice would you give? I mean, maybe I could ask: if people were interested in working on OpenCog, or in some kind of direct or indirect connection to OpenCog or AGI research, what would you recommend? OpenCog, first of all, is an open-source project. There's a Google group discussion list, there's a GitHub repository, so if anyone's interested in lending a hand with that aspect of AGI, introduce yourself on the OpenCog email list, and there's a Slack as well. I mean,
we're certainly interested to have, you know, inputs into our redesign process for a new version of OpenCog, but also we're doing a lot of very interesting research. I mean, we're working on data analysis for COVID clinical trials, we're working with Hanson Robotics, we're doing a lot of cool things with the current version of OpenCog now, so there's certainly opportunity to jump into OpenCog or various other open-source AGI-oriented projects. So would you say there are Master's and PhD theses in there? Plenty, yeah, plenty. Of course, I mean, the challenge is to find a supervisor who wants to foster that sort of research, but it's way easier than it was when I got my PhD, right? So, okay, great. We talked about OpenCog, which is kind of, one, the software framework, but also the actual attempt to build an AGI system, and then there is this exciting idea of SingularityNET. So maybe, can you say first: what is SingularityNET? Sure. SingularityNET is a platform for realizing a decentralized network of artificial intelligences.
So Marvin Minsky, the AI pioneer, who I knew a little bit, he had the idea of a society of minds: like, you should achieve an AI not by writing one algorithm or one program, but you should put a bunch of different AIs out there, and the different AIs will interact with each other, each playing their own role, and then the totality of the society of AIs would be the thing that displayed the human-level intelligence. And when he was alive, I had many debates with Marvin about this idea, and I think he really thought the mind was more like a society than I do. Like, I think you could have a mind that was as disorganized as a human society, but I think a human-like mind has a bit more central control than that, actually. Like, I mean, we have this thalamus and the medulla and limbic system; we have a sort of top-down control system that guides much of what we do, more so than a society does. So I think he stretched that metaphor a little too far, but I also think there's something interesting there. And so in the '90s, when I
started my first sort of non-academic AI project, Webmind, which was an AI startup in New York in the Silicon Alley area in the late '90s, what I was aiming to do there was make a distributed society of AIs, the different parts of which would live on different computers all around the world, and each one would do its own thinking about the data local to it, but they would all share information with each other and outsource work to each other and cooperate, and the intelligence would be in the whole collective. And I organized a conference together with Francis Heylighen at the Free University of Brussels in 2001, which was the Global Brain 0 conference, and we're planning the next version, the Global Brain 1 conference, at the Free University of Brussels for next year, 2021, so 20 years after. Then maybe we can have the next one 10 years after that, like, exponentially faster until the singularity comes, right? The timing is right. Yeah, yeah, exactly. So, yeah,
the idea with the global brain was, you know, maybe the AI won't just be in a program on one guy's computer, but the AI will be, you know, in the internet as a whole, with the cooperation of different AI modules living in different places. So one of the issues you face when architecting a system like that is, you know, how is the whole thing controlled? Do you have, like, a centralized control unit that pulls the puppet strings of all the different modules, or do you have a fundamentally decentralized network where the society of AIs is controlled in some democratic and self-organized way by all the AIs in that society? Right. And Francis and I had different views on many things, but we both wanted to make, like, a global society of AI minds with a decentralized organization mode. Now, the main difference was he wanted the individual AIs to be all incredibly simple, and all the intelligence to be on the collective level, whereas I thought that was cool, but I thought a more practical way to do it might be if some of the agents in the society of minds were fairly generally intelligent on their own. So, like, you could have a bunch of OpenCogs out there and a bunch of simpler learning systems, and then these are all cooperating and coordinating together, sort of like in the brain. Okay,
the brain as a whole is the general intelligence, but some parts of the cortex, you could say, have a fair bit of general intelligence on their own, whereas, say, parts of the cerebellum or limbic system have very little general intelligence on their own, and they're contributing to general intelligence, you know, by way of their connectivity to other modules. Do you see instantiations of the same kind of, you know, maybe different versions of OpenCog, but also just the same version of OpenCog, and maybe many instantiations of it, as part of... That's what David Hanson and I want to do with many Sophias and other robots. Yeah, yeah. Each one has its own individual mind living on a server, but there's also a collective intelligence infusing them, and a part of the mind living on the edge in each robot. Right, yeah. So the thing is,
at that time, as well as Webmind being implemented in Java 1.1 as, like, a massive distributed system, yeah, you know, blockchain wasn't there yet, so how do you do this decentralized control? You know, we sort of knew about distributed systems, we knew about encryption, so, I mean, we had the key principles of what underlies blockchain now, but, I mean, we didn't put it together in the way it's been done now. So when Vitalik Buterin and colleagues came out with the Ethereum blockchain, you know, many years later, like 2013 or something, then I was like, well, this is interesting. Like, this Solidity scripting language, it's kind of dorky in a way, and I don't see why you need a Turing-complete language for this purpose, but on the other hand, this is like the first time I could sit down and start to script infrastructure for decentralized control of the AIs in a society of minds in a tractable way. Like, you could hack the Bitcoin codebase, but it's really annoying, whereas Solidity, Ethereum's scripting language, is just nicer and easier to use. I'm very annoyed with it by this point, but, like Java, I mean, these languages are amazing when they first come out. So
then I came up with the idea that turned into SingularityNET: okay, let's make a decentralized agent system where a bunch of different AIs, you know, wrapped up in, say, different Docker containers or LXC containers, different AIs can each have their own identity on the blockchain, and the coordination of this community of AIs has no central controller, no dictator, right? And there's no central repository of information. The coordination of the society of minds is done entirely by the decentralized network, in a decentralized way, by the algorithms, right? Because, you know, the model of Bitcoin is "in math we trust," right? And that's what you need: you need the society of minds to trust only in math, not in one centralized server. So the AI systems
themselves are outside of the blockchain, but then the communication... At the moment, yeah. I would have loved to put the AIs' operations on chain in some sense, but in Ethereum it's just too slow; you can't do it. So it's the basic communication between AI systems that's... Yeah, so basically, in SingularityNET, an AI is just some software process living in a container, and there's input and output. There's a proxy that lives in that container along with the AI, that handles the interaction with the rest of SingularityNET, and then when one AI wants to communicate with another one in the network, they set up a number of channels, and the setup of those channels uses the Ethereum blockchain. But once the channels are set up, then data flows along those channels without having to be on the blockchain. All that goes on the blockchain is the fact that some data went along that channel. So you can... So there's not a shared
knowledge... Well, the identity of each agent is on the blockchain, on the Ethereum blockchain. If one agent rates the reputation of another agent, that goes on the blockchain, and agents can publish what APIs they will fulfill on the blockchain, but the actual data for the AI, and the results of the AI, are not on the blockchain. Do you think it could be? Do you think it should be? In some cases it should be; in some cases maybe it shouldn't be. But, I mean, I think that, so, I'll give you an example: using Ethereum you can't do it. Now there are more modern and faster blockchains where you could start to do that in some cases; two years ago that was less so. It's a very rapidly evolving ecosystem.
So, like, one example maybe you can comment on: something I worked a lot on is autonomous vehicles. You can see each individual vehicle as an AI system, and you can see vehicles from, say, Tesla, for example, and then Ford and GM, and all of these have, like, a larger... I mean, they're all running the same kind of system on each set of vehicles. So it's individual AI systems in individual vehicles, but it's all different instantiations of the same AI system within the same company. So, you know, you can envision a situation where all of those AI systems are put on SingularityNET, right? Yeah. And how do you see that happening, and what would be the benefit, and could they share data? I guess one of the biggest things is that the power there is in decentralized control, but the benefit would be, it's really nice if they can somehow share the knowledge in an open way, if they choose to. Yeah, those are
all quite good points. So I think the benefit from being on the decentralized network, as we envision it, is that we want the AIs in the network to be outsourcing work to each other and making API calls to each other frequently. I got you. So the real benefit would be if that AI wanted to outsource some cognitive processing or data processing or data pre-processing, whatever, to some other AIs in the network which specialize in something different. And this really requires a different way of thinking about AI software development, right? So just like object-oriented programming was different than imperative programming, and now object-oriented programmers all use these frameworks to do things rather than just libraries, even, you know, shifting to agent-based programming, where your AI agent is asking other, like, live, real-time, evolving agents for feedback on what they're doing, that's a different way of thinking. I mean, it's not a new one; there were loads of papers on agent-based programming in the '80s and onward, but if you're willing to shift to an agent-based model of development, then you can put less and less in your AI and rely more and more on interactive calls to other AIs
running in the network. And of course that's not fully manifested yet, because although we've rolled out a nice working version of the SingularityNET platform, there are only 50 to 100 AIs running in there now; there are not tens of thousands of AIs, so we don't have the critical mass for the whole society of mind to be doing what we want. The magic really happens when it's just a huge number of agents. Yeah, yeah.
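The agent-based model Ben describes, where an AI keeps little logic of its own and outsources subtasks to peer agents it discovers in the network, might be sketched roughly like this. All names here are hypothetical illustrations of the pattern, not the real SingularityNET API:

```python
# Rough sketch of agent-based development as described above: an agent
# outsources subtasks to peer agents found in a shared registry, rather
# than implementing everything itself. Names are hypothetical.

class Agent:
    """An AI process that advertises services and can call other agents."""
    registry = {}  # service name -> providing agent (stand-in for p2p discovery)

    def __init__(self, name, services):
        self.name = name
        self.services = services  # maps service name -> callable
        for svc in services:
            Agent.registry[svc] = self

    def call(self, service, payload):
        """Outsource a subtask to whichever peer advertises the service."""
        provider = Agent.registry[service]
        return provider.services[service](payload)


# A simple specialist agent that only knows how to tokenize text.
tokenizer = Agent("tokenizer", {"tokenize": lambda text: text.lower().split()})

# A second agent that relies on the tokenizer peer instead of doing it locally.
def word_count(text):
    tokens = Agent.registry["tokenize"].call("tokenize", text)
    return len(tokens)

counter = Agent("counter", {"word_count": word_count})

print(counter.call("word_count", "agents outsourcing work to agents"))  # 5
```

In the network Ben describes, the registry lookup would instead be peer-to-peer discovery with identities on the blockchain, and each `call` would be a paid API request over a negotiated channel.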
Exactly. In terms of data, we're partnering closely with another blockchain project called Ocean Protocol, and Ocean Protocol, that's the project of Trent McConaghy, who developed BigchainDB, which is a blockchain-based database. So Ocean Protocol is basically blockchain-based big data, and aims at making it efficient for different AI processes or statistical processes or whatever to share large data sets, or one process can send a clone of itself to work on the other guy's data set and send results back, and so forth. So, you have a data lake, so this is the data ocean, right? So by getting Ocean and SingularityNET to interoperate, we're aiming to take account of the big data aspect also. But it's quite challenging, because to build this whole decentralized blockchain-based infrastructure, I mean, your competitors are, like, Google, Microsoft, Alibaba, and Amazon, which have so much money to put behind their centralized infrastructures; plus they're solving simpler algorithmic problems, because making it centralized in some ways is easier, right? So there are very
major computer science challenges. And I think what you saw with the whole ICO boom in the blockchain and cryptocurrency world is a lot of young hackers who were hacking Bitcoin or Ethereum, and they see, well, why don't we make this decentralized on blockchain? Then after they raise some money through an ICO, they realize how hard it is. It's like, actually, we're wrestling with incredibly hard computer science and software engineering and distributed systems problems, which can be solved, but they're just very difficult to solve, and in some cases the individuals who started those projects were not well equipped to actually solve the problems that they wanted to. So, would you say that's the main bottleneck, if you look at the future of currency? You know, with currency the main bottleneck is politics. Like, it's governments and the bands of armed thugs that will shoot you if you bypass their currency restrictions. That's
right. So, like, your sense is that, versus the technical challenges? Because you kind of just suggested the technical challenges are quite high as well. I mean, for making a distributed money, you could do that on Algorand right now. So while Ethereum is too slow, there's Algorand and there are a few other more modern, more scalable blockchains that would work fine for a decentralized global currency, right? So I think there were technical bottlenecks to that two years ago, and maybe Ethereum 2.0 will be as fast as Algorand; I don't know, that's not fully written yet, right? So I think the obstacle to currency being put on the blockchain is that the currency will be on the blockchain; it'll just be on the blockchain in a way that enforces centralized control and government hegemony rather than otherwise. Like, the e-RMB will probably be the first global currency on the blockchain, the e-ruble maybe next. There's already an e-ruble? Yeah, yeah. I mean, that's hilarious. Digital currency, you know, makes total sense, but they would rather do it in a way that Putin and Xi Jinping have access to the global keys for everything, right?
Then, so, the analogy to that in terms of SingularityNET... I mean, there's echoes of, I think you've mentioned before, that Linux gives you hope. And AI is not as heavily regulated as money, right? Not yet, right? Not yet. Oh, it's a lot slipperier than money, too, right? I mean, money is easier to regulate because it's kind of easier to define, whereas AI, it's almost everywhere inside everything. Where's the boundary between AI and software, right? I mean, if you're going to regulate AI, there's no IQ test for every hardware device that has a learning algorithm; you're going to be putting, like, draconian regulation on all software. And I don't rule that out... Yeah, but how do you tell if software is adaptive? And eventually every software is going to be adaptive, I mean. Or maybe, you know, maybe we're living in the golden age of open source; that will not always be the case. Maybe it'll become centralized control of software by governments. It is entirely possible, and part of what I think we're doing with things like the SingularityNET protocol is creating a toolset that can be used to counteract that
sort of thing. You could say a similar thing about mesh networking, right? It plays a minor role now, the ability to access the internet, like, directly phone to phone. On the other hand, if your government starts trying to control your use of the internet, suddenly having mesh networking there can be very convenient, right? And so, right now, something like a decentralized blockchain-based AGI framework, or narrow AI framework, it's cool, it's nice to have. On the other hand, if governments start trying to clamp down on my AI interoperating with someone's AI in Russia or somewhere, right, then suddenly having a decentralized protocol that nobody owns or controls becomes an extremely valuable part of the toolset. And, you know, we've put that out there now. It's not perfect, but it operates, and, you know, it's pretty blockchain-agnostic. So we're talking to Algorand about making part of SingularityNET run on Algorand. My good friend Toufi Saliba has a cool blockchain project called TODA, which is a blockchain without a distributed ledger; it's like a whole other architecture. So there are a lot of more advanced things you can do in the blockchain world. SingularityNET could be made multi-chain and ported to a whole bunch of different blockchains, and there's a lot of potential and a lot of importance in putting this kind of toolset out there.
If you compare to OpenCog, what you could see is OpenCog allows tight integration of a few AI algorithms that share the same knowledge store in real time, in RAM, right? SingularityNET allows loose integration of multiple different AIs. They can share knowledge, but they're mostly not going to be sharing knowledge in RAM on the same machine. And I think what we're going to have is a network of networks of networks, right? Like, I mean, you have the knowledge graph inside the OpenCog system, and then you have a network of machines inside a distributed OpenCog mind, but then that OpenCog will interface with other AIs doing deep neural nets or custom biology data analysis or whatever they're doing, in SingularityNET, which is a looser integration of different AIs, some of which may be their own networks, right? And I think, as a very loose analogy, you could see that in the human body: like, the brain has regions, like cortex or hippocampus, which tightly interconnect, like cortical columns within the cortex, for example; then there's looser connection within the different lobes of the brain; and then the brain interconnects with the endocrine system and different parts of the body even more loosely; then your body interacts even more loosely with the other people that you talk to. So you often have networks within networks within networks, with progressively looser coupling as you get higher up in that hierarchy. I mean, you have that in biology, you have that in the internet as just a networking medium, and I think that's what we're going to have in the network of software processes leading to AGI.
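The OpenCog-versus-SingularityNET contrast here, tight integration through a shared in-RAM knowledge store versus loose integration through serialized messages between separate processes, might be illustrated like this. This is hypothetical toy code, not either project's real API:

```python
# Illustrative contrast (hypothetical names): "tight integration" as in the
# OpenCog description, where algorithms share one knowledge store in RAM, vs.
# "loose integration" as in SingularityNET, where separate AIs exchange
# serialized messages across a process/network boundary.

import json

# --- Tight integration: two algorithms mutate the same in-memory store. ---
atomspace = {}  # shared knowledge store (stand-in for OpenCog's AtomSpace)

def perception_algorithm():
    atomspace["sky"] = {"color": "blue"}  # writes directly into shared RAM

def reasoning_algorithm():
    return atomspace["sky"]["color"]      # reads the same structure, no copying

perception_algorithm()
print(reasoning_algorithm())              # blue

# --- Loose integration: each AI owns its state; knowledge crosses as messages. ---
class ServiceAI:
    def __init__(self):
        self._knowledge = {"sky": {"color": "blue"}}  # private to this process

    def handle(self, request_json):
        request = json.loads(request_json)          # messages are serialized,
        answer = self._knowledge[request["about"]]  # never shared memory
        return json.dumps(answer)

client_view = json.loads(ServiceAI().handle(json.dumps({"about": "sky"})))
print(client_view["color"])               # blue
```

The design trade-off mirrors the transcript: the tight form is fast and lets algorithms cooperate on one representation, while the loose form lets independently built AIs interoperate across machines and owners.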
That's a beautiful way to see the world. Again, a similar question as with OpenCog: if somebody wanted to build an AI system and plug into SingularityNET, what would you recommend? So, that's much easier. I mean, OpenCog is still a research system, so it takes some expertise, and some time; we have tutorials, but it's somewhat cognitively labor-intensive to get up to speed on OpenCog. And, I mean, one of the things we hope to change with the true AGI OpenCog 2.0 version is to make the learning curve more similar to TensorFlow or Torch or something, because right now OpenCog is amazingly powerful but not simple to deal with. On the other hand, SingularityNET, you know, as an open platform, was developed a little more with usability in mind, although the blockchain is still kind of a pain. So, I mean, if
you're a command-line guy, there's a command-line interface, and it's quite easy to, you know, take an AI that has an API and lives in a Docker container, and put it online anywhere, and then it joins the global SingularityNET. And anyone who puts a request for services out into SingularityNET, the peer-to-peer discovery mechanism will find your AI, and if it does what was asked, it can then start a conversation with your AI about whether it wants to ask your AI to do something for it, how much it would cost, and so on. So that's fairly simple. If you wrote an AI and want it listed on, like, the official SingularityNET marketplace, which is on our website, then we have a publisher portal, and then there's a KYC process to go through, because then we have some legal liability for what goes on on that website. So in a way, that's been an education, too. There are sort of two layers: there's the open decentralized protocol, and there's the marketplace. Yeah,
anyone can use the open decentralized protocol. So, say, some developers from Iran, and there are brilliant guys at the University of Isfahan and in Tehran, they can put their stuff on the SingularityNET protocol, just like they can put something on the internet, right? I don't control it. But if we're going to list something on the SingularityNET marketplace and put a little picture and a link to it, then if I put some Iranian AI genius's code on there, Donald Trump can send a bunch of jackbooted thugs to my house to arrest me for doing business with Iran, right? So, I mean, we already see in some ways the value of having a decentralized protocol, because what I hope is that someone in Iran will put online an Iranian SingularityNET marketplace, right, which you can pay in a cryptographic token which is not owned by any country. And then if you're in, like, Congo or somewhere that doesn't have any problem with Iran, you can subcontract AI services that you find on that marketplace, right, even though US citizens can't, by US law. So right now that's kind of a minor point; you know, as you alluded, if regulations go in the wrong direction it could become more of a major point. But I think it also is the case that having these workarounds to regulations in place is a defense mechanism against those regulations being put into place, and you could see that in the music industry, right? I mean, Napster just happened, and BitTorrent just happened, and now most people in my kids' generation, they're baffled by the idea of paying for music, right?
I mean, my dad pays for music, but, I mean, yeah. But because these decentralized mechanisms happened, and then the regulations followed, right, and the regulations would be very different if they'd been put into place before there was Napster and BitTorrent and so forth. So in the same way, we've got to put AI out there in a decentralized vein, and big data out there in a decentralized vein, now, so that the most advanced AI in the world is fundamentally decentralized. And if that's the case, that's just the reality the regulators have to deal with, and then, as in the music case, they're going to come up with regulations that sort of work with the decentralized reality. Beautiful. You were the chief
scientist of Hanson Robotics; you're still involved with Hanson Robotics, doing a lot of really interesting stuff there. For people who don't know, this is the company that created Sophia the Robot. Can you tell me who Sophia is? I'd rather start by telling you who David Hanson is, because David is the brilliant mind behind the Sophia robot, and so far he remains more interesting than his creation, although she may be improving faster than he is, actually. Yeah, that's a good point. I met David maybe in 2007 or something, at some futurist conference we were both speaking at, and I could see we had a great deal in common. I mean, we're both kind of crazy, but we also both had a passion for AGI and the singularity, and we were both huge fans of the work of Philip K. Dick, the science fiction writer. And I wanted to create benevolent AGI that would, you know, create massively better life for all humans and all sentient beings, including animals, plants, and superhuman beings. And David, he wanted exactly the
same thing, but he had a different idea of how to do it. He wanted to get computational compassion, like, he wanted to get machines that would love people and empathize with people, and he thought the way to do that was to make a machine that could, you know, look people eye to eye, face to face, look at people and make people love the machine, and the machine loves the people back. So I thought that was a very different way of looking at it, because I'm very math-oriented, and I'm just thinking, like, what is the abstract cognitive algorithm that will let the system, you know, internalize the complex patterns of human values, blah blah blah, whereas he's, like, look you in the face, in the eye, and love you, right? So we hit it off quite well, and we talked to each other off and on. Then I moved to Hong Kong in 2011. So I'd been living all over the place: I'd been in Australia and New Zealand in my academic career, then in Las Vegas for a while, was in New York in the late '90s starting my entrepreneurial career, was in DC for nine years doing a bunch of US government consulting stuff, then moved to Hong Kong in 2011, mostly because I met a Chinese girl who I fell in love with. We got married. She's actually not from Hong Kong, she's from mainland China, but we converged together in Hong Kong. Still married now, have a two-year-old baby. So you went to Hong Kong to see about a girl, I guess? Pretty much,
yeah. And on the other hand, I started doing some cool research there with Gino Yu at Hong Kong Polytechnic University. I got involved with a project called IDEA, using machine learning for stock and futures prediction, which was quite interesting, and I also got to know something about the consumer electronics and hardware manufacturing ecosystem in Shenzhen, across the border, which is, like, the only place in the world where it makes sense to make complex consumer electronics at large scale and low cost. It's just astounding, the hardware ecosystem that you have in South China; like, you US people here cannot imagine what it's like.
So David was starting to explore that also. I invited him to Hong Kong to give a talk at Hong Kong PolyU, and I introduced him in Hong Kong to some investors who were interested in his robots. He didn't have Sophia then; he had a robot of Philip K. Dick, our favorite science fiction writer, he had a robot Einstein, he had some little toy robots that looked like his son Zeno. So through the investors I connected him to, he managed to get some funding to basically port Hanson Robotics to Hong Kong. And when he first moved to Hong Kong, I was working on AGI research and also on this machine learning trading project, so I didn't get that tightly involved with Hanson Robotics. But as I hung out with David more and more, as we were both there in the same place, I started to think about what you could do to make his robots smarter than they were, and so we started working together, and for a few years I was chief scientist and head of software at Hanson Robotics. Then, when I got deeply into the blockchain side of things, I stepped back from that and co-founded SingularityNET. David Hanson was also one of the co-founders of SingularityNET. So part of our goal there had been to make the blockchain-based, like, cloud mind platform for Sophia, and the other... So Sophia would be just one of the robots in this SingularityNET? Yeah,
yeah EXA ex exactly Sophia many copies
of the Sophia robot would would would be
you know among the user interfaces to
the globally distributed singular net
Cloud mind and I mean David and I talked
about that for quite a while before
co-founding SingularityNET.

By the way, in his vision and your vision, was Sophia tightly coupled to a particular AI system, or was the idea that you could just keep plugging in different AI systems within the...

I think David's view was always that Sophia would be a platform, much like, say, the Pepper robot is a platform from SoftBank: a platform with a set of nicely designed APIs that anyone can use to experiment with their different AI algorithms on that platform. And SingularityNET of course fits right into that, because SingularityNET is an API marketplace, so anyone can put their AI on there.
OpenCog is a little bit different. I mean, David likes it, but I'd say it's my thing, not his. David has a little more passion for biologically based approaches to AI than I do, which makes sense: he's really into human physiology and biology, he's a character sculptor, right? But he also worked a lot with rule-based and logic-based AI systems too. So he's interested in not just Sophia but all the Hanson robots as a powerful social and emotional robotics platform. And what I saw in Sophia was a way to get AI algorithms out there in front of a whole lot of different people in an emotionally compelling way. Part of my thought was really kind of abstract, connected to AGI ethics. You know, many people are concerned AGI is going to enslave everybody, or turn everybody into computronium to make extra hard drives for its cognitive engine or whatever. Emotionally, I'm not driven to that sort of paranoia; I'm really just an optimist by nature. But intellectually, I have to assign a nonzero probability to those sorts of nasty outcomes, because if you're making something ten times as smart as you, how can you know what it's going to do? There's an irreducible uncertainty there, just as my dog can't predict what I'm going to do tomorrow. So it seemed to me that, based
on our current state of knowledge, the best way to bias the AGIs we create toward benevolence would be to infuse them with love and compassion, the way we do our own children. You want to interact with AIs in the context of doing compassionate, loving, and beneficial things, and in that way, as your children will learn by doing compassionate, beneficial, loving things alongside you, the AI will learn in practice what it means to be compassionate, beneficial, and loving. It will get a sort of ingrained, intuitive sense of this, which it can then abstract in its own way as it gets more and more intelligent. Now, David saw this the same way; that's why he came up with the name Sophia, which means wisdom. So it seemed to me that making these beautiful, loving robots to be rolled out for beneficial applications would be the perfect way to roll out early-stage AGI systems, so they can learn from people, and not just learn factual knowledge but learn human values and ethics from people, while being their home service robots, their education assistants, their nursing robots. That was the grand vision. Now, if you've
ever worked with robots, the reality is quite different, right? Like, the first principle is: the robot is always broken.

I mean, I worked with robots a bunch in the '90s, when you had to solder them together yourself, and I'd put neural nets doing reinforcement learning on overturned-salad-bowl-type robots in the '90s when I was a professor. Things have of course advanced a lot, but the principle of the robot always being broken still holds.

Yeah. So, faced
with the reality of making Sophia do stuff, many of my robo-AGI aspirations were temporarily cast aside. There's just a practical problem of making this robot interact in a meaningful way, because, you know, you put nice computer vision on there, but there's always glare; or you have a dialogue system, but at the time I was there, no speech-to-text algorithm could deal with Hong Kong people's English accents. So the speech-to-text was always bad, so the robot always sounded stupid, because it wasn't getting the right text. So I
started to view that really as what in software engineering you call a walking skeleton, which is maybe the wrong metaphor to use for Sophia, or maybe the right one. What a walking skeleton is in software development: if you're building a complex system, how do you get started? One way is to first build part one well, then build part two well, then build part three well, and so on. Another way is to make a simple version of the whole system and put something in the place of every part the whole system will need, so that you have a whole system that does something, and then you work on improving each part in the context of that whole integrated system.
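The walking-skeleton idea described here can be sketched in a few lines. This is a generic illustration of the pattern, not Sophia's actual architecture; the component names are illustrative placeholders:

```python
# A minimal "walking skeleton": a trivial stand-in for every part the full
# system will need, wired together end to end so the whole thing runs, and
# each part can later be improved inside that working whole.

class Stub:
    """Trivial placeholder for one subsystem (vision, hearing, memory, ...)."""
    def __init__(self, name):
        self.name = name

    def process(self, signal):
        # Placeholder behavior: just record that the signal passed through.
        return f"{signal} -> {self.name}"

class WalkingSkeleton:
    def __init__(self):
        # One simple version of each part the whole system will need.
        self.parts = [Stub("sees"), Stub("hears"), Stub("remembers"),
                      Stub("learns"), Stub("moves")]

    def run(self, signal):
        # End-to-end pipeline: every part touches the signal in turn.
        for part in self.parts:
            signal = part.process(signal)
        return signal

print(WalkingSkeleton().run("input"))
# -> input -> sees -> hears -> remembers -> learns -> moves
```

The point of the pattern is that each `Stub` can be replaced by a real subsystem one at a time, while the integrated system keeps running throughout.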
That's what we did on a software level in Sophia. We made a walking-skeleton software system: there's something that sees, something that hears, something that moves, something that remembers, something that learns. You put a simple version of each thing in there and you connect them all together, so that the system will do its thing. So there's a lot of AI in there; there's not any AGI in there. I mean, there's computer vision to recognize people's faces, recognize when someone comes in the room and leaves, try to recognize whether two people are together or not. The dialogue system is a mix of hand-coded rules with deep neural nets that come up with their own responses. And there's some attempt to have a narrative structure, to sort of pull the conversation into something with a beginning, middle, and end, this sort of story arc.

I mean, if you look at the Loebner Prize and the systems that win that version of the Turing test, currently they're heavily rule-based, because, like you had said, narrative structure, creating compelling conversations, neural networks currently cannot do that well. Even with Google Meena, when you actually look at full-scale conversations,
it's just...

Yeah, this is the thing. So I've actually been running an experiment the last couple of weeks, taking Sophia's chatbot and Facebook's Transformer chatbot, which they opened the model for, and we've had them chatting to each other for a number of weeks on a server.

That's funny.

We're generating training data of what Sophia says in a wide variety of conversations. And we can see that, compared to Sophia's current chatbot, the Facebook deep neural chatbot comes up with a wider variety of fluent-sounding sentences; on the other hand, it rambles like mad. The Sophia chatbot is a little more repetitive in the sentence structures it uses; on the other hand, it's able to keep a conversation arc over a much longer period. Now, you can probably surmount that using Reformer, and using various deep neural architectures to improve the way these Transformer models are trained,
but in the end, neither one of them really understands what's going on. And that's the challenge I had with Sophia: if I were doing a robotics project aimed at AGI, I would want to make a robo-toddler that was just learning about what it was seeing, because then the language is grounded in the experience of the robot. But what Sophia needs to do to be Sophia is talk about sports or the weather or robotics or the conference she's talking at. She needs to be fluent talking about any damn thing in the world, and she doesn't have grounding for all those things. And Google Meena and Facebook's chatbot don't have grounding for what they're talking about either. So in a way, the need to speak fluently about things where there's no non-linguistic grounding pushes what you can do for Sophia in the short term a bit away from AGI.

I mean, it pushes you toward an IBM Watson situation, where you basically have to do heuristic and hard-coded stuff, rule-based stuff.

I have to ask you
about this, okay? Because, in part, Sophia is an art creation, because it's beautiful. She's beautiful because she inspires, through our human nature of anthropomorphizing things; we immediately see an intelligent being there.

Because David is a great sculptor.

That's right. In fact, if Sophia had nothing inside her head, said nothing, if she just sat there, we'd already ascribe some intelligence to her.

There's a long selfie line in front of her after every talk.

That's right. So she captivated the imagination of many people. I was going to say the world...

Yeah, I mean, a lot of people.

Billions of people, which is amazing, right? Now, of course, many people have ascribed essentially AGI-type capabilities to Sophia when they see her, and of course, friendly French folk like Yann LeCun, the people from the AI community, immediately see through that and get really frustrated, which is understandable. And then they criticize people like you, who sit back and don't say anything about it, and basically allow the imagination of the world, allow the world, to continue being captivated. So what's your sense of that kind of annoyance that the AI community has?

Well, I
think there are several parts to my reaction there. First of all, if I weren't involved with Hanson Robotics and didn't know David Hanson personally, I probably would have been very annoyed initially at Sophia as well. I can understand the reaction: I would have been like, wait, all these stupid people out there think this is an AGI, but it's not an AGI, and they're tricking people that this very cool robot is an AGI, and now those of us trying to raise funding to build AGI, people will think it's already there and already works, right? On the other hand, I think even if I weren't directly involved with it, once I dug a little deeper into David and the robot and the intentions behind it, I think I would have stopped being pissed off, whereas folks like Yann LeCun have remained pissed off after their initial reaction.

That's his thing.

Yeah. I
think that in particular struck me as somewhat ironic, because Yann LeCun is working for Facebook, which is using machine learning to program the brains of the people in the world toward vapid consumerism and political extremism. So if your ethics allows you to use machine learning in such a blatantly destructive way, why would your ethics not allow you to use machine learning to make a lovable theatrical robot that draws some foolish people into its theatrical illusion? If the pushback had come from Yoshua Bengio, I would have felt much more humbled by it, because he's not using AI for blatant evil. On the other hand, he's also a super nice guy who doesn't bother to go out there trashing other people's work for no good reason.

So, shots fired. But I get you.

I mean, if you're going to ask, I'm going to answer.

No, for sure. I think we'll go back and forth; I'll talk to Yann again.

I would add on this, though: I
mean, David Hanson is an artist, and he often speaks off the cuff. I have not agreed with everything that David has said or done regarding Sophia, and David also would not agree with everything David has said or done about Sophia.

That's an important point.

I mean, David is an artistic wild man, and that's part of his charm, that's part of his genius. So certainly there have been conversations within Hanson Robotics, and between me and David, where I was like, let's be more open about how this thing is working. And I did have some influence in nudging Hanson Robotics to be more open about how Sophia was working, and David wasn't especially opposed to this. And, you know, he was actually quite right about it. What he said was: you can tell people exactly how it's working, and they won't care; they want to be drawn into the illusion. And he was
100% correct. I'll tell you what: this wasn't Sophia, this was Philip K. Dick, but we did some interactions between humans and the Philip K. Dick robot in Austin, Texas a few years back. In this case, the Philip K. Dick robot was just teleoperated by another human in the other room. During the conversations, we didn't tell people the robot was teleoperated; we just said, here, have a conversation with Phil Dick, we're going to film you. And they had a great conversation with Philip K. Dick, teleoperated by my friend Stephan Bugaj. After the conversation, we brought the people into the back room to see Stephan, who was controlling the Philip K. Dick robot, but they didn't believe it. These people were like, well, yeah, but I know I was talking to Phil; maybe Stephan was typing, but the spirit of Phil was animating his mind while he was typing. So even though they knew there was a human in the loop, even seeing the guy there, they still believed it was Phil they were talking to.

A small
part of me believes that they were right, actually, because, well, we don't understand the universe, right? I mean, there is a cosmic mind field that we're all embedded in, which yields many strange synchronicities in the world. But that's a topic we don't have time to go into too much here.

I mean,
there's something to this, where our imagination about Sophia, and people like Yann LeCun being frustrated about it, is all part of this beautiful dance of creating artificial intelligence, and that's almost essential. You see it with Boston Dynamics, who I'm a huge fan of as well. I mean, these robots are very far from intelligent.

I played with their latest one, actually, the Spot Mini.

Yeah, very cool. I mean, it reacts in quite a fluid and flexible way. But we immediately ascribe the kind of intelligence... we immediately ascribe AGI to them.

Yeah. If you kick it and it falls down and goes "ow," you feel bad, right? You can't help it.

And that's going to be part of our journey in creating intelligent systems: more and more, as Sophia starts out with a walking skeleton and you add more and more intelligence, we're going to have to deal with this kind of idea.
Absolutely. And about Sophia, I would say, first of all, I have nothing against Yann LeCun; he seems like a nice guy. If he wants to play the media banter game, I'm happy to. He's a good researcher and a good human being, and I'd happily work with the guy. But the other thing I was going to say is, I have been explicit about how Sophia works. I posted, in H+ Magazine, an online webzine, a moderately detailed article explaining that there are three software systems we've used inside Sophia. There's a timeline editor, which is like a rule-based authoring system where she's really just being an outlet for what a human scripted. There's a chatbot, which has some rule-based and some neural aspects. And then sometimes we've used OpenCog behind Sophia, where there's more learning and reasoning. And the funny thing is, I can't always tell which system is operating, whether she's really learning or thinking or just appears to be. Over half an hour, I could tell, but over three or four minutes of interaction, I couldn't.

So even having three systems, that's already sufficiently complex where you can't really tell right away.

Yeah. The thing is,
really tell right away yeah the thing is
even if you get up on stage and tell
people how Sophia is working and then
they talk to her they still attribute
more agency and Consciousness to her
than than is is is really there so I
think there's there's a couple levels of
ethical issue there one issue is should
you be transparent about how Sophia is
is working and I think you should and
and I think I think we we have been I
mean I mean it's there's articles online
that there's some TV special that goes
through me explaining the three
subsystems behind Sophia so the way
Sophia works is is out there much more
clearly than how Facebook say I works or
something right I mean we've been fairly
explicit about it the other is
given that telling people how it works
doesn't cause them to not attribute too
much intelligence agency to it anyway
then then should you keep fooling them
when they want to be fooled and I mean
the you know the whole media industry is
based on fooling people the way they
want to be fooled and we we are fooling
people 100% toward a good end I mean I
mean we are we are playing on people's
of empathy and compassion so that we can
give them a good user experience with
helpful robots and so that we can we can
fill the ai's mind with love and
compassion so I've been I've been
talking a lot with Hanson Robotics lately about collaborations in the area of medical robotics. We haven't quite pulled the trigger on a project in that domain yet, but we may well do so quite soon. We've been talking a lot about how robots can help with elder care, and robots can help with kids; David's done a lot of things with autism therapy and robots before. In the COVID era, having a robot that can be a nursing assistant in various senses can be quite valuable. The robots don't spread infection, and they can also deliver more attention than human nurses can give. So if you have a robot that's helping a patient with COVID, and that patient attributes more understanding and compassion and agency to that robot than it really has, because it looks like a human, is that really bad? We can tell them it doesn't fully understand you, and they don't care, because they're lying there with a fever and they're sick; they'll react better to that robot with its loving, warm facial expression than they would to a Pepper robot or a metallic-looking robot. So it's really about how you use it, right? If you made a human-looking, door-to-door sales robot that used its human-looking appearance to scam people out of their money, then you're using that connection in a bad way. But you could also use it in a good way, and that's the same problem with every technology, right?

Beautifully put. So, like
you said, we're living in the era of COVID. This is 2020, one of the craziest years in recent history. So if we zoom out and look at this pandemic, the coronavirus pandemic, maybe let me ask you this kind of thing about viruses in general: when you look at viruses, do you see them as a kind of intelligent system?

I think the concept of intelligence is not that natural of a concept, in the end. I think human minds and bodies are a kind of complex, self-organizing, adaptive system, and viruses certainly are that; they're a very complex, self-organizing, adaptive system. If you want to look at intelligence as Marcus Hutter defines it, as sort of optimizing computable reward functions over computable environments, then for sure viruses are doing that. And in doing
so, they're causing some harm to us. The human immune system is a very complex, self-organizing, adaptive system, which has a lot of intelligence to it, and viruses are also adapting and dividing into new mutant strains and so forth. Ultimately, the solution is going to be nanotechnology, right? I mean, the solution is going to be making little nanobots that fight the viruses. Or, well, people will use them to make nastier viruses too, but hopefully we can also use them to detect, combat, and kill the viruses. For now, though, we're stuck with biological mechanisms to combat these viruses. AGI is not yet mature enough to use against COVID, but we've been using machine learning, and also some machine reasoning in OpenCog, to help some doctors do personalized medicine against COVID. So
the problem there is: given a person's genomics, and given their clinical medical indicators, how do you figure out which combination of antivirals is going to be most effective against COVID for that person? That's something where machine learning is interesting, but we're also finding that the abstraction we get in OpenCog with machine reasoning is interesting, because it can help with transfer learning when you don't have that many different cases to study, and there are qualitative differences between different strains of a virus, or between people of different ages who may have COVID. So there's a lot of disparate data to work with, in small datasets, and somehow you have to integrate it all. You know, this is one of the shameful things: it's very hard to get that data. We're working with a couple of groups doing clinical trials, and they're sharing data with us under non-disclosure. But what should be the
case is that every COVID clinical trial should be putting data online somewhere, suitably encrypted to protect patient privacy, so that anyone with the right AI algorithms can help analyze it, and any biologist can analyze it by hand to understand what they can. Instead, that data is siloed inside whatever hospital is running the clinical trial, which is completely asinine and ridiculous. Why does the world work that way? I mean, we could all analyze why, but it's insane that it does. Look at hydroxychloroquine, right? All these clinical trials were reported by Surgisphere, some little company no one had ever heard of, and everyone paid attention to this. So they were doing more clinical trials based on that; then they stopped doing clinical trials based on that; then they started again. Why isn't that data just out there, so everyone can analyze it and see what's going on?

You hope that
that data will be out there eventually, for future pandemics? I mean, do you have hope that our society will move in that direction?

Not in the immediate future, because US and China frictions are getting very high, so it's hard to see the US and China moving in the direction of openly sharing data with each other. There's some sharing of data, but different groups are keeping their data private until they've milked the best results from it, and then they share it. So, yeah, we're working with
some data that we've managed to get our hands on. It's something we're doing to do good for the world, and it's also a very cool playground for putting deep neural nets and OpenCog together. So we have a bio AtomSpace full of all sorts of knowledge from many different biology experiments about human longevity, and from biology knowledge bases online, and we can do graph-to-vector type embeddings, where we take nodes from the hypergraph and embed them into vectors, which can then feed into neural nets for different types of analysis. We were doing this in the context of a project called Rejuve, which we spun off from SingularityNET to do longevity analytics, like understanding why some people live to 105 years or over and other people don't. And then we have this spinoff Singularity Studio, where we're working with some healthcare companies on data analytics. So this bio AtomSpace we built for those more commercial and longevity data-analysis purposes, we're repurposing: we're feeding COVID data into the same bio AtomSpace and playing around with graph embeddings from that graph into neural nets for bioinformatics. So it's both
a cool testing ground for some of our bio-AI learning and reasoning, and, it seems, a way to discover things that people weren't seeing otherwise. Because the thing is, in this case, for each combination of antivirals you may have only a few patients who've tried that combination, and those few patients may have their own particular characteristics: this combination of three was tried only on people aged 80 or over; this other combination of three, which has an overlap with the first combination, was tried more on young people. So how do you combine those different pieces of data? It's a very dodgy transfer-learning problem, which is the kind of thing that the probabilistic reasoning algorithms we have inside OpenCog are better at than deep neural networks. On
the other hand, you have gene expression data, where you have 25,000 genes and the expression level of each gene in the peripheral blood of each person. That sort of data, either deep neural nets or XGBoost or CatBoost, these decision-forest methods, are better at dealing with than OpenCog, because it's just huge, messy, floating-point vectors that are annoying for a logic engine to deal with but are perfect for a decision forest or a neural net. So it's a great playground for hybrid AI methodology: we can have SingularityNET with OpenCog in one agent and XGBoost in a different agent, and they talk to each other. But at the same time, it's highly practical, because we're working with, for example, some physicians on this project, in a group called Nth Opinion, based out of Vancouver and Seattle, who are working every day in the hospital with patients dying of COVID. So it's quite cool to see neural-symbolic AI where the rubber hits the road, trying to save people's lives.
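The point about tree methods suiting big floating-point expression vectors can be illustrated with a toy sketch. Real work would use XGBoost or CatBoost on ~25,000-gene vectors; this miniature version, on entirely synthetic data with made-up shapes, just shows the core operation of a tree learner: an axis-aligned threshold search ("is gene g expressed above t?") over every feature, with no logical encoding of the data needed.

```python
import random

# Synthetic stand-in for gene-expression data: each "patient" is a vector
# of floating-point expression levels plus a binary outcome.
random.seed(0)
N_GENES, N_PATIENTS = 200, 300  # real data: ~25,000 genes

def make_patient():
    x = [random.gauss(0.0, 1.0) for _ in range(N_GENES)]
    # The hidden outcome depends on genes 3 and 17 plus noise.
    y = 1 if x[3] + 0.5 * x[17] + random.gauss(0.0, 0.3) > 0 else 0
    return x, y

data = [make_patient() for _ in range(N_PATIENTS)]

def stump_error(gene, thresh):
    # Errors of the stump "predict 1 iff expression above threshold";
    # take the better of the stump and its negation.
    wrong = sum((x[gene] > thresh) != (y == 1) for x, y in data)
    return min(wrong, N_PATIENTS - wrong)

# Greedy search over every gene and a few candidate thresholds,
# which is what a tree learner does at each split.
err, best_gene, best_thresh = min(
    (stump_error(g, t), g, t)
    for g in range(N_GENES)
    for t in (-0.5, 0.0, 0.5)
)
accuracy = 1.0 - err / N_PATIENTS
print(best_gene, best_thresh, round(accuracy, 2))
```

A gradient-boosted forest like XGBoost stacks many such splits into an ensemble, but each split is this same cheap threshold search, which is why dense numeric vectors are a natural fit for it and an awkward one for a logic engine.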
I've been doing bio-AI since 2001, but mostly human longevity research and fly longevity research, trying to understand why some organisms really live a long time. This is the first time it's like a race against the clock, trying to use the AI to figure out stuff where, if we take two months longer to solve the AI problem, some more people will die, because we don't know what combination of antivirals to give them.

Yeah. At the societal level, at the biological level, at any level, are you hopeful about us as a human species getting out of this pandemic? What are your thoughts, in general?

Well, the pandemic will be gone in a year or two, once there's a vaccine for it. So, I mean...

But a lot of pain and suffering can happen in that time, and that could
be irreversible.

I mean, I think if you spend much time in sub-Saharan Africa, you can see there's a lot of pain and suffering happening all the time. You walk through the streets of any large city in sub-Saharan Africa, and there are loads, I mean tens of thousands, probably hundreds of thousands, of people lying by the side of the road, dying, mainly of curable diseases, without food or water, either ostracized by their families or having left their family's house because they didn't want to infect their family. There's tremendous human suffering on the planet all the time, which most folks in the developed world pay no attention to, and COVID is not remotely the worst of it. How many people are dying of malaria all the time? So COVID is bad, but it is by no means the worst thing happening. And setting aside diseases, there are many places in the world where you're at risk of having your teenage son kidnapped by armed militias and forced to get killed in someone else's war, fighting tribe against tribe. So humanity has a lot of problems which we don't need to have, given the state of advancement of our technology right now. And I think COVID is one of the easier problems to solve, in the sense that there are many brilliant people working on vaccines; we have the technology to create vaccines, and we're going to create new vaccines. We should be more worried that we haven't managed to defeat malaria after so long, and after the Gates Foundation and others have put so much money into it.
I mean, I think clearly the whole global medical system, global health system, and the global political and socioeconomic system are incredibly unethical, unequal, and badly designed. And I don't know how to solve that directly. I think what we can do indirectly to solve it is to make systems that operate in parallel and off to the side of the governments that are nominally controlling the world with their armies and militias. To the extent that you can make compassionate, peer-to-peer, decentralized frameworks for doing things, these are things that can start out unregulated, and then, if they get traction before the regulators come in, they've influenced the way the world works. SingularityNET aims to do this with AI. Rejuve, which is a spinoff from SingularityNET, you can see it at rejuve.io...

How do you spell that?

R-E-J-U-V-E: rejuve.io. That aims to do the same thing for
medicine. So it's peer-to-peer sharing of medical data: you can share medical data into a secure data wallet, you can get advice about your health and longevity through apps that Rejuve will launch within the next couple of months, and then SingularityNET AI can analyze all this data, but the benefits from that analysis are spread among all the members of the network. Now, of course I'm going to hawk my particular projects, but whether or not SingularityNET and Rejuve are the answer, I think it's key to create decentralized mechanisms for everything: for AI, for human health, for politics, for jobs and employment, for sharing social information. To the extent that decentralized, peer-to-peer methods designed with universal compassion at the core can gain traction, they will decrease the role that government has, and I think that's much more likely to do good than trying to explicitly reform the global government system. I mean, I'm happy other people are trying to explicitly reform the global government system; on the other hand, look at how much good the internet, or Google, or mobile phones did. You're making something decentralized and throwing it out everywhere, and if it takes hold, then government has to adapt. And I mean, that's what we need to do with AI and with health. And in that
light I mean the
centralization of healthcare and of AI
is is certainly not ideal right like
most AIP phgs are being sucked in by you
know a half dozen dozen big companies
most AI processing power is is being
bought by a few big companies for their
own proprietary good and most medical
research is within a few pharmaceutical
companies and clinical trials run by
pharmaceutical companies will stay solid
within those pharmaceutical companies
you know these large centralized
entities which are intelligences in
themselves these corporations but
they're mostly malevolent Psychopathic
and sociopathic intelligences not saying
the people inv vved but the corporations
as self-organizing entities on their own
which are concerned with maximizing
shareholder value as as as a sole
objective function I mean Ai and
Medicine are being sucked into these
pathological corporate organizations
with government cooperation and Google
cooperating with British and and US
Government on this as one among many
many different examples 23 and me
providing you the nice service of
sequencing your your genome and then
licensing the genome to glos Smith Klein
on an exclusive basis right right now
you can take your own DNA and do
whatever you want with it but the pooled
collection of 23andMe sequenced DNA is
licensed just to GlaxoSmithKline someone
else could reach out to everyone who who
had worked with 23andMe to sequence
their DNA and say give us your DNA for
our our open and decentralized
repository that will make available to
everyone but nobody's doing that because
it's a pain to get organized and the
customer list is proprietary to 23andMe
right so yeah I mean
this this I think is a greater risk to
humanity from AI than Rogue AGI turning
the universe into paper clips or or
computronium because what you have here
is mostly good-hearted and nice people
who are sucked into a mode of
organization right of large corporations
which has evolved just for no
individual's fault just because that's
the way society has evolved it's not
altruistic it gets self-interested and
becomes psychopathic like you said
Corporation is psychopathic even if the
people are not and that exactly that's
really the disturbing thing about it
because the corporations can do things
that are quite bad for society even if
nobody has a bad
intention right and then no individual
member of that Corporation has a bad
intention no some probably do but they
don't but but it's not necessary that
they do for the for the corporation like
I mean Google I know a lot of people in
Google and with very few
exceptions they're all very nice people
who genuinely want what what's good for
the world and Facebook I know fewer
people but it's
probably mostly true it's probably like
fine young Geeks Who who want to build
cool technology I actually tend to
believe that even the leaders even Mark
Zuckerberg one of the most disliked
people in tech also wants to do good
for the world what do you think about
Jamie Dimon who's Jamie Dimon oh the
heads of the great banks may have a different
psychology oh boy yeah well I tend to um
I tend to be naive about these things
and see the best in uh I tend to
agree with you that I think the
individuals want to do good by the world
but the mechanism of the company can
sometimes be its own intelligence system I
mean there's a one my cousin Mario
Goertzel has worked for Microsoft since 1985
or something and I can see for
him I mean as well as just working on
cool projects you're coding stuff that
gets used by like billions and billions
of of people and you think if I improve
this feature that's making billions of
people's lives easier right so of course
of course that's cool and you know the
engineers are not in charge of running
the company anyway and of course even if
you're Mark Zuckerberg or Larry Page I
mean you still have a fiduciary
responsibility and I mean you're
responsible to the shareholders your
employees who you want to keep paying
them and so forth so yeah you're enmeshed in
this system and you know when I worked
in DC I worked a bunch with INSCOM US
Army intelligence and I was heavily
politically opposed to what the US Army
was doing in Iraq at that time like
torturing people in Abu Ghraib but
everyone I knew in the US Army at INSCOM
when I hung out with them was a very nice
person they were friendly to me they
were nice to my kids and and my dogs
right and they really believed that the
US was fighting the forces of evil and
if you ask them about Abu Ghraib they're
like well but these Arabs will chop us
into pieces so how can you say we're
wrong to waterboard them a bit right
like that's much less than what they
would do to us it's just in in their
world view what they were doing was
really genuinely for the for the good of
humanity like none of them woke up in
the morning and said like I I want to do
harm to good people because I'm I'm just
a nasty guy right so yeah most people on
the planet setting aside a few genuine
Psychopaths and sociopaths I mean most
people on the planet have a heavy dose
of benevolence and wanting to do good
and also a heavy capability to convince
themselves whatever they feel like doing
or whatever is best for them is is for
the good of humankind right and so the
more we can
decentralize control the better
you know democracy is horrible but
this is like when Churchill said you
know it's the worst possible system of
government except for all the others
right I mean I think the whole mess of
humanity has many many very bad aspects
to it but so far the track record of
elite groups who know what's better for
all of humanity is much worse than the
track record of the whole teeming
Democratic
participatory mass of humanity right I
mean none of them is perfect by by any
means the issue with a small Elite group
that knows what's best is even if it
starts out as truly benevolent and doing
good things in accordance with its
initial good
intentions you find out you need more
resources you need a bigger organization
you pull in more people internal
politics arises difference of opinions
arise and bribery happens like some
opponent organization takes the second in
command out to make them the first in
command of some other organization
and I mean that's there's a lot of
history of of what happens with Elite
groups thinking they know what's best
for for for the human race so you have
if I have to choose I'm going to
reluctantly put my faith in the vast
Democratic
decentralized mass and I think
corporations have a track record of
being ethically worse than their
constituent human parts and you know
democratic
governments have a more mixed track
record but there are at least but it's
the best we got yeah I mean you can
look at Iceland very nice country
right I mean very democratic for 800
plus years very benevolent
beneficial government and I think
yeah there are track records of
democratic Modes of
organization Linux for example some of
the people in charge of Linux are
overtly complete right and
trying to reform themselves in in many
cases in in other cases not but the
organization as a whole I think
it's done a good job overall
it's been very welcoming in the
third world for example and it's
allowed advanced technology to roll
out on all sorts of different embedded
devices and Platforms in places where
people couldn't afford to pay for for
proprietary software so I'd say the
internet Linux and many democratic
nations are examples of how sort of an open
decentralized democratic methodology can
be ethically better than the sum of
the parts rather than worse and
corporations that has happened only for
a brief period and then
it goes sour right I mean I'd
say a similar thing about universities
like University is a horrible way to
organize research and get things done
yet it's better than anything else we've
come up with right a company can be much
better but for a brief period of time
and then it stops being so
good right so then I I think if you
believe that AI is going to emerge sort
of incrementally out of AI doing
practical stuff in the world like
controlling humanoid robots or or
driving cars or diagnosing diseases or
operating killer drones or spying on
people and Reporting on the government
then what kind of
organization creates more and more
advanced narrow AI verging toward AGI
may be quite important because it will
guide like what's what's in the mind of
the early stage AGI as it first gains
the ability to rewrite its own code base
and project itself toward toward super
intelligence and if you believe that AI
may move toward AGI out of the sort of
synergetic activity of many agents
cooperating together rather than just
have one person's project then who owns
and controls that platform for AI
cooperation becomes also very very
important right and is that platform AWS
is it Google cloud is it Alibaba or is
it something more like the internet or
SingularityNET which is
open and decentralized so if all
my weird machinations come to pass right
I mean we have the Hanson robots
being a beautiful user interface you
know gathering information on on on
human values and being loving and
compassionate to people in medical home
service robot and office applications
you have SingularityNET in the back end
networking together many different AIS
toward Cooperative intelligence fueling
the robots among many other things you
have OpenCog 2.0 and TrueAGI as one of
the sources of AI inside this
decentralized network powering the robot
and medical AIS helping us live a long
time and cure diseases among
other things and this whole thing is
operating in a in a democratic and and
decentralized way right I think if if
anyone can pull something like this off
you know whether using the specific
technologies I've mentioned or
something else I mean then I think we
have a higher odds of moving toward a
beneficial technological singularity
rather than one in which the first Super
AGI is indifferent to humans and just
considers us an inefficient use of
molecules that was a beautifully
articulated vision for the world so
thank you for that but let's talk a
little bit about life and
death I'm pro-life and anti-death so to
speak well for most people
there's a few exceptions that I won't
mention here
I'm I'm I'm glad just like your dad
you're taking a stand against uh
death uh you have uh by the way you have
a bunch of Awesome music where you play
Piano online well one of the songs that
I believe you've written uh the lyrics
go by the way I like the way it sounds
people should listen to it it's awesome
I considered I probably will cover
it it's a good song uh tell me why do
you think it is a good thing that we all
get old and die is one of the songs I
love the way it sounds but let me ask
you about death
first do you think there's an element to
death That's essential to give our life
meaning like the fact that this thing
ends say I'm uh pleased and a
little embarrassed you've been listening
to that music I put online that's
awesome one of my regrets in life
recently is I would love to get time to
really produce music well like I
haven't touched my sequencer software in
like five years I would love to
rehearse and produce and edit
but with a two-year-old baby
and trying to create the singularity
there's no time so I I just made the
decision to when I'm playing randomly
in an off moment just record it
put it out there
like whatever maybe if I'm unfortunate
enough to die maybe that can be input to
the AGI when it tries to make an
accurate mind upload of me right death
is
bad I mean that's very simple it's
baffling we should have to say that I
mean of course people can make meaning
out of death and if someone is
tortured maybe they can make beautiful
meaning out of that torture and write a
beautiful poem about what it was like to
be tortured right I mean we're very
creative we can milk beauty and
positivity out of even the most horrible
and shitty things but just
because if I was tortured I could write
a good song about what it was like to be
tortured doesn't make torture good and
just because people are able to derive
meaning and value from Death doesn't
mean they wouldn't derive even better
meaning and value from ongoing life
without death which I very definitely believe yeah
yeah so if you could live forever would
you live
forever my goal with
longevity research is to abolish the
plague of involuntary death I don't
think people should die unless they
choose to die if I had to choose forced
immortality versus dying I would choose
forced immortality on the other hand if
I had the choice of
immortality with the choice of suicide
whenever I felt like it of course I
would take that instead and that's the
more realistic choice I mean
there's no reason you should have forced
immortality you should be able to live
until you get sick of
living right I mean that will
seem insanely obvious to everyone 50
years from now I
mean people who thought death gives
meaning to life so we should all die
they will look at that 50 years from now
the way we now look at the Anabaptists
in the year 1000 who gave away all their
possessions went on top of the mountain
for Jesus to come and bring
them to the Ascension I mean it's
ridiculous that people think
death is good because you
gain more wisdom as you approach dying
I mean of course it's true I mean
I'm 53 and you know the fact that I
might have only a few more decades left
it does make me reflect on things
differently it does give me
a deeper understanding of many things
but I mean so what you could get a deep
understanding in a lot of different
ways pain is the same way like we're
going to abolish pain and
that's even more amazing than abolishing
death right I mean once we get a little
better at Neuroscience we'll be able to
go in and adjust the brain so that pain
doesn't hurt anymore right and that you
know people will say that's bad because
there's so much Beauty in overcoming
pain and suffering well sure and there's
beauty in overcoming torture too and
some people like to cut themselves but
not many right I mean that's an
interesting so but to push back
against this the Russian side of
me I do romanticize suffering it's non
obvious I mean the way you put it
seems very logical it's almost absurd
to romanticize suffering or pain or
death but to me a world without
suffering without pain without death
it's non obvious well you can stay in
the people zoo with the people torturing
each other right no but what I'm
saying is I I don't well that's I guess
what I'm trying to say I don't know if I
was presented with that choice what I
would choose because to me no this
is a subtler
matter and I've posed it in this
conversation in an
unnecessarily extreme way so I
think the way you should think about it
is what if there's a little dial on the
side of your head and
you could turn how much pain hurts turn
it down to zero or up to 11 like in
Spinal Tap if you want maybe through an
actual spinal tap right so I mean would
you opt to have that dial there or not
that that's the question the
question isn't whether you would turn
the pain down to zero all the
time would you opt to have the dial or
not my my guess is that in some dark
moment of your life you would choose to
have the dial implanted and then it
would be there just to confess a small
thing I'm uh don't ask me why but I'm
I'm doing this physical challenge
currently where I'm doing 680 push-ups
and pull-ups a day and and my shoulder
is currently as we sit here in a lot of
pain and
uh I I don't know I would certainly
right now if you gave me a dial I would
turn that sucker to zero as quickly as
possible but I think the whole
point of this journey
is I don't know well because you're
you're a twisted human being I'm a
twisted so the question is am I
somehow twisted because I
have created some kind of narrative
for myself so that I can deal with the
with with the Injustice and the
suffering in the world uh or is this
actually going to be a source of
happiness for me well this to an
extent is a research question that
Humanity will undertake right so I mean
human beings
do have a particular biological makeup
which sort of implies a certain
probability distribution over
motivational systems right so I mean we
and that is there well put
that is there now the question
is how flexibly can that morph as
society and Technology change right so
if if we're given that dial and we're
given a society in which say we don't
have to
we don't have to work for a living and
in which there's an ambient
decentralized benevolent AI Network that
will warn us when we're about to hurt
ourselves you know if we're in a different
context can we
consistently with being genuinely and
fully human can we consistently get into
a state of consciousness where we just
want to keep the pain dial turned all
the way down and yet we're leading very
rewarding and fulfilling lives right now
I suspect the answer is yes we can do
that but that's a research question
I don't know that for certain
yeah now I'm more confident that we
could create a nonhuman AGI
system which just didn't need an
analogue of feeling pain and I think
that AGI system will be fundamentally
healthier and more benevolent than than
human beings so I think it might or
might not be true that humans need a
certain element of suffering to be
satisfied consistent with the
human physiology if it is true that's
one of the things that makes us
and disqualified to be the
super AGI right I mean the
nature of the human motivational system
is that we seem to gravitate towards
situations where the best thing in the
large scale is not the best thing
in the small scale according to our
subjective value system so we gravitate
towards subjective value judgments where
to gratify ourselves in the large we
have to ungratify ourselves in
the small and you see that
in music there's a theory of music
which says the key to musical Aesthetics
is the surprising fulfillment of
expectations like you you want something
that will fulfill the expectations
elicited in the prior part of the music
but in a way with a bit of a Twist that
that that surprises you and that I mean
that's true not only in outre music
like my own or that of Zappa or Steve
Vai or Buckethead or Krzysztof Penderecki or
something it's even there in in Mozart
or something it's not there in elevator
music too much but
that's why it's boring right
but wrapped up in there is you know we
want to hurt a little bit so that we can
we can feel the we can feel the pain go
away like we want to be a
little confused by what's coming
next so then when the thing that comes
next actually makes sense it's so
satisfying right and it's the surprising
fulfillment of expectations is that what
you said yeah yeah so beautifully put is
there um we've been skirting around a
little bit but if I were to ask you the
most ridiculous big question of what is
the meaning of life uh what would your
answer
be three values Joy growth and
choice I think you need
joy I mean that's the basis of
everything if you want the number one
value on the other hand I'm
unsatisfied with a static joy that
doesn't progress perhaps because of some
elemental human perversity
but the idea of something that grows and
becomes more and more and better and
better in some sense appeals to me but I
also sort of like the idea of
individuality that as a distinct system
I have some agency there's some nexus
of causality within this system
rather than the causality being wholly
evenly distributed over the joyous
growing mass so you start with joy
growth and choice as three basic
values and those three things
could continue indefinitely
that's something yeah that can last
forever is there some aspect of
something you called which I like super
longevity that you find exciting
research-wise are there
ideas in that space I mean I
think yeah in terms of the meaning of
life this really ties into that because
for us as humans probably the way to get
the most Joy growth and choice is
transhumanism and to go beyond the human
form that we have right
now right I mean I think human body is
great and by no means do any of us
maximize the potential for joy growth
and choice immanent in our human bodies
on the other hand it's clear that other
configurations of matter could manifest
even greater amounts of Joy growth and
choice than humans do
maybe even finding ways to go beyond the
realm of matter that as as we understand
it right now so I think in a practical
sense much of the meaning I see in human
life is to create something better than
humans and go beyond life
but certainly that's not all of it for
me in a practical sense right like I
have four kids and a
granddaughter and uh many friends and
parents and family and just enjoying
everyday human Human Social existence
well we can do even better yeah yeah and
I mean I've always when I could
live near nature spent a bunch of
time out in nature in the forest and on
the water every day and so forth so I
mean enjoying the pleasant moment is is
part of it but the you know the growth
and choice aspect are severely limited
by our human biology in particular dying
seems to inhibit your potential for
personal growth considerably as
far as we know I mean there's some
element of life after death perhaps but
even if there is why not also continue
going in this
biological realm right so in super
longevity I
mean you know we we haven't yet cured
aging we haven't yet cured death
certainly there's very interesting
progress all around I mean CRISPR and
gene editing can be an
incredible tool and I mean right now
stem cells could potentially
prolong life a lot like if you got
stem cell injections of just stem
cells for every tissue of your body
injected into every tissue and you can
just have
replacement of your old cells with new
cells produced by those stem cells I
mean that that could be highly impactful
at prolonging life now we just need
slightly better technology for for
having them grow right so using
machine learning to guide procedures for
stem cell differentiation and
transdifferentiation it's kind of
nitty-gritty but I mean
that's quite interesting so I think
there's a lot of different
things being done to help with
prolongation of human life but we
could do a lot better so for
example The extracellular Matrix which
is a bunch of proteins in between the
cells in your body they get stiffer and
stiffer as you get older and the
extracellular matrix transmits
information both electrically and
mechanically and to some extent
biophotonically so there's all this
transmission through the parts of the
body but the stiffer the extracellular
matrix gets the less the transmission
happens which makes your body get
worse coordinated between the different
organs as you get older so my friend
Christian Schafmeister at my alma mater
the great Temple University
has a potential solution to
this where he has these novel molecules
called spiroligomers which are like
polymers that are not organic they're
specially designed polymers so
that you can algorithmically predict
exactly how they'll fold very simply so
he designed molecular scissors made of
spiroligomers
that you could eat and would
then cut through all the glucosepane
and other cross-linked proteins in your
extracellular Matrix right but to make
that technology really work and be
mature is several years of work as far
as I know no no one's funding it at the
moment but there so there's so many
different ways that technology could be
used to prolong longevity what we
really need is an integrated
database of all biological knowledge
about human beings and model organisms
like hopefully a massively distributed
OpenCog bio AtomSpace but it can exist
in other forms too we need that data to
be opened up in a suitably privacy
protecting way we need massive funding
into machine learning AGI Proto AI
statistical research aimed at solving
biology both molecular biology and human
biology based on this massive
data set right and then we need
Regulators not to stop people from
trying radical therapies on
themselves if they so wish
as well as better cloud-based platforms
for like automated experimentation on
microorganisms flies and mice and so
forth and we could do all this you look
after the last financial crisis Obama
who I generally like pretty well but he
gave $4 trillion to large banks and
insurance companies you know now in this
covid
crisis trillions are being spent to help
Everyday People in small businesses in
the end we probably will find many more
trillions being given to large banks
and insurance companies anyway like
could the world put 1 trillion into
making a massive holistic bio Ai and bio
simulation and experimental biology
infrastructure we could we could put 10
trillion dollars into that without even
screwing us up too badly just as in the
end covid and the last financial crisis
won't screw up the world economy so
badly but we're not putting 10 trillion into
that instead all the research is siloed
inside a few big companies
and government agencies and most of the
data that comes from our individual
bodies personally that could feed this
AI to solve aging and death most of that
data is sitting in some hospital's
database doing nothing
right I got a uh two more quick
questions for you uh one I know a lot of
people are going to ask me you were on
the Joe Rogan podcast wearing that same
amazing hat um do you have an origin
story for the hat does the hat
have its own story that you're uh able
to share uh the hat story has not been
told yet so we're going to have to come
back and you can interview the
hat we'll leave that for the hat's own
interview it's too much it's too
much to pack in is there a book is the
hat gonna write a book okay well uh it
may transmit the information through
direct neural transmission okay so
there might be some
Neuralink competition there uh beautiful
we'll leave it as a mystery uh maybe one
last question if uh you uh build an AGI
system uh you're successful at building
the AGI system that could lead us to The
Singularity and you get to talk to her
and ask her one question what would that
question
be we're not allowed to ask what is the
question I should be
asking yeah that would be cheating but I
guess that's a good question
I wrote a story with
Stephan Bugaj once where
these AI developers they created a super
smart AI aimed at
answering all the philosophical
questions that have been worrying them
like what is the meaning of
life is there free will what is
consciousness and so forth so they got
the super AGI built and it thought for
a while and said
those are really stupid questions and
then it took off on a spaceship and
left the Earth right so you'd be
afraid of scaring it off that
that's it yeah I mean honestly
there is no
one question that rises above
all the others
really I mean what interests me more is
upgrading my own intelligence so
that I can absorb the whole
world view of the super AGI
but I mean of course if the
answer could be like what is
the chemical formula for the immortality
pill
like then I would do that or emit a
bit string which will be the code
for a super AGI on the Intel i7
processor right so those would be good
questions so if your own mind was
expanded to become super intelligent
like you're describing I mean
you know there's kind of a notion
that intelligence is a
burden that it's possible that with
greater and greater intelligence
that other metric of joy that you
mentioned becomes more and more
difficult what's your pretty stupid
idea so you think if you're super
intelligent you can also be super joyful
I think getting root access to your own
brain will enable new forms of joy that
we don't have now and I
think as I've said before what I aim at
is
really make multiple versions of myself
so I would like to keep one version
which is basically human like I am now
but you know keep the dial to turn
pain up and down and get rid of death
right
and make another version which fuses its
mind with
superhuman AGI and then will become
massively transhuman and whether it
will send some messages back to the
human me or not will be interesting
to find out the thing is once you're
a super AGI like one subjective
second to a human might be like a
million subjective years to that super
AGI right so it would be on a whole
different basis I mean at very least
those two copies will be good to have
but it could be interesting
to put your mind into a dolphin or
a space amoeba or all sorts of other
things or you can imagine one version
that doubled its intelligence every year
and another version that just became a
super AGI as fast as possible right so I
mean now we're sort of constrained to
think one mind one self one body right
but I think we actually don't
need to be that constrained in
thinking about future intelligence after
we've mastered AGI and nanotechnology
and longevity biology I mean then
each of our minds is a certain pattern
of organization right and I I know we
haven't talked about Consciousness but I
sort of I'm a panpsychist I sort of
view the universe as as conscious and so
you know a light bulb or a a quark or an
ant or a worm or a monkey have their own
manifestations of Consciousness and the
human manifestation of Consciousness
it's partly tied to the particular meat
that that we're manifested by but it's
largely tied to the pattern of
organization in in in the brain right so
if you upload yourself into a computer
or a robot or whatever else it
is some element of a human consciousness
may not be there because it's just tied
to the biological embodiment but I think
most of it will be there and these will
be incarnations of your Consciousness in
a slightly different flavor and you know
creating these different versions will
be amazing and each of them will
discover meanings of life that have some
overlap but probably not total overlap
with the human Ben's
meaning of life the thing is to get
to that future where we can explore
different varieties of joy different
variations of human experience and
values and transhuman experiences and
values to get to that future we need to
navigate through a whole lot of human
mess of companies and
governments and killer drones and
making and losing money and
so forth right and
that's the challenge we're facing now is
if we do things right we can get to a
benevolent Singularity which is levels
of Joy growth and choice that are
literally unimaginable to human
beings if we do things wrong we could
either annihilate all life on the planet
or we could lead to a scenario where say
all humans are are annihilated and
there's some super AGI that goes on and
does its own thing unrelated to
us except via our role in
originating it and we may well be at a
bifurcation point now right where where
what we do now has significant causal
impact on what comes about and yet most
people on the planet aren't thinking
that way whatsoever they're thinking
only about their own narrow
aims and goals right now
of course I'm thinking about my own
narrow aims and goals to some
extent also but I'm trying to use as
much of my energy and mind as I can to
push toward this more benevolent
alternative which will be better for me
but also for everybody
else and that's a it's weird that so few
people understand what's going on I know
you interviewed Elon Musk and he
understands a lot of what's going on but
he's much more paranoid than I am right
because because Elon gets that AGI is
going to be way way Smarter Than People
yeah and he gets that an AGI does not
necessarily have to give a about
people because we're a very elementary
mode of organization of matter compared
to many AGIs but I don't think he
has a Clear Vision of how infusing early
stage agis with compassion and human
warmth can lead to an AGI that loves and
helps people rather than viewing us as
uh as you know a historical artifact and
and a a waste of ma a waste of mass
energy but but on the other hand while I
have some disagreements with him like he
understands way way more of the story
than almost anyone else in such a large
scale corporate leadership position
right it's it's terrible how little
understanding of these fundamental
issues exists
out there now that may be different five
or 10 years from now, though, because I can see understanding of AGI and longevity and other such issues is certainly much stronger and more prevalent now than 10 or 15 years ago, right? So I mean, humanity as a whole can be slow learners relative to what I would like, but in a historical sense, on the other hand, you could say the progress is astoundingly fast. But Elon also said, I think on The
Joe Rogan podcast, that love is the answer. So maybe in that way you and he are both on the same page of how we should proceed with AI. I think there's no better place to end it. I hope we get to talk again, about the hat, and about consciousness, and about a million topics we didn't cover. Ben, it's a huge honor to talk to you. Thank you for making it out, thank you for talking today. Thanks for having me. This was really, really good fun, and we dug deep into some very important things, so thanks for doing this. Thanks very much. Awesome. Thanks for listening
to this conversation with Ben Goertzel, and thank you to our sponsors, the Jordan Harbinger Show and MasterClass. Please consider supporting the podcast by going to jordanharbinger.com/lex and signing up to MasterClass at masterclass.com/lex. Click the links, buy the stuff; it's the best way to support this podcast and the journey I'm on in my research and startup. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter @lexfridman, spelled without the e, just F-R-I-D-M-A-N. I'm sure eventually you will figure it out. And now, let me leave you with some words from Ben Goertzel: our language for describing
emotions is very crude; that's what music is for. Thank you for listening, and hope to see you next time.