Ayanna Howard: Human-Robot Interaction & Ethics of Safety-Critical Systems | Lex Fridman Podcast #66
J21-7AsUcgM • 2020-01-17
Kind: captions
Language: en
The following is a conversation with Ayanna Howard. She's a roboticist, professor at Georgia Tech, and director of the Human-Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments. Like me, in her work she cares a lot about both robots and human beings, and so I really enjoyed this conversation.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.
I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience.

This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC.

I'm excited to be working with Cash App to support one of my favorite organizations, called FIRST, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which, again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world.
And now, here's my conversation with Ayanna Howard.
What or who is the most amazing robot you've ever met, or perhaps had the biggest impact on your career?

I haven't met her, but I grew up with her: of course, Rosie.

Who's Rosie?

Rosie from The Jetsons. She is all things to all people, right? Think about it: anything you wanted, it was like magic, it happened. So people not only anthropomorphize, but project whatever they wish for the robot to be onto her. But also, I mean, think about it: she was socially engaging. She every so often had an attitude, right? She kept us honest. She would push back sometimes when, you know, George was doing some weird stuff. But she cared about people, especially the kids. She was like the perfect robot.

And you've said that people don't want their robots to be perfect. Can you elaborate on that? What do you think that is? Just like you said, Rosie pushed back a little bit every once in a while.

Yeah, so I think it's that... So, you think about robotics in general: we want them because they enhance our quality of life, and usually that's linked to something that's functional, right? Even if you think of self-driving cars, why is there a fascination? Because people really do hate to drive. Like, there's the Saturday driving, where I can just be, but then there's the I-have-to-go-to-work-every-day, and I'm in traffic for an hour. I mean, people really hate that. And so robots are designed to basically enhance our ability to increase our quality of life. And so the perfection comes from this aspect of interaction. If I think about how we drive: if we drove perfectly, we would never get anywhere, right? So think about how many times you had to run past the light because you see the car behind you is about to crash into you, or that little kid kind of runs into the street, and so you have to cross on the other side because there are no cars, right? Like, if you think about it, we are not perfect drivers; some of it is because it's our world. And so if you have a robot that is perfect in that sense of the word, they wouldn't really be able to function with us.

Can you linger a little bit on the word "perfection"? So, from the robotics perspective, what does that word mean, and how are sort of the optimal behaviors you're describing different than what we think of as perfection?
Yeah, so perfection, if you think about it from a more theoretical point of view, is really tied to accuracy, right? So if I have a function, can I complete it at 100% accuracy, with zero errors? That's kind of perfection in that sense of the word.

And in the self-driving-car realm, do you think, from a robotics perspective, we kind of think that perfection means following the rules perfectly: staying in the lane, changing lanes, when there's a green light you go, when there's a red light you stop, and being able to perfectly see all the entities in the scene? Is that the limit of what we think of as perfection?

And I think that's where the problem comes in: when people think about perfection for robotics, the ones that are the most successful are the ones that are quote-unquote perfect. Like I said, Rosie is perfect, but she actually wasn't perfect in terms of accuracy; she was perfect in terms of how she interacted and how she adapted. And I think that's some of the disconnect: we really want perfection with respect to its ability to adapt to us; we don't really want perfection with respect to 100% accuracy with respect to the rules that we just made up anyway, right? And so I think there's this disconnect sometimes between what we really want and what happens. And we see this all the time, like in my research, right? The quote-unquote optimal interactions are when the robot is adapting based on the person, not 100% following what's optimal based on the rules.

Just to linger on autonomous vehicles for a second, just your thoughts, maybe off the top of your head: how hard is that problem, do you think, based on what we just talked about? You know, there are a lot of folks in the automotive industry who are very confident, from Elon Musk to Waymo to all these companies. How hard is it to solve that last piece, the gap between the perfection and the human definition of how you actually function in this world?

So this is a moving target. I remember when all the big companies started to heavily invest in this, and there were a number of even roboticists, as well as, you know, folks who were putting in the VC money, and corporations, Elon Musk being one of them, that said, you know, self-driving cars on the road with people within five years. That was a little while ago, and now people are saying five years, ten years, twenty years; some are saying never, right?
I think if you look at some of the things that are being successful, it's these basically fixed environments, where you still have some anomalies: you still have people walking, you still have stores, but you don't have other drivers, right? Like, other human drivers; it's a dedicated space for the cars. Because if you think about robotics in general, where it has always been successful is, I mean, you can say manufacturing, like way back in the day, right? It was a fixed environment; humans were not part of the equation. We're a lot better than that now, but when we can carve out scenarios that are closer to that space, then I think that's where we are. So, a closed campus, where you don't have other cars, and maybe some protection so that the students don't jet out in front just because they want to see what happens: having a little bit of that, I think that's where we're going to see the most success in the near future. And it'll be slow-moving, right? Not, you know, 55, 60, 70 miles an hour, but the speed of a golf cart.

Right. So, that said, the most successful robots in the automotive industry operating today, in the hands of real people, are ones that are traveling over 55 miles an hour and in unconstrained environments, which is Tesla vehicles, so the Tesla Autopilot. So I would just love to hear your thoughts on two things. One, I don't know if you've gotten to see it, but have you heard about something called Smart Summon? It's a Tesla system, part of the Autopilot system, where the car, zero occupancy, no driver, in the parking lot, slowly sort of tries to navigate the parking lot to find its way to you. And there are incredible amounts of videos, and just hilarity that happens, as it awkwardly tries to navigate this environment. But it's a beautiful nonverbal communication between machine and human that I think is, like, some of the work that you do in this kind of interesting human-robot interaction space. So what are your thoughts in general about it?

So, I do have that feature. I do drive a Tesla, mainly because I'm a gadget freak, right? So it's a gadget that happens to have some wheels. And yeah, I've seen some of the videos.

But what's your experience like? I mean, you're a human-robot interaction roboticist; you're a legit sort of expert in the field. So what does it feel like for a machine to come to you?

It's one of these very fascinating things, but also, I am hyper, hyper alert, right? Like, I'm hyper alert; like, my thumb is like, okay, I'm ready to take over. Even when I'm in my car and I'm doing things like automated backing in, so there's a feature where you can do this automated backing into a parking space, or bring the car out of your garage, or even, you know, pseudo-autopilot on the freeway, right?
I am hyper-sensitive. I can feel, like, as I'm navigating, like, yeah, that's an error right there. Like, I am very aware of it, but I'm also fascinated by it. And it does get better. Like, I look and see it's learning from all of these people who are turning it on. Like, every time: come on, it's getting better, right? And so I think that's what's amazing about it, this nice dance of...

You're still hyper-vigilant, so you're still not trusting it at all?

Yeah, yeah.

But you're using it. What, on the highway? If I were to, like... As a roboticist, we'll talk about trust a little bit, but how do you explain that you still use it? Is it the gadget-freak part, like, where you just enjoy exploring technology? Or is that actually the right balance between robotics and humans, where you use it but don't trust it, and somehow there's this dance that ultimately is a positive?

Yes. So I think I just don't necessarily trust technology, but I'm an early adopter, right? So when it first comes out, I will use everything, but I will be very, very cautious of how I use it.
Do you read about it, or do you explore it and just try it? Like, to put it crudely, do you read the manual, or do you learn through exploration?

I'm an explorer. If I have to read the manual, then, you know, I do design, then it's a bad user interface. It's a failure.

Elon Musk is very confident that you can kind of take it from where it is now to full autonomy. So, from this human-robot interaction, where you don't really trust, and then you try, and then you catch it when it fails, to it's going to incrementally improve itself into full, full autonomy, where you don't need to participate. What's your sense of that trajectory? Is it feasible?
So the promise there is by the end of next year, by the end of 2020, that's the current promise. What's your sense about that journey that Tesla is on?

So there are kind of three things going on now. I think, in terms of will people, as a user, as an adopter, trust going to that point? I think so, right? Like, there are some users, and it's because, what happens is, when technology is new at the beginning, and then the technology tends to work, your apprehension slowly, slowly goes away. And as people, we tend to swing to the other extreme, right? Because, like, oh, I was hyper, hyper fearful, or hyper-sensitive, and it was awesome, and we just tend to swing. That's just human nature. And so you will have...

I mean, it is a scary notion, because most people are now extremely untrusting of Autopilot. They use it, but they don't trust it. And it's a scary notion that there's a certain point where you allow yourself to look at the smartphone for, like, 20 seconds, and then there'll be this phase shift, where it'll be, like, 20 seconds, 30 seconds, one minute, two minutes. It's a scary proposition.

But that's people, right? That's human, that's humans. I mean, I think of even our use of, I mean, just everything on the internet, right? Like, think about how reliant we are on certain apps and certain engines, right? Twenty years ago, people would have been like, oh yeah, that's stupid, like, that makes no sense, like, of course that's false. Like, now it's just like, oh, of course, I've been using it, it's been correct all this time. Of course aliens... I didn't think they existed, but now it says they do; obviously. The Earth is flat.

So, okay, but you said three things.

So, one is the human, and I think there will be a group of individuals that will swing, right?

Just teenagers?

I mean, it'll be adults. There's actually an age demographic that's optimal for technology adoption, and you can actually find them, and they're actually pretty easy to find, just based on their habits, based on... So someone like me, who wasn't a roboticist, would probably be the optimal kind of person, right? Early adopter, okay with technology, very comfortable, and not hyper-sensitive. I'm just hyper-sensitive because I design this stuff. So yes, there is a target demographic that will swing. The other one, though, is you still have these humans that are on the road. That one is a harder, harder thing to do, and as long as we have people that are on the same streets, that's going to be the big issue.
And it's just because you can't possibly... well, you can't possibly map some of the silliness of human drivers, right? Like, as an example: when you're next to that car that has that big sticker called Student Driver, right? Like, you are like, oh, either I am going to, like, go around... Like, we know that that person is just going to make mistakes that make no sense, right? How do you map that information? Or if I'm in a car, and I look over, and I see, you know, two fairly young-looking individuals, and there's no student-driver bumper sticker, and I see them chit-chatting with each other, I'm like, oh, yeah, that's an issue, right? So how do you get that kind of information, and that experience, into basically an autopilot?

Yeah, and there are millions of cases like that, where we take little hints to establish context. I mean, you said kind of beautifully poetic human things, but there are probably subtle things about the environment, about it being maybe time for commuters to start going home from work, and therefore you can make some kind of judgment about the group behavior of pedestrians. Or even cities, right? Like, if you're in Boston, how people cross the street: lights are not an issue, versus other places where people will actually wait for the crosswalk.

Or somewhere peaceful. But what I've also seen, so just even in Boston, is that intersection to intersection is different. So every intersection has a personality of its own. So certain neighborhoods of Boston are different. So we kind of, based on different timing of day, at night, it's all... there's a dynamic to human behavior that we kind of figure out ourselves. We're not able to introspect and figure it out, but somehow our brain learns it.

We do.

And so you're saying, is there... So that's the shortcut.

That's their shortcut, though.

For everybody. Is there something that could be done, you think? You know, that's what we humans do. It's just like bird flight, right, this example they give for flight: do you necessarily need to build a bird that flies, or can you do an airplane? Is there a shortcut?

So I think the shortcut is, and I kind of talk about it as a fixed space. So imagine that there is a neighborhood, a new smart city or a new neighborhood, that says, you know what, we are going to design this new city based on supporting self-driving cars, and then doing things knowing that there are anomalies, knowing that people are like this, right, and designing it based on that assumption that, like, we're going to have this. That would be an example of a shortcut. So you still have people, but you do very specific things to try to minimize the noise a little bit, as an example.

And the people themselves become accepting of the notion that there are autonomous cars, right?

Right. Like, they move into... So, right now, you will have a self-selection bias, right? Like, individuals will move into this neighborhood knowing, like, this is part of, like, the real estate pitch, right?
And so I think that's a way to do a shortcut. One, it allows you to deploy; it allows you to then collect data with these variances and anomalies, because people are still people. But it's a safer space, and it's more of an accepting space, i.e., when something in that space might happen, because things do, because you already have the self-selection, like, people would be, I think, a little more forgiving than other places.
And you said three things; that would cover all of them. The third is legal, liability, which I don't really want to touch.

But it's still of concern, and in the mishmash with, like, policy as well, sort of government, all that, that whole big ball of mess.

Yeah, gotcha. So, leaving that aside: what do you think, from a robotics perspective? You know, if you're kind of honest about what cars do, they kind of threaten each other's lives all the time. I mean, in order to navigate intersections, there's an assertiveness, there's a risk-taking, and if you were to reduce it to an objective function, there's a probability of murder in that function, meaning you killing another human being, and you're using that. First of all, it has to be low enough to be acceptable to you on an ethical level as an individual human being, but it has to be high enough for people to respect you, to not sort of take advantage of you completely and jaywalk in front of you and so on. So, I mean, I don't think there's a right answer here, but how do we solve that? How do we solve that from a robotics perspective, when danger and human life is at stake?
Yeah, as they say, cars don't kill people; people kill people.

People, right.

So I think now it will be robotic algorithms, right? It will be: robotic algorithms don't kill people; developers of robotic algorithms kill people, right? I mean, one of the things is, people are still in the loop, and at least in the near and mid term, I think people will still be in the loop at some point, even if it's a developer. Like, we're not necessarily at the stage where, you know, robots are programming autonomous robots with different behaviors quite yet.

It's a scary notion, sorry to interrupt, that a developer has some responsibility in the death of a human being.

I mean, I think that's why the whole aspect of ethics in our community is so, so important, right? Because it's true. If you think about it, you can basically say, I'm not going to work on weaponized AI, right? Like, people can say, that's not what I'm going to do. But yet you are programming algorithms that might be used in healthcare, algorithms that might decide whether this person should get this medication or not, and they don't, and they die. Okay, so that is your responsibility, right? And if you're not conscious and aware that you do have that power when you're coding and things like that, I think that's just not a good thing. Like, we need to think about this responsibility as we program robots and computing devices much more than we are.

Yes. So it's not an option to not think about ethics. I think a majority, I would say, of computer science... It's kind of a hot topic now, thinking about bias and so on, and we'll talk about it, but usually it's, like, a very particular group of people that work on that, and then people who do, like, robotics are like, well, I don't have to think about that; you know, there are other smart people thinking about it. It seems that everybody has to think about it. You can't escape the ethics.
Whether it's bias, or just every aspect of ethics that has to do with human beings: everyone. So, think about... I'm going to age myself, but I remember when we didn't have, like, testers, right? And so what did you do as a developer? You had to test your own code, right? Like, you had to go through all the cases and figure it out, and, you know, and then they realized that, you know, we probably need to have testing, because we're not getting all the things. And so, from there, what happens is, like, most developers, they do, you know, a little bit of testing, but it's usually like, okay, did my compiler bug out? And you look at the warnings: okay, is that acceptable or not, right? Like, that's how you typically think about it as a developer, and you'll just assume that it's going to go to another process, and they're going to test it out. But I think we need to go back to those early days, when, you know, you're a developer, you're developing, and there should be that sense of, you know, okay, let me look at the ethical outcomes of this, because there isn't a second, like, testing... ethical testers, right? It's you. We did it back in the early coding days; I think that's where we are with respect to ethics. Like, let's go back to what was good practice, only because we were just developing the field.
Yeah, and it's a really heavy burden. I've had to feel it recently in the last few months, but I think it's a good one to feel. Like, I've gotten a message, more than one, from people... You know, I've unfortunately gotten some attention recently, and I've gotten messages that say that I have blood on my hands because of working on semi-autonomous vehicles. So, the idea that having semi-autonomy means people will lose vigilance and so on, as actually we humans do, as we described, and because of that, because of this idea that we're creating automation, there will be people hurt because of it. And I think that's a beautiful thing. I mean, it's, you know... there are many nights where I wasn't able to sleep because of this notion. You know, you really do think about people that might die because of this technology. Of course, you can then start rationalizing, saying, well, you know, 40,000 people die in the United States every year, and we're trying to ultimately save lives. But the reality is, your code, that you've written, might kill somebody, and that's an important burden to carry with you as you design the code.

I don't even think of it as a burden, if we train this concept correctly from the beginning. And, not to say that coding is like being a medical doctor, but think about it: medical doctors, if they've been in situations where their patient didn't survive, right, do they give up and go away? No. Every time they come in, they know that there might be a possibility that this patient might not survive. And so when they approach every decision, that's in the back of their head. And so why is it that we aren't teaching this? And those are tools, though, right? They're given some of the tools to address that so that they don't go crazy. But we don't give those tools, so that it does feel like a burden, versus something of: I have a great gift, and I can do great, awesome good, but with it comes great responsibility. I mean, that's what we teach, in terms of, you think about, medical schools, right? Great gift, great responsibility. I think if we just changed the messaging a little: great gift, being a developer; great responsibility; and this is how you combine those.

But do you think, and this is really interesting, it's outside of... I actually have no friends who are sort of surgeons or doctors. I mean, what does it feel like to make a mistake in a surgery and somebody to die because of that?
Like, is that something you could be taught in medical school, sort of how to be accepting of that risk?

So, because I do a lot of work with healthcare robotics... I have not lost a patient, for example. The first one is always the hardest, right? But they really teach the value, right? So they teach responsibility, but they also teach the value. Like, you're saving 40,000. But in order to really feel good about that, when you come to a decision, you have to be able to say, at the end, I did all that I could possibly do, right? Versus, well, I just picked the first widget, right? Like, so every decision is actually thought through. It's not a habit; it's not a let-me-just-take-the-best-algorithm-that-my-friend-gave-me, right? It's: is this the best? Have I done my best to do good, right?

And so, you're right, and I think burden is the wrong word. It's a gift, but you have to treat it extremely seriously.
Correct. So, on a slightly related note: in a recent paper, "The Ugly Truth About Ourselves and Our Robot Creations," you discuss, you highlight, some biases that may affect the function of various robotics systems. Can you talk through, if you remember, examples, or some...

There are a lot of examples I use.

What is bias, first of all?

Yes. So, bias, which is different than prejudice. So bias is that we all have these preconceived notions about particular... everything from particular groups, to habits, to identity, right? So we have these predispositions, and so when we address a problem, we look at a problem, make a decision, those preconceived notions might affect our outputs, our outcomes.

So the bias could be positive or negative, and then prejudice is the negative?

Prejudice is the negative, right. So prejudice is that not only are you aware of your bias, but you then take it and have a negative outcome, even though you are aware.

And there could be gray areas, too.

That's the challenging aspect of all these questions, actually. So I always like... there's a funny one, and in fact I think it might be in the paper, because I think I talked about self-driving cars. But think about this: for teenagers, right, typically insurance companies charge quite a bit of money if you have a teenage driver. So you could say that's an age bias, right? But no one will... I mean, parents will be grumpy, but no one really says that that's not fair.
That's interesting; we don't...

That's right, that's right.

Everybody in human factors and safety research, almost, I mean, is quite ruthlessly critical of teenagers, and we don't question: is that okay? Is that okay to be ageist in this kind of way?

It is, and it is ageist, right? There's no question about it. And so this is the gray area, right? Because, you know, teenagers are more likely to be in an accident, and so there's actually some data to it. But then if you take that same example and you say, well, I'm going to make the insurance higher for an area of Boston, because there's a lot of accidents, and then they find out that that's correlated with socioeconomics, well, then it becomes a problem, right? Like, that is not acceptable, but yet the teenager thing, which is ageist, is right. So we figure that out...

By having conversations, by the discourse. Throughout history, the definition of what is ethical or not has changed, and hopefully always for the better.

Correct, correct.
So, in terms of bias or prejudice in algorithms, what examples do you sometimes think about?

So I think quite a bit about the medical domain, just because, historically, right, the healthcare domain has had these biases, typically based on gender and ethnicity, primarily; a little on age, but not so much. You know, historically, if you think about the FDA and drug trials, it's, you know, harder to find women that, you know, aren't childbearing, and so you may not test drugs at the same level, right? So there are these things. And so if you think about robotics, right: something as simple as, I'd like to design an exoskeleton, right? What should the material be? What should the weight be? What should the form factor be? Who are you going to design it around? I will say that in the US, you know, women's average height and weight is slightly different than guys'. So who are you going to choose? Like, if you're not thinking about it from the beginning, as, you know, okay, when I design this, and I look at the algorithms, and I design the control system, and the forces, and the torques: if you're not thinking about, well, you have different types of body structure, you're going to design to, you know, what you're used to. Oh, this fits all the folks in my lab, right? So thinking about it from the very beginning is important.

What about sort of algorithms that train on data, kind of thing? Sadly, our society already has a lot of negative bias, and so if we collect a lot of data, even if it's balanced, that's going to contain the same bias that the society contains. And so, yeah, are there things there that bother you?
Yeah, so you actually said something: you had said how we have biases, but hopefully we learn from them and we become better, right? And so that's where we are now, right? The data that we're collecting is historic. It's based on these things, from when we knew it was bad to discriminate, but that's the data we have, and we're trying to fix it now, but we're fixing it based on the data that was used in the first place.

Right.

And so the decisions... and you can look at everything from the whole aspect of predictive policing, criminal recidivism... There was a recent paper about healthcare algorithms which had kind of a sensational title. I'm not pro-sensationalism in titles, but...

But it makes you read it, right?

Yeah, it makes sure you read it. But I'm like, really?

Like, what's the topic of the sensationalism? I mean, what's underneath it, if you could sort of educate me? What kind of bias creeps into the healthcare space?

Yes. So, this one, the headline was something like "racist AI algorithms." Okay, like, okay, that's totally a clickbait title. And so you looked at it, and so there was data that these researchers had collected. I believe, I want to say it was in either Science or Nature; it had just been published. But they didn't have the sensational title; it was the media. And so they had looked at demographics, I believe, between black and white women, right, and they showed that there was a discrepancy in the outcomes, right? And it was tied to ethnicity, tied to race. The piece that the researchers did actually went through the whole analysis, but of course, I mean, the journalists came out with "AI is problematic across the board," right? And so this is a problem, right? And so there's this thing about, oh, AI, it has all these problems, we're doing it on historical data, and the outcomes are uneven based on gender, or ethnicity, or age. But what I'm always saying is: yes, we need to do better, right? We need to do better. It is our duty to do better. But the worst AI is still better than us.
like like you take the best of us and
we're still worse than the worst AI at
least in terms of these things and
that's actually not discussed right and
so I think and that's why the
sensational title right and it's so it's
like so then you can have individuals go
like oh we don't need to use this hey
I'm like oh no no no no I want the AI
instead of the the doctors that provided
that data cuz it's still better than
that yes right I think it's really
important to linger on the idea that
this AI is racist it's like well
compared to what sort of the we that I
think we set unfortunately way too high
of a bar for AI algorithms and in the
ethical space where perfect is I would
argue probably impossible then if we set
the bar of perfection essentially if it
has to be perfectly fair whatever that
means
is it means we're setting it up for
failure but that's really important to
say what you just said which is well
it's still better yeah and one of the
things I I think that we don't get
enough credit for just in terms of as
developers is that you can now poke at
it right so it's harder to say you know
is this hospital is the city doing
something right until someone brings in
a civil case right well were they I it
can process through all this data and
say hey yes there there's some an issue
here but here it is we've identified it
and then the next step is to fix it I
mean that's a nice feedback loop versus
like waiting for someone to sue someone
else before it's fixed right and so I
think that power we need to capitalize
on a little bit more right instead of
having the sensational titles have the
okay
this is a problem and this is how we're
fixing it and people are putting money
to fix it because we can make it better
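The kind of audit described here, processing outcome data and flagging a disparity between demographic groups, can be sketched in a few lines. This is a toy illustration, not the method of any specific study mentioned in the conversation; the helper names, the sample data, and the flagging threshold are all invented for the example.

```python
from collections import defaultdict

def outcome_rates_by_group(records):
    """Compute the positive-outcome rate for each demographic group.

    records: iterable of (group, outcome) pairs, where outcome is 0 or 1.
    Returns a dict mapping each group to its positive-outcome rate.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparity(rates):
    """Largest gap in outcome rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit: two groups with unequal positive-outcome rates.
data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = outcome_rates_by_group(data)
gap = disparity(rates)
# The 0.1 threshold here is arbitrary; a real audit would justify it.
flagged = gap > 0.1
print(rates, gap, flagged)
```

The point of the sketch is the feedback loop from the conversation: the same pipeline that exposes the gap also tells you exactly where it is, so "identify it, then fix it" replaces waiting for a lawsuit.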
Now, you look at facial recognition, how Joy Buolamwini basically called out the companies and said hey, and most of them were like, oh, embarrassment, and the next time it had been fixed, right? It had been fixed better. And then it was like, oh, here are some more issues. And I think that conversation then moves the needle toward having much more fair and unbiased and ethical systems, as long as both sides are willing: the developers saying, okay, I hear you, yes, we are going to improve, and you have other developers who are like, you know, hey, AI, it's wrong, but I love it, right?

Yes. So, speaking of this really nice notion that AI is maybe flawed but better than humans, it just made me think of it: one example of flawed humans is our political system. Do you think, or you said judicial as well, do you have a hope for AI being elected president, or running our Congress, or being able to be a powerful representative of the people?
So, I mentioned, and I truly believe, that this whole world of AI is in partnership with people. And so what does that mean? I don't believe, or maybe I just don't, I don't believe that we should have an AI for president, but I do believe that a president should use AI as an adviser. If you think about it, every president has a cabinet of individuals with different expertise that they should listen to, right? That's kind of what we do: you put smart people with expertise around certain issues, and you listen. I don't see why AI can't function as one of those smart individuals giving input. So maybe there's an AI on health care, maybe there's an AI on education, and all these things that a human is processing, because at the end of the day there are people who are going to be at the end of the decision. And I don't think, as a world, as a culture, as a society, that we would totally be, and this is us, this is some fallacy about us, but we need to see that leader, that person, as human. And most people don't realize that leaders have a whole lot of advice. When they say something, it's not that they woke up, well, usually they don't wake up in the morning and go, "I have a brilliant idea." It's usually, okay, let me listen, I have a brilliant idea, but let me get a little bit of feedback on this. And then it's, yeah, that was an awesome idea, or it's, yeah, let me go back, I already talked to a bunch of them.
But are there some possible solutions to the biases present in our algorithms, beyond what we just talked about?

So I think there are two paths. One is to figure out how to systematically do the feedback and corrections. Right now it's ad hoc: a researcher identifies some outcomes that don't seem to be fair, they publish it, they write about it, and either the developer or the companies that have adopted the algorithms may try to fix it. And so it's really ad hoc and it's not systematic. It's kind of like, I'm a researcher, that seems like an interesting problem, which means that there's a whole lot out there that's not being looked at, because it's researcher-driven. I don't necessarily have a solution, but that process, I think, could be done a little bit better.

One way is, and I'm going to poke a little bit at some of the corporations here, maybe when corporations think about a product, instead of, in addition to, hiring these, you know, bug, they give these...

Oh yeah, you give awards when someone finds a bug? Bug bounties?

Yes, a bug bounty. You know, let's put it like: we will give whatever award we give to the people who find security holes, find an ethics hole, find an unfairness hole, and we will pay you X for each one you find. I mean, why can't they do that? One, it's a win-win: they show that they're concerned about it, that this is important, and they don't necessarily have to dedicate their own internal resources. It also means that everyone who has their own bias lens, like, I'm interested in age, so I'll find the ones based on age, and I'm interested in gender, and so on, which means that you get all of these different perspectives. But you think of it in a data-driven way.
So, if we look at a company like Twitter, it's under a lot of fire for discriminating against certain political beliefs. And there are a lot of people, and this is the sad thing, because I know how hard the problem is, and I know the Twitter folks are working really hard at it, even Facebook, that everyone seems to hate, is working really hard at this. You know, the kind of evidence that people bring is basically anecdotal: "me or my friend, all we said is X, and for that we got banned." And that's kind of a discussion of saying, well, look, first of all, the whole thing is usually taken out of context. So they present anecdotal evidence, and how are you supposed to, as a company, in a healthy way, have a discourse about what is and isn't ethical, how do we make algorithms ethical, when people are just outraged about a particular anecdotal piece of evidence that's very difficult to contextualize in the big, data-driven way? Do you have a hope for companies like Twitter?

Yeah, so I think there are a couple of things going on. First off, remember this whole aspect of: we are becoming reliant on technology, we're also becoming reliant on a lot of these apps and the resources that they provide. So some of it is kind of anger: I need you, and you're not working for me. But I wish that there was a little bit of change and rethinking. Some of it is like, "oh, we'll fix it in-house." No, that's like, okay, I'm a fox and I'm going to watch these hens, because I think it's a problem that foxes eat hens. No, right? Be good citizens and say, look, we have a problem, and we are willing to open ourselves up for others to come in and look at it, and not try to fix it in-house. Because if you fix it in-house, there's a conflict of interest: if I find something, I'm probably going to want to fix it, and hopefully the media won't pick it up. And that then causes this distrust, because someone inside is going to be mad at you and go out and talk about how, yeah, they canned the résumé screening because it wasn't picking the best people. Just say, look, we have this issue, community, help us fix it, and we will give you, you know, the bug-finder fee if you do.

Do you have a hope that the community, us as a human civilization on the whole, is good and can be trusted to guide the future of our civilization in a positive direction?
I think so. I'm an optimist, right? And, you know, there have been some dark times in history, always. I think now we're in one of those dark times, I truly do.

Which aspect?

The polarization. And it's not just us, right? If it was just us, I'd say it's a U.S. thing, but we're seeing it worldwide, this polarization. And so I worry about that. But I do fundamentally believe that at the end of the day people are good. And why do I say that? Because any time there's a scenario where people are in danger, and the example I use: in Atlanta we had Snowmageddon, and people can laugh about that. At the time the city closed for, you know, a little snow, but it was ice, and the city closed down. But you had people opening up their homes and saying, hey, you have nowhere to go, come to my house. Hotels were just saying, sleep on the floor. Places like, you know, the grocery stores were like, hey, here's food. There was no "oh, how much are you going to pay me?" It was such a community. And people who didn't know each other, strangers, were just like, can I give you a ride home? And that was a point where I was like, you know, I like that.

That reveals that the deeper thing is there's a compassion or love that we all have within us. It's just that when all of that is taken care of and we get bored, we love drama. And I think, almost, the division is a sign of the times being good: it's just entertaining, on some unpleasant mammalian level, to watch, to disagree with others. And Twitter and Facebook are taking advantage of that, in a sense, because it brings you back to the platform, and they're advertiser-driven, so they make a lot of money. Love doesn't sell quite as well in terms of advertisement. So, you started your
advertisement so you've started your
career NASA Jet Propulsion Laboratory
but before I'd ask a few questions there
have you happen to have ever seen Space
Odyssey 2001 Space Odyssey yes okay do
you think Hal 9000 so we're talking
about ethics do you think how did the
right thing by taking the priority of
the mission over the lives of the
astronauts do you think Cal is good or
evil easy questions yeah
Hal was misguided you're one of the
people that would be in charge of an
algorithm like Hal yes so how would you
do better if you think about what
happened was there was no failsafe right
so we perfection right like what is that
I'm gonna make something that I think is
perfect but if my assumptions are wrong
it'll be perfect based on the wrong
assumptions all right that's something
that you don't know until you deploy and
like oh yeah messed up but what that
means is that when we design software
such as in Space Odyssey when we put
things out that there has to be a
failsafe there has to be the ability
that once it's out there you know we can
grade it as an F and it fails and it
doesn't continue right if there's some
way that it can be brought in and and
removed and that's aspect because that's
what happened with what how it was like
assumptions were wrong
it was perfectly correct based on those
assumptions and there was no way to
change change it change the assumptions
at all and the change the fallback would
be to humans so you ultimately think
like humans should be you know it's not
Turtles or AI all the way down it's at
some point there's a human that actually
don't think that and again because I do
human robot interaction I still think
the human needs to be part of the
equation at some point so what just
looking back, what are some fascinating things in the robotics space that NASA was working on at the time, or just in general? What have you gotten to play with, and what are your memories from working at NASA?

Yes, so one of my first memories: they were working on a surgical robot system that could do eye surgery. And this was back in, oh my gosh, it must have been maybe '92, '93, '94.

So it's almost like a remote operation?

Yeah, it was a remote operation. In fact, you can even find some old tech reports on it. So think of it, you know, like now we have da Vinci, right? Think of that, but in the '90s. And I remember going into the lab one day and being like, what's that? And of course it wasn't pretty, because of the technology, but it was functional, and you had this individual who could use a version of haptics to actually do the surgery. And they had this mock-up of a human face, and like the eyeballs, and you could see this little drill. I was like, oh. That one I vividly remember, because it was so outside of my possible thoughts of what could be done.

The kind of precision, and, hey, what was the most amazing thing about something like that?

I think it was the precision. It was kind of the first time that I had physically seen this robot-machine-human interface, right? Because in manufacturing you saw those kinds of big robots, but this was like, oh, this is with a person. There's a person and a robot, like, in the same space.
Meeting them in person, like, for me it was a magical moment, it was life-transforming, when I recently met Spot Mini from Boston Dynamics. I don't know why, but on the human-robot interaction side, for some reason I realized how easy it is to anthropomorphize, and it was, I don't know, it was almost like falling in love, this feeling of meeting. And I've obviously seen these robots a lot on video and so on, but meeting in person, just having that one-on-one time, it's different. So have you had a robot like that in your life, one that made you maybe fall in love with robotics, sort of like meeting it in person?

I mean, I loved robotics since I was a 12-year-old, like, I knew I would be a roboticist. Actually, I called it cybernetics. My motivation was the Bionic Woman, I don't know if you know what that is. And so, I mean, that was like a seminal moment, but I didn't meet it, like, that was TV, right? It wasn't like I was in the same space and I met it and was like, oh my gosh, you're real.

Just lingering on the Bionic Woman, which, by the way, because I've read that about you, I watched bits of it, and it's just, no offense, terrible.

I've seen a couple of reruns lately.

But of course at the time it probably sparked the imagination,
especially when you're younger, it just catches you. But which aspect, you mentioned cybernetics, did you think of it as robotics, or did you think of it as almost constructing artificial beings? Like, is it the intelligence part that captured your fascination, or was it the whole thing, even just the limbs?

So for me, in another world I probably would have been more of a biomedical engineer, because what fascinated me was the bionics. It was the parts, like the bionic parts, the limbs, those aspects of it.

Are you especially drawn to humanoid or human-like robots?

I would say human-like, not humanoid. And when I say human-like, I think it's this aspect of that interaction, whether it's social. It's like a dog, right? That's human-like, because it understands us, it interacts with us at that very social level. Humanoids are part of that, but only if they interact with us as if we are human. But just to linger on
NASA for a little bit, maybe if you have other memories, but also, what do you think is the future of robots in space? We mentioned HAL, but there are incredible robots that NASA's working on. In general, as human civilization ventures out into space, what do you think the future of robots is there?

Yes, so there's the near term, for example, they just announced the rover that's going to the Moon, which, you know, is kind of exciting. But that's near-term. You know, my favorite, favorite, favorite series is Star Trek, right? I really hope, and even with Star Trek, if I calculate the years I wouldn't be alive, but I would really, really love to be in that world, like, even if it's just at the beginning, like, you know, like the voyages, like adventure one.

So basically living in space? With what, what would the robots do? Data, what role would Data have?

Data would have to be there, even though that wasn't, you know, that was later. But so Data is a robot that has human-like qualities, right, without the emotion chip.
Yeah, you don't like the emotion chip?

Well, you know, the emotion chip was kind of a mess, right? It took a while for that thing to adapt.

And so why was that an issue?

The issue is that emotions make us irrational agents. That's the problem. And yet he could think through things, even if it was based on an emotional scenario, right, based on pros and cons. But as soon as you made him emotional, one of the metrics he used for evaluation was his own emotions, not the people around him, right? And we do that as children, right? We're very egocentric.

We're very egocentric. So isn't that just an early version of the emotion chip, then? I haven't watched much Star Trek.

I have also met adults, right? And so that is a developmental process, and I'm sure there's a bunch of psychologists who can go through it, like, you can have a sixty-year-old adult who has the emotional maturity of a ten-year-old, right? And so there are various phases that people should go through in order to evolve, and sometimes you don't.

So how much
psychology, do you think, a topic that's rarely mentioned in robotics, how much does psychology come into play when you're talking about HRI, human-robot interaction, when you have to have robots that actually interact with you?

Tons. So my group, as well as I, reads a lot in the cognitive science literature as well as the psychology literature, because they understand a lot about human-human relations and developmental milestones, things like that. And so we tend to look to see what's been done out there. Sometimes what we'll do is try to match that, to see: is that human-human relationship the same as human-robot? Sometimes it is, and sometimes it's different. And when it's different, we try to figure out, okay, why is it different in this scenario but the same in the other scenario? And so we try to do that quite a bit.

Would you say, if we're looking at the future of human-robot interaction, would you say the psychology piece is the hardest? Like, I mean, it's a funny notion for you, as, I don't know if you consider, yeah, I mean, one way to ask it: do you consider yourself a roboticist or a psychologist?

Oh, I consider myself a roboticist that plays the act of a psychologist.

But if you were to look at yourself, you know, 20, 30 years from now, do you see yourself more and more wearing the psychology hat? Another way to put it is: are the h