Will Our Robots Harm Us?

Summary:  We are approaching a time when we need to be concerned that our AI robots may indeed harm us.  The rapid growth of the conversation about what ethics should apply to AI is appropriate, but it needs to be focused on the real threats, not just the wild imaginings of the popular press.  Here are some data points to help you think about this: what our concerns should be today, and what they should be in the future.
 
If you are a data scientist reading this, the answer may seem obvious.  And yet there has recently been an explosion of institutes, conferences, and articles devoted to the ethics and ethical implications of artificial intelligence, implying by their very existence that it is plausible.
Ethics is the branch of philosophy that seeks to define or recommend right and wrong conduct.  What's missing from this definition, presumably because it's so obvious, is that these are human judgments about human behaviors.  It's the human interpretation of the consequences of our actions toward others.  So if these are human-to-human issues, why is there even a conversation about ethics in AI?
 
Where Should We Look to See If There's a Problem?
In the title of this article I take a small liberty in equating robots with AI.  Normally we think of robots as physical devices which in one way or another are designed to assist in human (and sometimes non-human) tasks.  However, it's fair to say that all our non-mechanical AIs, from chatbots to facial recognition applications to smart game-playing apps from chess to World of Warcraft, are also robots, and are equally imbued with AI.
In fact, when asked to describe AI to any audience, I like to fall back on this anthropomorphic description that aligns with what we expect robots to do:
See:  still and video image recognition.
Hear: receive input via text or spoken language.
Speak:  respond meaningfully to our input either in the same language or even a foreign language.
Make human-like decisions:  offer advice or new knowledge.
Learn:  change its behavior based on changes in its environment.
Move:  move about and manipulate physical objects.
And from here, it’s a very short and valuable jump to mapping these capabilities to the various types of deep learning and related techniques that together make up the corpus of what we call artificial intelligence.
 
These are the actual data science components of AI.  So, to be specific, it is in the current or future capabilities of deep learning (CNNs and RNNs), Question Answering Machines, Generative Adversarial Networks (GANs), and Reinforcement Learning that we have to look for signs that our AI robots might not be looking out for our well-being.
 
What the Public and the Press Perceives
Judging from the word count, the majority of concern seems to be devoted to two memes:
Bias:  Somehow our AI enabled systems will spontaneously develop bias against some individuals or segments of our society.
Robot Overlords:  Our AI systems will self-evolve to become smarter than their developers and take actions that disadvantage humans.
From a technical perspective, it is important to understand that these robots are in fact just machines: dumb machines, no matter how smart they may sound.
On the issue of bias, it is fair that we look to the data scientists behind the models to see whether unintended consequences result.  Does our model unfairly discriminate against some group that should be judged by modified standards?
Our response, particularly in credit, lending, and insurance, has been to require that we be able to tell any applicant exactly why they were accepted or rejected.  This limits those industries to the simplest and most transparent modeling techniques, decision trees and linear regression.  The result is that we are forced to trade accuracy for transparency.  That sounds like a good thing, but maybe not.
Equifax (OK, not our favorite company right now) recently published a study finding that if they had been allowed to use more accurate and advanced modeling techniques, which are less explainable, they would have been able to approve loans for many more people.
So transparency and explainability may actually be working against the very people who find themselves rejected.  Also, many of those rejected, for example for low income or unpaid bills, have no way to change their circumstances.  All we can say is that they knew why, not that they were helped.
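To make the trade-off concrete, here is a minimal sketch in Python using scikit-learn and purely synthetic data (not the Equifax study's data or methods), comparing a shallow decision tree whose rules could be read back to an applicant against a less explainable gradient-boosted model.  The gap between the two scores is the accuracy we give up for transparency.

```python
# A minimal sketch (synthetic data, scikit-learn) of the transparency-vs-accuracy
# trade-off: a small decision tree we can explain to an applicant versus a
# gradient-boosted ensemble that is harder to explain but typically more accurate.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Stand-in for a credit-approval dataset: 20 applicant features, binary outcome.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# "Transparent" model: a shallow tree whose rules can be read back to an applicant.
simple = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Less explainable, but usually more accurate, ensemble.
boosted = GradientBoostingClassifier().fit(X_train, y_train)

for name, model in [("shallow tree", simple), ("gradient boosting", boosted)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On real credit data the size of the gap varies, but the pattern, in which the explainable model leaves accuracy (and therefore approvable applicants) on the table, is typical.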
 
Can Our AI Robots Harm Us Today?
Chatbots are a good place to look.  Chatbots have indeed been developed that exceed the requirements of the Turing Test.  Some wags have suggested that the new criterion should be the 'beer test': that is, would I want to take this chatbot down to the pub for a pint?
Not only are there competent chatbots that can handle your customer service questions, but only a matter of days ago Andrew Ng announced his involvement with Woebot, a chatbot delivered through Facebook Messenger that gives one-on-one psychological counseling for depression and by all reports does a pretty good job.
Given this level of sophistication, it's easy to see why the public might be fooled into thinking that there is human motivation behind the machine.  But in truth we understand that chatbots, like IBM's Watson Question Answering Machine, are merely (not to understate the technical accomplishment) clever NLP front ends, developed mostly with recurrent neural nets and trained on extremely large bodies of successful conversations about the topics on which they are to respond.  Like all 'robots', if you change their sensors or actuators, or if their 'body of knowledge' contains false or inaccurate information, they quickly fail.
Think back to 2016, when Microsoft released its chatbot Tay in the US.  Some young people figured out that Tay learned from the conversations it was given.  They fed Tay a constant stream of sexist, Nazi, and anti-Semitic conversation, which she incorporated into her training set of appropriate responses.  To Microsoft's extreme embarrassment, Tay had to be taken down within 16 hours of release.
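To see how little 'understanding' is involved, here is a deliberately toy retrieval-style bot in Python.  This is not how Tay, Woebot, or Watson were actually built; production systems use far more sophisticated neural models.  It simply returns the stored reply whose prompt looks most like the user's input, so whatever is in its corpus, helpful or toxic, comes straight back out with the same confidence.

```python
# Toy illustration only: a retrieval-style "chatbot" that returns the stored reply
# whose prompt is most similar to the user's input. It has no understanding of
# meaning, so whatever its corpus contains -- good or poisoned -- is reproduced.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in "body of knowledge": (prompt, response) pairs from past conversations.
corpus = [
    ("how do I reset my password", "Click 'Forgot password' on the login page."),
    ("what are your opening hours", "We are open 9am to 5pm, Monday to Friday."),
    ("I want to cancel my order", "I can help with that. What is your order number?"),
]

prompts = [p for p, _ in corpus]
vectorizer = TfidfVectorizer().fit(prompts)
prompt_vectors = vectorizer.transform(prompts)

def reply(user_input: str) -> str:
    """Return the response attached to the most similar stored prompt."""
    sims = cosine_similarity(vectorizer.transform([user_input]), prompt_vectors)
    return corpus[sims.argmax()][1]

print(reply("when are you open"))   # -> the opening-hours answer
# If the corpus were flooded with abusive conversations (as Tay's effectively was),
# the same lookup would return abusive responses just as confidently.
```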
But here is the core of the issue.  Tay, Woebot, Alexa, Siri, any customer service or translation device, or any other chatbot has no understanding of what its output means or of its impact on humans.
Contrast that with, for example, a young child raised in a home where hate speech and violent language are the norm; that child will soon begin to incorporate those themes into its own speech.  The difference is that the child is seeking social approval from its 'family', which translates into love and security.
We can see and judge whether these human-to-human activities are 'right' and 'good' according to our societal norms.  The chatbot, however, has no rules to follow that create any desire to influence humans for either good or ill.
 
Are There Examples of AIs That Can or Are Harming Humans?
Unfortunately there are.  However, these are cases of AI tools being applied to the wrong goals or with insufficient understanding.  Four examples:
Detecting Criminals with Facial Recognition:  In November 2016 two research academics from universities in Canada and China published a peer-reviewed study claiming 89.5% accuracy in distinguishing criminals from non-criminals based solely on facial recognition.  We're not talking about individuals who have already been convicted.  We're talking about looking at a group of faces and picking out those most likely to be criminal.  Shades of Minority Report.
Detecting Sexual Preference:  In September 2017 two research academics from Stanford published a peer-reviewed study claiming 91% accuracy in identifying sexual preference in men and 83% accuracy for women, again based solely on facial recognition.  In the US the LGBT community has gained a lot of acceptance, so this may seem like a minor concern.  However, in Turkey, as we speak, there is a newly renewed campaign to identify homosexuals, who still face official persecution there.
Will Married Couples in Counseling Succeed in Saving Their Marriage:  A recently published study conducted over two years shows that digitized voice signals, analyzed with deep learning techniques, correlate strongly with which couples would and would not succeed at saving their marriages after two years of therapy.  Will therapists divert those predicted to fail into lesser types of therapy, or just send them directly to divorce lawyers?
Outing Porn Stars on Social Media:  Last year in Russia, a group intent on outing sex workers used pictures from porn sites and the facial recognition software in a Russian version of Facebook to identify and out those women to their real-life communities.  That facial recognition software generated many false positives, and many innocent women were victimized, quite aside from the intentional harm of revealing the confidential information of those actually involved in the sex industry.  (The short calculation after this list shows why such false positives are all but guaranteed.)
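Here is that calculation.  The numbers are illustrative assumptions, not figures from the studies above: assume a detector with 90% sensitivity and 90% specificity, roughly the accuracy these papers report, applied to a population in which only 1 person in 100 actually belongs to the target group.

```python
# Back-of-the-envelope check (illustrative numbers only) of why a "90% accurate"
# classifier applied to a whole population produces mostly false positives when
# the thing being detected is rare -- the base-rate problem behind these examples.
def precision(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Share of flagged people who actually belong to the target group (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume 90% sensitivity, 90% specificity, and that 1 person in 100 is a true target.
p = precision(sensitivity=0.90, specificity=0.90, prevalence=0.01)
print(f"Probability a flagged person is actually a target: {p:.1%}")  # about 8.3%
# Roughly 11 of every 12 people flagged would be innocent.
```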
So yes, there are examples of how AI is currently harming us.  This may be the first time in history that we've found it necessary to apply the restrictions of right and wrong conduct to a non-human entity, because this is the first time a mechanical, non-human entity has been able to interact with humans in such a way as to cause them societal harm.
This is also the reason the topic is now urgent.
 
What Are the Risks If No Action Is Taken?
There is no evidence as yet that police or foreign governments are using AI to these ends, but as they say in physics, if it can happen, it will happen.
Without the technical interpretation of data scientists, untrained public and private users will miss the most obvious facts:
Correlation is not causation.  Just because your face has the characteristics statistically associated with criminals, homosexuals, or any other group you might want to keep private does not mean that you belong to that group.
Error Rates:  Even the most transparent models have error rates that need to be clearly understood.  False positives are clearly a problem, but in some problem types false negatives (which, for example, fail to eliminate you from a group) may be just as troublesome.  We know that model results can be adjusted to emphasize one error type over the other, as the sketch below illustrates.  I doubt that members of the public grasp this.
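As a sketch of that adjustment, assuming synthetic data and scikit-learn rather than any of the models discussed above, here is a single trained model scored at two different decision thresholds.  Tightening one error type loosens the other.

```python
# A minimal sketch of how a decision threshold trades false positives against
# false negatives on the same trained model (synthetic data, scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=5000, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# The same model, two different cut-offs: one error type shrinks as the other grows.
for threshold in (0.3, 0.7):
    preds = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    print(f"threshold {threshold}: false positives = {fp}, false negatives = {fn}")
```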
 
What Should AI Ethics Conversations Focus on Today?
A quick survey of the organizations engaged in this conversation today shows that their goals are all over the place.  Many are focusing on the consumer-level privacy issues of not being tracked by click or by geolocation through our devices.  In my opinion, this is small potatoes.
The serious types of abuse, call them ethics issues or not, are being enabled by the rapid and ubiquitous deployment of cameras and other sensors that record features of ourselves that we cannot readily change: our faces for facial recognition, our voices, our left-behind DNA, even potentially our breath.
Of these, the rapidly growing number of cameras is clearly the most pressing concern.  If you want to make an impact in AI ethics, focus on the intrusions we cannot protect ourselves from and on restricting the use of that data by organizations from which we wish to remain private.
Second, follow the money.  Many of these think tanks are being funded by big players like Google.  You may think this is because they have deep knowledge as well as deep resources.  But let's not forget they also have a vested interest in guiding commercial development to their own benefit.  So be a little cautious here.
Third, it will be very tempting for government to step in with regulations.  It's always easy to suggest an answer if you know very little about the problem.  Regulation that is too simplistic or too overreaching may cause us to abandon, or dramatically slow down, some very valuable areas of development.
 
Robots Do Not (Yet) Dream of Electric Sheep
Of the AI that exists today, only image, text, and voice recognition are ready for prime time.  CNNs and RNNs are providing us with the most basic building blocks of AI, but they are dumb tools.
So when, if ever, and through what element of data science will we have to fear that some robot Willy Loman will call us up and try to sell us an annuity, favoring its own success over our actual need?  When will we have AI robots imbued with motives that may specifically work against human well-being?
I'd like to tell you that this will never be the case, but the techniques of reinforcement learning will probably make it possible within 5 or 10 years.
What makes reinforcement learning (RL) different from other techniques is that it is goal seeking.  The data scientist designs the RL system to have a specific goal, such as winning the game (even though there may be hundreds of intervening moves) or driving the car safely from A to B.  It's this goal seeking that's missing from all our other AI techniques.
Despite the hoopla about self-driving cars and RL systems beating us at Go, RL is quite early in its development.  It's more about a group of similar problems (having a goal beyond the next move) than about a particular technology.  In fact, the techniques currently available for RL problems, such as Q-learning and other temporal-difference (TD) methods, differ considerably in their details.  It's likely that we will first have to finish out the tool set before RL becomes a regularly available solution.
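For readers who want to see what 'goal seeking' looks like in code, here is a minimal tabular Q-learning sketch on a toy six-cell corridor.  It is a deliberately simplified illustration, not how AlphaGo or a self-driving car is built: the agent is given nothing but a reward for reaching the right-hand end, and from that signal alone it learns a policy that pursues the goal.

```python
# A minimal tabular Q-learning sketch (toy 1-D corridor, standard library only).
# The agent learns which action to take in each state purely from a reward signal,
# with no concept of anything outside that reward.
import random

N_STATES = 6          # corridor cells 0..5; the reward sits at the right-hand end, cell 5
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Best-known action in this state, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit what we know, occasionally explore
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the value toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy marches straight toward the goal (+1 everywhere).
print([greedy(s) for s in range(N_STATES - 1)])
```

Everything the agent 'wants' is contained in that single reward line; whatever the reward omits, the agent ignores, which is exactly the concern once the goals become more human-like.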
At some point in the future a data scientist may be able to set a goal for an RL robot to behave like a human AND to pursue its internal goal without regard for its impact on humans.  Just defining the goal of 'learning to think like a human' is beyond our current grasp, but it may not always be.
Another data science technology that will probably accelerate this problem is neuromorphic computing.  Also in its infancy, neuromorphic chips, which compute in a way much more similar to the human brain, will usher in the period when RL systems can learn in one domain and apply that learning to a completely separate one.  There are working examples of this today, but only just barely.
Behind all of these current and future developments is the data scientist.  Until we learn how to have one AI system create other AI systems, there will always be a human who sets the goals for the system.  Our concern for ethics in AI, then, should be three-fold:

Ensuring that the unintended consequences of applying our models are not exploited by the uninformed to cause harm.
Ensuring that we can control, or preferably opt out of, exploding data sources like ubiquitous surveillance cameras and microphones.  This may go beyond what is happening in the EU today with its right to privacy and right to be forgotten.  It may need to extend to a right to be anonymous.
Watching carefully for a future in which our AI robots may be given goals so human-like that they prioritize their own success over the well-being of humans.

 
About the author:  Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist since 2001.  He can be reached at:
Bill@DataScienceCentral.com
 
