Unethical AI


Recently there has been an increase in media attention given to unethical behavior by autonomous IT systems. Consider for example Microsoft's learning chatbot, which within a day learned to tweet racist slurs. Or Facebook's learning algorithms that, amongst other things, actively keep people within a filter bubble and show lower-status, lower-income job openings to women and minorities. Or Google's search engine giving wildly different and stereotypical answers to search queries depending on the use of ethnically descriptive keywords.

These technologies were not designed to be unethical. Nor are they deemed unethical from a principled point of view. Yet it is obvious to many that the behavior of these systems is not ethical. Defending them by arguing that they merely reflect human behavior would exemplify a condition that Chris Bateman describes as "cyberfetish": an obsession with technological advancement that blinds us to its ethical ramifications.

The inherent problem with these examples seems to be that the systems were designed to learn, imitate, or infer the "right" course of action by observing what humans do. Yet the actions they learn often raise ethical concerns. Does that mean we are simply not as ethical as we would like to think?
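To make that mechanism concrete, here is a minimal sketch in Python. The data and numbers are entirely made up for illustration; no real system works exactly like this. It shows how a learner that simply imitates observed human decisions inherits whatever bias those decisions contain.

```python
# A toy illustration (all data made up): a system that "learns" hiring
# decisions by imitating historical human decisions inherits their bias.
import random

random.seed(42)

# Simulated history: candidates from groups "A" and "B" are equally
# likely to be qualified, but the (biased) human decision-makers hired
# qualified B-candidates less often.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    hired = qualified and random.random() < (0.9 if group == "A" else 0.5)
    history.append((group, qualified, hired))

# "Learning" by imitation: estimate the hire rate for qualified
# candidates per group directly from the observed decisions.
def learned_hire_rate(group):
    outcomes = [hired for g, q, hired in history if g == group and q]
    return sum(outcomes) / len(outcomes)

print(f"Learned hire rate for qualified A: {learned_hire_rate('A'):.2f}")
print(f"Learned hire rate for qualified B: {learned_hire_rate('B'):.2f}")
# The imitating learner reproduces the gap, even though qualification
# was identical across groups by construction.
```

The gap in the learned hire rates is not the algorithm's invention; it was put into the data by construction, just as our own biases end up in the data that real systems learn from.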

Perhaps we are not. Discrimination has been widely studied within many different fields of science, such as psychology, sociology and neurobiology. Current research concludes that such behavior stems from an unconscious bias against outgroups, which has been an important evolutionary mechanism for the larger part of our existence as a species. For adult humans it takes a very conscious effort not to give in to this bias. This may also be why racism debates are always so heated; those accused of racism are more often than not acting unconsciously. In fact, those doing the accusing often unconsciously hold the same preconceptions towards other outgroups.

In theory such evolutionary mechanisms can be controlled by conscious effort, since they no longer serve us in modern society and are widely considered unethical in most of the Western world. In reality, however, they are still very much in effect, as demonstrated by all the effort that goes into workplace diversity and the limited effect it is having. Even in a globalized world we still think and act in a very tribalistic way. Only the tribe is no longer a group of nomads, a small village or a religious community, but an informal group of like-minded individuals, an "old boys' network" if you will. In fact it is considered quite modern to apply this line of thinking within innovative companies as well, as the wide adoption of the Spotify model shows.

This does raise an important question, however. Do we want our artificial intelligence to think along those same lines? If autonomous algorithms can learn a bias towards specific groups, it is not unthinkable that they may at some point learn a bias towards mankind. Evolutionary mechanisms shaped by competition amongst individuals may not be a safe basis for AI ethics. Not safe for us humans at least, since we would have to compete with autonomous agents possessing far greater computational abilities.

With the advent of fields such as robotics and big data, ethical dilemmas regarding values such as responsibility and privacy have already become a topic of interest amongst testing professionals. This is also reflected in the Agile Manifesto: after all, it defines certain values as more important than others, which can be interpreted as a normative ethical statement. As a result, colleagues of mine have recently posed the question of whether ethicalness should be a quality attribute within the scope of testing professionals.

This raises a number of questions. Defining ethicalness as a quality attribute suggests we can determine the ethical quality of a system, yet there is currently no globally shared agreement on what it means to be ethical. I won't give an overview of moral theories here, but consider for example that there is no single definitive answer to the question of whether the ends justify the means. If there were, we would need to think long and hard about whether we would want those same ethical rules to apply to a society containing both humans and AI. Finally, there is the question of how morally mature we would like our systems to be: do we program strict obedience, adherence to societal norms, or do we allow AI to have an individual perspective?

The examples I gave above already show that for autonomous systems, determining their ethicalness up front may not even be sufficient. Programming learning algorithms to give users tailor-made assistance keeps leading to unethical situations. We should therefore already be thinking about testing the ethics of autonomous actors.

This is in fact not a new field. Moral psychology already deals with describing the ethics of people. We should extend the scope of this science to include the realm of AI as well. That way we can specify and validate not only the ethicalness of the system being built, but also the very principles underlying the ethical reasoning that we design into it. Then during testing we can observe and test not only the ethicalness of the system's results, but also the moral reasoning behind its decision-making in the field.
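To give a first impression of what treating ethicalness as a testable quality attribute could look like, here is a minimal sketch of a fairness check in a test suite. Everything in it is hypothetical: the stand-in decide function, the candidate data, and the tolerance are mine, chosen purely for illustration; a real ethical test would of course be far more involved.

```python
# A minimal sketch of an "ethicalness" check as a quality attribute test:
# asserting that the system's positive-decision rate does not differ too
# much between groups (a simple demographic-parity check).
# The decide function, the data and the 0.05 tolerance are hypothetical.

def decide(candidate):
    """Stand-in for the system under test; replace with the real model."""
    return candidate["score"] > 0.5

def demographic_parity_gap(candidates, decide):
    """Largest difference in positive-decision rate between groups."""
    rates = {}
    for group in {c["group"] for c in candidates}:
        members = [c for c in candidates if c["group"] == group]
        rates[group] = sum(decide(c) for c in members) / len(members)
    return max(rates.values()) - min(rates.values())

def test_demographic_parity():
    candidates = [
        {"group": "A", "score": 0.7}, {"group": "A", "score": 0.4},
        {"group": "B", "score": 0.6}, {"group": "B", "score": 0.3},
    ]
    assert demographic_parity_gap(candidates, decide) <= 0.05
```

Note that a check like this only observes results. Testing the moral reasoning behind a decision, as argued above, would require opening up the system's decision-making itself.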

This may even teach us to think more critically about how we ourselves arrive at ethical decisions within our own society. If we don't like the way learning algorithms treat us, what does that tell us about the way we treat each other? By raising the ethical bar for AI, we may be creating a more ethical society for ourselves as well.

Published: 21 April 2017

Author: Niek Fraanje