can be defined. First, the evidence may be inconclusive. For example, when an
algorithm predicts that I am a terrorist with 43.4 percent probability, what does that
mean? Second, evidence may be inscrutable and not open to inspection, which is
often the case for no-fly list decisions. Third, evidence can be misguided, meaning
that the underlying data is incomplete or unreliable. Actions decided upon such
evidence may be problematic too, since they can be unfair, e.g. discriminatory. In
addition, they can have transformative effects, for example changing people's
behavior, as happens when Facebook orders your personal news feed. These
concerns lead to typical patterns with ethical implications. For example,
transformative effects can lead to loss of autonomy when a search engine
manipulates you with advertisements, inconclusive evidence can lead to unjustified
actions, and inscrutable evidence can lead to opacity. Overall, many of these concerns
lead to a loss of privacy, and in any algorithmic decision-making situation, attributing
responsibility for the decisions can be quite complicated.
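To make the concern about inconclusive evidence concrete, consider a minimal sketch, with entirely hypothetical names and numbers, of how a probabilistic prediction is typically forced into a binary action by a threshold:

```python
# Minimal sketch: how inconclusive statistical evidence can still
# trigger a concrete action. Names and numbers are hypothetical.

def decide(risk_score: float, threshold: float) -> str:
    """Turn a probabilistic prediction into a binary action."""
    return "flag for review" if risk_score >= threshold else "no action"

# A 43.4 percent probability is far from certainty, yet a low enough
# threshold converts it into an action anyway.
print(decide(0.434, threshold=0.4))   # flag for review
print(decide(0.434, threshold=0.5))   # no action
```

The same inconclusive 43.4 percent thus leads to opposite actions depending on an arbitrary cut-off, which is exactly where unjustified actions can originate.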
As a complement to this taxonomy, I developed52 another way to look at the
potential (ethical) impact of algorithms, now ordered by what the algorithm can do
or, in general terms, by its level of autonomy. This results in five broad algorithm
classes, each with clearly defined capabilities and corresponding ethical issues.
Algorithms that interpret
The first type consists of algorithms that reason, infer and search. These algorithms
can be quite complex in what they do, but they all compute answers based on data as
it is. The more complex they are, the more information they can extract from that
data. Examples include translation53 and spatial language understanding54, but also
poetry generation.55 Visual information processing now includes recognizing56 what
is in a picture, evaluating a picture's aesthetics57, generating 3D face58 models,
augmented reality with IKEA59 furniture, and even Google's autonomous cars
recognizing kids in Halloween60 costumes. The interpretation of sound includes
better-than-human speech recognition61, lip reading62, and real-time Skype
translations.63 General data science can, for example, be used to infer64 when people
enter romantic relationships. Ethical concerns about such algorithms typically
involve privacy, since ever more ways become available to interpret and link ever
more kinds of data.
A second member of this group is the search algorithm, of which Google's is the
prime example. Search algorithms not only rank and filter information, but
increasingly use knowledge and learning to understand what the user wants (Metz,
2016a). Search engines also try to answer queries like "how high is the Eiffel tower"
directly, instead of delivering source documents. The ethical issues with search
engines typically concern the transformative effects they have on user autonomy,
because of their enormous power (Granka, 2010; van Otterlo, 2016a). Search engines
are key gatekeepers and influence the minds of billions of people every day. They
have been shown to be capable of influencing65 elections (Anthes, 2016), which is a
serious ethical problem. Answering queries is also central to so-called conversational
agents and social bots (Ferrara et al., 2016). Social bots can influence discussions on
forums, or pose as genuine users on platforms such as Twitter. An ethical issue is
that bots can be used for malicious66 purposes, such as steering a debate towards a
particular outcome or providing false support for election candidates. This again
threatens autonomy as a transformative effect. A second type of conversational agent
is the voice-controlled assistant67, such as Cortana, Siri and Alexa, which performs
tasks like keeping agendas, creating shopping lists, and answering questions.
Assistants are increasingly used, especially in China68, and have already appeared69
in legal70 situations (as a "witness").
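The gatekeeping role of ranking and filtering can be illustrated with a deliberately simplified sketch: score documents by how often they contain the query terms, drop non-matching ones, and present the rest in order. Real search engines use vastly richer signals (links, personalization, learned models); the toy data below is invented for illustration only.

```python
# Minimal sketch of the "rank and filter" step of a search engine.
def rank(query: str, documents: list[str]) -> list[str]:
    terms = query.lower().split()

    def score(doc: str) -> int:
        # Count how often each query term occurs in the document.
        words = doc.lower().split()
        return sum(words.count(t) for t in terms)

    # Filter out documents with no match; rank the rest by score.
    scored = [(score(d), d) for d in documents]
    return [d for s, d in sorted(scored, reverse=True) if s > 0]

docs = [
    "the eiffel tower is 330 metres high",
    "paris has many bridges",
]
print(rank("eiffel tower", docs))
```

Whatever such a function places first is what the user sees, which is why even a simple ranking choice already exercises the gatekeeping power discussed above.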
Algorithms that learn
The second class of algorithms goes beyond the first: these algorithms can learn,
finding generalized patterns in data. Such inductive algorithms perform statistical
inference to derive patterns, models, rules, profiles, clusters and other aggregated
knowledge fragments that allow for statistical predictions of properties that may not
be explicitly in the data. These are typically adaptive versions of the inference
algorithms I have discussed: search engines typically adapt over time, and
algorithms that interpret text, images and sound are often trained on such data.
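A minimal sketch of such induction, using a simple nearest-neighbour rule on made-up data, shows how a label that is not explicitly recorded for a new data point is nonetheless predicted from labelled examples:

```python
# Minimal sketch of inductive learning: a 1-nearest-neighbour
# classifier generalizes from labelled examples to a new, unseen
# data point. The features and labels are invented for illustration.

def predict(train: list[tuple[list[float], str]], x: list[float]) -> str:
    """Label x with the label of its closest training example."""
    def dist(a: list[float], b: list[float]) -> float:
        # Squared Euclidean distance between two feature vectors.
        return sum((u - v) ** 2 for u, v in zip(a, b))

    _, label = min(((dist(f, x), lbl) for f, lbl in train),
                   key=lambda pair: pair[0])
    return label

examples = [([1.0, 1.0], "spam"), ([9.0, 9.0], "ham")]
print(predict(examples, [2.0, 1.5]))
```

The prediction is purely statistical: it is a best guess from similar cases, not a fact read off the data, which is precisely why concerns about inconclusive evidence apply to this class.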
Applications range from predicting sounds for video71 and training self-driving cars
using video game data72 to predicting social security numbers.73 Once algorithms
start to learn (Domingos, 2015; Jordan and Mitchell, 2015) from data, concerns
about inconclusive evidence are justified, because most methods produce statistical
predictions. In addition, outcomes may change over time with the data, making
them unstable. The most powerful contemporary learning algorithms, such as deep
learning74 75, are purely statistical algorithms and very much black boxes, which
means they are non-transparent and the evidence they produce is inscrutable
(with some exceptions76). When algorithms are used for profiling and
personalization (van Otterlo, 2013; De Hert and Lammerant, 2016), something that
happens everywhere on the internet, they influence the user's choices and
therefore affect their autonomy of choice. If profiles are learned from data, algorithms
archives in liquid times
52 In the context of my course on the ethics of algorithms, see http://martijnvanotterlo.nl/teaching.html
53 https://translate.google.com/?hl=nl
54 https://www.wordseye.com/
55 http://www.wired.co.uk/article/google-artificial-intelligence-poetry
56 https://www.theverge.com/2017/6/15/15807096/google-mobile-ai-mobilenets-neural-networks
57 https://petapixel.com/2016/10/08/keegan-online-photo-coach-critiques-photos/
58 https://petapixel.com/2017/09/20/ai-tool-creates-3d-portrait-single-photo/
59 IKEA augmented reality https://www.youtube.com/watch?v=UudV1VdFtuQ
60 http://www.dailymail.co.uk/sciencetech/article-3301013/Google-teaches-self-driving-cars-drive-slowly-children-dressed-up.html
61 https://www.technologyreview.com/s/544651/baidus-deep-learning-system-rivals-people-at-speech-recognition/
62 https://www.technologyreview.com/s/602949/ai-has-beaten-humans-at-lip-reading/
63 https://futurism.com/skype-can-now-translate-your-voice-calls-into-10-different-languages-in-real-time/
64 https://www.facebook.com/notes/facebook-data-science/the-formation-of-love/10152064609253859/
martijn van otterlo from intended archivists to intentional algivists.
ethical codes for humans and machines in the archives
65 https://algorithmwatch.org/en/watching-the-watchers-epstein-and-robertsons-search-engine-manipulation-effect/
66 A funny example of a malfunctioning bot: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
67 http://www.businessinsider.com/siri-vs-google-assistant-cortana-alexa-2016-11?international=true&r=US&IR=T
68 https://www.technologyreview.com/s/608841/why-500-million-people-in-china-are-talking-to-this-ai/
69 See also the hilarious South Park episode on these assistants: http://www.ibtimes.com/south-park-season-premiere-sets-amazon-echo-google-home-speakers-2590169
70 https://www.wired.com/2017/02/murder-case-tests-alexas-devotion-privacy/
71 https://www.engadget.com/2016/06/13/machines-can-generate-sound-effects-that-fool-humans/
72 https://www.youtube.com/watch?v=JGAIfWG2MQQ
73 https://www.wired.com/2009/07/predictingssn/
74 https://www.wired.com/2017/04/googles-dueling-neural-networks-spar-get-smarter-no-humans-required/
75 https://machinelearningmastery.com/inspirational-applications-deep-learning/
76 http://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning