Recently some of it has been disclosed,11 but generally it is unclear who decides upon
them. Facebook is also active in detecting utterances related to terrorism12, Google
aims to tackle fake news by classifying13 news sources and marking them, effectively
implementing a "soft" version of censorship, and Twitter targets14 "hate-speech",
thereby implementing language (and possibly thought) monitoring on the fly. Big
technology companies are starting to recognize the ethical15 issues, even prompting
Google to revive Wiener's16 idea of an emergency button17 to turn off autonomous
systems. Ethical concerns about algorithms, or more generally artificial intelligence
(AI) (Nilsson, 2010), are still relatively new and come from many directions. Open
expressions of concern by Stephen Hawking, Elon Musk and Bill Gates warn18 of
the unforeseen consequences of widespread use of AI. A letter19 of concern with
"research priorities for robust and beneficial AI" was quickly signed by more than
8000 researchers and practitioners. Individual top AI researchers, such as
Tom Dietterich20, speak out. Big tech companies such as Google, Amazon, IBM and Microsoft
announced that they are forming an alliance21 which "aims to set societal and
ethical best practice for AI research". Various academic initiatives22 arise around the
broad topic of "societal implications of algorithms" and the scientific literature on
the topic is growing quickly (Mittelstadt et al., 2016). Various authors try to explain
the complex interactions between algorithmic technology and society. Van Otterlo
(2014a) links behaviorist psychology to the way technology now has the means to
implement behavioral conditioning on a large scale. Zuboff (2015) introduces the
"Big Other" as a metaphor to point to the combined logic of capitalism, surveillance
and digital technologies such as AI. Morozov23 sees similar patterns of information
capitalism undermining democracy. All these analyses go beyond
relatively simple, more isolated issues such as privacy and data protection, and see
the potential influence of algorithms on society as a whole, with profound
implications for democracy and free will.
In this essay I explore the ethical implications of algorithms in archives, with
consequences for access. One of my goals is to introduce the reader to recent
developments in the ethical study of artificial intelligence algorithms and to survey important
issues. One argument I develop in this essay is that since "we", as humans, are
creating these future algivists, we should study their ethical implications before,
during and after their creation. However, I also argue that it may be better to create
them in such a way that we can ensure they behave according to our own moral
values. How can we construct this ethical algivist, and how does it fit into broader
scientific developments?
(2) The Digitalization and Algorithmization of Society and Archives
One of the hype terms of this decade24 is big data. Everywhere around us,
everything is turned into digital data, which is thought to be good for health, the
economy, the advancement of knowledge, and so on (Mayer-Schönberger, 2013).
The promise is that data will allow us to understand, predict and optimize any
domain (van Otterlo and Feldberg, 2016). For example, patient data allows us to
build statistical models that predict diseases, and to experiment with novel treatments
based on insights from that data, to cure more diseases. Another promise of big data is
that it allows one to throw25 away typical "hypothesis-driven" science, which works
top-down, and to adopt a more bottom-up strategy, which starts with the data and
tries to find patterns. Big data is not entirely new: big data avant la lettre can, for
example, be found in the Cybersyn project in Chile in the seventies, which aimed
at controlling the economy of an entire country (Medina, 2015), something
that sounds like a modern "smart city"26 endeavour. Data has always27 been
gathered and analysed, but today's scale is new. Modern data-driven technology
induces a new28 machine age, or an industrial revolution (see also Floridi, 2014).
After the rationalization of both manual and cognitive labour, we now enter a
new phase where much of our society gets turned into data, and processed by
autonomous, artificial entities.
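The bottom-up, data-first strategy described above can be made concrete with a deliberately naive sketch in Python. Everything here is hypothetical: the patient records are invented, and real data-driven systems rely on statistical learning rather than simple frequency counting, but the sketch shows the idea of starting from the data and letting a pattern emerge without a prior hypothesis.

```python
# A naive "bottom-up" pattern finder: no hypothesis is supplied in advance;
# the pattern is read off the data itself. All records below are invented.
patients = [
    {"smoker": True,  "age_over_60": True,  "disease": True},
    {"smoker": True,  "age_over_60": False, "disease": True},
    {"smoker": False, "age_over_60": True,  "disease": False},
    {"smoker": False, "age_over_60": False, "disease": False},
    {"smoker": True,  "age_over_60": True,  "disease": True},
    {"smoker": False, "age_over_60": True,  "disease": True},
]

def best_predictor(records, target):
    """Score each feature by how often feature=True co-occurs with target=True."""
    scores = {}
    for feature in records[0]:
        if feature == target:
            continue
        hits = [r[target] for r in records if r[feature]]
        scores[feature] = sum(hits) / len(hits) if hits else 0.0
    # The feature with the highest conditional frequency is the (naive) "pattern"
    return max(scores, key=scores.get), scores

feature, scores = best_predictor(patients, "disease")
print(feature, scores)
```

In this toy data the counting discovers that "smoker" co-occurs with "disease" in every record where it holds, exactly the kind of relation that hypothesis-driven science would instead have postulated in advance and then tested.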
The digitalization which turns our world into data is depicted in the figure
(p. 272): each square represents an object, each triangle a document and each circle a
person. Traditionally, all relations and interactions between any of these groups were
physical. In our modern age, all such interactions are becoming digitalized step by
step and produce data entering the data area. Consider shopping: long ago, one
could go to a store, try on some jeans, pay for them, and only the salesperson (and the
customer) would have a faint memory of who just bought which jeans. Nowadays,
traces from security cameras, online search behavior on the store's website, Wi-Fi
tracking in the store, and the final payment all generate a data trail of the
interactions with the store and its products. A major consequence of that
digitalization process is that a permanent memory of all those specific interactions is
archives in liquid times
martijn van otterlo from intended archivists to intentional algivists.
ethical codes for humans and machines in the archives
11 https://www.theguardian.com/news/2017/may/21/revealed-facebook-internal-rulebook-sex-terrorism-violence
12 http://www.telegraph.co.uk/news/2017/06/16/facebook-using-artificial-intelligence-combat-terrorist-propaganda/
13 https://www.theguardian.com/technology/2017/apr/07/google-to-display-fact-checking-labels-to-show-if-news-is-true-or-false
14 https://www.forbes.com/sites/kalevleetaru/2017/02/17/how-twitters-new-censorship-tools-are-the-pandoras-box-moving-us-towards-the-end-of-free-speech/
15 https://www.wired.com/2016/09/google-facebook-microsoft-tackle-ethics-ai/
16 Wiener was, however, skeptical: "Again and again I have heard the statement that learning machines cannot
subject us to any new dangers, because we can turn them off when we feel like it. But can we? To turn a
machine off effectively, we must be in possession of information as to whether the danger point has come.
The mere fact that we have made the machine does not guarantee that we shall have the proper information
to do this." (N. Wiener (1948, 1961): Cybernetics, or control and communication in the animal and the
machine).
17 http://www.dailymail.co.uk/sciencetech/article-3624671/Google-s-AI-team-developing-big-red-button-switch-systems-pose-threat.html
18 http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/
19 https://futureoflife.org/ai-open-letter/
20 https://academic.oup.com/nsr/article/doi/10.1093/nsr/nwx045/3789514/Machine-learning-challenges-and-impact-an
21 https://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms
22 https://www.nytimes.com/2016/11/02/technology/new-research-center-to-explore-ethics-of-artificial-intelligence.html?mcubz=1
23 (In German) http://www.sueddeutsche.de/digital/alphabet-google-wird-allmaechtig-die-politik-schaut-hilflos-zu-1.3579711
24 The start of this direction was only roughly ten years ago; see "The Petabyte Age", https://www.wired.com/2008/06/pb-intro/ (Anderson, 2008), and "Mining Our Reality", http://www.cs.cmu.edu/~tom/pubs/Science2009_perspective.pdf (Mitchell, 2009).
25 This phenomenon is called "the end of theory" since it breaks with standard scientific methodology.
26 See for example Barcelona (http://www.smartcityexpo.com/barcelona) and other cities.
27 See for example East Germany's Stasi and the great movie about it, http://www.imdb.com/title/tt0405094/
28 See the Rathenau report "Working in the Robot Society" (2015), https://www.rathenau.nl/nl/node/766. The Rathenau Institute publishes many reports on the digital society and its implications; see https://www.rathenau.nl/nl/publicaties