Superintelligence

The fifth class of algorithms goes beyond algorithms as we know them now (digital or in physical form) all the way to superintelligent algorithms, which surpass human-level intelligence. Once we have reached that point, questions of consciousness and moral decision making, and with that the responsibility of algorithms, will play a role. Most of this discussion falls beyond the scope of this text. A general remark is that the more intelligent, autonomous or conscious an algorithm becomes, the more moral value will be attributed to it, and the more ethical reasoning and behavior will be expected of it. However, as Richards and Smart (2016) elegantly show using the android fallacy, it will still take a long time before robots are even capable of deserving that. According to many scholars, a so-called (technological) singularity (Vinge, 1993; Shanahan, 2015) will come, which is "the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization" [88]. For some, even the point at which algorithms become "smarter" than humans (whatever that may mean) will trigger an explosion of unstoppable AI growth that could come to dominate the human race entirely. Ethical concerns about such algorithms are discussed by Bostrom and Yudkowsky (2011) and many others, such as Kurzweil [89]. Many straightforward ethical concerns are about whether machines will overpower us, whether they will still need "us", and what it means to be human in a society dominated by machines (see Shanahan, 2015 for some pointers).

These five groups of algorithms show the many sides of the ethics of algorithms. Depending on the type of algorithm, task, setting and data, many kinds of ethical issues arise that must be addressed.

(5) Towards the Intentional Archivist

Algorithmic versions of virtually all current professions will eventually appear. The basic, human, question is how to ensure that all these algorithms respect our human values. In this section I sketch the considerations in ensuring that algorithms like Paul, the algivist from the scenario at the beginning of this essay, will have the right moral behavior if we actually build them.

Solving ethical issues using AI

The previous section has described many potential ethical issues, and they would all apply to algivists, but so far not many effective solutions exist. Literature on the governance of algorithms (Diakopoulos, 2016) focuses on transparency and human involvement, and on making algorithmic presence known. A challenge is that so far algorithms are largely unregulated (van der Sloot et al., 2016). However, there are laws and rules for data, such as the data protection act (DPA; Dutch: WBP [90]) from 1998. In 2018 new European regulation will take effect as a replacement of the directive of 1995 [91], in the form of the general data protection regulation (GDPR [92]), which will cover several forms of algorithmic decision making (see also Mittelstadt et al., 2016). Outside the law, solutions include privacy-by-design and encryption. Individual users can often protect their privacy to some extent by using privacy-friendlier software or services. A solution shared by many is data minimization (see e.g. Medina, 2015): only gather data that is really necessary. Another set of solutions is obfuscation (Brunton and Nissenbaum, 2013), in which users deliberately sabotage algorithmic systems.
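To make the data minimization idea concrete, the following is a minimal Python sketch; the purposes, field names and record format are hypothetical illustrations, not taken from any of the cited works. Before a record is stored or handed to an algorithm, it is stripped down to the fields that a stated purpose actually requires.

```python
# Hypothetical illustration of data minimization: keep, per purpose,
# only the fields that purpose genuinely needs.
REQUIRED_FIELDS = {
    "loan_decision": {"income", "outstanding_debt"},
    "newsletter": {"email"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of `record` reduced to the fields needed for `purpose`."""
    allowed = REQUIRED_FIELDS[purpose]
    return {key: value for key, value in record.items() if key in allowed}

raw = {
    "name": "A. Jansen",
    "email": "a@example.org",
    "income": 32000,
    "outstanding_debt": 1500,
    "browsing_history": ["..."],  # collected, but not needed for either purpose
}

print(minimize(raw, "newsletter"))     # {'email': 'a@example.org'}
print(minimize(raw, "loan_decision"))  # only the financial fields survive
```

The same whitelist can be applied at collection time, so that unnecessary fields are never gathered in the first place, which is the stronger reading of the principle.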
An alternative, though, is to employ AI itself. That is, one can utilize the same power of algorithms to deal with ethical issues. For example, recent advances in machine learning remove discriminatory biases by adapting training methods, or implement privacy-aware techniques. Etzioni and Etzioni (2016) propose general AI Guardians to help us cope with governing algorithms. Since AI systems increasingly become opaque (black box), adaptive (using ML) and autonomous, it becomes infeasible for humans to check what they are doing, and AI systems can do that for us. AI guardians are oversight systems built with AI technology, and they come in various sorts: interrogators can investigate, for example, a drone crash, and a monitor can keep an eye on other AI systems, or even enforce compliance with the law. A special type is the ethics bot, which is concerned with ensuring that operational AI systems obey ethical norms. These norms can be set by the individual, but can also come from a community. An ethics bot could guide another operational AI system, for example to ensure a financial AI system only invests in socially responsible corporations.

Learning the right values

Ethics bots will have to learn moral preferences, either by explicit instruction or from observed behavior. An intuitive idea would be to let algivists learn their moral behavior, for example, from watching a human archivist do their work. AI has developed many ways to do that, for example using imitation, or learning from demonstrations; however, it is not that simple. A key challenge is generalization: which parts of the task need to be imitated exactly, and which not?

"We're always learning from experience by seeing some examples and then applying them to situations that we've never seen before. A single frightening growl or bark may lead a baby to fear all dogs of similar size - or, even animals of every kind. How do we make generalizations from fragmentary bits of evidence? A dog of mine was once hit by a car, and it never went down the same street again - but it never stopped chasing cars on other streets." (Minsky, 1985, Society of Mind, Section 19.8)

Based on the advances I described in the previous sections, AI would be capable of recognizing and interpreting the actions of a human archivist in action, and also of replicating them in a robotic body, but it would still be a challenge to learn how to sort documents and to appraise the documents in the boxes, yet not learn how to scratch a nose, or fingertap while waiting for the printer to finish. An effective alternative is to learn the underlying reward function. As we know from optimization algorithms, a reward function determines what is important and what is not. Now assume the algivist could learn the reward function according to which the archivist does his job. In that case, the algivist would be able to replicate the archivist's behavior, including all the right ethical decisions. The technical term for this type of learning is inverse reinforcement learning (Wiering and van Otterlo, 2012), which is based on solid theories of behavior learning. For specialized tasks, especially in robotics, many successful applications exist. Equally, it could form the basis for AI systems that act in alignment with human goals and values, which is an interesting option for ethical algivists. The core challenge then is how to learn these human values, sometimes framed as the value learning problem (Soares, 2015).
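To illustrate what learning a reward function from demonstrations could look like, here is a deliberately simplified Python sketch. All features, examples and weights are hypothetical, and the update rule is a perceptron-style simplification in the spirit of inverse reinforcement learning, not a full algorithm from the cited literature: the unknown reward is assumed to be linear in a few appraisal features, and the weights are nudged until the option the human archivist demonstrably chose scores higher than the alternatives.

```python
import numpy as np

def learn_reward(demonstrations, n_features, epochs=50, lr=0.1):
    """Estimate linear reward weights from (chosen, alternatives) demonstrations."""
    w = np.zeros(n_features)
    for _ in range(epochs):
        for chosen, alternatives in demonstrations:
            for alt in alternatives:
                # If an alternative scores at least as high as the expert's choice,
                # shift the reward weights toward the expert's choice.
                if w @ alt >= w @ chosen:
                    w += lr * (chosen - alt)
    return w

# Hypothetical appraisal features: [is_duplicate, contains_personal_data, historical_value]
demos = [
    # The archivist kept the historically valuable record, not the duplicate.
    (np.array([0.0, 0.0, 1.0]), [np.array([1.0, 0.0, 0.0])]),
    # The archivist kept a sensitive but valuable file rather than a worthless one.
    (np.array([0.0, 1.0, 1.0]), [np.array([0.0, 1.0, 0.0])]),
]

w = learn_reward(demos, n_features=3)
print("learned reward weights:", w)  # historical value ends up weighted positively
```

An algivist equipped with such learned weights could then rank new appraisal decisions by estimated reward, which is exactly where the generalization problem from the Minsky quote returns: what transfers to unseen documents is the learned function, not the incidental details of the demonstrations.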
[88] https://en.wikipedia.org/wiki/Technological_singularity
[89] https://en.wikipedia.org/wiki/The_Singularity_Is_Near
[90] http://wetten.overheid.nl/BWBR0011468
[91] https://www.autoriteitpersoonsgegevens.nl/nl/onderwerpen/europese-privacywetgeving/algemene-verordening-gegevensbescherming
[92] General Data Protection Regulation (GDPR) http://www.eugdpr.org/more-resources-1.html

