The challenge is that human values are typically difficult to learn: they can be
based on complex mental processes, can operate on multiple timescales, can be
difficult to place on a single value scale, can involve both intuition and reasoning,
and may involve further interactions such as signalling and trust-building.
Furthermore, they require ontological agreement between human and machine: do
they see the world in the same way? Many of these problems are shared with
technical AI work (e.g. computer vision), but for use in ethical systems much more
work is needed.
Against learning from scratch
The value learning problem is difficult for many reasons. In addition, any purely
statistical learning procedure faces further difficulties: its results are opaque, and
there are only limited possibilities to exploit knowledge one may already have about
a domain.
However, there are machine learning techniques that allow for the insertion of
knowledge as a bias for learning, and the extraction of learnt knowledge after
learning. Consider the robot learning technique of Moldovan et al. (2012), in which
a robot needs to learn from demonstration how physical objects are to be
manipulated and how they behave when manipulated. Without any prior knowledge,
the robot would face quite a challenging learning problem, mapping the pixels of its
cameras all the way to motor commands in its hands. Instead, we can add some
common-sense knowledge about the world, such as "if you move object A, and object
B is far away, then you can safely assume B will not be affected", or "if you want to
manipulate an object, you can either push, tap, or grab". This kind of knowledge
makes the learning problem easier and at the same time focuses (or: biases) the
learning efforts on the things that really matter. Other, general common-sense knowledge
could also help in choosing the right behavior (based on a reward function) such as
"green objects are typically heavy", and "one cannot place an object on a ball-shaped
object". In machine learning we call this kind of bias declarative, since it is
knowledge that can be explicitly used, stored, and "looked at". Declarative models
have been used before in ethical reasoning in AI (Anderson and Anderson, 2007)
and other ethical studies (van Otterlo, 2014a).
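A minimal sketch may make the idea concrete. The rules below are invented for illustration (they are not from Moldovan et al.), but they show the mechanism: declarative common-sense statements prune the space of situations the learner must explore, before any statistical learning happens.

```python
# Hypothetical sketch: declarative common-sense rules as a learning bias.
# The toy world model and thresholds are illustrative assumptions.

ACTIONS = ["push", "tap", "grasp"]  # rule: manipulation is one of these


def affected_objects(target, objects, positions, radius=1.0):
    """Rule: objects far from the moved object are assumed unaffected."""
    tx, ty = positions[target]
    return [o for o in objects
            if o != target
            and (positions[o][0] - tx) ** 2 + (positions[o][1] - ty) ** 2 <= radius ** 2]


def candidate_experiments(target, objects, positions):
    """The learner only explores (action, affected-objects) pairs the rules
    allow, instead of the raw pixels-to-motor-commands search space."""
    affected = affected_objects(target, objects, positions)
    return [(action, affected) for action in ACTIONS]


positions = {"A": (0.0, 0.0), "B": (5.0, 5.0), "C": (0.5, 0.0)}
print(candidate_experiments("A", ["A", "B", "C"], positions))
# B is far away, so only C is considered potentially affected
```

Because the rules are explicit data structures rather than learned weights, they can also be inspected and extracted afterwards, which is exactly what makes the bias "declarative".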
For inserting knowledge to work, we need to solve the ontological issue:
knowledge should be at the right level, and terms should mean the "same" thing for
AI and for humans. To bridge AI and human (cognitive) thinking, the rational agent
view is suitable. In AI, a rational agent is "one that acts so as to achieve the
best outcome or, when there is uncertainty, the best expected outcome" (Russell and
Norvig, 2009). In cognitive science we can take the intentional stance view
introduced by Daniel Dennett (2013). The intentional stance sees entities as
rational agents having mental notions such as beliefs, goals and desires. Using this
viewpoint, we assume the agent takes into account such beliefs and desires to
optimize its behavior. For people this is the most intuitive form of description of
other people's behavior. But, it is also common to use it to talk about algorithms:
I can say that Google believes I like Lego and therefore it desires to feed me
advertisements about it and sets a goal to prioritize websites referring to Lego. I can
also say that Google believes that I want pizza when I enter "food" as a query since it
knows from my profile it is my favourite food.
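The intentional-stance description of the Google example can itself be written down as a data structure. The sketch below is purely illustrative (it models no real system or API); it only shows how beliefs and desires can be made explicit and combined into goals, in the style of belief-desire-intention agent models.

```python
# Illustrative sketch: describing an algorithm through the intentional
# stance, with explicit beliefs, desires and goals. Not any real API.
from dataclasses import dataclass, field


@dataclass
class IntentionalAgent:
    beliefs: dict = field(default_factory=dict)   # e.g. {"likes": "Lego"}
    desires: list = field(default_factory=list)   # e.g. ["show relevant ads"]
    goals: list = field(default_factory=list)     # adopted concrete targets

    def deliberate(self):
        """Combine beliefs and desires into a concrete goal."""
        topic = self.beliefs.get("likes")
        if topic and "show relevant ads" in self.desires:
            self.goals.append(f"prioritize pages about {topic}")
        return self.goals


google = IntentionalAgent(beliefs={"likes": "Lego"},
                          desires=["show relevant ads"])
print(google.deliberate())  # ['prioritize pages about Lego']
```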
Code of ethics as a moral contract between humans and machines
Coming back to the archivist singularity mentioned in the introduction, I propose
a simple strategy: construct Paul, the Intentional Algivist, as a robotic,
algorithmic agent for the archives that has moral principles just like human
archivists. What better source of declarative, human knowledge about ethical values
in the archival domain than the previously discussed archival codes of ethics? Indeed,
these hold general consensus ideas on how an archivist should behave ethically,
dealing with issues such as privacy, access, and fair use of the archive. In addition,
they are full of intentional descriptions, see for example: "The Archivist should
endeavour to promote access to records to the fullest extent consistent with the
public interest, but he should carefully observe any proper restrictions on the use of
records". This is clearly a bias on how the algivist should behave and it contains
intentional constructs such as a goal, a desire and several (implicit) beliefs. Codes
of ethics are solid knowledge bases of the most important ethical guidelines for the
profession, and typically they are defined to be transparent, human-readable and
public. Using codes of ethics as a knowledge bias in adaptive algivists that learn
ethical behavior is natural, since it merely translates (through the rational agent
connection) an ethical code that was designed as a bias for human behavior, and
uses that as a guide or constraint, or: as a moral contract between man and machine.
I see a practical way forward in which an algivist is endowed with the ethical values
contained in the code of ethics, after which it observes human archivists at work to
fine-tune its behavior based on their example. Human archivists will slowly
transform into trainers and coaches of algivists: the more advanced algivists become,
the more humans will guide them while leaving the archival work to them. But, before
this happens, much still needs to be done, both by AI researchers as well as by
archivists themselves.
What does the field of AI need to do?
AI needs to keep progressing as always, but more research is needed on several
specific aspects. Language understanding and the formalization of human
(common-sense) knowledge need to be improved so that codes of ethics can be
translated automatically into forms the algivist can use for acting, and for
reasoning. We know that even the impossible logic of Roadrunner cartoons has at
some point been formalized (McCartney and Anderson, 1996), so nothing is
impossible.
Furthermore, robotic skills need to improve a lot. Manipulation skills are somewhat
sufficient under laboratory conditions (e.g. Moldovan et al., 2012), and there has
been some progress in environments related to archives, such as libraries93, but
obtaining general movement and object manipulation skills in an arbitrary physical
archive will still take enormous effort. Once parts of the archive have been digitized,
many of the archival selection, ordering and description tasks can be handled well,
although there too much improvement is possible in the semantic understanding of
documents, images, and other items.
What do archivists need to do?
Archivists will need to assist AI researchers as experts in archives, and they need to
decide at least two things.
The ethics of choosing THE code of ethics: The core idea is to inject ethical
codes into machines. Out of the many possible versions, which one should be
picked? And who decides upon that? Archivists, committees of experts,
programmers, or more general democratic methods? For this to work, we may
also need to investigate further which values are held in professions such as
those of archivists and librarians.
archives in liquid times
martijn van otterlo from intended archivists to intentional algivists.
ethical codes for humans and machines in the archives
93 https://phys.org/news/2016-06-automated-robot-scans-library-shelves.html