typically learn statistical models from many users and apply them to a single user.
This can produce conclusions that are right on average but wrong for
that single individual. A new privacy risk of learning algorithms is that they can also
reveal new knowledge (van Otterlo, 2013; Schwartz et al., 2013; Youyou et al., 2015;
Kosinski et al., 2013), predicting personal traits from language use, Facebook likes,
or just a photo.77 Such algorithms obviously have effects on privacy, but certainly
also transformative effects related to autonomy.
A more general consequence of adaptive algorithms is that we move in the direction
of "the end of code" (Tanz, 2016). In the near future, increasingly many algorithmic
decision-making tasks will be learned from data, instead of hardcoded by
programmers. This has consequences for society and for people, who will more often be assigned the role of trainer rather than programmer.
Algorithms that optimize
The third class of algorithms consists of algorithms that optimize, incorporate
feedback, and experiment. These typically employ reward functions that represent what counts as a good outcome: for example, a sale in a web shop, or a new member joining a social network. Reward definitions tell an algorithm what is
important to focus on. For example, advertising algorithms on webpages get +1
reward for each time a user clicks on an offer. Based on all available statistical knowledge and all data about a problem, optimization algorithms compute the best expected solution. The most prominent system currently comes
from Google's DeepMind. It combines reasoning, learning, and optimization; it beat the world's best Go player (Metz, 2016b) and is now tackling the complex computer game StarCraft II.78 Optimization algorithms feature two kinds of rewards. One is
used by the algorithm to optimize and represents clicks, sales, or other things which
are valuable. The other type consists of rewards for users (e.g. a sale), with the goal of
nudging79 them into doing something (e.g. buying something). Manipulating users'
behavior obviously has transformative effects on autonomy. Worse, just like
learning algorithms, optimization works well on average and could deliver nudges to
the wrong users too, which would make the outcomes discriminatory and unfair.
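The two reward definitions mentioned above (+1 per click, or a sale in a web shop) can be made concrete in a few lines of Python. This is only an illustrative sketch; the function names are invented for the example and not taken from any real advertising system:

```python
def ad_click_reward(clicked: bool) -> float:
    """Reward used by an advertising algorithm: +1 each time a user clicks an offer."""
    return 1.0 if clicked else 0.0

def sale_reward(sale_completed: bool, sale_value: float) -> float:
    """Reward representing a good outcome for a web shop: the value of a completed sale."""
    return sale_value if sale_completed else 0.0

# Whoever writes these definitions decides what the algorithm treats as "important".
```

Which definition an algorithm optimizes is a design choice, and that choice determines what the algorithm will focus on.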
Optimization algorithms typically iterate the optimizations by experimenting with
particular decisions, through interactions with the problem (see Wiering and van
Otterlo, 2012). A good example is algorithms that determine the advertisements
on the web: they can experiment ("try out") with various advertisements for
individual users, and use the feedback (clicking behavior) of individuals to optimize
advertisement placement. So, instead of a one-pass optimization, it becomes an
experimentation loop in which data is collected, decisions are made, feedback and
new data is collected, and so on. Platforms with large user bases are ideal
laboratories for experimentation. For example, Netflix experiments with user
suggestions to optimize rewards related to how much is being watched (Gomez-Uribe and Hunt, 2015). Optimization algorithms are generally
used to rank things or people. In the ranked society in which we now live, everything
gets ranked, with examples such as Yelp, Amazon, Facebook (likes), TripAdvisor,
Tinder (swiping) and OkCupid, all to find "the best" restaurant, lover, holiday trip,
or book. In our work life too, ranking and scoring become the norm (so-called workplace monitoring80). The ultimate example is China's 2020 plan (Chin and
Wong, 2016) to rank everyone in society to find out "how good a citizen are you".
Scores are computed from many things, ranging from school results to behavior on social media to credit score, and combined into one overall score. The higher that
score, the more privileges the citizen gets (from easier car rental and bank loans to visas for other countries). The ethics of experimentation has many aspects
(Puschmann and Bozdag, 2014). Most important here are the choice of reward function (whoever decides it holds great power) and the fact that, especially on the internet, we often do not know we are part of an experiment; perhaps we need new forms of consent.
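The experimentation loop described above (collect data, make a decision, observe feedback, repeat) can be illustrated with a minimal epsilon-greedy bandit, a standard technique for this kind of trial-and-error optimization. This is a sketch, not any real platform's system; the three advertisements and their click probabilities are invented for illustration:

```python
import random

def epsilon_greedy_ads(click_prob, steps=10000, epsilon=0.1, seed=0):
    """Experimentation loop: show an ad, observe a click (the reward),
    update that ad's estimated click rate, and repeat."""
    rng = random.Random(seed)
    n_ads = len(click_prob)
    counts = [0] * n_ads      # how often each ad has been shown
    values = [0.0] * n_ads    # running estimate of each ad's click rate
    for _ in range(steps):
        # Explore a random ad with probability epsilon, otherwise exploit
        # the ad currently believed to be best.
        if rng.random() < epsilon:
            ad = rng.randrange(n_ads)
        else:
            ad = max(range(n_ads), key=lambda a: values[a])
        # Simulated user feedback: +1 reward if this ad gets clicked.
        reward = 1.0 if rng.random() < click_prob[ad] else 0.0
        counts[ad] += 1
        values[ad] += (reward - values[ad]) / counts[ad]  # incremental mean
    return values

# Three hypothetical ads with true click rates of 2%, 5% and 10%:
estimates = epsilon_greedy_ads([0.02, 0.05, 0.10])
```

Over many interactions the loop shifts traffic toward the advertisement users click most, while the epsilon fraction of random choices keeps the experiment running on every user.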
Physical manifestations
A fourth class of algorithms concerns physical manifestations such as robots and
sensors (internet-of-things). These algorithms go beyond the digital world and have
physical presence and agency in our physical world, which may jeopardize human
safety. A first manifestation is the internet-of-things (Ng and Wakenshaw, 2017) in
which many appliances and gadgets get connected and where sensors are increasingly placed everywhere81, creating data traces of once purely physical activities. The
programmable world (Wasik, 2013) will turn all the digital (and intelligent) items
around us into one giant computer (or: algorithm) that can assist us and
manipulate us. For example, if your car, refrigerator, and microwave could work
together, they could - with the right predictions of the weather, your driving mood
and speed, and possible traffic jams - have your dinner perfectly cooked and warm the
moment you get home from work. The ubiquity of such systems will raise ethical
issues, since they will be influential but often unnoticeable, and they also raise privacy concerns. A similarly big development is that of physical robots82 in our society. "A robot
is a constructed system that displays both physical and mental agency, but is not
alive in the biological sense" (Richards and Smart, 2016). Many types of robots exist,
ranging from simple vacuum cleaners, to humanoids (with human-like appearance83,84), to robots capable of manipulating their physical environments in hospital or
manufacturing situations. Robots are not yet part of our daily lives, but the literature
on the ethics of robots is rich (Lichocki et al., 2011; Smart and Richards, 2016).
Steinert (2014) frames the ethics of robots into four main85 categories: robots as
tools (or instruments), robots as recipients of moral behavior, robots as moral actors,
and robots as part of society. The difference between the first and the latter two is
mainly one of responsibility. The introduction of increasing numbers of robotic
agents into society (the fourth type) will also have socio-economic consequences we
can only partially imagine, most obviously for work, which will86 increasingly be
taken over (or not87) by robots (Ford, 2013). Robots are also expected to have
(ethical) impact on things like law enforcement, the military, traffic (Kirkpatrick,
2015), healthcare and even prostitution (Richardson, 2016).
archives in liquid times
martijn van otterlo from intended archivists to intentional algivists.
ethical codes for humans and machines in the archives
77 https://www.theguardian.com/technology/2017/sep/12/artificial-intelligence-face-recognition-michal-kosinski
78 https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/
79 https://en.wikipedia.org/wiki/Behavioural_Insights_Team
80 https://harpers.org/archive/2015/03/the-spy-who-fired-me/
81 https://www.iamexpat.nl/lifestyle/lifestyle-news/hidden-cameras-dutch-advertisement-billboards-ns-train-stations-can-see-you
82 https://en.wikipedia.org/wiki/R.U.R.
83 https://en.wikipedia.org/wiki/Uncanny_valley
84 https://www.wired.com/2017/04/robots-arent-human-make/
85 The article also includes a fifth type which refers to the influence of robots on ethics itself (meta-ethics).
86 https://www.wired.com/brandlab/2015/04/rise-machines-future-lots-robots-jobs-humans/
87 https://www.wired.com/2017/08/robots-will-not-take-your-job/