There is a knowledge already implicit, "dormant", within the electronic images: a
kind of compressed virtual knowledge which, unlike external inscriptions
(metadata), waits to be uncovered from within.
Digital databanks of images, when cleverly addressed, render a kind of knowledge
which has otherwise been unimaginable in culture. Digital images render aspects of
visual knowledge which only the medium knows, virtually in the "unconscious" of
the data. The media-archaeological program is to uncover such virtual visual
knowledge.
Any archival record, rather than being looked at individually, gets its meaning
from a relational structure (which is the archival structure), the relationship to
other documents. Yet in contrast to the archive's own algorithms (its taxonomies),
navigation through large amounts of images in most media archives still requires
verbal or alphabetical metadata. To get a videotape from a media library, we still
must type a verbal search term into the interface.
Most videotape retrieval in film archives has been done by gripping the whole
tape, the storage medium: entering the archive, but not accessing its smallest
elements (what the Greeks called stoicheia, the name for both alphabetic letters and
mathematical numbers). The computerisation of such media archives now promises
that data objects which traditionally resisted human attempts to describe them
truly analytically will finally be opened, now that images are themselves being
understood as data sets, as clusters of grey and colour values.
Addressing and sorting non-scriptural media has remained an urgent challenge
since the arrival of fast-processing computers allowed analogue audiovisual
material to be digitised. The result is not necessarily better image quality but,
rather, the option to address not just images (by frames) but every single picture
element, each pixel.
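What "addressing every single picture element" amounts to can be sketched in a few lines of Python; the frame, its grey values, and the `pixel` helper below are purely illustrative, not any archive's actual data model:

```python
# A digitised frame, reduced here to a tiny 4x4 grid of 8-bit grey values
# (0 = black, 255 = white). All names and values are illustrative.
frame = [
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
    [  8,  72, 136, 200],
]

def pixel(frame, row, col):
    """Address a single picture element by its coordinates."""
    return frame[row][col]

# The image is no longer an opaque object on a tape: each of its smallest
# elements can be addressed, and computed with, individually.
print(pixel(frame, 1, 2))  # prints 160, the grey value at row 1, column 2
```

Once a frame is such an addressable grid rather than a stretch of tape, every operation described later in the text (sorting, pattern matching, histogram comparison) becomes arithmetic over these values.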
Images and sounds have become calculable and thus capable of being exposed to
pattern-recognition algorithms. Such procedures will not only media-
archaeologically "excavate" but also generate unexpected optical statements and
perspectives from an audiovisual archive that can, for the first time, organise itself
not just according to metadata but according to its own criteria: visual memory
in its own medium (endogenic).
Contrary to traditional semantic or iconological research in the history of ideas,
such an endogenic visual archive will no longer list images and sequences according
to their authors, subjects, and the time and place of recording. Instead, digital image
data banks will allow visual sequences to be algorithmically systematised according
to genuinely iconic notions and mediatic rather than iconological commonplaces,
revealing new insights into their non-symbolic characteristics. A predominantly
script-directed culture still lacks the competence of a genuine visual dictionary
such as is made possible by digitally annotated video analysis, which allows, e.g., for
procedures of dynamic reorganisation (Ekman & Friesen, 1969).
Wolfgang Ernst, Order by Fluctuation? Classical Archives and their Audiovisual Counterparts
The real "iconic turn" in addressing photographic images in archives is still to come
- a visual sorting on the threshold of digital image processing and retrieval. While
visual and acoustic sources contain types of information and aesthetics a text can
never convey, the book and the digital text as verbal research tools have been
comparatively much easier to handle than large amounts of images and sounds; that
is why the library is still the dominating metaphor of cultural memory. Since the
calculating and storage capacities of computers have increased significantly, whole
audiovisual archives become calculable - at least on the level of pixel or scan
values. Images and soundtracks can therefore be made accessible in their own
medium, with perfectly adequate algorithms of shape and pattern recognition being
available. Suddenly, images can be retrieved according to their own properties - that
is, not only by the grace of an accompanying text. The mathematician David
Mumford (1999) reduced the vocabulary of picture elements in Western visual
culture to twenty-three elements - almost like the letters of the (Greek)
alphabet. Image-endogenous systems of classification, such as geometric topologies
of images or even of cinematographic sequences, replace metadata.
Computing thereby offers the possibility of applying non-semantic image-sorting
programs which create a strictly form-based image assortment - as envisioned by
Heinrich Wölfflin in his Kunstgeschichtliche Grundbegriffe a century ago. Image-
based image retrieval operates in harmony with the mediality of electronic images,
for techno-mathematical memory can open up images according to their genuine
optical enunciations.
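A minimal sketch of such image-endogenous retrieval, assuming nothing beyond standard Python: images are ranked by the similarity of their grey-value histograms alone, a deliberately crude stand-in for the far richer shape- and pattern-recognition algorithms the text refers to. All names and toy images are invented for illustration.

```python
from collections import Counter

def grey_histogram(frame, bins=4):
    """Count grey values (0-255) into coarse bins: an 'endogenous' property
    computed from the pixels themselves, not from any caption or metadata."""
    counts = Counter()
    for row in frame:
        for value in row:
            counts[value * bins // 256] += 1
    return [counts[b] for b in range(bins)]

def distance(h1, h2):
    """L1 distance between two histograms: smaller means more alike in form."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Three toy 2x2 "images": two dark, one bright. No authors, no subjects,
# no time or place of recording - ranking rests on pixel values alone.
dark_a = [[10, 20], [30, 40]]
dark_b = [[15, 25], [35, 45]]
bright = [[200, 210], [220, 230]]

query = grey_histogram(dark_a)
ranked = sorted([("dark_b", dark_b), ("bright", bright)],
                key=lambda item: distance(query, grey_histogram(item[1])))
print([name for name, _ in ranked])  # prints ['dark_b', 'bright']
```

The point of the sketch is the sorting criterion: nothing verbal enters the comparison, so the ordering is strictly form-based in Wölfflin's sense.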
In his film Eye Machine, Harun Farocki directed attention to operative
images: so-called intelligent weapons are data-driven by matching images. But visual
search engines that can deal with semantic queries are no longer restricted to
military or commercial usage; they have become culturally driven in the "Digital
Humanities". By calculating images, MPEG-7 allows for "layered" image composites
and discrete 3D computer-generated spaces; according to Lev Manovich, the shift is
from "low-level" to "high-level" metadata that describes the structure of a media
composition or even its semantics. Digital technologies liberate images from
cultural contentism.
For monitoring systems to process large amounts of electronic images such as
human faces, they have to get rid of semantic notions of Gestalt. The IBM Query By
Image Content (QBIC) software did not try to decide radically in the quarrel between
semantic and non-semantic information, but rather to distribute the task according
to the respective strengths within the human-machine interface:
"Humans are much better than computers at extracting semantic descriptions from
pictures. Computers, however, are better than humans at measuring properties and
retaining these in long-term memory. One of the guiding principles used by QBIC is
to let computers do what they do best - quantifiable measurements - and let
humans do what they do best - attaching semantic meaning" (Flickner, 1997, p.8).
- which establishes a cybernetic feedback loop between man and machine, between
analogue and digital data processing, thus trying not to efface but to creatively
enhance the human-computer difference where they meet at the interface.
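The division of labour that the QBIC quotation describes can be sketched as follows; the functions and fields are hypothetical illustrations of the principle, not the QBIC API. The machine computes and retains quantifiable measurements, while the semantic description enters only through the human at the interface:

```python
# A minimal sketch of QBIC's guiding principle: computers do quantifiable
# measurements, humans attach semantic meaning. All names are illustrative.

def measure(frame):
    """Machine side: properties it can compute and retain in long-term memory."""
    values = [v for row in frame for v in row]
    return {"mean_grey": sum(values) / len(values),
            "min": min(values),
            "max": max(values)}

catalogue = []  # the archive's growing store of measured-and-labelled records

def annotate(frame, human_label):
    """Human side: a semantic description the machine cannot derive itself."""
    record = measure(frame)
    record["label"] = human_label  # semantics supplied at the interface
    catalogue.append(record)
    return record

record = annotate([[200, 210], [220, 230]], "portrait, brightly lit")
print(record["mean_grey"], record["label"])  # prints 215.0 portrait, brightly lit
```

Each record thus couples machine measurement with human semantics, which is precisely the feedback loop across the human-computer difference that the passage above describes.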
archives in liquid times