Annotation Demo Showcasing the Liber Floridus
Proof-of-concept for the visualisation of annotations on complex materials (like books), based on IIIF.
The option to add IIIF annotations to digital images is relatively new. Development in this area is ongoing, and a few questions have yet to be resolved, among them how annotations should be visualised. As a contribution to this development, we have designed a proof-of-concept for the visualisation of annotations on complex materials, in this case medieval manuscripts. Unlike "simple" objects (e.g. a photo), a manuscript is "complex": it comprises a set of images of the binding, the spine, and the - often numerous - pages. On top of this, manuscripts are studied from a wide variety of disciplines and perspectives, for instance for their material and chemical properties, text, script, decoration, binding, and provenance marks. Such a variety and volume of information, gathered in the setting of a viewer, can be overwhelming.
This proof-of-concept examines how we can present various layers of information in a structured and user-friendly way in the Mirador IIIF viewer.
Legend to the demo
Layers: various types of images (e.g. normal light, infrared light)
Categories: information categorized per discipline or perspective
- iconography: identification of content of images
- pigments: identification of pigments in ink and paint
- transcriptions: transcription of the text as it occurs in the manuscript
- translations: translations into Dutch (available only for ff. 6v-13r)
- material technical information: identification of materials and methods
Pin an annotation by means of the tack symbol.
For this demo we also developed the possibility to cite sources, to credit the authors of annotations, and to link related annotations (translations and transcriptions).
Liber Floridus as our guinea pig
For this demo we have selected ff. 6v-13r, ff. 61v-62r, ff. 88v-89r and ff. 92v-93r of the famous Liber Floridus. Browse through the leaves using the navigation arrows below the image. It is not possible to zoom in on the images in this HTML environment. The annotations are not exhaustive. Go to the manuscript's page on Mmmonk.be or to the Liber Floridus website to explore this beautiful manuscript to your heart's content.
Since this proof-of-concept is intended as a contribution to the development of IIIF annotations, we have asked developer Bauke van der Laan to document his choices and methods. Read his report below and find the open-source repository on GitHub.
The primary objective of the annotation demo is to explore how both experts and the general public can consult short- and long-form annotations on a wide range of topics within a common IIIF viewer such as Mirador 3.
The constraints of the project prevented us from building a fully functional Mirador plugin that can handle arbitrary IIIF manifests; instead, they made us focus on finding the information architecture and user experience best suited to consulting (different categories of) annotations. To that end, we recreated the visual language of Mirador 3 using Material UI, simulated a basic Mirador viewer, and built our custom interface on top of it. As a consequence, standard features such as zooming and panning are not available in this demo.
Another choice forced by the constraints of the project and the expected target audience was to skip adapting the user interface to smaller screens. As a result, this demo expects the user to interact using a desktop or laptop monitor.
The annotations in the demo follow the Web Annotation model as outlined by the W3C in https://www.w3.org/TR/annotation-model/. An annotation consists of a target (an SVG shape marking one place on one view of the manifest), which is displayed on the canvas, and a body containing the content of the annotation. Once an annotation is selected, either from the list or by clicking its target on the canvas, its body and metadata are displayed.
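As an illustrative sketch (not the demo's actual data), a Web Annotation with an SVG-selector target and a plain-text body could look like the following; all IDs, coordinates, and content are invented for this example:

```javascript
// A minimal Web Annotation (W3C Web Annotation model) with an SVG-shaped
// target on one canvas and a plain-text body. IDs and coordinates are
// placeholders, not taken from the demo.
const annotation = {
  "@context": "http://www.w3.org/ns/anno.jsonld",
  id: "https://example.org/anno/1", // hypothetical annotation ID
  type: "Annotation",
  body: {
    type: "TextualBody",
    format: "text/plain",
    value: "Initial depicting the author at work.",
  },
  target: {
    source: "https://example.org/iiif/canvas/f6v", // one view of the manifest
    selector: {
      type: "SvgSelector",
      value:
        "<svg xmlns='http://www.w3.org/2000/svg'>" +
        "<rect x='120' y='80' width='200' height='150'/></svg>",
    },
  },
};

console.log(annotation.target.selector.type); // "SvgSelector"
```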
When a single view offers a choice of multiple images (layers), as described in https://iiif.io/api/cookbook/recipe/0033-choice/, the demo treats these images as interchangeable for annotation purposes: an annotation is always shown, regardless of which image layer is selected.
An open annotation closes when another annotation is opened. Open annotations also close on view navigation, because we assume that an annotation, which is inherently connected to a single view, loses its relevance on the next view in the majority of cases. A user can keep an annotation open across navigation by pinning it with the pin button.
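The close-on-navigation rule can be sketched as a small filter over the currently open annotations; the function name and data shape below are our own invention for illustration, not the demo's actual code:

```javascript
// On view navigation, keep only pinned annotations open.
// `open` is assumed to be an array of { id, pinned } records.
function closeOnNavigate(open) {
  return open.filter((a) => a.pinned);
}

const open = [
  { id: "anno-1", pinned: false }, // closes on navigation
  { id: "anno-2", pinned: true },  // pinned, stays open
];

console.log(closeOnNavigate(open).map((a) => a.id)); // [ 'anno-2' ]
```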
- Annotation Content
The content of an annotation is described by the annotation body. We used the MIME types text/plain and text/html for the annotations in the demo. For the richer HTML annotations, we composed the body out of a subset of HTML elements, mainly the <p>, <h1>, <h2>, <a>, and <img> elements. This allowed us to create engaging long-form annotations and link to external sources. Typographical styling of these HTML elements is applied by the application. We selected a serif typeface for annotation content, (1) to contrast it with the 'functional' interface elements, which typically use sans-serif faces in Mirador, and (2) to create an inviting reading environment for the longer-form annotations.
- Citing Sources
As the Web Annotation model doesn't specify metadata for references, bibliographies, and other ways of citing sources, we included references as part of the annotation body using the doc-bibliography role (see https://www.w3.org/TR/dpub-aria-1.1/#doc-bibliography).
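A text/html annotation body carrying its references this way might look as follows; the heading, prose, and citation are placeholders, not content from the demo:

```javascript
// An HTML annotation body whose references are marked up with the
// DPUB-ARIA doc-bibliography role. All content is a placeholder example.
const body = {
  type: "TextualBody",
  format: "text/html",
  value: `
    <h1>On the pigments of f. 6v</h1>
    <p>The blue areas were identified as azurite.</p>
    <section role="doc-bibliography">
      <h2>References</h2>
      <p>Author, A., <em>Placeholder Title</em>, 2020.</p>
    </section>`,
};

console.log(body.value.includes('role="doc-bibliography"')); // true
```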
Some annotations in the demo are related to others, ranging from superficial connections to cases where annotations are best consulted together. Take, for example, a transcription of a text and a translation of the same text. While the two are independent and stem from different disciplines, they share the same target, and consulting them together can be useful. Cross-referencing such annotations is therefore valuable. To our knowledge, the Web Annotation model doesn't specify a way to do this, so we implemented our own cross-reference extension for the purpose of this demo: any annotation can refer to another annotation's ID. When an annotation is displayed, a list of its cross-references is shown, letting the user easily navigate to a referenced annotation.
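The cross-reference extension can be sketched as an extra property holding the IDs of related annotations; the property name `crossReferences` and all IDs here are assumptions for illustration, not necessarily the names used in the demo's code:

```javascript
// A transcription and a translation that cross-reference each other
// via a custom (non-standard) `crossReferences` property.
const transcription = {
  id: "https://example.org/anno/transcription-6v",
  type: "Annotation",
  crossReferences: ["https://example.org/anno/translation-6v"],
};
const translation = {
  id: "https://example.org/anno/translation-6v",
  type: "Annotation",
  crossReferences: ["https://example.org/anno/transcription-6v"],
};

// Resolve one annotation's cross-references against a lookup table,
// as a viewer would when rendering the list of related annotations.
const byId = new Map([transcription, translation].map((a) => [a.id, a]));
const related = transcription.crossReferences.map((id) => byId.get(id));

console.log(related[0].id); // "https://example.org/anno/translation-6v"
```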
Every annotation in the demo is assigned to one category. Categories are not part of the Web Annotation model; they are an extension created for the purpose of this demo. First, a list of possible categories is configured application-wide. To assign an annotation to a category, we add a reference to the category ID to the annotation.
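The category extension thus amounts to an application-wide category list plus a category reference on each annotation; the property name `category` and the IDs and labels below are illustrative assumptions:

```javascript
// Application-wide list of categories (illustrative IDs and labels).
const categories = [
  { id: "pigments", label: "Pigments" },
  { id: "transcriptions", label: "Transcriptions" },
];

// An annotation assigned to one category via a custom `category` property.
const annotation = {
  id: "https://example.org/anno/2",
  type: "Annotation",
  category: "pigments",
};

// Resolve the category label for display in the viewer.
const label = categories.find((c) => c.id === annotation.category).label;
console.log(label); // "Pigments"
```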
We discussed assigning different colours to the different categories, visible in the annotation target forms. Ultimately we decided against this for the following reasons:
- A UI that depends on different colours excludes colour-blind or otherwise visually impaired users.
- It is already difficult to select one colour that contrasts sufficiently with all views in a manifest; this problem becomes more apparent with every added colour.
- We expect that a majority of users will select a very limited subset of categories anyway, rendering the use of multiple colours mostly irrelevant.
Annotations in the demo can be filtered by enabling and disabling categories. This way, a user interested in the pigments used in a manuscript doesn’t need to be bothered by annotations regarding the transcriptions and translations of the same manuscript and vice versa.
Toggling categories filters both the annotation list and the targets visible on the canvas. The selection of categories that the user has set remains active while navigating the different views of the manifest, a decision we made after user testing.
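Category filtering then reduces to keeping only the annotations whose category is in the set of enabled categories, applied to both the annotation list and the targets on the canvas; a minimal sketch with assumed function and property names:

```javascript
// Keep only annotations whose category is currently enabled.
// `enabled` is a Set of category IDs toggled on by the user.
function filterByCategory(annotations, enabled) {
  return annotations.filter((a) => enabled.has(a.category));
}

const annotations = [
  { id: "anno-1", category: "pigments" },
  { id: "anno-2", category: "translations" },
  { id: "anno-3", category: "pigments" },
];

// A user interested only in pigments:
const enabled = new Set(["pigments"]);
console.log(filterByCategory(annotations, enabled).map((a) => a.id));
// [ 'anno-1', 'anno-3' ]
```

Because `enabled` lives in application state rather than per view, the same set can be reapplied after every navigation, matching the decision to keep the user's category selection active across views.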