The Media Ecology Project’s Semantic Annotation Tool and Knight Prototype Grant

Mark Williams, John Bell, Dimitrios Latsis, Lorenzo Torresani

We will introduce 1) the Semantic Annotation Tool (SAT), a drop-in module that facilitates the creation and sharing of time-based media annotations on the Web, and 2) new research pursued under a Knight News Challenge Prototype Grant to make film and video housed in libraries more searchable and discoverable.

SAT is composed of 1) a jQuery plugin that wraps an existing media player to provide an intuitive authoring and presentation environment for time-based video annotations, and 2) a linked-data-compliant Annotation Server that communicates with the plugin to collect and disseminate user-generated comments and tags using the W3C Open Annotation specification. Together these components create an end-to-end, open-source video annotation workflow that can be used as either an off-the-shelf or a customizable solution for a wide variety of applications.
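As an illustration of this drop-in workflow, the sketch below shows how the plugin might wrap an existing player element and what a time-based annotation looks like in the W3C Open Annotation model. The plugin method `sat()`, its options, and the server URL are illustrative assumptions, not SAT's published API; only the annotation shape follows the Open Annotation and Media Fragments specifications.

```javascript
// Hypothetical sketch: wrapping an existing <video> element with an
// annotation layer pointed at an Annotation Server endpoint. The plugin
// name and options are placeholders, not SAT's real interface.
$('#lecture-film').sat({
  annotationServer: 'https://annotations.example.org/w3c/',
  mediaUri: 'https://archive.org/details/example-film'
});

// A time-based annotation in the W3C Open Annotation model anchors a
// comment to a span of the video timeline with a media fragment selector.
var annotation = {
  '@context': 'http://www.w3.org/ns/oa.jsonld',
  '@type': 'oa:Annotation',
  motivation: 'oa:commenting',
  body: {
    '@type': 'dctypes:Text',
    format: 'text/plain',
    value: 'Close-up of the projector mechanism.'
  },
  target: {
    source: 'https://archive.org/details/example-film',
    selector: {
      '@type': 'oa:FragmentSelector',
      conformsTo: 'http://www.w3.org/TR/media-frags/',
      value: 't=62,75'  // seconds 62 through 75 of the video
    }
  }
};
```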

SAT is designed to address significant functionality gaps in existing online time-based media annotation tools, gaps identified during pilot studies conducted by its parent project, The Media Ecology Project (MEP). This new, standards-based annotation tool can be easily dropped into a variety of classroom and research environments to extend their support for collaborative close reading of video and audio texts. The resulting annotations can then be shared across software platforms via the Annotation Server, connecting scholars to archives, students to experts, or human annotators to machine vision algorithms.

SAT’s annotation interface was designed by the University of Maine’s Virtual Environments and Multimodal Interaction Lab (VEMI Lab) to be compatible with screen readers, a key requirement for extending the benefits of annotation to users with impaired vision. Due to the ease with which its annotations can be accessed and shared via API, SAT is an ideal environment for human generation of training data. SAT’s value in this role is enhanced by its use of Onomy.org, a taxonomy generation and sharing tool that allows controlled vocabularies to be applied to tagging across platforms.
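Because the annotations are plain JSON served over HTTP, harvesting them as labeled training examples could be as simple as the sketch below. The endpoint path, query parameter, and response shape are assumptions for illustration; they stand in for whatever search interface the Annotation Server exposes.

```javascript
// Hypothetical sketch of harvesting SAT annotations as training data.
// Assumes the server can return the list of annotations targeting a film.
fetch('https://annotations.example.org/w3c/search?target=' +
      encodeURIComponent('https://archive.org/details/example-film'))
  .then(function (res) { return res.json(); })
  .then(function (annotations) {
    // Reduce each annotation to a (time span, label) pair that a
    // machine vision pipeline could consume as a labeled clip.
    var samples = annotations.map(function (anno) {
      return {
        span: anno.target.selector.value,  // e.g. 't=62,75'
        label: anno.body.value             // tag from an Onomy.org vocabulary
      };
    });
    console.log(samples);
  });
```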

By leveraging software that recognizes speech, objects, and actions in moving images, the Knight Prototype Grant project aims to provide end-to-end access to film collections, from digitization to discoverability.

Tools that allow people to search film and video as easily as they can search the full text of a document are still beyond the reach of most libraries. Dartmouth College’s Media Ecology Project and Visual Learning Group are developing a novel interface that will enable text-based, on-demand search in a rich collection of educational films held by Dartmouth Library and the Internet Archive. The interface operates by translating, in real time, text queries provided by users into content-based classifiers that recognize speech, audio, objects, locations, and actions in the video to identify the desired segments in the film. When implicitly validated by users (by viewing), the search results and the original text queries will be fed into our Semantic Annotation Tool (SAT), which will add these annotations (built on the W3C Open Annotation standard) to each film for permanent semantic browsing and search. What was once a roll of film, indexed only by its card catalog description, will become semantically tagged video, searchable scene by scene, adding immense value for library patrons, scholars, and the visually impaired.
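A minimal sketch of this search loop, under the assumptions above, might look like the following; every function name is a placeholder: buildClassifier() stands in for mapping a text query onto pretrained speech, object, and action recognizers, presentToUser() for the viewing step, and writeAnnotation() for posting a W3C Open Annotation back through SAT.

```javascript
// Hypothetical sketch of the query -> classifier -> annotation pipeline.
async function searchFilm(query, film) {
  var classifier = await buildClassifier(query);        // text to content-based classifier
  var segments = await classifier.scoreSegments(film);  // ranked time spans in the film

  for (var segment of segments) {
    var viewed = await presentToUser(segment);          // viewing = implicit validation
    if (viewed) {
      // Persist the validated result so future searches can hit the
      // annotation index instead of re-running the classifier.
      await writeAnnotation(film, segment, query);
    }
  }
}
```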

This project brings together three cross-curricular groups at Dartmouth to collaborate on applying modern artificial intelligence and machine learning to historic film collections held by libraries. It will aggregate a variety of human- and machine-generated metadata on a selected set of digitized educational films and combine it into a single, searchable format. By improving the cutting-edge algorithms used to create time-coded subject tags (e.g., http://vlg.cs.dartmouth.edu/c3d/), we aim to lay the foundation for a fully searchable visual encyclopedia and to share our methods and open-source code with film archives everywhere.
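A sliding-window tagger of the kind described here might look like the sketch below, where classifyClip() is a placeholder for a clip-level model such as C3D scoring a short window of frames against a controlled vocabulary, and the 0.8 confidence threshold is an arbitrary illustrative choice.

```javascript
// Hypothetical sketch of producing time-coded subject tags for a film.
async function tagFilm(film, windowSeconds) {
  var tags = [];
  for (var t = 0; t < film.duration; t += windowSeconds) {
    var scores = await classifyClip(film, t, t + windowSeconds);
    // Keep only confident predictions as searchable, time-coded tags.
    scores.filter(function (s) { return s.confidence > 0.8; })
          .forEach(function (s) {
            tags.push({ start: t, end: t + windowSeconds, label: s.label });
          });
  }
  return tags;
}
```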