
Metadata: What You Need to Know (And Why You Need to Know It)

What Is the Difference Between Static and Temporal Metadata?
Metadata traditionally has been used for static content, such as Word documents or Acrobat PDF files. With the advent of Spotlight on Mac OS X, Google Desktop, and the built-in search in Microsoft’s Windows Vista and the newer Windows 7, users could search not only the general metadata in a file’s properties (such as file name, date created, and folder structure) but also the content itself, simply by typing keywords or even whole phrases into the search window.

As beneficial as static metadata is, however, content types such as audio and video require not only general metadata about the file but also time-specific metadata about particular locations within the file. The closest analogue in static metadata is Mac OS X’s Spotlight, which can not only find words in a particular PDF but also open the PDF and highlight the actual words (zooming in as well, if necessary).

Figure 2. Digitalsmiths’ VideoSense metadata product helps major motion picture studios track and manage their assets. (Confidential customer information has been obscured in this image.)

"Digitalsmiths provides time-based metadata," says Berry. "Unlike standard attribute-level metadata (e.g., name, time, date), time-based metadata gives you deeper insight into the video because you’re managing it at a DNA level."

This "DNA level" involves not just rapid access to a particular point in an audio or video file but also the ability to layer metadata "tracks" or fields of information upon each other. Products such as Pictron’s Video Gateway provide up to 32,000 fields of information per single frame of video.

Why Create Temporal Metadata?
Organizations that work with streaming media create temporal metadata for four reasons: they want to make audio and video data easy to find, and they want to index it, rank it, and restrict it.

Finding data
What good does it do to have data if you can’t find it? As an industry executive I interviewed a few years ago said, "In the old days, you’d see a video tape box with a label telling you what was on the tape. You could differentiate each tape from the other based on where it resided in the video tape rack. You could assess that much information just by looking around. Now that’s all gone, and a common thing we hear from people in a modern station is that it’s difficult for them to actually know what’s going on since so much of what is happening ‘beneath the surface’ is content that only resides on the server or desktop."

We all know the pain of scrubbing through an hours-long file just to find the one quote we were looking for … as well as the pleasure when a search system "just works." As more and more video content is captured on disc or flash memory and placed on the computer, finding content quickly and accurately requires superb search tools.

Indexing data
How do we find the right file in the first place? The best indexing, cataloging, and classification systems are transparent to users, but how to achieve that functional transparency is a question that keeps those of us who live and breathe metadata up at night. How do we solve the many variables in metadata classification and entry to arrive at an automated result? Think of it as playing chess on three boards simultaneously, when a move on one board generates a simultaneous move on the other two, based on an unknown pattern.

Because cataloging is a simpler matter when there are guidelines to follow, taxonomies such as the Dublin Core initiative are gaining ground. Dublin Core defines 15 metadata elements that serve as a digital "library card catalog" for the web, improving document indexing for search engines.
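
For a concrete sense of what those elements look like, here is a minimal sketch that renders a few of the 15 Dublin Core elements as HTML meta tags using the common "DC." naming convention (the record values below are hypothetical):

```python
# A minimal sketch: rendering a few of Dublin Core's 15 elements
# (title, creator, date, format, rights, ...) as HTML meta tags.
record = {
    "DC.title": "Metadata: What You Need to Know",
    "DC.creator": "Streaming Media contributor",  # hypothetical value
    "DC.date": "2009-08-01",                      # hypothetical value
    "DC.format": "text/html",
    "DC.rights": "All rights reserved",           # hypothetical value
}

for name, content in record.items():
    print(f'<meta name="{name}" content="{content}">')
```

A search engine that understands Dublin Core can read these tags and index the document by title, creator, date, and rights without having to guess at the page’s structure.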

Simple taxonomies are fine for static content that is highly structured. But what about video clips, which are by nature only partially structured?

Jörg Waitelonis, a Ph.D. student in Germany, is creating Yovisto, an academic video search engine to catalog semistructured content such as recordings of lectures and seminars.

"The problem with academic lecture videos," Waitelonis says, "is that they are very long (more than 1 hour) and geared very much toward a TV viewing experience. In other words, there are no tables of content, keyword index, [which are of] relative importance of particular parts of video." Yovisto is specifically designed to catalog both simple videos and videos integrated with PowerPoint slides or webpages.

As you’ll read later in the Challenges section of this article, indexing is the key to widespread metadata adoption, even if the methods of indexing are often at odds with one another.

Ranking data
Until the advent of social media sites, ranking data or data objects (such as images or audio and video clips) or commenting on specific parts of a video wasn’t really a major use for temporal metadata. With the proliferation of YouTube-like online video platforms and social sites such as Facebook and MySpace, ranking data is becoming both more pertinent and more pronounced.

Have you seen a post you like on someone’s Facebook page? Click "Like" and the thumbs-up icon gains a point; if you change your mind, click "Unlike," and the thumbs-up meter goes down a point. Do you really dislike what you’re seeing? Hide the poster’s content or even unfriend the person. Each of these functions relies on metadata rankings of some sort.
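
Here is a minimal sketch of how such rankings might be stored as temporal metadata (a hypothetical illustration, not Facebook’s or YouTube’s actual data model): each vote is simply another metadata record, and for video it can be anchored to a moment on the timeline.

```python
from collections import defaultdict

# (video_id, timestamp in seconds) -> running score for that moment
votes: dict[tuple[str, float], int] = defaultdict(int)

def like(video_id: str, t: float) -> None:
    """Thumbs-up: the ranking metadata for this moment gains a point."""
    votes[(video_id, round(t, 1))] += 1

def unlike(video_id: str, t: float) -> None:
    """Thumbs-down reversal: the meter goes down a point."""
    votes[(video_id, round(t, 1))] -= 1

like("clip42", 73.2)    # a viewer likes the moment at 1:13
like("clip42", 73.2)
unlike("clip42", 73.2)  # one viewer changes their mind
print(votes[("clip42", 73.2)])  # 1
```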

Restricting data
Another area of business growth is the use of metadata to restrict access to content. Content owners can enforce licensing contracts and rights management by geography more effectively thanks, in part, to metadata.

"If you are syndicating a particular video or clip," says Berry, "you want to know with certainty that the Coldplay song that runs during minutes 16–18 is cleared for distribution in the U.S. and the U.K., but not Japan. Our customers can define and manage this and the business rules associated with their assets from a single system."

What will it mean to the metadata industry that Gracenote and AMG, the two major media-centric metadata companies, have been acquired by companies at the forefront of data restriction? Macrovision, the owner of AMG, got its start blocking the copying of VHS movies; it moved into DVD copy protection and is now involved in a variety of online anti-piracy efforts. Sony, whose ATRAC audio compression format heavily restricted the use of content on multiple machines, saw limited success with that format; its new Gracenote technology, by contrast, is used in iTunes, Winamp, and mobile music players from Panasonic, Samsung, and others. At the time of its acquisition, Gracenote was working with MySpace to develop a method to limit illegal downloads, having previously acquired Philips’ audio fingerprinting technology.

Within these four areas, there are a variety of permutations, but the uses remain fairly consistent for both static and temporal metadata.
