
Seamless: Structural Metadata for Multimodal Content

Chatbots and voice interaction are hot topics right now. New services such as Facebook Messenger and Amazon Alexa have become popular quickly. Publishers are exploring how to make their content multimodal, so that users can access content in varied ways on different devices. User interactions may be either screen-based or audio-based, and will sometimes be hands-free.

Multimodal content could change how content is planned and delivered. Numerous discussions have looked at one aspect of conversational interaction: planning and writing sentence-level scripts. Content structure is another dimension relevant to voice interaction, chatbots, and other forms of multimodal content. Structural metadata enables existing web content to be reused in multimodal interactions, and it can help publishers escape the tyranny of having to write special content for each distinct platform.

Seamless Integration: The Challenge for Multimodal Content

In-Vehicle Infotainment (IVI) systems such as Apple’s CarPlay illustrate some of the challenges of multimodal content experiences. Apple’s Human Interface Guidelines state: “On-screen information is minimal, relevant, and requires little decision making. Voice interaction using Siri enables drivers to control many apps without taking their hands off the steering wheel or eyes off the road.” People will interact with content hands-free, and without looking. CarPlay includes six distinct inputs and outputs:

  1. Audio
  2. Car Data
  3. iPhone
  4. Knobs and Controls
  5. Touchscreen
  6. Voice (Siri)

The CarPlay UIKit even includes “Drag and Drop Customization”. When I review these details, much of it seems as if it could distract drivers. Apple states that with CarPlay, “iPhone apps that appear on the car’s built-in display are optimized for the driving environment.” What that optimization means in practice could determine whether the driver gets in an accident.

CarPlay: if it looks like an iPhone, does it act like an iPhone? (screenshot via Apple)

Multimodal content promises seamless integration between different modes of interaction, for example, reading and listening. But multimodal projects carry a risk as well if they try to port smartphone or web paradigms into contexts that don’t support them. Publishers want to reuse content they’ve already created. But they can’t expect their current content to suffice as it is.

In a previous post, I noted that structural metadata indicates how content fits together. Structural metadata is a foundation of a seamless content experience. That is especially true when working with multimodal scenarios. Structural metadata will need to support a growing range of content interactions, involving distinct modes. A mode is a form of engaging with content, in terms of both requesting and receiving information. A quick survey of these modes suggests many aspects of content will require structural metadata.

Platform Example | Input Mode | Output Mode
--- | --- | ---
Chatbots | Typing | Text
Devices with Mic & Display | Speaking | Visual (Video, Text, Images, Tables) or Audio
Smart Speakers | Speaking | Audio
Camera/IoT | Showing or Pointing | Visual or Audio

Multimodal content will force content creators to think more about content structure. Multimodal content encompasses all forms of media, from audio to short text messages to animated graphics. All these forms present content in short bursts. When focused on other tasks, users aren’t able to read much, or listen very long. Steven Pinker, the eminent cognitive psychologist, notes that humans can retain only three or four items in short-term memory (contrary to the popular belief that people can hold seven). When exploring options by voice interaction, for example, users can’t scan headings or links to locate what they want. Instead of the user navigating to the content, the content needs to navigate to the user.

Structural metadata provides the information machines need to choose appropriate content components. Structural metadata will generally be invisible to users — especially when working with screen-free content. Behind the scenes, the metadata indicates hidden structures that are important to retrieving content in various scenarios.

Metadata is meant to be experienced, not seen. A photo of an Amazon customer’s Echo Show, revealing code (via Amazon)

Optimizing Content With Structural Metadata

When interacting with multimodal content, users have limited attention, and a limited capacity to make choices. This places a premium on optimizing content so that the right content is delivered, and so that users don’t need to restate or reframe their requests.

Existing web content is generally not optimized for multimodal interaction — unless the user is happy listening to a long article being read aloud, or seeing a headline cropped in mid-sentence. Most published web content today has limited structure. Even if the content was structured during planning and creation, once delivered, the content lacks structural metadata that allows it to adapt to different circumstances. That makes it less useful for multimodal scenarios.

In the GUI paradigm of the web, users are expected to continually make choices by clicking or tapping. They see endless opportunities to “vote” with their fingers, and this data is enthusiastically collected and analyzed for insights. Publishers create lots of content and wait to see what gets noticed. They don’t expect users to view all of it, but they do expect users to glance at it and scroll through it until they spot something enticing enough to view.

Multimodal content shifts the emphasis away from planning the delivery of complete articles, and toward delivering content components on demand, described by structural metadata. Although screens remain one facet of multimodal content, some content will be screen-free. And even content presented on screens may not involve a GUI: it might be plain text, such as with a chatbot. Multimodal content is post-GUI content. There are no buttons, no links, no scrolling. In many cases, it is “zero tap” content — the hands will be otherwise occupied driving, cooking, or minding children. Few users want to smudge a screen when their hands are covered in cookie dough. Designers will need to unlearn their reflexive habit of adding buttons to every screen.

Users will express what they want by speaking, gesturing, and, if convenient, tapping. To support zero-tap scenarios successfully, content will need to get smarter, suggesting the right content in the right amount. Publishers can no longer present an endless salad bar of options and expect users to choose what they want. The content needs to anticipate user needs, and reduce demands on the user to make choices.

Users will always want to choose what topics they are interested in. They may be less keen on actively choosing the kind of content to use. Visiting a website today, you find articles, audio interviews, videos, and other content types to choose from. Unlike the scroll-and-scan paradigm of the GUI web, multimodal content interaction involves an iterative dialog. If the dialog lasts too long, it gets tedious. Users expect the publisher to choose the most useful content about a topic, suited to their context.

Pattern: after saying what you want information about, now tell us how you’d like it (screenshot via Google News)

In the current use pattern, the user finds content about a topic of interest (topic criteria), then filters that content according to format preferences. In the future, publishers will be more proactive in deciding which format to deliver, based on user circumstances.

Structural metadata can help optimize content so that users don’t have to choose how they get information. Suppose the publisher wants to show something to the user, and has a range of images available. Would a photo be best, or a line drawing? Without structural metadata, both are just images portraying something. But if structural metadata indicates the type of image (photo or line drawing), then deeper insights can be derived. Images can be A/B tested to see which type is most effective.

A/B testing of content according to its structural properties can yield insights into user preferences. For example, a major issue will be learning how much to chunk content. Is it better to offer larger size chunks, or smaller ones? This issue involves the tradeoffs for the user between the costs of interaction, memory, and attention. By wrapping content within structural metadata, publishers can monitor how content performs when it is structured in alternative ways.
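
As a rough sketch of how that might work, a publisher could wrap alternative structures of the same information in markup that analytics can compare. The data-content-type, data-chunk-variant, data-chunk-size, and data-chunk attributes below are hypothetical names invented for illustration, not part of any standard.

    <!-- Variant A: the information delivered as one larger chunk -->
    <section data-content-type="definition" data-chunk-variant="A" data-chunk-size="large">
      <p>An annuity is a contract with an insurer that converts a lump sum into a
      stream of payments, which can be fixed or variable, and which may begin
      immediately or at a later date.</p>
    </section>

    <!-- Variant B: the same information split into smaller chunks -->
    <section data-content-type="definition" data-chunk-variant="B" data-chunk-size="small">
      <p data-chunk="1">An annuity converts a lump sum into a stream of payments.</p>
      <p data-chunk="2">Payments can be fixed or variable.</p>
      <p data-chunk="3">Payments may begin immediately or at a later date.</p>
    </section>

Because each variant is labeled, delivery platforms could serve either one and measure which users complete, abandon, or ask to have repeated.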

Component Sequencing and Structural Metadata

Unlike an article, multimodal content is not delivered all at once. It relies on small chunks of information, which act as components. How these components are sequenced matters.

Alexa showing some cards on an Echo Show device (via Amazon)

Screen-based cards are a tangible manifestation of content components. A card could show the current weather, or a basketball score. Cards, ideally, are “low touch.” A user wants to see everything they need on a single card, so they don’t need to interact with buttons or icons on the card to retrieve the content they want. Cards are post-GUI, because they don’t rely heavily on forms, search, links, and other GUI affordances. Many multimodal devices have small screens that can display a card-full of content. They aren’t like a smartphone, cradled in your hand, with a screen that is scrolled. An embedded screen’s purpose is primarily to display information rather than to support interaction. All information is visible on the card [screen], so users don’t need to swipe or tap. Because most of us are already accustomed to using screen-based cards, while screen-free content is less familiar, cards provide a good starting point for considering content interaction.

Cards let us consider components both as units (providing an amount of content) and as plans (representing a purpose for the content). User experiences are structured from smaller units of content, but these units need to have a cohesive purpose. Content structure is more than breaking content into smaller pieces. It is about indicating how those pieces can fit together. In the case of multimodal content, components need to fit together as an interaction unfolds.

Each card represents a specific type of content (recipe, fact box, news headline, etc.), which is indicated with structural metadata. The cards also present information in a sequence of some sort.¹ Publishers need to know how various types of components can be mixed and matched. Some component structures are intended to complement each other, while other structures work independently.

Content components can be sequenced in three ways. They can be:

  1. Modular
  2. Fixed
  3. Adaptive

Truly modular components can be sequenced in any order; they have no intrinsic sequence. They provide information in response to a specific task. Each task is assumed to be unrelated. A card answering the question “What is the height of Mount Everest?” will be unrelated to a card answering the question “What is the price of Facebook stock?”

The technical documentation community uses an approach known as topic-based writing, which attempts to answer specific questions modularly, so that every item of content can be viewed independently, without needing to consult other content. In principle, this is a desirable goal: questions get answered quickly, and users retrieve the exact information they need without wading through material they don’t need. But in practice, modularity is hard to achieve. Only trivial questions can be answered on a single card. If publishers break a topic into several cards, they should indicate the relations between the information on each card. Users get lost when information is fragmented into many small chunks and they are forced to find their way through those chunks.

Modular content structures work well for discrete topics, but are cumbersome for richer ones. Because each module is independent of the others, users need to specify what they want next after viewing each piece of content. The downside of modular multimodal content is that users must continually state what they want in order to get it.

Components can also be sequenced in a fixed order. An ordered list is a familiar example of structural metadata indicating a fixed order. Narratives are made from sequential components, each representing an event that happens over time. The narrative could be a news story, or a set of instructions. When considered as a flow, a narrative involves two kinds of choices: whether to get details about an event in the narrative, or whether to move to the next event in the narrative. Compared with modular content, fixed-sequence content requires less interaction from the user, but more sustained attention.

Adaptive sequencing manages components that are related, but can be approached in different orders. For example, content about an upcoming marathon might include registration instructions, sponsorship info, a map, and event timing details, each as a separate component/card. After viewing each card, users need options that make sense, based on content they’ve already consumed, and any contextual data that’s available. They don’t want too many options, and they don’t want to be asked too many questions. Machines need to figure out what the user is likely to need next, without being intrusive. Does the user need all the components now, or only some now?

Adaptive sequencing is used in learning applications; learners are presented with a progression of content matching their needs. It can utilize recommendation engines, suggesting related components based on choices favored by others in a similar situation. An important application of adaptive sequencing is deciding when to ask a detailed question. Is the question going to be valuable for providing needed information, or is the question gratuitous? A goal of adaptive sequencing is to reduce the number of questions that must be asked.

Structural metadata generally does not address temporal sequencing explicitly, because (until now) publishers have assumed all content would be delivered at once on a single web page. For fixed sequences, attributes are needed to indicate order and dependencies, so that software agents can follow the correct procedure when presenting content. Fixed sequences can be expressed by properties indicating step order, rank order, or event timing. Adaptive sequencing is more programmatic. Publishers need to indicate the relation of components to a parent content type. Until standards catch up, publishers may need to indicate some of these details in data-* attributes.
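
As a sketch of what that interim approach might look like, a fixed-sequence procedure could carry order and dependency information in custom attributes. The data-sequence, data-step, and data-depends-on names below are hypothetical, invented for illustration rather than drawn from any standard.

    <!-- A fixed-sequence procedure, with order and dependencies in data-* attributes -->
    <ol class="procedure" data-sequence="fixed">
      <li data-step="1">Turn off power to the circuit at the breaker panel.</li>
      <li data-step="2" data-depends-on="1">Remove the faceplate from the light switch.</li>
      <li data-step="3" data-depends-on="2">Unscrew the switch and disconnect the wires.</li>
    </ol>

A software agent reading these attributes would know not to offer step 3 before step 2 has been completed.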

The sequencing of cards illustrates how new patterns of content interaction may necessitate new forms of structural metadata.

Composition and the Structure of Images

One challenge in multimodal interaction is how users and systems talk about images, either as an input (via a camera) or as an output. We are accustomed to reacting to images by tapping or clicking. We now have the chance to show things to systems, waving an object in front of a camera. Amazon has even introduced a hands-free, voice-activated IoT camera that has no screen. And when systems show us things, we may need to talk about the image using words.

Machine learning is rapidly improving, allowing systems to recognize objects. That will help machines understand what an item is. But machines still need to understand the structural relationship of items that are in view. They need to understand ordinary concepts such as near, far, next to, close to, background, group of, and other relational terms. Structural metadata could make images more conversational.

Vector graphics are composed of components that can represent distinct ideas, much like articles that are composed of structural components. That means vector images can be unbundled and assembled differently. The WAI-ARIA standards for web accessibility include a graphics module that covers how to mark up vector images. It includes properties for adding structural metadata to images, such as group (a role indicating similar items in the image) and background (a label for elements in the background of the image). Such structural metadata could be useful for users interacting with images using voice commands. For example, the user might want to say, “Show me the image without a background” or “with a different background”.
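
A minimal sketch of how such markup might look, using the core ARIA group role and aria-label attributes; the specific labels and shapes are invented for illustration.

    <svg role="img" aria-label="Map of the marathon course" viewBox="0 0 200 100"
         xmlns="http://www.w3.org/2000/svg">
      <g role="group" aria-label="Background">
        <!-- terrain behind the course -->
        <rect x="0" y="0" width="200" height="100" fill="#e8f0e8" />
      </g>
      <g role="group" aria-label="Water stations">
        <!-- a group of similar items a user might ask about by voice -->
        <circle cx="50" cy="60" r="4" fill="#3366cc" />
        <circle cx="120" cy="40" r="4" fill="#3366cc" />
      </g>
    </svg>

Because each group is labeled, a voice request such as “hide the background” could, in principle, be mapped to a specific element of the image.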

Photos do not have interchangeable components the way that vector graphics do. But photos can present a structural perspective of a subject, revealing part of a larger whole. Photos can benefit from structural metadata that indicates the type of photo. For example, if a user wants a photo of a specific person, they might have a preference for a full-length photo or for a headshot. As digital photography has become ubiquitous, many photos of the same subject are available, each presenting a different dimension of it. All these dimensions form a collection, where the compositions of individual photos reveal different parts of the subject. The IPTC photo metadata schema includes a controlled vocabulary for “scenes” that covers common photo compositions: profile, rear view, group, panoramic view, aerial view, and so on. As photography embraces more kinds of perspectives, such as aerial drone shots and omnidirectional 360-degree photographs, the value of perspective and scene metadata will increase.
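
Schema.org has no obvious dedicated property for photo composition, so the sketch below leans on the generic keywords property of ImageObject to carry a scene term. The person, URL, and caption are placeholders; a production implementation would more likely reference the IPTC scene vocabulary in the image’s embedded metadata.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "ImageObject",
      "contentUrl": "https://example.com/images/jane-doe-headshot.jpg",
      "caption": "Jane Doe, head coach",
      "about": { "@type": "Person", "name": "Jane Doe" },
      "keywords": "headshot"
    }
    </script>

With composition indicated somewhere machine-readable, a request for “a headshot of Jane Doe” can be matched against the collection of available photos.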

For voice interaction with photo images to become seamless, machines will need to connect conversational statements with image representations. Machines may hear a command such as “show me the damage to the back bumper,” and must know to show a photo of the rear view of a car that’s been in an accident. Sometimes users will get a visual answer to a question that’s not inherently visual. A user might ask: “Who will be playing in Saturday’s soccer game?”, and the display will show headshots of all the players at once. To provide that answer, the platform will need structural metadata indicating how to present the answer in images, and how to retrieve the players’ images appropriately.

Structural metadata for images lags behind structural metadata for text. Working with images has been labor intensive, but structural metadata can help with the automated processing of image content. Like text, images are composed of different elements that have structural relationships. Structural metadata can help users interact with images more fluidly.

Reusing Text Content in Voice Interaction

Voice interaction can be delivered in various ways: through natural language generation, through dedicated scripting, and through the reuse of existing text content. Natural language generation and scripting are especially effective in short-answer scenarios — for example, “What is today’s 30-year mortgage rate?” Reusing text content is potentially more flexible, because it lets publishers address a wide scope of topics in depth.

While reusing written text in voice interactions can be efficient, it can potentially be clumsy as well. The written text was created to be delivered and consumed all at once. It needs some curation to select which bits work most effectively in a voice interaction.

The WAI-ARIA standards for web accessibility offer lessons on the difficulties and possibilities of reusing written content to support audio interaction. By becoming familiar with what ARIA standards offer, we can better understand how structural metadata can support voice interactions.

ARIA standards seek to reduce the burdens of written content for people who can’t scan or click through it easily. Much web content contains unnecessary interaction: lists of links, buttons, forms, and other widgets demanding attention. ARIA encourages publishers to prioritize these interactive features with the tabindex attribute. It offers ways to help users fill out forms they must submit to get to content they want. But given a choice, users don’t want to fill out forms by voice. Voice interaction is meant to dispense with these interactive elements. It promises conversational dialog.

Talking to a GUI is awkward. Listening to written web content can also be taxing. The ARIA standards enhance the structure of written content, so that content is more usable when read aloud. ARIA guidelines can help inform how to indicate structural metadata to support voice interaction.

ARIA encourages publishers to curate their content: to highlight the most important parts that can be read aloud, and to hide parts that aren’t needed. ARIA designates content with landmarks. Publishers can indicate what content has role=“main”, or they can designate parts of content by region. The ARIA standard states: “A region landmark is a perceivable section containing content that is relevant to a specific, author-specified purpose and sufficiently important that users will likely want to be able to navigate to the section easily and to have it listed in a summary of the page.” ARIA also provides a pattern for disclosure, so that not all text is presented at once. All of these features allow publishers to indicate more precisely the priority of different components within the overall content.
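
A brief sketch of these landmarks in practice; the element contents and labels are illustrative only.

    <body>
      <div role="main">
        <!-- the primary content, the part most worth reading aloud -->
        <h1>Riverside Marathon: race-day guide</h1>
        <p>The race starts at 7:00 a.m. at Riverside Park.</p>
      </div>
      <div role="region" aria-label="Registration details">
        <!-- a named region users can navigate to directly -->
        <p>Registration closes two weeks before race day.</p>
      </div>
      <div role="complementary" aria-hidden="true">
        <!-- supporting promotional material, hidden from assistive technology -->
        <p>Shop our running gear sale.</p>
      </div>
    </body>

A text-to-speech agent can jump straight to the main landmark, offer the named region on request, and skip the hidden block entirely.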

ARIA supports screen-free content, but it is designed primarily for keyboard and text-to-speech interaction. Its markup is not designed to support conversational interaction — schema.org’s pending speakable specification, mentioned in my previous post, may be a better fit. But some ARIA concepts suggest the kinds of structures that written text needs in order to work effectively as speech. When content conveys a series of ideas, users need to know which are the major and minor aspects of the text they will be hearing. They need the spoken text to match the time that’s available to listen. Just as some word processors can provide an “auto summary” of a document by picking out the most important sentences, voice-enabled text will need to identify what to include in a short version of the content. The content might be structured in an inverted pyramid, so that only the heading and first paragraph are read in the short version. Users may even want the option of hearing a short version or a long version of a story or explanation.
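
For instance, the speakable property points a voice assistant at the parts of a page best suited to being read aloud. A minimal sketch follows; the page name, URL, and CSS class names are invented for illustration.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "WebPage",
      "name": "Riverside Marathon: race-day guide",
      "url": "https://example.com/marathon-guide",
      "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".headline", ".summary"]
      }
    }
    </script>

Only the elements matched by those selectors (here, a headline and a summary paragraph) would be offered for text-to-speech, approximating a short version of the content.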

Structural Metadata and User Intent in Voice Interaction

Structural metadata will help conversational interactions deliver appropriate answers. On the input side, when users are speaking, the role of structural metadata is indirect. People will state questions or commands in natural language, which will be processed to identify synonyms, referents, and identifiable entities, in order to determine the topic of the statement. Machines will also look at the construction of the statement to determine the intent, or the kind of content sought about the topic. Once the intent is known — what kind of information the user is seeking — it can be matched with the most useful kind of content. It is on the output side, when users view or hear an answer, that structural metadata plays an active role in selecting what content to deliver.

Already, search engines such as Google rely on structural metadata to deliver specific answers to speech queries. A user can ask Google the meaning of a word or phrase (What does ‘APR’ mean?) and Google locates a term that’s been tagged with structural metadata indicating a definition, such as with the HTML element <dfn>.
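
In HTML, that tagging can be as simple as wrapping the term being defined; the sentence itself is just an illustration.

    <p>The <dfn>APR</dfn>, or annual percentage rate, is the yearly cost of a loan,
    including fees, expressed as a percentage.</p>

The surrounding paragraph supplies the definition; the <dfn> element marks which term that definition applies to.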

When a machine understands the intent of a question, it can present content that matches that intent. If a user asks a question starting with the phrase “Show me…”, the machine can select a clip or photograph of the object, instead of presenting or reading text. Structural metadata about the characteristics of components makes that matching possible.

Voice interaction supplies answers to questions, but not all answers will be complete in a single response. Users may want to hear alternative answers, or get more detailed answers. Structural metadata can support multi-answer questions.

Schema.org metadata indicates content that answers questions using the Answer type, which is used by many forums and Q&A pages. Schema.org distinguishes between two kinds of answers. The first, acceptedAnswer, indicates the best or most popular answer, often the one that received the most votes. Other answers can be indicated with a property called suggestedAnswer, and these alternative answers can be ranked according to popularity as well. When sources have multiple answers, users can get alternative perspectives on a question. After listening to the first “accepted” answer, the user might ask “tell me another opinion” and a popular “suggested” answer could be read to them.
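
A condensed sketch of that markup; the question, answers, and vote counts are invented for illustration.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "QAPage",
      "mainEntity": {
        "@type": "Question",
        "name": "Is it worth paying points to lower a mortgage rate?",
        "answerCount": 2,
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Usually only if you keep the loan long enough to recoup the upfront cost.",
          "upvoteCount": 42
        },
        "suggestedAnswer": [{
          "@type": "Answer",
          "text": "Compare the break-even point against how long you expect to stay in the home.",
          "upvoteCount": 17
        }]
      }
    }
    </script>

A voice assistant could read the acceptedAnswer first, then fall back to the highest-ranked suggestedAnswer when the user asks for another opinion.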

Another kind of multi-part answer involves “How To” instructions. The HowTo type indicates “instructions that explain how to achieve a result by performing a sequence of steps.” The example the schema.org website provides to illustrate this type involves instructions on how to change a tire on a car. Imagine tire-changing instructions being read aloud on a smartphone, or by an in-vehicle infotainment system, as the driver tries to change a flat tire along a desolate roadway. This is a multi-step process, so the content needs to be retrievable in discrete chunks.

Schema.org includes several additional types related to HowTo that structure the steps into chunks, including preconditions such as tools and supplies required. These are:

  • HowToSection: “A sub-grouping of steps in the instructions for how to achieve a result (e.g. steps for making a pie crust within a pie recipe).”
  • HowToDirection: “A direction indicating a single action to do in the instructions for how to achieve a result.”
  • HowToSupply: “A supply consumed when performing the instructions for how to achieve a result.”
  • HowToTool: “A tool used (but not consumed) when performing instructions for how to achieve a result.”

These structures can help the content match the intent of users as they work through a multi-step process. The different chunks are structurally connected through the step property. Only the HowTo type (and its more specialized subtype, the Recipe) currently accepts the step property and thus can address temporal sequencing.
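
A pared-down sketch of such markup, loosely modeled on the tire-changing example; the specific steps and wording are invented for illustration.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "HowTo",
      "name": "How to change a flat tire",
      "supply": [{ "@type": "HowToSupply", "name": "Spare tire" }],
      "tool": [
        { "@type": "HowToTool", "name": "Jack" },
        { "@type": "HowToTool", "name": "Lug wrench" }
      ],
      "step": [
        {
          "@type": "HowToStep",
          "position": 1,
          "name": "Loosen the lug nuts",
          "itemListElement": [{
            "@type": "HowToDirection",
            "text": "Turn each lug nut counterclockwise about half a turn before raising the car."
          }]
        },
        {
          "@type": "HowToStep",
          "position": 2,
          "name": "Raise the vehicle",
          "itemListElement": [{
            "@type": "HowToDirection",
            "text": "Place the jack under the frame near the flat tire and raise it until the tire clears the ground."
          }]
        }
      ]
    }
    </script>

Because each step is a discrete, ordered chunk, a voice assistant can read one direction at a time and wait for the driver to ask for the next.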

Content Agility Through Structural Metadata

Chatbots, voice interaction and other forms of multimodal content promise a different experience than is offered by screen-centric GUI content. While it is important to appreciate these differences, publishers should also consider the continuities between traditional and emerging paradigms of content interaction. They should be cautious before rushing to create new content. They should start with the content they have, and see how it can be adapted before making content they don’t have.

A decade ago, the emergence of smartphones and tablets triggered an app development land rush. Publishers obsessed over the discontinuity these new devices presented, rather than recognizing their continuity with existing web browser experiences. Publishers created multiple versions of content for different platforms. Responsive web design emerged to remedy the siloing of development. The app bust shows that parallel, duplicative, incompatible development is unsustainable.

Existing content is rarely fully ready for an unpredictable future. The idealistic vision of single-source, format-free content collides with the reality of new requirements that are fitfully evolving. Publishers need an option between the extremes of creating many versions of content for different platforms, and hoping one version can serve all platforms. Structural metadata provides that bridge.

Publishers can use structural metadata to leverage content they already have to support additional forms of interaction. They can’t assume they will directly orchestrate the interaction with that content. Other platforms such as Google, Facebook, or Amazon may deliver the content to users through their services or devices. Such platforms will expect content that is structured using standards, not custom code.

Sometimes publishers will need to enhance existing content to address the unique requirements of voice interaction, or differences in how third-party platforms expect content to be structured. Enhancing existing content is preferable to creating new content to address isolated use-case scenarios. Structural metadata by itself won’t make content ready for every platform or form of interaction. But it can accelerate that readiness.

— Michael Andrews


  1. Dialogs in chatbots and voice interfaces also involve sequences of information. But how to sequence a series of cards may be easier to think about than a series of sentences, since viewing cards doesn’t necessarily involve a series of back and forth questions. ↩︎