Categories
Content Engineering

Metadata for Appreciation and Transparency

Who supports your work? If you work in a non-profit or a university, that’s an important question. These organizations depend on the generosity of others. They should want the world to know who is making what they do possible. Fortunately, new standards for metadata will make that happen.

Individuals and teams who work in the non-profit and academic sectors, who either do research or deliver projects, can use online metadata to raise their profiles. Metadata can help online audiences discover information about grants relating to advancing knowledge or helping others. The metadata can reveal who is making grants, who is getting them, and what the grants cover.

Grants Metadata

A new set of metadata terms relating to grants and funding is pending in the schema.org vocabulary. The terms can help individuals and organizations understand the funding associated with research and other kinds of goal-focused projects conducted by academics and non-profits. The funded item (property: fundedItem) could be anything. While it will often be research (a study or a book), it could also be the delivery of a service such as training, curriculum development, environmental or historical restoration, inoculations, or conferences and festivals. There is no restriction on what kind of project or activity can be indicated.

The schema.org vocabulary is the most commonly used metadata standard for online information, and is used in Google search results, among other online platforms. So the release of new metadata terms in schema.org can have big implications for how people discover and assess information online.

A quick peek at the code will show how it works. Even if you aren’t familiar with what metadata code looks like, it is easy to understand. This example, from the schema.org website, shows that Caroline B Turner receives funding from the National Science Foundation (grant number 1448821). Congratulations, Dr. Turner! How cool is that?

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "Person",
    "name": "Turner, Caroline B.",
    "givenName": "Caroline B.",
    "familyName": "Turner",
    "funding": {
      "@type": "Grant",
      "identifier": "1448821",
      "funder": {
        "@type": "Organization",
        "name": "National Science Foundation",
        "identifier": "https://doi.org/10.13039/100000001"
      }
    }
  }
  </script>

 

The new metadata anticipates diverse scenarios. Funders can give grants to projects, organizations, or individuals. Grants can be monetary, or in-kind. These elements can be combined with other schema.org vocabulary properties to provide information about how much money went to different people and organizations, and which projects the grants supported.
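To make this concrete, here is a minimal sketch of how a monetary grant to a project might be marked up, assuming the pending proposal is adopted roughly as drafted. The MonetaryGrant and Project types and the amount property come from the draft and could still change; the foundation, project, and dollar figure are invented for illustration.

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "MonetaryGrant",
    "name": "Community Health Training Grant",
    "amount": {
      "@type": "MonetaryAmount",
      "currency": "USD",
      "value": 250000
    },
    "funder": {
      "@type": "Organization",
      "name": "Example Foundation"
    },
    "fundedItem": {
      "@type": "Project",
      "name": "Rural Vaccination Outreach"
    }
  }
  </script>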

Showing Appreciation

The first reason to let others know who supports you is to show appreciation. Organizations should want to use the metadata to give recognition to the funder and encourage its continued support.

The grants metadata helps people discover what kinds of organizations fund your work. Having funding can bring prestige to an organization. Many organizations are proud to let others know that their work was sponsored by a highly competitive grant. That can bring credibility to their work. As long as the funding organization enjoys a good reputation for being impartial and supporting high quality research, noting the funding organization is a big benefit to both the funder and the grant receiver. Who would want to hide the fact that they received a grant from the MacArthur Foundation, after all?

Appreciation can be expressed for in-kind grants as well. An organization can indicate that a local restaurant is a conference sponsor supplying the coffee and food.
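For an in-kind contribution like that, publishers need not wait for the pending Grant terms. A hedged sketch using the existing sponsor property on the Event type could look like this (the event and business names are invented):

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "Event",
    "name": "Content Strategy Conference",
    "sponsor": {
      "@type": "LocalBusiness",
      "name": "Corner Cafe",
      "description": "In-kind sponsor providing coffee and catering"
    }
  }
  </script>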

Providing Transparency

The second reason to let others know who supports your work is to provide transparency. For some non-profits, the funding sources are opaque. In this age of widespread distrust, some readers may speculate about the motivations of an organization if information about its finances is missing. The existence of dark money and anonymous donors fuels such distrust. A lack of transparency can spark speculations that might not be accurate. Such speculation can be reduced by disclosing the funder of any grants received.

While the funding source alone doesn’t indicate if the data is accurate, it can help others understand the provenance of the data. Corporations may have a self-interest in the results of research, and some foundations may have an explicit mission that could influence the kinds of research outcomes they are willing to sponsor. As foundations move away from unrestricted grants and toward impact investing, providing details about who sponsors your work can help others understand why you are doing specific kinds of projects.

Transparency about funding reduces uncertainty about conflicts of interest. There’s certainly nothing wrong with an organization funding research it hopes will result in a certain conclusion. Pharmaceutical companies understandably hope that the new drugs they are developing will show promise in trials. But they rely on third parties to provide an independent review of a topic. Showing the funding relationship is central to convincing readers that the review is truly independent. If a funding relationship is hidden rather than disclosed, readers will doubt the independence of the researcher, and question the credibility of the results.

It’s common practice for researchers to acknowledge any potential conflict of interest, such as having received money from a source that has a vested interest in what is being reported. The principle of transparency applies not only to doctors reporting on medical research, but also to less formal research. Investment research often indicates if the writer has any ownership of the stocks he or she is discussing. And news outlets increasingly note when reporting on a company if that company directly or indirectly owns the outlet. When writing about Amazon, The Washington Post will note “Bezos also owns The Washington Post.”

If there is even the appearance that a writer’s judgment was influenced by a financial relationship, the writer should disclose that relationship to readers. Readers expect transparency, even though publishers apply it unevenly.

Right now, funding information is hard for readers to track down. Better metadata could help.

Current Problems with Funding Transparency

Transparency matters for any issue that’s subject to debate or verification, or open to interpretation. One such issue I’m familiar with is antitrust — whether certain firms have too much (monopoly) market power. It’s an issue that has been gaining interest across the globe among people of different political persuasions, but it’s also an issue where there is a wide range of views and cited evidence. Even if you are not interested in this specific issue, the example of content relating to antitrust illustrates why greater transparency through metadata can be helpful.

A couple of blocks from my home in the Washington DC area is an institution that’s deeply involved in the antitrust policy debate: the Antonin Scalia Law School at George Mason University (GMU), a state-funded university that I financially support as a taxpayer. GMU is perhaps best-known for the pro-market, anti-regulation views of its law and economics faculty. It is the academic home of New York Times columnist Tyler Cowen, and has produced a lot of research and position papers on topics such as copyright, data privacy, and antitrust. Last month GMU hosted public hearings for the US Federal Trade Commission (FTC) on the future of antitrust policy.

Earlier this year, GMU faced a transparency controversy. As a state-funded university, it was subject to a Freedom of Information Act (FOIA) request about funding grants it receives. The request revealed that the Charles Koch Foundation had provided an “estimated $50 million” in grants to George Mason University to support its law and economics programs, according to the New York Times. Normally, generosity of that scale would be acknowledged by naming a building after the donor. But in this case the scale of donations only came to light after the FOIA request. Some of this funding entailed conditions that could be seen as compromising the independence of the researchers using the funds.

The New York Times noted that the FOIA also revealed another huge gift to GMU: “executives of the Federalist Society, a conservative national organization of lawyers, served as agents for a $20 million gift from an anonymous donor.” What’s at issue is not whether political advocacy groups are entitled to provide grants, or whether or not the funded research is valid. What’s problematic is that research funding was not transparent.

Right now, it is difficult for citizens to “follow the money” when it comes to corporate-sponsored research on public policy issues such as the future of antitrust. Corporations are willing to provide funding for research that is sympathetic to their positions, but may not want to draw attention to their funding.

In the US, the EU, and elsewhere, elected officials and government regulators have discussed the possibility of bringing new antitrust investigations against Google. For many years, Google has funded research countering arguments that it should be subject to antitrust regulation. But Google has faced its own controversies about its funding transparency, according to a report from the Google Transparency Project, part of the Campaign for Accountability, which describes itself as “a 501(c)(3) non-profit, nonpartisan watchdog organization.” The report “Google Academics” asserts: “Eric Schmidt, then Google’s chief executive, cited a Google-funded author in written answers to Congress to back his contention that his company wasn’t a monopoly. He didn’t mention Google had paid for the paper.”

Google champions the use of metadata, especially the schema.org vocabulary. As Wikipedia notes, “Google’s mission statement is ‘to organize the world’s information and make it universally accessible and useful.’” I like Google for doing that, and hold them to a high standard for transparency precisely because their mission is making information accessible.

Google provides hundreds of research grants to academics and others. How easy is it to know whom Google funds? The Google Transparency Project tried to find out who Google funds by using Google Scholar, Google’s online search engine for academic papers. There was no direct way for them to search by funding source.

Searching for grants information without the benefit of metadata is very difficult. Source: Google Transparency Project, “Google Academics” report

They needed to search for phrases such as “grateful to Google.” That’s far short of making information accessible and useful. The funded researchers could express their appreciation more effectively by using metadata to indicate grants funding.

Google Transparency Project produced another report on the antitrust policy hearings that the FTC sponsored at GMU last month. The report, entitled “FTC Tech Hearings Heavily Feature Google-funded Speakers”, concludes: “A third of speakers have financial ties to Google, either directly or through their employer. The FTC has not disclosed those ties to attendees.” Many of the speakers Google funded were current or former faculty of GMU, according to the report.

I leave it to the reader to decide if the characterizations of the Google Transparency Project are fair and accurate. Assessing their report requires looking at footnotes and checking original sources. How much easier it would be if all the relevant information were captured in metadata, instead of scattered around in text documents.

Right now it is difficult to use Google Scholar to find out what academic research was funded by any specific company or foundation. I can only hope that funders of research, Google included, will encourage those who receive their grants to reveal that sponsorship within the metadata relating to the research. And that recipients will add funding metadata to their online profiles.

The Future of Grants & Funding Metadata

How might the general public benefit from metadata on grants funding? Individuals may want to know what projects or people a funder supports. They may also want to see how an organization’s funding sources have changed over time.

These questions could be answered by a service such as Google, Bing, or Wolfram Alpha. More skilled users could even design their own query of the metadata using SPARQL (a query language for semantic metadata). No doubt many journalists, grants-receiving organizations, and academics will find this information valuable.

Imagine if researchers at taxpayer-supported institutions such as GMU were required to indicate their funding sources within metadata. Or if independent non-profits made it a condition of receiving funding that they indicate the source within metadata. Imagine if the public expected full transparency about funding sources as the norm, rather than as something optional to disclose.

How You Can Get Involved

If you make or receive grants, you can start using the pending Grants metadata now in anticipation of its formal release. Metadata allows an individual to write information once, and reuse it often. When metadata is used to indicate funding, organizations worry less about forgetting to mention a relationship in a specific context. The information about the relationship is discoverable online.

Note that the specifics of the grants proposal could change when it is released, though I expect any changes would be tweaks rather than drastic revisions. Some details of the proposal will mostly interest research scientists concerned with research productivity and impact metrics, which matter less to researchers working in public policy and other areas. While the grants proposal has been under discussion for several years now, momentum for a final release is building, and it will hopefully be finalized before long. Many researchers plan to use the newly-released metadata terms for datasets, and want to include funder information as part of their dataset metadata. (Sharing research data is often a condition of research grants, so it makes sense to add funding sponsorship to the datasets.)

If you have suggestions or concerns about the proposal, you can contribute your feedback to the schema.org community GitHub issue (no. 383) for grants. Schema.org is a W3C community, and is open to contributions from anyone.

— Michael Andrews

Categories
Agility

Seamless: Structural Metadata for Multimodal Content

Chatbots and voice interaction are hot topics right now. New services such as Facebook Messenger and Amazon Alexa have become popular quickly. Publishers are exploring how to make their content multimodal, so that users can access content in varied ways on different devices. User interactions may be either screen-based or audio-based, and will sometimes be hands-free.

Multimodal content could change how content is planned and delivered. Numerous discussions have looked at one aspect of conversational interaction: planning and writing sentence-level scripts. Content structure is another dimension relevant to voice interaction, chatbots, and other forms of multimodal content. Structural metadata can enable the reuse of existing web content to support multimodal interaction. Structural metadata can help publishers escape the tyranny of having to write special content for each distinct platform.

Seamless Integration: The Challenge for Multimodal Content

In-Vehicle Infotainment (IVI) systems such as Apple’s CarPlay illustrate some of the challenges of multimodal content experiences. Apple’s Human Interface Guidelines state: “On-screen information is minimal, relevant, and requires little decision making. Voice interaction using Siri enables drivers to control many apps without taking their hands off the steering wheel or eyes off the road.” People will interact with content hands-free, and without looking. CarPlay includes six distinct inputs and outputs:

  1. Audio
  2. Car Data
  3. iPhone
  4. Knobs and Controls
  5. Touchscreen
  6. Voice (Siri)

The CarPlay UIKit even includes “Drag and Drop Customization”. Reviewing these details, much of this seems as if it could distract drivers. Apple states that with CarPlay, “iPhone apps that appear on the car’s built-in display are optimized for the driving environment.” What that iPhone app optimization means in practice could determine whether the driver gets in an accident.

CarPlay screenshot
CarPlay: if it looks like an iPhone, does it act like an iPhone? (screenshot via Apple)

Multimodal content promises seamless integration between different modes of interaction, for example, reading and listening. But multimodal projects carry a risk as well if they try to port smartphone or web paradigms into contexts that don’t support them. Publishers want to reuse content they’ve already created. But they can’t expect their current content to suffice as it is.

In a previous post, I noted that structural metadata indicates how content fits together. Structural metadata is a foundation of a seamless content experience. That is especially true when working with multimodal scenarios. Structural metadata will need to support a growing range of content interactions, involving distinct modes. A mode is a form of engaging with content, both in terms of requesting and receiving information. A quick survey of these modes suggests many aspects of content will require structural metadata.

Platform Example               Input Mode            Output Mode
Chatbots                       Typing                Text
Devices with Mic & Display     Speaking              Visual (Video, Text, Images, Tables) or Audio
Smart Speakers                 Speaking              Audio
Camera/IoT                     Showing or Pointing   Visual or Audio

Multimodal content will force content creators to think more about content structure. Multimodal content encompasses all forms of media, from audio to short text messages to animated graphics. All these forms present content in short bursts. When focused on other tasks, users aren’t able to read much, or listen very long. Steven Pinker, the eminent cognitive psychologist, notes that humans can only retain three or four items in short term memory (contrary to the popular belief that people can hold 7 items). When exploring options by voice interaction, for example, users can’t scan headings or links to locate what they want.  Instead of the user navigating to the content, the content needs to navigate to the user.

Structural metadata provides information to machines to choose appropriate content components. Structural metadata will generally be invisible to users — especially when working with screen-free content. Behind the scenes, the metadata indicates hidden structures that are important to retrieving content in various scenarios.

Metadata is meant to be experienced, not seen. A photo of an Amazon customer’s Echo Show, revealing  code (via Amazon)

Optimizing Content With Structural Metadata

When interacting with multimodal content, users have limited attention, and a limited capacity to make choices. This places a premium on optimizing content so that the right content is delivered, and so that users don’t need to restate or reframe their requests.

Existing web content is generally not optimized for multimodal interaction — unless the user is happy listening to a long article being read aloud, or seeing a headline cropped in mid-sentence. Most published web content today has limited structure. Even if the content was structured during planning and creation, once delivered, the content lacks structural metadata that allows it to adapt to different circumstances. That makes it less useful for multimodal scenarios.

In the GUI paradigm of the web, users are expected to continually make choices by clicking or tapping. They see endless opportunities to “vote” with their fingers, and this data is enthusiastically collected and analyzed for insights. Publishers create lots of content, waiting to see what gets noticed. Publishers don’t expect users to view all their content, but they expect users to glance at their content, and scroll through it until users have spotted something enticing enough to view.

Multimodal content shifts the emphasis away from planning delivery of complete articles, and toward delivering content components on-demand, which are described by structural metadata. Although screens remain one facet of multimodal content, some content will be screen-free. And even content presented on screens may not involve a GUI: it might be plain text, such as with a chatbot. Multimodal content is post-GUI content. There are no buttons, no links, no scrolling. In many cases, it is “zero tap” content — the hands will be otherwise occupied driving, cooking, or minding children. Few users want to smudge a screen with cookie dough on their hands. Designers will need to unlearn their reflexive habit of adding buttons to every screen.

Users will express what they want, by speaking, gesturing, and if convenient, tapping. To support zero-tap scenarios successfully, content will need to get smarter, suggesting the right content, in the right amount. Publishers can no longer present an endless salad bar of options, and expect users to choose what they want. The content needs to anticipate user needs, and reduce demands on the user to make choices.

Users will always want to choose what topics they are interested in. They may be less keen on actively choosing the kind of content to use. Visiting a website today, you find articles, audio interviews, videos, and other content types to choose from. Unlike the scroll-and-scan paradigm of the GUI web, multimodal content interaction involves an iterative dialog. If the dialog lasts too long, it gets tedious. Users expect the publisher to choose the most useful content about a topic that supports their context.

screenshot of Google News widget
Pattern: after saying what you want information about, now tell us how you’d like it (screenshot via Google News)

In the current use pattern, the user finds content about a topic of interest (topic criteria), then filters that content according to format preferences. In the future, publishers will be more proactive in deciding what format to deliver, based on user circumstances.

Structural metadata can help optimize content, so that users don’t have to choose how they get information. Suppose the publisher wants to show something to the user. They have a range of images available. Would a photo be best, or a line drawing? Without structural metadata, both are just images portraying something. But if structural metadata indicates the type of image (photo or line diagram), then deeper insights can be derived. Images can be A/B tested to see which type is most effective.

A/B testing of content according to its structural properties can yield insights into user preferences. For example, a major issue will be learning how much to chunk content. Is it better to offer larger size chunks, or smaller ones? This issue involves the tradeoffs for the user between the costs of interaction, memory, and attention. By wrapping content within structural metadata, publishers can monitor how content performs when it is structured in alternative ways.
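As a rough illustration, the sketch below uses author-defined data-* attributes to label the image type, since there is no settled standard property for distinguishing a photograph from a line drawing. The attribute name and file names are hypothetical; an analytics script could then segment engagement by the data-image-type value.

  <figure data-image-type="photograph">
    <img src="everest-photo.jpg" alt="Mount Everest at dawn">
  </figure>

  <figure data-image-type="line-drawing">
    <img src="everest-routes.svg" alt="Line drawing of climbing routes on Mount Everest">
  </figure>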

Component Sequencing and Structural Metadata

Multimodal content is not delivered all at once, as is the case with an article. Multimodal content relies on small chunks of information, which act as components. How to sequence these components is important.

photo of Echo Show
Alexa showing some cards on an Echo Show device (via Amazon)

Screen-based cards are a tangible manifestation of content components. A card could show the current weather, or a basketball score. Cards, ideally, are “low touch.” A user wants to see everything they need on a single card, so they don’t need to interact with buttons or icons on the card to retrieve the content they want. Cards are post-GUI, because they don’t rely heavily on forms, search, links and other GUI affordances. Many multimodal devices have small screens that can display a card-full of content. They aren’t like a smartphone, cradled in your hand, with a screen that is scrolled. An embedded screen’s purpose is primarily to display information rather than for interaction. All information is visible on the card [screen], so that users don’t need to swipe or tap. Because most of us are accustomed to using screen-based cards already, but may be less familiar with screen-free content, cards provide a good starting point for considering content interaction.

Cards let us consider components both as units (providing an amount of content) and as plans (representing a purpose for the content). User experiences are structured from smaller units of content, but these units need have a cohesive purpose. Content structure is more than breaking content into smaller pieces. It is about indicating how those pieces can fit together. In the case of multimodal content, components need to fit together as an interaction unfolds.

Each card represents a specific type of content (recipe, fact box, news headline, etc.), which is indicated with structural metadata. The cards also present information in a sequence of some sort.1 Publishers need to know how various types of components can be mixed and matched. Some component structures are intended to complement each other, while other structures work independently.

Content components can be sequenced in three ways. They can be:

  1. Modular
  2. Fixed
  3. Adaptive

Truly modular components can be sequenced in any order; they have no intrinsic sequence. They provide information in response to a specific task. Each task is assumed to be unrelated. A card providing an answer to the question of “What is the height of Mount Everest?” will be unrelated to a card answering the question “What is the price of Facebook stock?”

The technical documentation community uses an approach known as topic-based writing that attempts to answer specific questions modularly, so that every item of content can be viewed independently, without need to consult other content. In principle, this is a desirable goal: questions get answered quickly, and users retrieve the exact information they need without wading through material they don’t need. But in practice, modularity is hard to achieve. Only trivial questions can be answered on a card. If publishers break a topic into several cards, they should indicate the relations between the information on each card. Users get lost when information is fragmented into many small chunks, and they are forced to find their way through those chunks.

Modular content structures work well for discrete topics, but are cumbersome for richer topics. Because each module is independent of others, users, after viewing the content, need to specify what they want next. The downside of modular multimodal content is that users must continually specify what they want in order to get it.

Components can be sequenced in a fixed order. An ordered list is a familiar example of structural metadata indicating a fixed order. Narratives are made from sequential components, each representing an event that happens over time. The narrative could be a news story, or a set of instructions. When considered as a flow, a narrative involves two kinds of choices: whether to get details about an event in the narrative, or whether to get to the next event in the narrative. Compared with modular content, fixed sequence content requires less interaction from the user, but longer attention.

Adaptive sequencing manages components that are related, but can be approached in different orders. For example, content about an upcoming marathon might include registration instructions, sponsorship info, a map, and event timing details, each as a separate component/card. After viewing each card, users need options that make sense, based on content they’ve already consumed, and any contextual data that’s available. They don’t want too many options, and they don’t want to be asked too many questions. Machines need to figure out what the user is likely to need next, without being intrusive. Does the user need all the components now, or only some now?

Adaptive sequencing is used in learning applications; learners are presented with a progression of content matching their needs. It can utilize recommendation engines, suggesting related components based on choices favored by others in a similar situation. An important application of adaptive sequencing is deciding when to ask a detailed question. Is the question going to be valuable for providing needed information, or is the question gratuitous? A goal of adaptive sequencing is to reduce the number of questions that must be asked.

Structural metadata generally does not explicitly address temporal sequencing, because (until now) publishers have assumed all content would be delivered at once on a single web page. For fixed sequences, attributes are needed to indicate order and dependencies, to allow software agents to follow the correct procedure when displaying content. Fixed sequences can be expressed by properties indicating step order, rank order, or event timing. Adaptive sequencing is more programmatic. Publishers need to indicate the relation of components to the parent content type. Until standards catch up, publishers may need to indicate some of these details in data-* attributes.
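As a sketch of that stopgap approach, a fixed-sequence component might carry author-defined attributes along these lines. The attribute names (data-content-role, data-step, data-depends-on) are hypothetical placeholders, not part of any standard:

  <ol data-content-role="fixed-sequence">
    <li data-step="1">Register for the marathon</li>
    <li data-step="2" data-depends-on="1">Pick up the race packet</li>
    <li data-step="3" data-depends-on="2">Arrive at the starting area</li>
  </ol>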

The sequencing of cards illustrates how new patterns of content interaction may necessitate new forms of structural metadata.

Composition and the Structure of Images

One challenge in multimodal interaction is how users and systems talk about images, as either an input (via a camera), or as an output. We are accustomed to reacting to images by tapping or clicking. We now have the chance to show things to systems, waving an object in front of a camera. Amazon has even introduced a hands-free voice activated IoT camera that has no screen. And when systems show us things, we may need to talk about the image using words.

Machine learning is rapidly improving, allowing systems to recognize objects. That will help machines understand what an item is. But machines still need to understand the structural relationship of items that are in view. They need to understand ordinary concepts such as near, far, next to, close to, background, group of, and other relational terms. Structural metadata could make images more conversational.

Vector graphics are composed of components that can represent distinct ideas, much like articles that are composed of structural components. That means vector images can be unbundled and assembled differently. The WAI-ARIA standard for web accessibility has an SVG Graphics Module that covers how to markup vector images. It includes properties to add structural metadata to images, such as group (a role indicating similar items in the image) and background (a label for elements in the image in the background). Such structural metadata could be useful for users interacting with images using voice commands. For example, the user might want to say, “Show me the image without a background” or “with a different background”.
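A minimal sketch of that idea uses the generic ARIA group role and aria-label on SVG groups, as shown below. The exact role names worth using should be checked against the current Graphics Module draft, and the drawing itself is invented:

  <svg viewBox="0 0 400 300" role="graphics-document" aria-label="Mountain scene">
    <g role="group" aria-label="background">
      <rect width="400" height="300" fill="#cde6ff"/>  <!-- sky behind the scene -->
    </g>
    <g role="group" aria-label="mountains">
      <polygon points="50,250 150,80 250,250" fill="#888"/>  <!-- main peak -->
    </g>
  </svg>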

Photos do not have interchangeable components the way that vector graphics do. But photos can present a structural perspective of a subject, revealing part of a larger whole. Photos can benefit from structural metadata that indicates the type of photo. For example, if a user wants a photo of a specific person, they might have a preference for a full-length photo or for a headshot. As digital photography has become ubiquitous, many photos are available of the same subject that present different dimensions of the subject. All these dimensions form a collection, where the compositions of individual photos reveal different parts of the subject. The IPTC photo metadata schema includes a controlled vocabulary for “scenes” that covers common photo compositions: profile, rear view, group, panoramic view, aerial view, and so on. As photography embraces more kinds of perspectives, such as aerial drone shots and omnidirectional 360 degree photographs, the value of perspective and scene metadata will increase.

For voice interaction with photo images to become seamless, machines will need to connect conversational statements with image representations. Machines may hear a command such as “show me the damage to the back bumper,” and must know to show a photo of the rear view of a car that’s been in an accident. Sometimes users will get a visual answer to a question that’s not inherently visual. A user might ask: “Who will be playing in Saturday’s soccer game?”, and the display will show headshots of all the players at once. To provide that answer, the platform will need structural metadata indicating how to present an answer in images, and how to retrieve player’s images appropriately.

Structural metadata for images lags behind structural metadata for text. Working with images has been labor intensive, but structural metadata can help with the automated processing of image content. Like text, images are composed of different elements that have structural relationships. Structural metadata can help users interact with images more fluidly.

Reusing Text Content in Voice Interaction

Voice interaction can be delivered in various ways: through natural language generation, through dedicated scripting, and through the reuse of existing text content. Natural language generation and scripting are especially effective in short answer scenarios — for example, “What is today’s 30-year mortgage rate?” Reusing text content is potentially more flexible, because it lets publishers address a wide scope of topics in depth.

While reusing written text in voice interactions can be efficient, it can potentially be clumsy as well. The written text was created to be delivered and consumed all at once. It needs some curation to select which bits work most effectively in a voice interaction.

The WAI-ARIA standards for web accessibility offer lessons on the difficulties and possibilities of reusing written content to support audio interaction. By becoming familiar with what ARIA standards offer, we can better understand how structural metadata can support voice interactions.

ARIA standards seek to reduce the burdens of written content for people who can’t scan or click through it easily. Much web content contains unnecessary interaction: lists of links, buttons, forms and other widgets demanding attention. ARIA encourages publishers to prioritize these interactive features with the TAB index. It offers a way to help users fill out forms they must submit to get to content they want. But given a choice, users don’t want to fill out forms by voice. Voice interaction is meant to dispense with these interactive elements. Voice interaction promises conversational dialog.

Talking to a GUI is awkward. Listening to written web content can also be taxing. The ARIA standards enhance the structure of written content, so that content is more usable when read aloud. ARIA guidelines can help inform how to indicate structural metadata to support voice interaction.

ARIA encourages publishers to curate their content: to highlight the most important parts that can be read aloud, and to hide parts that aren’t needed. ARIA designates content with landmarks. Publishers can indicate what content has role=“main”, or they can designate parts of content by region. The ARIA standard states: “A region landmark is a perceivable section containing content that is relevant to a specific, author-specified purpose and sufficiently important that users will likely want to be able to navigate to the section easily and to have it listed in a summary of the page.” ARIA also provides a pattern for disclosure, so that not all text is presented at once. All of these features allow publishers to indicate more precisely the priority of different components within the overall content.
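A minimal sketch of these landmarks might look like the following; the section names and the use of the hidden attribute for the longer version are illustrative choices, not requirements of the standard:

  <main role="main">
    <section role="region" aria-label="Summary">
      <p>A short overview, suitable for reading aloud first.</p>
    </section>
    <section role="region" aria-label="Full story" hidden>
      <p>The longer version, disclosed only if the listener asks for more.</p>
    </section>
  </main>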

ARIA supports screen-free content, but it is designed primarily for keyboard/text-to-speech interaction. Its markup is not designed to support conversational interaction — schema.org’s pending speakable specification, mentioned in my previous post, may be a better fit. But some ARIA concepts suggest the kinds of structures that written text need to work effectively as speech. When content conveys a series of ideas, users need to know what are major and minor aspects of text they will be hearing. They need the spoken text to match the time that’s available to listen. Just like some word processors can provide an “auto summary” of a document by picking out the most important sentences, voice-enabled text will need to identify what to include in a short version of the content. The content might be structured in an inverted pyramid, so that only the heading and first paragraph are read in the short version. Users may even want the option of hearing a short version or a long version of a story or explanation.

Structural Metadata and User Intent in Voice Interaction

Structural metadata will help conversational interactions deliver appropriate answers. On the input side, when users are speaking, the role of structural metadata is indirect. People will state questions or commands in natural language, which will be processed to identify synonyms, referents, and identifiable entities, in order to determine the topic of the statement. Machines will also look at the construction of the statement to determine the intent, or the kind of content sought about the topic. Once the intent is known — what kind of information the user is seeking — it can be matched with the most useful kind of content. It is on the output side, when users view or hear an answer, that structural metadata plays an active role selecting what content to deliver.

Already, search engines such as Google rely on structural metadata to deliver specific answers to speech queries. A user can ask Google the meaning of a word or phrase (What does ‘APR’ mean?) and Google locates a term that’s been tagged with structural metadata indicating a definition, such as with the HTML element <dfn>.
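A sketch of the kind of markup involved is below; the wording of the definition is illustrative rather than an authoritative financial definition:

  <p><dfn id="apr">APR</dfn> (annual percentage rate) is the yearly cost of
  borrowing, including fees, expressed as a percentage of the loan amount.</p>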

When a machine understands the intent of a question, it can present content that matches the intent. If a user asks a question starting with the phrase Show me… the machine can select a clip or photograph about the object, instead of presenting or reading text. Structural metadata about the characteristics of components makes that matching possible.

Voice interaction supplies answers to questions, but not all answers will be complete in a single response. Users may want to hear alternative answers, or get more detailed answers. Structural metadata can support multi-answer questions.

Schema.org metadata indicates content that answers questions using the Answer type, which is used by many forums and Q&A pages. Schema.org distinguishes between two kinds of answers. The first, acceptedAnswer, indicates the best or most popular answer, often the answer that received most votes. But other answers can be indicated with a property called suggestedAnswer. Alternative answers can be ranked according to popularity as well. When sources have multiple answers, users can get alternative perspectives on a question. After listening to the first “accepted” answer, the user might ask “tell me another opinion” and a popular “suggested” answer could be read to them.
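A hedged sketch of a question with both kinds of answers might look like this; the question, answer text, and vote counts are invented:

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "Question",
    "name": "Is a 15-year or a 30-year mortgage better?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A 30-year mortgage keeps monthly payments lower.",
      "upvoteCount": 120
    },
    "suggestedAnswer": {
      "@type": "Answer",
      "text": "A 15-year mortgage costs far less in total interest.",
      "upvoteCount": 45
    }
  }
  </script>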

Another kind of multi-part answer involves “How To” instructions. The HowTo type indicates “instructions that explain how to achieve a result by performing a sequence of steps.” The example the schema.org website provides to illustrate the use of this type involves instructions on how to change a tire on a car. Imagine car changing instructions being read aloud on a smartphone or by an in-vehicle infotainment system as the driver tries to change his flat tire along a desolate roadway. This is a multi-step process, so the content needs to be retrievable in discrete chunks.

Schema.org includes several additional types related to HowTo that structure the steps into chunks, including preconditions such as tools and supplies required. These are:

  • HowToSection : “A sub-grouping of steps in the instructions for how to achieve a result (e.g. steps for making a pie crust within a pie recipe).”
  • HowToDirection : “A direction indicating a single action to do in the instructions for how to achieve a result.”
  • HowToSupply : “A supply consumed when performing the instructions for how to achieve a result.”
  • HowToTool : “A tool used (but not consumed) when performing instructions for how to achieve a result.”

These structures can help the content match the intent of users as they work through a multi-step process. The different chunks are structurally connected through the step property. Only the HowTo type (and its more specialized subtype, Recipe) currently accepts the step property and thus can address temporal sequencing.
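The sketch below shows how those pieces might fit together for the tire-changing example. The exact nesting conventions (for instance, whether a section lists its directions through itemListElement) should be checked against the current schema.org documentation, and the instructions themselves are abbreviated:

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "HowTo",
    "name": "Change a flat tire",
    "tool": [
      { "@type": "HowToTool", "name": "Jack" },
      { "@type": "HowToTool", "name": "Lug wrench" }
    ],
    "supply": [
      { "@type": "HowToSupply", "name": "Spare tire" }
    ],
    "step": [
      {
        "@type": "HowToSection",
        "name": "Prepare the car",
        "itemListElement": [
          { "@type": "HowToDirection", "text": "Park on level ground and apply the parking brake." },
          { "@type": "HowToDirection", "text": "Loosen the lug nuts before raising the car." }
        ]
      },
      {
        "@type": "HowToSection",
        "name": "Swap the tire",
        "itemListElement": [
          { "@type": "HowToDirection", "text": "Remove the flat tire, mount the spare, and tighten the lug nuts." }
        ]
      }
    ]
  }
  </script>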

Content Agility Through Structural Metadata

Chatbots, voice interaction and other forms of multimodal content promise a different experience than is offered by screen-centric GUI content. While it is important to appreciate these differences, publishers should also consider the continuities between traditional and emerging paradigms of content interaction. They should be cautious before rushing to create new content. They should start with the content they have, and see how it can be adapted before making content they don’t have.

A decade ago, the emergence of smartphones and tablets triggered an app development land rush. Publishers obsessed over the discontinuity these new devices presented, rather than recognizing their continuity with existing web browser experiences. Publishers created multiple versions of content for different platforms. Responsive web design emerged to remedy the siloing of development. The app bust shows that parallel, duplicative, incompatible development is unsustainable.

Existing content is rarely fully ready for an unpredictable future. The idealistic vision of single source, format free content collides with the reality of new requirements that are fitfully evolving. Publishers need an option between the extremes of creating many versions of content for different platforms, and hoping one version can serve all platforms. Structural metadata provides that bridge.

Publishers can use structural metadata to leverage content they already have to support additional forms of interaction. They can’t assume they will directly orchestrate the interaction with the content. Other platforms such as Google, Facebook or Amazon may deliver the content to users through their services or devices. Such platforms will expect content that is structured using standards, not custom code.

Sometimes publishers will need to enhance existing content to address the unique requirements of voice interaction, or differences in how third party platforms expect content. The prospect of enhancing existing content is preferable to creating new content to address isolated use case scenarios. Structural metadata by itself won’t make content ready for every platform or form of interaction. But it can accelerate its readiness for such situations.

— Michael Andrews


  1. Dialogs in chatbots and voice interfaces also involve sequences of information. But how to sequence a series of cards may be easier to think about than a series of sentences, since viewing cards doesn’t necessarily involve a series of back and forth questions. ↩︎

 

Categories
Content Engineering

Structural Metadata: Key to Structured Content

Structural metadata is the most misunderstood form of metadata.  It is widely ignored, even among those who work with metadata. When it is discussed, it gets confused with other things.  Even people who understand structural metadata correctly don’t always appreciate its full potential. That’s unfortunate, because structural metadata can make content more powerful. This post takes a deep dive into what structural metadata is, what it does, and how it is changing.

Why should you care about structural metadata? The immediate, self-interested answer is that structural metadata facilitates content reuse, taking content that’s already created to deliver new content. Content reuse is nice for publishers, but it isn’t a big deal for audiences.  Audiences don’t care how hard it is for the publisher to create their content. Audiences want content that matches their needs precisely, and that’s easy to use.  Structural metadata can help with that too.

Structural metadata matches content with the needs of audiences. Content delivery can evolve beyond creating many variations of content — the current preoccupation of many publishers. Publishers can use structural metadata to deliver more interactive content experiences.  Structural metadata will be pivotal in the development of multimodal content, allowing new forms of interaction, such as voice interaction.  Well-described chunks of content are like well-described buttons, sliders and other forms of interactive web elements.  The only difference is that they are more interesting.  They have something to say.

Some of the following material will assume background knowledge about metadata.  If you need more context, consult my very approachable book, Metadata Basics for Web Content.

What is Structural Metadata?

Structural metadata is data about the structure of content.  In some ways it is not mysterious at all.  Every time you write a paragraph, and enclose it within a <p> paragraph element, you’ve created some structural metadata.  But structural metadata entails far more than basic HTML tagging.  It gives data to machines on how to deliver the content to audiences. When structural metadata is considered as a fancy name for HTML tagging, much of its potency gets missed.

The concept of structural metadata originated in the library and records management field around 20 years ago. To understand where structural metadata is heading, it pays to look at how it has been defined already.

In 1996, a metadata initiative known as the Warwick Framework first identified structural metadata as “data defining the logical components of complex or compound objects and how to access those components.”

In 2001, a group of archivists, who need to keep track of the relationships between different items of content, came up with a succinct definition:  “Structural metadata can be thought of as the glue that binds compound objects together.”

By 2004, the National Information Standards Organization (NISO) was talking about structural metadata in their standards.  According to their definition in the z39.18 standard, “Structural metadata explain the relationship between parts of multipart objects and enhance internal navigation. Such metadata include a table of contents or list of figures and tables.”

Louis Rosenfeld and Peter Morville introduced the concept of structural metadata to the web community in their popular book, Information Architecture for the World Wide Web — the “Polar Bear” book. Rosenfeld and Morville use the structural metadata concept as a prompt to define the information architecture of a website:

“Describe the information hierarchy of this object. Is there a title? Are there discrete sections or chunks of content? Might users want to independently access these chunks?”

A big theme of all these definitions is the value of breaking content into parts.  The bigger the content, the more it needs breaking down.  The structural metadata for a book relates to its components: the table of contents, the chapters, parts, index and so on.  It helps us understand what kinds of material are within the book, and to access specific sections of the book, even if it doesn’t tell us all the specific things the book discusses.  This is important information, which surprisingly, wasn’t captured when Google undertook their massive book digitization initiative a number of years ago.  When the books were scanned, entire books became one big file, like a PDF.  To find a specific figure or table within a book on Google Books requires searching or scrolling to navigate through the book.

Image of Google Books webpage.
The contents of scanned books in Google Books lack structural metadata, limiting the value of the content.

Navigation is an important purpose of structural metadata: to access specific content, such as a specific book chapter.  But structural metadata has an even more important purpose than making big content more manageable.  It can unbundle the content, so that the content doesn’t need to stay together. People don’t want to start with the whole book and then navigate through it to get to a small part in which they are interested. They want only that part.

In his recent book Metadata, Richard Gartner touches on a more current role for structural metadata: “it defines structures that bring together simpler components into something larger that has meaning to a user.” He adds that such information “builds links between small pieces of data to assemble them into a more complex object.”

In web content, structural metadata plays an important role in assembling content. When content is unbundled, it can be rebundled in various ways.  Structural metadata identifies the components within content types.  It indicates the role of the content, such as whether the content is an introduction or a summary.

Structural metadata plays a different role today than it did in the past, when the assumption was that there was one fixed piece of large content that would be broken into smaller parts, identified by structural metadata.  Today, we may compose many larger content items, leveraging structural metadata, from smaller parts.

The idea of assembling content from smaller parts has been promoted in particular by DITA evangelists such as Ann Rockley (DITA is a widely used framework for technical documentation). Rockley uses the phrase “semantic structures” to refer to structural metadata, which she says “enable(s) us to understand ‘what’ types of content are contained within the documents and other content types we create.”  Rockley’s discussion helpfully makes reference to content types, which some other definitions don’t explicitly mention.  She also introduces another concept with a similar sounding name, “semantically rich” content, to refer to a different kind of metadata: descriptive metadata.  In XML (which is used to represent DITA), the term semantic is used generically for any element. Yet the difference between structural and descriptive metadata is significant — though it is often obscured, especially in the XML syntax.

Curiously, semantic web developments haven’t focused much on structural metadata for content (though I see a few indications that this is starting to change).  Never assume that when someone talks about making content semantic, they are talking about adding structural metadata.

Don’t Confuse Structural and Descriptive Metadata

When information professionals refer to metadata, most often they are talking about descriptive metadata concerning people, places, things, and events.  Descriptive metadata indicates the key information included within the content.  It typically describes the subject matter of the content, and is sometimes detailed and extensive.  It helps one discover what the content is about, prior to viewing the content.  Traditionally, descriptive metadata was about creating an external index — a proxy — such as assigning keywords or subject headings about the content. Over the past 20 years, descriptive metadata has evolved to describing the body of the content in detail, noting entities and their properties.

Richard Gartner refers to descriptive metadata as “finding metadata”: it locates content that contains some specific information.  In modern web technology, it means finding values for a specific field (or property).  These values are part of the content, rather than separate from it.  For example, find smartphones with dual SIMs that are under $400.  The  attributes of SIM capacity and price are descriptive metadata related to the content describing the smartphones.
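A quick sketch shows how such descriptive values live inside the content markup. The product name, price, and SIM detail are invented, and the SIM capacity is expressed with a generic additionalProperty since there is no dedicated schema.org property for it:

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "Product",
    "name": "DualPhone X",
    "additionalProperty": {
      "@type": "PropertyValue",
      "name": "SIM slots",
      "value": "2"
    },
    "offers": {
      "@type": "Offer",
      "price": "349.00",
      "priceCurrency": "USD"
    }
  }
  </script>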

Structural metadata indicates how people and machines can use the content.  If people see a link indicating a slideshow, they have an expectation of how such content will behave, and will decide if that’s the sort of content they are interested in.  If a machine sees that the content is a table, it uses that knowledge to format the content appropriately on a smartphone, so that all the columns are visible.  Machines rely extensively on structural metadata when stitching together different content components into a larger content item.

diagram showing relationship of structural and descriptive metadata
Structural and descriptive metadata can be indicated in the same HTML tag.  This tag indicates the start of an introductory section discussing Albert Einstein.
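The figure is not reproduced here, but a tag along the following lines conveys the same idea. It is a hypothetical reconstruction using HTML microdata: the class attribute carries the structural role (an introductory section), while the itemtype and itemprop attributes carry the descriptive metadata about Einstein. The original diagram may have used a different syntax.

  <section class="introduction" itemscope itemtype="https://schema.org/Person">
    <meta itemprop="name" content="Albert Einstein">
    <p>Albert Einstein was born in Ulm in 1879...</p>
  </section>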

Structural metadata sometimes is confused with descriptive metadata because many people use vague terms such as “structure” and “semantics” when discussing content. Some people erroneously believe that structuring content makes the content “semantic”.  Part of this confusion derives from having an XML-orientation toward content.  XML tags content with angle-bracketed elements. But XML elements can be either structures such as sections, or they can be descriptions such as names.  Unlike HTML, where elements signify content structure while descriptions are indicated in attributes, the XML syntax creates a monster hierarchical tree, where content with all kinds of roles are nested within elements.  The motley, unpredictable use of elements in XML is a major reason it is unpopular with developers, who have trouble seeing what roles different parts of the content have.

The buzzword “semantically structured content” is particularly unhelpful, as it conflates two different ideas together: semantics, or what content means, with structure, or how content fits together.  The semantics of the content is indicated by descriptive metadata, while the structure of the content is indicated by structural metadata.  Descriptive metadata can focus on a small detail in the content, such as a name or concept (e.g., here’s a mention of the Federal Reserve Board chair in this article).  Structural metadata, in contrast, generally focuses on a bigger chunk of content: here’s a table, here’s a sidebar.   To assemble content, machines need to distinguish what the specific content means, from what the structure of the content means.

Interest in content modeling has grown recently, spurred by the desire to reuse content in different contexts. Unfortunately, most content models I’ve seen don’t address metadata at all; they just assume that the content can be pieced together.  The models almost never distinguish between the properties of different entities (descriptive metadata), and the properties of different content types (structural metadata). This can lead to confusion.  For example, a place has an address, and that address can be used in many kinds of content.  You may have specific content types dedicated to discussing places (perhaps tourist destinations) and want to include address information.  Alternatively, you may need to include the address information in content types that are focused on other purposes, such as a membership list.  Unless you make a clear distinction in the content model between what’s descriptive metadata about entities, and what’s structural metadata about content types, many people will be inclined to think there is a one-to-one correspondence between entities and content types, for example, that all addresses belong to the content type discussing tourist destinations.
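A small sketch helps show the distinction. The same PostalAddress entity structure (descriptive metadata) appears below in two different content types, a tourist attraction page and a membership list; the names and addresses are invented:

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "TouristAttraction",
    "name": "Old Town Lighthouse",
    "address": {
      "@type": "PostalAddress",
      "streetAddress": "1 Harbor Road",
      "addressLocality": "Seaside",
      "addressRegion": "VA"
    }
  }
  </script>

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "Organization",
    "name": "Seaside Historical Society",
    "member": {
      "@type": "Person",
      "name": "Jane Doe",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "22 Main Street",
        "addressLocality": "Seaside",
        "addressRegion": "VA"
      }
    }
  }
  </script>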

Structural metadata isn’t merely a technical issue to hand off to a developer.  Everyone on a content team who is involved with defining what content gets delivered to audiences, needs to jointly define what structural metadata to include in the content.

Three More Reasons Structural Metadata Gets Ignored…

Content strategists have inherited frameworks for working with metadata from librarians, database experts and developers. None of those roles involves creating content, and their perspective of content is an external one, rather than an internal one. These hand-me-down concepts don’t fit the needs of online content creators and publishers very well.  It’s important not to be misled by legacy ideas about structural metadata that were developed by people who aren’t content creators and publishers.  Structural metadata gets sidelined when people fail to focus on the value that content parts can contribute in different scenarios.

Reason 1: Focus on Whole Object Metadata

Librarians have given little attention to structural metadata, because they’ve been most concerned with cataloging and locating things that have well-defined boundaries, such as books and articles (and most recently, webpages). Discussion of structural metadata in library science literature is sparse compared with discussions of descriptive and administrative metadata.

Until recently, structural metadata has focused on identifying parts within a whole. Metadata specialists assumed that a complete content item existed (a book or document), and that structural metadata would be used to locate parts within that content. Specifying structural metadata was part of cataloging existing materials. But given the availability of free-text searching and, more recently, natural language processing, many developers question the necessity of adding metadata to subdivide a document. Coding structural metadata seemed like a luxury, and got ignored.

In today’s web, content exists as fragments that can be assembled in various ways.  A document or other content type is a virtual construct, awaiting components. The structural metadata forms part of the plan for how the content can fit together. It’s important to define the pieces first.

Reason 2: Confusion with Metadata Schemas

I’ve recently seen several cases where content strategists and others mix up the concept of structural metadata with the concept of metadata structure, better known as metadata schemas. At first I thought this confusion was simply the result of similar-sounding terms. But I’ve come to realize that some database experts refer to structural metadata in a different way than it is used by librarians, information architects, and content engineers. Some content strategists seem to have picked up this alternative meaning, and repeat it.

Compared to semi-structured web content, databases are highly regular in structure. They are composed of tables of rows and columns. The first column of a row typically serves as a key that identifies what the values relate to. Some database admins refer to those keys or properties as the structure of the data, or the structural metadata. For example, the OECD, which publishes extensive statistical data, says: “Structural metadata refers to metadata that act as identifiers and descriptors of the data. Structural metadata are needed to identify, use, and process data matrixes and data cubes.” What is actually being referred to is the schema of the data table.

Database architects develop many custom schemas to organize their data in tables.  Those schemas are very different from the standards-based structural metadata used in content.  Database tables provide little guidance on how content should be structured.  Content teams shouldn’t rely on a database expert to guide them on how to structure their content.

Reason 3: Treated as Ordinary Code

Web content management systems are essentially big databases built in programming languages such as PHP or .NET. There’s a proclivity among developers to treat chunks of content as custom variables. As one developer noted when discussing WordPress: “In WordPress (WP), the meaning of Metadata is a bit fuzzier. It stores post metadata such as custom fields and additional metadata added via plugins.”

As I’ve noted elsewhere, many IT systems that manage content ignore web metadata standards, resulting in silos of content that can’t work together. It’s not acceptable to define chunks of content as custom variables. The purpose of structural metadata is to allow different chunks of content to connect with each other.  CMSs need to rely on web standards for their structural metadata.

Current Practices for Structural Metadata

For machines to piece together content components into a coherent whole, they need to know the standards for the structural metadata.

Until recently, structural metadata has been indicated only during the prepublication phase, an internal operation where standards were less important.  Structural metadata was marked up in XML together with other kinds of metadata, and transformed into HTML or PDF.  Yet a study in the journal Semantic Web last year noted: “Unfortunately, the number of distinct vocabularies adopted by publishers to describe these requirements is quite large, expressed in bespoke document type definitions (DTDs). There is thus a need to integrate these different languages into a single, unifying framework that may be used for all content.”

XML continues to be used in many situations. But a recent trend has been to adopt more lightweight approaches, using HTML to publish content directly. Bypassing XML is often simpler, though the plainness of HTML creates some issues as well.

As Jeff Eaton has noted, getting specific about the structure of content using HTML elements is not always easy:

“We have workhorse elements like ul, div, and span; precision tools like cite, table, and figure; and new HTML5 container elements like section, aside, and nav. But unless our content is really as simple as an unattributed block quote or a floated image, we still need layers of nested elements and CSS classes to capture what we really mean.”

Because HTML elements are not very specific, publishers often don’t know how to represent structural metadata within HTML.  We can learn from the experience of publishers who have used XML to indicate structure, and who are adapting their structures to HTML.

Scientific research and technical documentation are two genres where content structure is well established and structural metadata is mature. Both genres have explored how to indicate the structure of their content in HTML.

Scientific research papers are a distinct content type that follows a regular pattern. The National Library of Medicine’s Journal Article Tag Suite (JATS) formalizes the research paper structure into a content type as an XML schema.  It provides a mixture of structural and descriptive metadata tags that are used to publish biomedical and other scientific research.  The structure might look like:

<sec sec-type="intro">

<sec sec-type="materials|methods">

<sec sec-type="results">

<sec sec-type="discussion">

<sec sec-type="conclusions">

<sec sec-type="supplementary-material" ... >

Scholarly HTML is an initiative to translate the typical sections of a research paper into common HTML.  It uses HTML elements, and supplements them with typeof attributes to indicate more specifically the role of each section.  Here’s an example of some attribute values in their namespace, noted by the prefix “sa”:

<section typeof="sa:MaterialsAndMethods">

<section typeof="sa:Results">

<section typeof="sa:Conclusion">

<section typeof="sa:Acknowledgements">

<section typeof="sa:ReferenceList">

As we can see, these sections overlap with the JATS, since both are describing similar content structures.  The Scholarly HTML initiative is still under development, and it could eventually become a part of the schema.org effort.

DITA — the technical documentation architecture mentioned earlier — is a structural metadata framework that embeds some descriptive metadata.  DITA structures topics, which can be different information types: Task, Concept, Reference, Glossary Entry, or Troubleshooting, for example.  Each type is broken into structural elements, such as title, short description, prolog, body, and related links.  DITA is defined in XML, and uses many idiosyncratic tags.

HDITA is a draft syntax to express DITA in HTML. It converts DITA-specific elements into HTML attributes, using the custom data-* attribute. For example, a “key definition” element <keydef> becomes an attribute within an HTML element, e.g. <div data-hd-class="keydef">. Types are expressed with the attribute data-hd-type.
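As a purely hypothetical sketch of that pattern (the type and class values shown here are illustrative, not taken from the HDITA draft), a task topic might carry its DITA roles as data attributes:

<article data-hd-type="task">
  <h1>Update the firmware</h1>
  <p data-hd-class="shortdesc">…</p>
</article>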

The use of data-* attributes offers some advantages, such as JavaScript access by clients. They are not, however, intended for use as a cross-publisher metadata standard. The W3C notes: “A custom data attribute is an attribute in no namespace…intended to store custom data private to the page or application.” It adds:

“These attributes are not intended for use by software that is not known to the administrators of the site that uses the attributes. For generic extensions that are to be used by multiple independent tools, either this specification should be extended to provide the feature explicitly, or a technology like microdata should be used (with a standardized vocabulary).”

The HDITA drafting committee appears to use “hd” in the data attribute to signify that the attribute is specific to HDITA.  But they have not declared a namespace for these attributes (the XML namespace for DITA is xmlns:ditaarch.)  This will prevent automatic machine discovery of the metadata by Google or other parties.

The Future of Structural Metadata

Most recently, several initiatives have explored possibilities for extending structural metadata in HTML.  These revolve around three distinct approaches:

  1. Formalizing structural metadata as properties
  2. Using WAI-ARIA to indicate structure
  3. Combining class attributes with other metadata schemas

New Vocabularies for Structures

The web standards community is starting to show more interest in structural metadata. Earlier this year, the W3C released the Web Annotation Vocabulary. It provides properties to indicate comments about content. Comments are an important structure in web content, used in many genres and scenarios. Imagine readers highlighting passages of text. For such annotations to be captured, there must be a way to indicate what part of the text is being referenced. The annotation vocabulary can reference specific HTML elements and even CSS selectors within a body of text.
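Here is a sketch of how such an annotation might look (the URL and selector value are hypothetical), expressed as JSON-LD following the Web Annotation model:

<script type="application/ld+json">
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "type": "Annotation",
  "body": {
    "type": "TextualBody",
    "value": "Worth citing in the report."
  },
  "target": {
    "source": "https://example.com/articles/structural-metadata",
    "selector": {
      "type": "CssSelector",
      "value": "article > section:nth-of-type(2) > p:first-of-type"
    }
  }
}
</script>

The selector ties the comment to a specific structure within the page, rather than to the page as a whole.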

Outside of the W3C, a European academic group has developed the Document Components Ontology (DoCO), “a general-purpose structured vocabulary of document elements.” It is a detailed set of properties for describing common structural features of text content. The DoCO vocabulary can be used by anyone, though its initial adoption will likely be limited to research-oriented publishers. However, many specialized vocabularies such as this one have become extensions to schema.org. If DoCO were in some form absorbed by schema.org, its usage would increase dramatically.
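As a rough sketch (assuming DoCO’s published class names, such as Section and Paragraph, and its purl.org namespace), the ontology could be referenced from ordinary HTML with RDFa attributes:

<section vocab="http://purl.org/spar/doco/" typeof="Section">
  <h2>Methods</h2>
  <p typeof="Paragraph">…</p>
</section>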

Diagram showing the Document Components Ontology

WAI-ARIA

WAI-ARIA is commonly thought of as a means to make functionality accessible.  However, it should be considered more broadly as a means to enhance the functionality of web content overall, since it helps web agents understand the intentions of the content. WAI-ARIA can indicate many dynamic content structures, such as alerts, feeds, marquees, and regions.
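For instance, a live status message and a stream of updates could be marked up with core WAI-ARIA roles (the wording and labels here are just placeholders):

<div role="alert">Your draft has been saved.</div>

<div role="feed" aria-label="Latest updates">
  <article>…</article>
  <article>…</article>
</div>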

The new Digital Publishing WAI-ARIA module developed out of the EPUB standards, which have a richer set of structural metadata than is available in standard HTML5. The goal of the Digital Publishing WAI-ARIA is to “produce structural semantic extensions to accommodate the digital publishing industry”. It defines the following structural roles:

  • doc-abstract
  • doc-acknowledgments
  • doc-afterword
  • doc-appendix
  • doc-backlink
  • doc-biblioentry
  • doc-bibliography
  • doc-biblioref
  • doc-chapter
  • doc-colophon
  • doc-conclusion
  • doc-cover
  • doc-credit
  • doc-credits
  • doc-dedication
  • doc-endnote
  • doc-endnotes
  • doc-epigraph
  • doc-epilogue
  • doc-errata
  • doc-example
  • doc-footnote
  • doc-foreword
  • doc-glossary
  • doc-glossref
  • doc-index
  • doc-introduction
  • doc-noteref
  • doc-notice
  • doc-pagebreak
  • doc-pagelist
  • doc-part
  • doc-preface
  • doc-prologue
  • doc-pullquote
  • doc-qna
  • doc-subtitle
  • doc-tip
  • doc-toc

 

To indicate the structure of a text box showing an example:

<aside role="doc-example">

<h1>An Example of Structural Metadata in WAI-ARIA</h1>

…

</aside>

Content expressing a warning might look like this:

<div role="doc-notice" aria-label="Explosion Risk">

<p><em>Danger!</em> Mixing reactive materials may cause an explosion.</p>

</div>

Although book-focused, these doc-* roles provide a rich set of structural elements that can be used with many kinds of content. In combination with the core WAI-ARIA roles, they can describe the structure of web content in extensive detail.

CSS as Structure

For a long while, developers have been creating pseudo-structures using CSS classes, such as infoboxes that enclose certain information. Class is a global attribute of HTML, but it has become so closely associated with CSS that some believe styling is its only purpose. Yet Wikipedia notes: “The class attribute provides a way of classifying similar elements. This can be used for semantic purposes, or for presentation purposes.” Some developers use what are called “semantic classes” to indicate what content is about. The W3C advises when using the class attribute: “authors are encouraged to use values that describe the nature of the content, rather than values that describe the desired presentation of the content.”

Some developers claim that the class attribute should never be used to indicate the meaning of content within an element, because HTML elements will always make that clear. I agree that web content should never use the class attribute as a substitute for a meaningful HTML element. But the class attribute can sometimes further refine the meaning of an HTML element. Its chief limitation is that class names carry private meanings. Yet if they are self-describing, they can be useful.
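For example, a self-describing class can refine a generic element, while a presentational class says nothing about meaning (the class names here are illustrative and private to the publisher):

<section class="ingredients">  <!-- refines the meaning of the section -->
  <ul>
    <li>2 eggs</li>
    <li>200 g flour</li>
  </ul>
</section>

<div class="left-column">…</div>  <!-- presentational only -->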

Class attributes are useful for selecting content, but they operate outside of metadata standards.  However, schema.org is proposing a property that will allow class values to be specified within schema.org metadata.  This has potentially significant implications for extending the scope of structural metadata.

The motivating use case is as follows: “There is a need for authors and publishers to be able to easily call out portions of a Web page that are particularly appropriate for reading out aloud. Such read-aloud functionality may vary from speaking a short title and summary, to speaking a few key sections of a page; in some cases, it may amount to speaking most non-visual content on the page.”

The pending cssSelector property in schema.org can identify named portions of a web page. The named class could represent a structure, such as a summary or a headline, that is more specific than any HTML element. The cssSelector has a companion property called xpath, which identifies HTML elements positionally, such as the paragraphs that follow h2 headings.
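A sketch of how the pending proposal might be used for the speakable scenario, assuming the proposed SpeakableSpecification type (the URL and class names are hypothetical):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "url": "https://example.com/metadata-news",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".headline", ".summary"]
  }
}
</script>

The xpath companion property would work the same way, but would identify the spoken portions positionally rather than by class.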

These features are not yet fully defined. In addition to indicating speakable content, the cssSelector can indicate parts of a web page. According to a GitHub discussion: “The ‘cssSelector’ (and ‘xpath’) property would be particularly useful on http://schema.org/WebPageElement to indicate the part(s) of a page matching the selector / xpath. Note that this isn’t ‘element’ in some formal XML sense, and that the selector might match multiple XML/HTML elements if it is a CSS class selector.” This could be useful for selecting content targeted at specific devices.

The class attribute can identify structures within the web content, working together with entity-focused properties that describe specific data relating to the content.  Both of these indicate content variables, but they deliver different benefits.

Entity-based (descriptive) metadata can be used for content variables about specific information. These will often be text or numeric variables. Use descriptive metadata variables when choosing what informational details to put in a message.

Structural metadata can be used for phrase-based variables, indicating reusable components. Phrases can be either blocks (paragraphs or divs) or snippets (spans). Use structural metadata variables when choosing the wording to convey a message in a given scenario.

A final interesting point about cssSelector in schema.org: like other schema.org properties, it can be expressed either as inline markup in HTML (microdata) or in an external JSON-LD script. This gives developers the flexibility to choose between coding libraries optimized for arrays (JSON-flavored) and ones focused on selectors. For too long, what metadata gets included has been influenced by developer preferences in coding libraries. The fact that CSS selectors can be expressed as JSON suggests that hurdle is being transcended.

Conclusion

Structural metadata is finally getting some love in the standards community, even though awareness of it remains low among developers.  I hope that content teams will consider how they can use structural metadata to be more precise in indicating what their content does, so that it can be used flexibly in emerging scenarios such as voice interactions.

— Michael Andrews