Categories
Big Content

Time to end Google’s domination of schema.org

Few companies enjoy being the object of public scrutiny.  But Google, one of the world’s most recognized brands, seems especially averse.  Last year, when Sundar Pichai, Google’s chief executive, was asked to testify before Congress about antitrust concerns, he refused.  Congress held the hearing without him.  His name card was there, in front of an empty chair.

Last month Congress held another hearing on antitrust.  This time, Pichai was in his chair in front of the cameras, remotely if reluctantly.   During the hearings, Google’s fairness was a focal issue.  According to a summary of the testimony on the ProMarket blog of the University of Chicago Business School’s Stigler Center: “If the content provider complained about its treatment [by Google], it could be disappeared from search. Pichai didn’t deny the allegation.”

One of the major ways that content providers gain (or lose) visibility on Google — the first option that most people choose to find information — is through their use of a metadata standard known as schema.org.  And the hearing revealed that publishers are alleging that Google engages in bullying tactics relating to how their information is presented on the Google platform.  How might these issues be related?  Who sits in the chair that decides the stakes?

Metadata and antitrust may seem arcane and wonky topics, especially when looked at together. Each requires some basic knowledge to understand, so it is rare that the interaction between the two is discussed. Yet it’s never been more important to remove the obscurity surrounding how the most widely used standard for web metadata, schema.org, influences the fortunes of Google, one of the most valuable companies in the world.

Why schema.org is important to Google’s dominant market position

Google controls 90% of searches in the US, and its Android operating system powers 9 of 10 smartphones globally.  Both these products depend on schema.org metadata (or “structured data”) to induce consumers to use Google products.  

A recent investigation by The Markup noted that “Google devoted 41 percent of the first page of search results on mobile devices to its own properties and what it calls ‘direct answers.’”  Many of these direct answers are populated by schema.org metadata that publishers provide in hopes of driving traffic to their websites.  But Google has a financial incentive to stop traffic from leaving its websites.  The Markup notes that “Google makes five times as much revenue through advertising on its own properties as it does selling ad space on third-party websites.”  Tens of billions of dollars of Google revenues depend in some way on the schema.org metadata.  In addition to web search results, many Google smartphone apps including Gmail capture schema.org metadata that can support other ad-related revenues.  

During the recent antitrust hearings, Congressman Cicilline told Sundar Pichai that “Google evolved from a turnstile to the rest of the web to a walled garden.”  The walled garden problem is at the core of Google’s monopoly position.   Perversely, Google has been able to manipulate the use of public standards to create a walled garden for its products.  Google reaps a disproportionate benefit from the standard by preventing broader uses of it that could result in competitive threats to Google.

The deep irony is that a W3C-sanctioned metadata standard, schema.org, has been captured by a tech giant not just to promote its unique interests but to limit the interests of others.  Schema.org was supposed to popularize the semantic web and help citizens gain unprecedented access to the world’s information.  Yet Google has managed to monopolize this public asset.

How schema.org became a walled garden

How Google came to dominate a W3C-affiliated standard requires a little history.  The short history is that Google has always been schema.org’s chief patron.  It created schema.org and promoted it in the W3C.  Since then, it has consolidated its hold on it. 

The semantic web — the inspiration for schema.org — has deep roots in the W3C.  Tim Berners-Lee, the inventor of the World Wide Web, coined the concept and has been its major champion.  The commercialization of the approach has been long in the making. Metaweb was the first venture-funded company to commercialize the semantic web with its product Freebase.  The New York Times noted at the time: “In its ambitions, Freebase has some similarities to Google — which has asserted that its mission is to organize the world’s information and make it universally accessible and useful. But its approach sets it apart.”  Google bought Metaweb and its Freebase database in 2010, removing a potential competitor in the process.   The following year (2011) Google launched the schema.org initiative, bringing along Bing and Yahoo, the other search engines that competed with Google.  While the market shares of Bing and Yahoo were small compared to Google’s, the initiative’s launch raised hopes that more options would be available for search.  Google noted: “With schema.org, site owners can improve how their sites appear in search results not only on Google, but on Bing, Yahoo! and potentially other search engines as well in the future.”  Nearly a decade later, there is even less competition in search than there was when schema.org was created.

In 2015 a Google employee proposed that schema.org become a W3C community group.  He soon became the chair of the group once it was formed.  

By making schema.org a W3C community group, the Google-driven initiative gained credibility through the W3C’s endorsement as a community-driven standard. Previously, only Google and its initiative partners (Microsoft’s Bing, Yahoo, and later Russia’s Yandex) had any say over the decisions that webmasters and other individuals involved with publishing web content needed to follow, a situation which could have triggered antitrust alarms relating to collusion.   Google also faced the challenge of encouraging webmasters to adopt the schema.org standard.  Webmasters had been slow to embrace the standard and assume the work involved with using it.  Making schema.org an open, community-driven standard solved multiple problems for Google at once.

In normal circumstances — untinged by a massive and domineering tech platform — an open standard should have encouraged webmasters to participate in the standards-making process and express their goals and needs. Ideally, a community-driven standard would be the driver of innovation. It could finally open up the semantic web for the benefit of web users.  But the tight control Google has exercised over the schema.org community has prevented that from happening.

The murky ownership of the schema.org standard

From the beginnings of schema.org, Google’s participation has been more active than anyone else’s, and Google’s guidance about schema.org has been more detailed than even the official schema.org website’s.  This has created a great deal of confusion among webmasters about what schema.org requires for compliance with the standard, as opposed to what Google requires for compliance for its search results and ranking.  It’s common for an SEO specialist to ask a question about Google’s search results in a schema.org forum.  Even people with a limited knowledge of schema.org’s mandate assume — correctly — that it exists primarily for the benefit of Google.

In theory, Google is just one of numerous organizations that implements a standard that is created by a third party.  In practice, Google is both the biggest user of the schema.org standard — and also its primary author.  Google is overwhelmingly the biggest consumer of schema.org structured data.  It also is by far the most active contributor to the standard.  Most other participants are along for the ride: trying to keep up with what Google is deciding internally about how it will use schema.org in its products, and what it is announcing externally about changes Google wants to make to the standard.

In many cases, if you want to understand the schema.org standard, you need to rely on Google’s documentation.  Webmasters routinely complain about the quality of schema.org’s documentation: its ambiguities, or the lack of examples.  Parts of the standard that are not priorities for Google are not well documented anywhere.  If they are priorities for Google, however, Google itself provides excellent documentation about how information should be specified in schema.org so that Google can use it.   Because schema.org’s documentation is poor, the focus of attention stays on Google.

The reliance that nearly everyone has on Google to ascertain compliance with schema.org requirements was highlighted last month by Google’s decision to discontinue its Structured Data Testing Tool, which is widely used by webmasters to check that their schema.org metadata is correct — at least as far as Google is concerned.  Because the concrete implementation requirements of schema.org are often murky, many rely on this Google tool to verify the correctness of the data independently of how the data would be used.  Google is replacing this developer-focused tool with a website that checks whether the metadata will display correctly in Google’s “rich results.”  The new “Rich Results Test Tool” acknowledges finally what’s been an open secret: Google’s promotion of schema.org is primarily about populating its walled garden with content.  

Google’s domination of the schema.org community

The purpose of a W3C group should be to serve everyone, not just a single company. In the case of schema.org, a W3C community has been dominated from the start by a single company: Google.

Google has chaired the schema.org community continuously since its inception in 2015.   Microsoft (Bing) and Yahoo (now Verizon), who are minor players in the search business, participate nominally but are not very active considering they were founding members of schema.org.  Google, in contrast, has multiple employees active in community discussions, steering the direction of conversations.  These employees shape the core decisions, together with a few independent consultants who have longstanding relationships with Google.  It’s hard to imagine any decision happening without Google’s consent.  Google has effective veto power over decisions.

Google’s domination of the schema.org community is possible because the community has no resources of its own.  Google conveniently volunteers the labor of its employees to perform duties related to community business, but these activities will naturally reflect the interests of the employer, Google.  Since other firms don’t have the same financial stake in the outcomes of schema.org decisions that Google has through its market dominance of search and smartphones, they don’t allocate their employees to spend time on schema.org issues.  Google corners the discussion while appearing to be the most generous contributor.

The absence of governance in the schema.org community

The schema.org community essentially has zero governance — a situation Google is happy with.  There are no formal rules, no formal process for proposals and decisions, no way to appeal a decision, and no formal roles apart from the chair, who ultimately can decide everything. There’s no process of recusal.  Google holds sway in part because the community has no permanent and independent staff.  And there’s no independent board of oversight reviewing how business is conducted.

It’s tempting to see the absence of governance as an example of a group of developers who have a disdain for bureaucracy — that’s the view Google encourages.  But the commercial and social significance of these community decisions is enormous and shouldn’t be cloaked in capricious informality.  Moreover, the more mundane problems of a lack of process are also apparent.  Many people who attempt to make suggestions feel frozen out and unwelcome. Suggestions may be challenged by core insiders who have deep relationships with one another.  The standards-making process itself lacks standardization.

In the absence of governance, the possibility of conflicts of interest is substantial.  First, there’s the problem of self-dealing: Google using its position as the chair of a public forum to prioritize its own commercial interests ahead of others.  Second, there’s the possibility that non-Google proposals will be stopped because they are seen as costly to Google, if only because they create extra work for the largest single user of schema.org structured data.

As a public company, Google is obligated to its shareholders — not to larger community interests.  A salaried Google employee can’t simultaneously promote his company’s commercial interests and promote interests that could weaken his company’s competitive position.  

Community bias in decisions

Few people want an open W3C community to exhibit biases in its decisions.  But owing to Google’s outsized participation and the absence of governance, decision making that’s biased toward Google’s priorities is common.

Whatever Google wants is fast-tracked — sometimes happening within a matter of days.  If a change to schema.org is needed to support a Google product that needs to ship, nothing is allowed to slow that down.

Suggestions from people not affiliated with Google face a tougher journey.  If a suggestion does not match Google’s priorities, it is slow-walked.  It will be challenged as to its necessity or practicality.  It will languish as an open issue on GitHub, where it will go unnoticed unless it generates an active discussion.  Eventually, the chair will cull proposals that have been long buried, in the interest of closing out open issues.

While individuals and groups can propose suggestions of their own, successful ones tend to be incremental in nature, already aligned with Google’s agenda.  More disruptive or innovative ones are less likely to be adopted.

In the absence of a defined process, the ratification of proposals tends to happen through an informal virtual acclamation.  Various Google employees will conduct a public online discussion agreeing with one another on the merits of adopting a proposal or change.  With “community” sentiment demonstrated, the change is pushed ahead.  

Consumer harm from Google’s capture of schema.org

Google’s domination of schema.org is an essential part of its business model.  Schema.org structured data drives traffic to Google properties, and Google has leveraged it so that it can present fewer links that would drive traffic elsewhere.  The more time consumers spend on Google properties, the more their information decisions are limited to the ads that Google sells.  Consumers need to work harder to find “organic” links (objectively determined by their query and involving no payment to Google) to information sources they seek.

A technical standard should be a public good that benefits all.  In principle, publishers that use schema.org metadata should be able to expand the reach of their information, so that apps from many firms take advantage of it, and consumers have more choices about how and where they get their information.  The motivating idea behind the kind of semantic structured data that schema.org provides is that information becomes independent of platforms.  But ironically, for consumers to enjoy the value of structured data, they mostly need to use Google products.  This is a significant market failure, and it hasn’t happened by accident.

The original premise of the semantic web was based on openness.  Publishers freely offered information, and consumers could freely access it.  But the commercial version, driven by Google, has changed this dynamic.  The commercial semantic web isn’t truly open; it is asymmetrically open.  It involves open publishing but closed access.  Web publishers are free to publish their data using the schema.org standard and are actively encouraged to do so by Google. The barriers to creating structured data are minimal, though the barriers to retrieving it aren’t.  

Right now, only a firm with the scale of Google is in a position to access this data and normalize it into something useful for consumers.  Google’s formidable ad revenues allow it to crawl the web and harvest the data for its private gain.  A few other firms are also harvesting this data to build private knowledge graphs that similarly provide gated access.  The goal of open consumer access to this data remains elusive.  A small company may invest time or money to create structured data, but it lacks the means to use structured data for its own purposes.   But it doesn’t have to be this way.

Has Google’s domination of schema.org stifled innovation?

When considering how big tech has influenced innovation, it is necessary to pose a counterfactual question: What might have been possible if the heavy hand of a big tech platform hadn’t been meddling?

Google’s routine challenge to suggestions for additions to the schema.org vocabulary is to question whether the new idea will be used.  “What consuming application is going to use this?” is the common screening question.  If Google isn’t interested in using it, why is it worthwhile doing?  Unless the individual making the suggestion is associated with a huge organization that will build significant infrastructure around the new proposal, the proposal is considered unviable.  

The word choice of “consuming applications” is an example of how Google avoids referring to itself and its market dominance.  The Markup recently revealed how Google coaches its employees to avoid phrases that could get it in additional antitrust trouble.  Within the schema.org community group, Google employees strive to make discussions appear objective, as if Google were a disinterested party to the decision.

One area where Google has discouraged alternative developments is the linking of schema.org data with data described by other metadata vocabularies (standards).  This is significant for multiple reasons.  The schema.org vocabulary is limited in scope, mostly focusing on commercial entities rather than non-commercial ones.  Because Google is not interested in non-commercial entity coverage, publishers need to rely on other vocabularies.  But Google doesn’t want to look at other vocabularies, claiming that it is too taxing to crawl data described by them.  In this, Google is making a commercial decision that goes against the principles of linked data (a cornerstone of the semantic web), which explicitly encourage the mixing of vocabularies.  Publishers are forced to obey Google’s diktats.  Why should they supply metadata that Google, the biggest consumer of schema.org metadata, says it will ignore?  With a few select exceptions, Google mandates that only schema.org metadata should be used in web content, and no other semantic vocabularies.  Google sets the vision of what schema.org is, and what it does.
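
To make the linked data principle concrete, here is a minimal sketch of what mixing vocabularies can look like in JSON-LD. The Dublin Core property and all of the values are illustrative choices on my part, not anything schema.org or Google prescribes.

```html
<script type="application/ld+json">
{
  "@context": {
    "@vocab": "https://schema.org/",
    "dcterms": "http://purl.org/dc/terms/"
  },
  "@type": "Dataset",
  "name": "City air quality readings",
  "description": "Hourly particulate readings from municipal sensors.",
  "dcterms:provenance": "Collected and published by the municipal environment office."
}
</script>
```

A consumer that honors linked data conventions could read both statements; a consumer that recognizes only schema.org terms would simply ignore the dcterms property, which is exactly the behavior Google says it prefers.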

To break this cycle, the public should be asking: How might consumers access and utilize information from the information commons without relying on Google?

There are several paths possible.  One might involve opening up the web crawl to wider use by firms of all sizes.  Another would be to expand the role of the schema.org vocabularies in APIs to support consumer apps.  Whatever path is pursued, it needs to be attractive to small firms and startups to bring greater diversity to consumers and spark innovation.

Possibilities for reform: Getting Google out of the way

If schema.org is to continue as a W3C community and remain associated with the trust conferred by that designation, it will require serious reform.  It needs governance — and independence from Google.  It may need to transform into something far more formal than a community group.

In its current incarnation, it’s difficult to imagine this level of oversight.  The community is resource-starved, and relies on Google to function. But if schema.org isn’t viable without Google’s outsized involvement, then why does it exist at all?  Whose community is it?

There’s no rationale to justify the W3C  lending its endorsement to a community that is dominated by a single company.  One solution is for schema.org to cease being part of the W3C umbrella and return to its prior status of being a Google-sponsored initiative.  That would be the honest solution, barring more sweeping changes.

Another option would be to create a rival W3C standard that isn’t tied to Google and therefore couldn’t be dominated by it, but a standard Google couldn’t afford to ignore.  That would be a more radical option, involving significant reprioritization by publishers.  It would be disruptive in the short term, but might ultimately result in greater innovation.  A starting point for this option would be to explore how to popularize Wikidata as a general-purpose vocabulary that could be used instead of schema.org.

A final option would be for Google to step up in order to step down.  It could acknowledge that it has benefited enormously from the thousands of webmasters and others who contribute structured data, and that it owes a debt to them.  It could offer to pay that debt back in kind.  Google could draw on the recent example of Facebook’s funding of an independent body that will provide oversight of that company.  Google could fund a truly independent body to oversee schema.org, and financially guarantee the creation of a new organizational structure.   Such an organization would leave no questions about how decisions are made and would dispel industry concerns that Google is gaining unfair advantages.  Given the heightening regulatory scrutiny of Google, this option is not as extravagant as it may first sound.

On a pragmatic level, I would like to see schema.org realize its full potential.  This issue is important enough to merit broader discussion, not just in the narrow community of people who work on web metadata, but also among those involved with regulating technology and examining antitrust.  Google spends considerable sums, often furtively, hiring academic experts and others to dismiss concerns about its market dominance.  The role of metadata should be to make information more transparent.  That’s why this matters in so many ways.

— Michael Andrews

Clarification: schema.org’s status as a  north star and as a standard (August 12)

The welcome page of schema.org notes it is “developed by an open community process, using the public-schemaorg@w3.org mailing list.”   When I first published this post I referred to schema.org as an “open W3C metadata standard.” Dan Brickley of Google tweeted to me and others stating that I made a “simple factual error” doing so. He is technically correct that my characterization of the W3C’s role is not precise, so I have changed the wording to say “a W3C-sanctioned metadata standard” instead (sanctioned = permitted), which is the most accurate I can manage, given the intentionally confusing nature of schema.org’s mandate.  This may seem like mincing words, but the implications are important, and I want to elaborate on what those are.

It is true that schema.org is not an official W3C standard in the sense that HTML5 is, which had a cast of thousands involved in its development.  For a specification to become an official W3C standard, it needs to go through a long process of community vetting, moving through stages such as Candidate Recommendation along the way.  Even a Candidate Recommendation is not yet an official standard, though it may be widely followed.  Just because technical guidelines aren’t official W3C standards, or aren’t even referred to as standards, does not mean they don’t have the effect of a standard that others are expected to follow in order to gain market acceptance. Standards vary in the degree to which they are voluntary — schema.org has always been a voluntary standard.  And there are different levels of standards maturity within the W3C’s standards-making framework, with the most mature ones reflecting the most stringent levels of compliance.  Discussions in a W3C community group around standards proposals are the least rigorous, and are normally associated with the least developed stage of standards activity.  A community group is typically associated with new ideas for standards, rather than well-formed standards that are already widely used by thousands of companies.

A key difference with the schema.org community group is that it hosts discussions about a fully formed standard.  This standard was fully formed before there was ever a community group to discuss it.  In other words, there was never any community input on the basic foundation of schema.org.  Google decided this together with its partners in the schema.org initiative.

So I agree that schema.org fails to satisfy the expectations of a W3C standard.  The W3C has a well-established process for standards, and schema.org’s governance doesn’t remotely align with how a W3C standard is developed.  

The problem is that by having a fully formed standard discussed in a W3C forum, it appears as if schema.org is a W3C standard of some sort.  Appearances do matter. Webmasters on the W3C mailing list can reasonably assume the W3C endorses schema.org.  And by hosting a community group on schema.org, the W3C has lent support to schema.org.  To outsiders, the W3C appears to be sponsoring its development and, one would presume, to be interested in having open participation in decision making about it.  The terms of service for schema.org treat “the schemas published by Schema.org as if the schemas were W3C Recommendations.”  The optics of schema.org imply it is W3C-ish.

Dan Brickley refers to schema.org as an “independent project” and not a “W3C thing.”  I’m not reassured by that characterization, which is the first time I’ve heard Google draw explicit distance from W3C affiliation.  He seems to be rejecting the notion that the W3C should provide any oversight over the schema.org process.  They’re merely providing a free mailing list.  The four corporate “sponsors” of schema.org set the binding conditions of the terms of service.  Nothing schema.org is working on is intended to become an official W3C standard and hence subject to W3C governance.

Even though 10 million websites use schema.org metadata and are affected by its decisions, schema.org’s decision making is tightly held.   Ultimate decision making authority rests with a Steering Committee (also chaired by Google) that is invitation-only and not open to public participation.  Supposedly, a W3C representative is allowed to sit on this committee, though the details about this, like much else in schema.org’s documentation, are unclear.   

It may seem reassuring to imagine that schema.org belongs to a nebulous entity called the “community,” but that glosses over how much of the community activities and decisions are Google-driven. Google does draw on the expertise and ideas of others, so that schema.org is more than one company’s creation.  But in the end, Google keeps tight control over the process so that schema.org reflects its priorities.  It would be simpler to call this the Google Structured Data schema.  

Schema.org appears to be public and open, while in practice it is controlled by a small group of competitors and one firm in particular. Google is having its cake and eating it too.  If schema.org does not want W3C oversight, then the W3C should disavow having a connection with it, and help to reduce at least some of the confusion about who is in control of schema.org.

Categories
Agility

Seamless: Structural Metadata for Multimodal Content

Chatbots and voice interaction are hot topics right now. New services such as Facebook Messenger and Amazon Alexa have become popular quickly. Publishers are exploring how to make their content multimodal, so that users can access content in varied ways on different devices. User interactions may be either screen-based or audio-based, and will sometimes be hands-free.

Multimodal content could change how content is planned and delivered. Numerous discussions have looked at one aspect of conversational interaction: planning and writing sentence-level scripts. Content structure is another dimension relevant to voice interaction, chatbots and other forms of multimodal content. Structural metadata can enable the reuse of existing web content to support multimodal interaction. It can help publishers escape the tyranny of having to write special content for each distinct platform.

Seamless Integration: The Challenge for Multimodal Content

In-Vehicle Infotainment (IVI) systems such as Apple’s CarPlay illustrate some of the challenges of multimodal content experiences. Apple’s Human Interface Guidelines state: “On-screen information is minimal, relevant, and requires little decision making. Voice interaction using Siri enables drivers to control many apps without taking their hands off the steering wheel or eyes off the road.” People will interact with content hands-free, and without looking. CarPlay includes six distinct inputs and outputs:

  1. Audio
  2. Car Data
  3. iPhone
  4. Knobs and Controls
  5. Touchscreen
  6. Voice (Siri)

The CarPlay UIKit even includes “Drag and Drop Customization”. When I review these details, much seems as if it could be distracting to drivers. Apple states that with CarPlay, “iPhone apps that appear on the car’s built-in display are optimized for the driving environment.” What that iPhone app optimization means in practice could determine whether the driver gets in an accident.

CarPlay screenshot
CarPlay: if it looks like an iPhone, does it act like an iPhone? (screenshot via Apple)

Multimodal content promises seamless integration between different modes of interaction, for example, reading and listening. But multimodal projects carry a risk as well if they try to port smartphone or web paradigms into contexts that don’t support them. Publishers want to reuse content they’ve already created. But they can’t expect their current content to suffice as it is.

In a previous post, I noted that structural metadata indicates how content fits together. Structural metadata is a foundation of a seamless content experience. That is especially true when working with multimodal scenarios. Structural metadata will need to support a growing range of content interactions, involving distinct modes. A mode is a form of engaging with content, in terms of both requesting and receiving information. A quick survey of these modes suggests many aspects of content will require structural metadata.

Platform Example           | Input Mode          | Output Mode
Chatbots                   | Typing              | Text
Devices with Mic & Display | Speaking            | Visual (Video, Text, Images, Tables) or Audio
Smart Speakers             | Speaking            | Audio
Camera/IoT                 | Showing or Pointing | Visual or Audio

Multimodal content will force content creators to think more about content structure. Multimodal content encompasses all forms of media, from audio to short text messages to animated graphics. All these forms present content in short bursts. When focused on other tasks, users aren’t able to read much, or listen very long. Steven Pinker, the eminent cognitive psychologist, notes that humans can only retain three or four items in short term memory (contrary to the popular belief that people can hold 7 items). When exploring options by voice interaction, for example, users can’t scan headings or links to locate what they want.  Instead of the user navigating to the content, the content needs to navigate to the user.

Structural metadata provides information to machines to choose appropriate content components. Structural metadata will generally be invisible to users — especially when working with screen-free content. Behind the scenes, the metadata indicates hidden structures that are important to retrieving content in various scenarios.

Metadata is meant to be experienced, not seen. A photo of an Amazon customer’s Echo Show, revealing  code (via Amazon)

Optimizing Content With Structural Metadata

When interacting with multimodal content, users have limited attention, and a limited capacity to make choices. This places a premium on optimizing content so that the right content is delivered, and so that users don’t need to restate or reframe their requests.

Existing web content is generally not optimized for multimodal interaction — unless the user is happy listening to a long article being read aloud, or seeing a headline cropped in mid-sentence. Most published web content today has limited structure. Even if the content was structured during planning and creation, once delivered, the content lacks structural metadata that allows it to adapt to different circumstances. That makes it less useful for multimodal scenarios.

In the GUI paradigm of the web, users are expected to continually make choices by clicking or tapping. They see endless opportunities to “vote” with their fingers, and this data is enthusiastically collected and analyzed for insights. Publishers create lots of content, waiting to see what gets noticed. Publishers don’t expect users to view all their content, but they expect users to glance at their content, and scroll through it until users have spotted something enticing enough to view.

Multimodal content shifts the emphasis away from planning delivery of complete articles, and toward delivering content components on-demand, which are described by structural metadata. Although screens remain one facet of multimodal content, some content will be screen-free. And even content presented on screens may not involve a GUI: it might be plain text, such as with a chatbot. Multimodal content is post-GUI content. There are no buttons, no links, no scrolling. In many cases, it is “zero tap” content — the hands will be otherwise occupied driving, cooking, or minding children. Few users want to smudge a screen with cookie dough on their hands. Designers will need to unlearn their reflexive habit of adding buttons to every screen.

Users will express what they want, by speaking, gesturing, and if convenient, tapping. To support zero-tap scenarios successfully, content will need to get smarter, suggesting the right content, in the right amount. Publishers can no longer present an endless salad bar of options, and expect users to choose what they want. The content needs to anticipate user needs, and reduce demands on the user to make choices.

Users will always want to choose what topics they are interested in. They may be less keen on actively choosing the kind of content to use. Visiting a website today, you find articles, audio interviews, videos, and other content types to choose from. Unlike the scroll-and-scan paradigm of the GUI web, multimodal content interaction involves an iterative dialog. If the dialog lasts too long, it gets tedious. Users expect the publisher to choose the most useful content about a topic that supports their context.

screenshot of Google News widget
Pattern: after saying what you want information about, now tell us how you’d like it (screenshot via Google News)

In the current use pattern, the user finds content about a topic of interest (topic criteria), then filters that content according to format preferences. In future, publishers will be more proactive deciding what format to deliver, based on user circumstances.

Structural metadata can help optimize content, so that users don’t have to choose how they get information. Suppose the publisher wants to show something to the user. They have a range of images available. Would a photo be best, or a line drawing? Without structural metadata, both are just images portraying something. But if structural metadata indicates the type of image (photo or line drawing), then deeper insights can be derived. Images can be A/B tested to see which type is most effective.
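
As a sketch of what such type labeling could look like: schema.org has no dedicated “line drawing” type, so the additionalType URIs below are hypothetical, as are the file names.

```html
<script type="application/ld+json">
[
  {
    "@context": "https://schema.org",
    "@type": "ImageObject",
    "name": "Mount Everest, north face",
    "contentUrl": "https://example.com/images/everest-photo.jpg",
    "additionalType": "https://example.org/imagekinds/Photograph"
  },
  {
    "@context": "https://schema.org",
    "@type": "ImageObject",
    "name": "Mount Everest, north face",
    "contentUrl": "https://example.com/images/everest-drawing.svg",
    "additionalType": "https://example.org/imagekinds/LineDrawing"
  }
]
</script>
```

With both variants typed, a delivery system could serve one to part of its audience and the other to the rest, and compare how each performs.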

A/B testing of content according to its structural properties can yield insights into user preferences. For example, a major issue will be learning how much to chunk content. Is it better to offer larger size chunks, or smaller ones? This issue involves the tradeoffs for the user between the costs of interaction, memory, and attention. By wrapping content within structural metadata, publishers can monitor how content performs when it is structured in alternative ways.

Component Sequencing and Structural Metadata

Multimodal content is not delivered all at once, as is the case with an article. Multimodal content relies on small chunks of information, which act as components. How to sequence these components is important.

photo of Echo Show
Alexa showing some cards on an Echo Show device (via Amazon)

Screen-based cards are a tangible manifestation of content components. A card could show the current weather, or a basketball score. Cards, ideally, are “low touch.” A user wants to see everything they need on a single card, so they don’t need to interact with buttons or icons on the card to retrieve the content they want. Cards are post-GUI, because they don’t rely heavily on forms, search, links and other GUI affordances. Many multimodal devices have small screens that can display a card-full of content. They aren’t like a smartphone, cradled in your hand, with a screen that is scrolled. An embedded screen’s purpose is primarily to display information rather than for interaction. All information is visible on the card [screen], so that users don’t need to swipe or tap. Because most of us are accustomed to using screen-based cards already, but may be less familiar with screen-free content, cards provide a good starting point for considering content interaction.

Cards let us consider components both as units (providing an amount of content) and as plans (representing a purpose for the content). User experiences are structured from smaller units of content, but these units need to have a cohesive purpose. Content structure is more than breaking content into smaller pieces. It is about indicating how those pieces can fit together. In the case of multimodal content, components need to fit together as an interaction unfolds.

Each card represents a specific type of content (recipe, fact box, news headline, etc.), which is indicated with structural metadata. The cards also present information in a sequence of some sort.[1] Publishers need to know how various types of components can be mixed and matched. Some component structures are intended to complement each other, while other structures work independently.

Content components can be sequenced in three ways. They can be:

  1. Modular
  2. Fixed
  3. Adaptive

Truly modular components can be sequenced in any order; they have no intrinsic sequence. They provide information in response to a specific task. Each task is assumed to be unrelated. A card providing an answer to the question of “What is the height of Mount Everest?” will be unrelated to a card answering the question “What is the price of Facebook stock?”

The technical documentation community uses an approach known as topic-based writing that attempts to answer specific questions modularly, so that every item of content can be viewed independently, without need to consult other content. In principle, this is a desirable goal: questions get answered quickly, and users retrieve the exact information they need without wading through material they don’t need. But in practice, modularity is hard to achieve. Only trivial questions can be answered on a card. If publishers break a topic into several cards, they should indicate the relations between the information on each card. Users get lost when information is fragmented into many small chunks, and they are forced to find their way through those chunks.

Modular content structures work well for discrete topics, but are cumbersome for richer topics. Because each module is independent of others, users, after viewing the content, need to specify what they want next. The downside of modular multimodal content is that users must continually specify what they want in order to get it.

Components can be sequenced in a fixed order. An ordered list is a familiar example of structural metadata indicating a fixed order. Narratives are made from sequential components, each representing an event that happens over time. The narrative could be a news story, or a set of instructions. When considered as a flow, a narrative involves two kinds of choices: whether to get details about an event in the narrative, or whether to get to the next event in the narrative. Compared with modular content, fixed sequence content requires less interaction from the user, but longer attention.

Adaptive sequencing manages components that are related, but can be approached in different orders. For example, content about an upcoming marathon might include registration instructions, sponsorship info, a map, and event timing details, each as a separate component/card. After viewing each card, users need options that make sense, based on content they’ve already consumed, and any contextual data that’s available. They don’t want too many options, and they don’t want to be asked too many questions. Machines need to figure out what the user is likely to need next, without being intrusive. Does the user need all the components now, or only some now?

Adaptive sequencing is used in learning applications; learners are presented with a progression of content matching their needs. It can utilize recommendation engines, suggesting related components based on choices favored by others in a similar situation. An important application of adaptive sequencing is deciding when to ask a detailed question. Is the question going to be valuable for providing needed information, or is the question gratuitous? A goal of adaptive sequencing is to reduce the number of questions that must be asked.

Structural metadata generally does not explicitly address temporal sequencing, because (until now) publishers have assumed all content would be delivered at once on a single web page. For fixed sequences, attributes are needed to indicate order and dependencies, to allow software agents to follow the correct procedure when presenting content. Fixed sequences can be expressed by properties indicating step order, rank order, or event timing. Adaptive sequencing is more programmatic. Publishers need to indicate the relation of components to a parent content type. Until standards catch up, publishers may need to indicate some of these details in data-* attributes.
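
As an illustration of the fixed-sequence case, the sketch below marks step order and dependencies directly on content components. The data-* mechanism is standard HTML, but the specific attribute names (data-step, data-depends-on) are hypothetical, exactly the kind of stopgap described above.

```html
<!-- Hypothetical attribute names; data-* itself is standard HTML -->
<section data-component="instruction-step" data-step="1">
  Park on level ground and set the parking brake.
</section>
<section data-component="instruction-step" data-step="2" data-depends-on="1">
  Loosen the lug nuts before raising the car.
</section>
<section data-component="instruction-step" data-step="3" data-depends-on="2">
  Jack up the car and remove the flat tire.
</section>
```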

The sequencing of cards illustrates how new patterns of content interaction may necessitate new forms of structural metadata.

Composition and the Structure of Images

One challenge in multimodal interaction is how users and systems talk about images, as either an input (via a camera), or as an output. We are accustomed to reacting to images by tapping or clicking. We now have the chance to show things to systems, waving an object in front of a camera. Amazon has even introduced a hands-free voice activated IoT camera that has no screen. And when systems show us things, we may need to talk about the image using words.

Machine learning is rapidly improving, allowing systems to recognize objects. That will help machines understand what an item is. But machines still need to understand the structural relationship of items that are in view. They need to understand ordinary concepts such as near, far, next to, close to, background, group of, and other relational terms. Structural metadata could make images more conversational.

Vector graphics are composed of components that can represent distinct ideas, much like articles that are composed of structural components. That means vector images can be unbundled and assembled differently. The WAI-ARIA standard for web accessibility has an SVG Graphics Module that covers how to markup vector images. It includes properties to add structural metadata to images, such as group (a role indicating similar items in the image) and background (a label for elements in the image in the background). Such structural metadata could be useful for users interacting with images using voice commands. For example, the user might want to say, “Show me the image without a background” or “with a different background”.
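
A minimal sketch of the idea, using the generic ARIA role="group" and aria-label attributes rather than the module-specific graphics roles; the drawing itself is invented for illustration.

```html
<svg viewBox="0 0 200 100" role="img" aria-label="A house on a hillside">
  <!-- Background elements grouped and labeled so they can be addressed as a unit -->
  <g role="group" aria-label="background">
    <rect width="200" height="100" fill="#cfe0f0" />
    <path d="M0,80 Q100,40 200,80 L200,100 L0,100 Z" fill="#7fa86a" />
  </g>
  <!-- Foreground subject -->
  <g role="group" aria-label="house">
    <rect x="80" y="50" width="40" height="30" fill="#b5543b" />
    <polygon points="80,50 100,35 120,50" fill="#6e3b2a" />
  </g>
</svg>
```

A voice-driven client that understood this grouping could, at least in principle, honor a request like “show it without the background”.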

Photos do not have interchangeable components the way that vector graphics do. But photos can present a structural perspective of a subject, revealing part of a larger whole. Photos can benefit from structural metadata that indicates the type of photo. For example, if a user wants a photo of a specific person, they might have a preference for a full-length photo or for a headshot. As digital photography has become ubiquitous, many photos are available of the same subject that present different dimensions of the subject. All these dimensions form a collection, where the compositions of individual photos reveal different parts of the subject. The IPTC photo metadata schema includes a controlled vocabulary for “scenes” that covers common photo compositions: profile, rear view, group, panoramic view, aerial view, and so on. As photography embraces more kinds of perspectives, such as aerial drone shots and omnidirectional 360 degree photographs, the value of perspective and scene metadata will increase.

For voice interaction with photo images to become seamless, machines will need to connect conversational statements with image representations. Machines may hear a command such as “show me the damage to the back bumper,” and must know to show a photo of the rear view of a car that’s been in an accident. Sometimes users will get a visual answer to a question that’s not inherently visual. A user might ask: “Who will be playing in Saturday’s soccer game?”, and the display will show headshots of all the players at once. To provide that answer, the platform will need structural metadata indicating how to present an answer in images, and how to retrieve player’s images appropriately.

Structural metadata for images lags behind structural metadata for text. Working with images has been labor intensive, but structural metadata can help with the automated processing of image content. Like text, images are composed of different elements that have structural relationships. Structural metadata can help users interact with images more fluidly.

Reusing Text Content in Voice Interaction

Voice interaction can be delivered in various ways: through natural language generation, through dedicated scripting, and through the reuse of existing text content. Natural language generation and scripting are especially effective in short answer scenarios — for example, “What is today’s 30-year mortgage rate?” Reusing text content is potentially more flexible, because it lets publishers address a wide scope of topics in depth.

While reusing written text in voice interactions can be efficient, it can potentially be clumsy as well. The written text was created to be delivered and consumed all at once. It needs some curation to select which bits work most effectively in a voice interaction.

The WAI-ARIA standards for web accessibility offer lessons on the difficulties and possibilities of reusing written content to support audio interaction. By becoming familiar with what ARIA standards offer, we can better understand how structural metadata can support voice interactions.

ARIA standards seek to reduce the burdens of written content for people who can’t scan or click through it easily. Much web content contains unnecessary interaction: lists of links, buttons, forms and other widgets demanding attention. ARIA encourages publishers to prioritize these interactive features with the TAB index. It offers a way to help users fill out forms they must submit to get to content they want. But given a choice, users don’t want to fill out forms by voice. Voice interaction is meant to dispense with these interactive elements. Voice interaction promises conversational dialog.

Talking to a GUI is awkward. Listening to written web content can also be taxing. The ARIA standards enhance the structure of written content, so that content is more usable when read aloud. ARIA guidelines can help inform how to indicate structural metadata to support voice interaction.

ARIA encourages publishers to curate their content: to highlight the most important parts that can be read aloud, and to hide parts that aren’t needed. ARIA designates content with landmarks. Publishers can indicate what content has role=“main”, or they can designate parts of content by region. The ARIA standard states: “A region landmark is a perceivable section containing content that is relevant to a specific, author-specified purpose and sufficiently important that users will likely want to be able to navigate to the section easily and to have it listed in a summary of the page.” ARIA also provides a pattern for disclosure, so that not all text is presented at once. All of these features allow publishers to indicate more precisely the priority of different components within the overall content.
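
A minimal sketch of these landmarks together with a disclosure pattern follows; the headings, ids, and text are illustrative, and a small script (not shown) would toggle the hidden attribute and the aria-expanded state.

```html
<main role="main">
  <section role="region" aria-labelledby="summary-heading">
    <h2 id="summary-heading">Summary</h2>
    <p>The short version, suitable for reading aloud.</p>
  </section>
  <button aria-expanded="false" aria-controls="full-story">Hear the full story</button>
  <section id="full-story" role="region" aria-labelledby="full-heading" hidden>
    <h2 id="full-heading">Full story</h2>
    <p>The longer version, revealed only on request.</p>
  </section>
</main>
```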

ARIA supports screen-free content, but it is designed primarily for keyboard/text-to-speech interaction. Its markup is not designed to support conversational interaction — schema.org’s pending speakable specification, mentioned in my previous post, may be a better fit. But some ARIA concepts suggest the kinds of structures that written text needs to work effectively as speech. When content conveys a series of ideas, users need to know which aspects of the text they will hear are major and which are minor. They need the spoken text to match the time that’s available to listen. Just as some word processors can provide an “auto summary” of a document by picking out the most important sentences, voice-enabled text will need to identify what to include in a short version of the content. The content might be structured in an inverted pyramid, so that only the heading and first paragraph are read in the short version. Users may even want the option of hearing a short version or a long version of a story or explanation.
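
For comparison, this is roughly what the pending speakable property looks like in schema.org markup. The selectors, headline, and URL are illustrative, and because the specification is still pending, the details may change.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "City council approves new bike lanes",
  "url": "https://example.com/news/bike-lanes",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": ["article h1", "article .summary"]
  }
}
</script>
```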

Structural metadata and User Intent in Voice Interaction

Structural metadata will help conversational interactions deliver appropriate answers. On the input side, when users are speaking, the role of structural metadata is indirect. People will state questions or commands in natural language, which will be processed to identify synonyms, referents, and identifiable entities, in order to determine the topic of the statement. Machines will also look at the construction of the statement to determine the intent, or the kind of content sought about the topic. Once the intent is known — what kind of information the user is seeking — it can be matched with the most useful kind of content. It is on the output side, when users view or hear an answer, that structural metadata plays an active role selecting what content to deliver.

Already, search engines such as Google rely on structural metadata to deliver specific answers to speech queries. A user can ask Google the meaning of a word or phrase (What does ‘APR’ mean?) and Google locates a term that’s been tagged with structural metadata indicating a definition, such as with the HTML element <dfn>.
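
In HTML terms, that tagging is as simple as wrapping the term in a dfn element; the sentence below is only an illustration.

```html
<p><dfn>APR</dfn> (annual percentage rate) is the yearly cost of borrowing,
including fees, expressed as a percentage of the loan amount.</p>
```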

When a machine understands the intent of a question, it can present content that matches the intent. If a user asks a question starting with the phrase Show me… the machine can select a clip or photograph about the object, instead of presenting or reading text. Structural metadata about the characteristics of components makes that matching possible.

Voice interaction supplies answers to questions, but not all answers will be complete in a single response. Users may want to hear alternative answers, or get more detailed answers. Structural metadata can support multi-answer questions.

Schema.org metadata indicates content that answers questions using the Answer type, which is used by many forums and Q&A pages. Schema.org distinguishes between two kinds of answers. The first, acceptedAnswer, indicates the best or most popular answer, often the answer that received most votes. But other answers can be indicated with a property called suggestedAnswer. Alternative answers can be ranked according to popularity as well. When sources have multiple answers, users can get alternative perspectives on a question. After listening to the first “accepted” answer, the user might ask “tell me another opinion” and a popular “suggested” answer could be read to them.
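
A stripped-down sketch of how a Q&A page might express this in schema.org markup; the question, answers, and vote counts are invented for illustration.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "QAPage",
  "mainEntity": {
    "@type": "Question",
    "name": "How do I reset a tripped circuit breaker?",
    "answerCount": 2,
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Switch the breaker fully to OFF, then back to ON.",
      "upvoteCount": 42
    },
    "suggestedAnswer": {
      "@type": "Answer",
      "text": "If it trips again immediately, unplug the devices on that circuit before resetting it.",
      "upvoteCount": 17
    }
  }
}
</script>
```

A voice assistant reading this markup could offer the accepted answer first, then fall back to the suggested answer if the user asks for another opinion.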

Another kind of multi-part answer involves “How To” instructions. The HowTo type indicates “instructions that explain how to achieve a result by performing a sequence of steps.” The example the schema.org website provides to illustrate the use of this type involves instructions on how to change a tire on a car. Imagine car changing instructions being read aloud on a smartphone or by an in-vehicle infotainment system as the driver tries to change his flat tire along a desolate roadway. This is a multi-step process, so the content needs to be retrievable in discrete chunks.

Schema.org includes several additional types related to HowTo that structure the steps into chunks, including preconditions such as tools and supplies required. These are:

  • HowToSection : “A sub-grouping of steps in the instructions for how to achieve a result (e.g. steps for making a pie crust within a pie recipe).”
  • HowToDirection : “A direction indicating a single action to do in the instructions for how to achieve a result.”
  • HowToSupply : “A supply consumed when performing the instructions for how to achieve a result.”
  • HowToTool : “A tool used (but not consumed) when performing instructions for how to achieve a result.”

These structures can help the content match the intent of users as they work through a multi-step process. The different chunks are structurally connected through the step property. Only the HowTo type (and its more specialized subtype, Recipe) currently accepts the step property and thus can address temporal sequencing.
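
Putting the pieces together, a minimal HowTo sketch might look like the following. The steps and wording are invented for illustration, and this shows just one way of nesting the types: sections containing steps, with a direction inside one of the steps.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to change a flat tire",
  "tool": [
    { "@type": "HowToTool", "name": "Jack" },
    { "@type": "HowToTool", "name": "Lug wrench" }
  ],
  "supply": [
    { "@type": "HowToSupply", "name": "Spare tire" }
  ],
  "step": [
    {
      "@type": "HowToSection",
      "name": "Prepare the car",
      "itemListElement": [
        { "@type": "HowToStep", "text": "Park on level ground and set the parking brake." },
        {
          "@type": "HowToStep",
          "name": "Loosen the lug nuts",
          "itemListElement": [
            { "@type": "HowToDirection", "text": "Turn each lug nut counterclockwise about half a turn before raising the car." }
          ]
        }
      ]
    },
    {
      "@type": "HowToSection",
      "name": "Swap the tire",
      "itemListElement": [
        { "@type": "HowToStep", "text": "Raise the car, remove the flat, and mount the spare." }
      ]
    }
  ]
}
</script>
```

Because each section and step is a separately addressable chunk, a voice assistant can read one step at a time and wait for the driver to ask for the next.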

Content Agility Through Structural Metadata

Chatbots, voice interaction and other forms of multimodal content promise a different experience than is offered by screen-centric GUI content. While it is important to appreciate these differences, publishers should also consider the continuities between traditional and emerging paradigms of content interaction. They should be cautious before rushing to create new content. They should start with the content they have, and see how it can be adapted before making content they don’t have.

A decade ago, the emergence of smartphones and tablets triggered an app development land rush. Publishers obsessed over the discontinuity these new devices presented, rather than recognizing their continuity with existing web browser experiences. Publishers created multiple versions of content for different platforms. Responsive web design emerged to remedy the siloing of development. The app bust shows that parallel, duplicative, incompatible development is unsustainable.

Existing content is rarely fully ready for an unpredictable future. The idealistic vision of single source, format free content collides with the reality of new requirements that are fitfully evolving. Publishers need an option between the extremes of creating many versions of content for different platforms, and hoping one version can serve all platforms. Structural metadata provides that bridge.

Publishers can use structural metadata to leverage content they have already that could be used to support additional forms of interaction. They can’t assume they will directly orchestrate the interaction with the content. Other platforms such as Google, Facebook or Amazon may deliver the content to users through their services or devices. Such platforms will expect content that is structured using standards, not custom code.

Sometimes publishers will need to enhance existing content to address the unique requirements of voice interaction, or differences in how third party platforms expect content. The prospect of enhancing existing content is preferable to creating new content to address isolated use case scenarios. Structural metadata by itself won’t make content ready for every platform or form of interaction. But it can accelerate its readiness for such situations.

— Michael Andrews


  [1] Dialogs in chatbots and voice interfaces also involve sequences of information. But how to sequence a series of cards may be easier to think about than a series of sentences, since viewing cards doesn’t necessarily involve a series of back-and-forth questions.