
Why Standards Compliance is a Tricky Notion

I just published a book about metadata, called Metadata Basics for Web Content.  The book refers to many standards, and provides samples of code illustrating metadata (or structured data, if you prefer) using these standards.  To locate good code examples, I relied on international organizations such as the W3C, industry working groups such as schema.org, and prominent companies such as Google.

All these sources are important ones for publishers to consult. But if you pay very close attention, you may notice that the various sources aren’t always completely aligned with one another. This is a bit disconcerting. Publishers, after all, are expected to comply with standards. Various standards reference and build on each other. Yet certain details differ as you move between different actors in the standards arena. How can it be that standards aren’t completely aligned? To answer that question, one must consider the governance, mission, and adoption goals of the various parties involved with standards.

Publishers should recognize that no one party is in charge of metadata standards. Many parties are involved.  Decisions and practices evolve organically through a combination of planning and adaptation.  Different parties offer different choices.

The W3C is the largest standards body addressing web content. It has a fairly open structure. If there is sufficient interest in a topic, and enough people volunteer to work on a standards issue, a group can be started, which can begin a process of drafting notes, recommendations, and eventually standards. The W3C doesn’t always initiate standards. Sometimes it embraces standards that have been developed by other groups. And sometimes the W3C has different groups addressing broadly similar issues, but in different ways. While W3C recommendations and standards carry tremendous weight, they do not always represent a single consensus about priorities. Generally, they skew toward accommodating a diverse range of needs, rather than enforcing a narrow set of practices. As a nonprofit body, the W3C isn’t marketing anything, or promoting adoption of one standard over another.

Many industry groups develop standards as well. An important one in the area of web content metadata is schema.org. This group started out as a partnership between search engine companies, namely Google, Bing, Yahoo and Yandex. These companies developed a core set of standards for describing common web content with metadata. Since the core standard was developed, schema.org has transformed into a W3C community group. Google remains the single most important driver of schema.org’s development. But as a community, schema.org has accepted contributions from many parties, and the scope of the standard is expanding.

In addition to international bodies and industry groups, certain companies, on account of their size and reach, influence standards practices through the implementation choices they make. They may set trends for what are deemed “best practices,” or they may recommend to others how to do things. Google again is a leading example of a single firm having a big influence on standards. As a private company, it offers guidelines to its customers, the publishers who want their content to display in Google’s search results. These guidelines seem like standards, though they are specific to one company.

Let’s consider how different levels of standards interact with each other.

Metadata needs to be encoded using a syntax. One widely used syntax is called RDFa, which is a W3C standard.

Metadata also needs a schema to indicate entities and properties within the content. Schema.org metadata can be encoded using RDFa syntax. So we have one standard relying on another. But schema.org only uses part of the RDFa specification. There are some features in RDFa that aren’t needed when implementing schema.org. Other metadata schemas also use the RDFa syntax, and some of these take advantage of the additional features. The group designing schema.org decided to pare down what was needed to implement schema.org in RDFa. They chose to keep things as simple as they could to help promote adoption of their schema.
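
To make this concrete, here is a minimal, hypothetical sketch of what schema.org metadata encoded in RDFa can look like. The headline, author, and date values are invented for illustration; vocab, typeof, and property are RDFa attributes, while Article, headline, author, name, and datePublished come from schema.org.

  <!-- Hypothetical example: schema.org vocabulary expressed with RDFa attributes -->
  <article vocab="http://schema.org/" typeof="Article">
    <h1 property="headline">Why Standards Compliance is a Tricky Notion</h1>
    <p>By
      <span property="author" typeof="Person">
        <span property="name">Michael Andrews</span>
      </span>
    </p>
    <!-- The date below is an illustrative value, not an actual publication date -->
    <time property="datePublished" datetime="2016-12-01">December 1, 2016</time>
  </article>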

As mentioned earlier, Google is a key player both as a developer of schema.org and as a consumer of schema.org metadata. Google evangelizes the use of schema.org metadata, and it offers guidelines and tools to help webmasters learn what they need to do. Publishers often take this advice as gospel. They presume they need to comply with Google’s standards, at least as they understand them. What they may not realize is that Google’s tools and guidelines are often advice rather than rigid rules. When developing its advice and tools, Google has chosen to focus on high-priority content that many organizations produce, and to provide guidelines that help webmasters ensure they don’t make mistakes when creating metadata for such content. Google’s guidelines only cover a subset of the range of content addressed by schema.org. In effect, Google has chosen to simplify schema.org further to encourage wider adoption of it.

Google’s guidelines provide assurance that, if followed, the metadata will work with Google. It does not follow, however, that a publisher’s metadata is wrong if it deviates from Google’s guidelines. Many publishers use Google’s structured data testing tool (SDTT) to validate their metadata. It’s a useful tool, but it validates only some dimensions of schema.org metadata, not all of them.

Google’s structured data testing tool “complaining” about a webpage on the schema.org website

We can see the limitations of Google’s structured data testing tool by looking at how it assesses the schema.org website. We can find pages where the schema.org website, which Google is involved in developing, fails Google’s own SDTT. How can that be? The schema.org website and Google’s SDTT serve different purposes, and even different audiences. The SDTT is trying to encourage certain practices, and in an almost gamified manner, gives a thumbs up if the metadata code conforms to the advice. Schema.org continually develops to cover a range of needs. Some of these needs will be more specialized, and publishers may decide to implement metadata in a standards-compliant manner that doesn’t pass inspection by Google’s SDTT. I would not assume, however, that Google’s search algorithms are incapable of interpreting standards-compliant metadata that fails Google’s SDTT. I’d guess that Google’s search algorithms are more sophisticated than the code used in the SDTT. Sometimes the SDTT is playing catch-up with new developments in schema.org.

Google is trying to do two things at once: expand the coverage of schema.org to make it even more useful in a wider range of domains and scenarios, and popularize schema.org by presenting a simple set of guidelines for publishers to follow. It’s a difficult balance: managing and evolving standards over time while promoting easy-to-follow guidelines that publishers consider reliable. I would not expect Google to encourage publishers to adopt complicated metadata implementations that some would struggle to code correctly. If less sophisticated publishers fail, they might fault Google for encouraging them to try something that exceeded their understanding or abilities.

Sometimes publishers gripe that they’ve created logically valid schema.org metadata that nonetheless fails Google’s SDTT. But publishers seem more upset when they’ve created metadata that passes the SDTT, yet they don’t see it shine in Google’s search results. Where’s the rich snippet I was expecting? they complain. For many publishers, seeing the rich snippet payoff is the reward for using schema.org structured data, and for using the SDTT. The SDTT is not just a technical tool: it is a marketing and public relations tool for Google.

A representative rich snippet as shown in Google’s SDTT. For some publishers, seeing their structured data in search results provides tangible proof they are correct and compliant with standards.

So does metadata compliance mean that one follows the pages of details in W3C standards, or that one gets a snippet to show in Google’s search results? Standards compliance can involve many layers. There is no one standard to follow: there can be various permutations of a standard that are sanctioned or encouraged by different parties. Publishers need to rely on the standards guidance that best supports the goals they are trying to achieve with their metadata.

— Michael Andrews


Why Structured Data needs to talk to Structured Content

A recent post on Google’s webmaster blog  illustrates how metadata needs to address both the structure of web content, and the meaning of that content.

People who work in SEO talk about structured data a lot, while those who work in content strategy talk about structured content. These topics are obviously related, but the terminology used by each party obscures how each topic relates to the other. My take: structured data and structured content are different dimensions of metadata. Structured data is generally descriptive metadata identifying entities discussed in the content. Structured content provides the foundation for structural metadata that indicates the logic and organization of the content. Both descriptive and structural metadata are important in content, and they should ideally be integrated together.
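
To illustrate the distinction (as a rough sketch only), the same fragment of HTML can carry both dimensions: schema.org attributes supply descriptive metadata about what the content is, while the element names and class values hint at the structural role each piece plays. The type and class names below are invented for illustration, and the class values carry no shared, machine-readable meaning on their own.

  <!-- Illustrative sketch: descriptive metadata (schema.org via RDFa) plus
       structural hints (element and class names) in one fragment -->
  <article vocab="http://schema.org/" typeof="TechArticle">
    <h1 property="headline">Adding structured data to your pages</h1>
    <section class="advice">       <!-- structural role: an item of advice -->
      <p class="rationale">…</p>   <!-- structural role: the rationale -->
      <p class="do">…</p>          <!-- structural role: a recommended practice -->
    </section>
  </article>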

The Google blog advises publishers to include structured data in their content. The screenshot below shows how this advice is presented.

(source: Google Central Webmaster Blog)

The advice presented follows a pattern:

  • Advice to follow
  • Rationale
  • Best practices to implement advice (shown in green)
  • Actions not to do (shown in pink)

Some other items of advice in the post include another element:

  • Practices to avoid when implementing advice (shown in yellow)

We can see that the post follows a clear structure that is easy to scan and understand, and provides a foundation for reusing the information in other contexts. Now, let’s look at the post’s source code. This is where we’d expect to see the structured data associated with the content.

Source code for Blog post.

Disappointingly, no structured data is associated with the specific items of advice. The details of the advice are marked up with “class” attributes intended to style the content, but not to identify the meaning of the content. The only structured data on the page relates to the blog post in general (such as its author).

Imagine how the content could be reused if structured data identified the meaning of the advice. Someone might type a search looking for tips on “mistakes when using schema.org,” “why use schema.org,” or “schema.org best practices” and get specific bullets of content relating to their query.

In this example, the post’s author has done nothing wrong, though an opportunity has been missed nonetheless. Currently, schema.org doesn’t have any entity types that address advice statements that would contain sub-elements such as Rationale, Do, Avoid, and Don’t. The closest types are related to Questions and Answers, which are slightly different in their structure.
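
As a rough illustration of what such markup might look like today, one could approximate a single item of advice using the existing Question and Answer types, even though the fit is imperfect (there is no place for a distinct Rationale, Do, Avoid, or Don’t). The wording below is invented for illustration.

  <!-- Hypothetical approximation: one advice item expressed with schema.org's
       Question and Answer types, encoded in RDFa -->
  <div vocab="http://schema.org/" typeof="Question">
    <h2 property="name">Why add structured data to a page?</h2>
    <div property="acceptedAnswer" typeof="Answer">
      <p property="text">Structured data lets machines identify what the content
         means, not just how it should be displayed.</p>
    </div>
  </div>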

Because the structured data used in SEO, particularly schema.org, tends to focus on descriptive metadata, it has less coverage of other dimensions of metadata such as structural metadata indicating the role of content elements, or technical, administrative and rights metadata. All these kinds of metadata are important to address, to allow content to be shared and reused across different platforms and in different contexts. Fortunately, schema.org has been evolving quickly, and its coverage is improving every month. This expansion will allow for genuinely integrated metadata that indicates both the meaning and the structure of the content.

Metadata is a rich and important topic for everyone concerned with content published on the web. If you are interested in learning more about the many dimensions of metadata, you may be interested in my forthcoming book, Metadata Basics for Web Content, which will be available in early 2017 on Amazon.

— Michael Andrews


Identifiers in Content

One of the central challenges of content strategy is tracking all the content being created.  So much content is available about so many different things.  If you’ve ever done a content inventory, you know that different URLs may refer to the same content. It’s even possible for the same content to exist with two different titles.  And sometimes it isn’t clear if two items of content are talking about the same thing, or simply talking about things that sound similar.

Identifiers are the solution to this chaos. Identifiers are alphanumeric strings associated with an item. They don’t seem very exciting, but they will play an increasingly important role in content moving forward. We are finding that relying on titles and URLs to identify content is not enough. We need something more robust.

It’s hard to relate to something as abstract as an alphanumeric string. Fortunately, some real-world examples show how identifiers can support content, and how they can indicate such important things as:

  • The provenance of an item
  • A persistent way to refer to something
  • Whether something is unique or a copy
  • A way to listen for changes to the thing described

Who Moved My Cheese?

One basic need is to know where content comes from.  There is much pilfering of content online these days: it’s become a big industry to rip off other people’s content and republish it as one’s own.

The problem of impostors and lookalikes is not limited to web content. People who produce cheese worry about the confusion that can arise from similar looking and sounding products. Parmigiano Reggiano is a famous Italian cheese, colloquially known in English as parmesan. It can be very expensive: a wheel of Parmigiano Reggiano typically weighs 38 kilos and will cost several hundred dollars. Parmigiano Reggiano is similar to another Italian cheese called Grana Padano, and is the original inspiration for various cheeses called parmesan made outside Italy. The makers of Parmigiano Reggiano work to distinguish their cheese from the rest through identifiers. Each cheese house (caseificio) has a unique number that it applies to the outside rind of a cheese wheel, together with the month and year of production. These identifiers let the consumer know the provenance of the cheese.

A wheel of Parmigiano-Reggiano with identifiers indicating cheese house and production date. Image via Wikipedia.

At the supermarket it can be hard to figure out where products come from. Online it can be hard to know where content comes from. Increasingly, people get content not from the producer, but indirectly through a channel like Facebook. As content gets promoted and aggregated across a growing range of platforms and channels, the provenance of the content will be increasingly important to track. Content requires identifiers that can reveal the originator of the content. The Federal Trade Commission recently issued guidance rejecting vague statements that content is “sponsored.” Publishers need a process that can track and identify who that sponsor is.
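
Schema.org already offers properties that can point to an identified originator or sponsor, which gives a sense of what such tracking could look like in markup. The sketch below is purely illustrative; the organization names and URL are made up.

  <!-- Illustrative sketch: identifying the publisher and sponsor of an article -->
  <article vocab="http://schema.org/" typeof="Article">
    <h1 property="headline">Five ways to winterize your home</h1>
    <span property="publisher" typeof="Organization">
      <meta property="name" content="Example Media">
    </span>
    <span property="sponsor" typeof="Organization">
      <meta property="name" content="Example Heating Co.">
      <link property="url" href="http://www.example.com/">
    </span>
  </article>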

Deposed Content

Another challenge for content arises when it is remixed.  Titles and URLs are designed to identify pages, not content components that might show up in a multitude of delivered content.

The challenge of remixed content is similar to a situation facing trial lawyers. As part of the pretrial discovery process, lawyers collect volumes of information. This information needs to be shared between opposing parties, and may not have any intrinsic order to it. Lawyers solved the problem of identifying all these random bits with something called a Bates number. Originally, a Bates number was produced by an elaborate mechanical ink stamp that sequentially numbered each page of documentation with a unique alphanumeric string. Today, lawyers scan documents into PDFs, which can apply Bates numbers to each page automatically.

A Bates Numbering Machine. Image via US Patent Office.

The elegance of the Bates number is that it provides a persistent identifier for a piece of information that is independent of its source and its context.  No matter how different items of content are shuffled around, a specific item can be located by any party according to its unique Bates number.

Having persistent identifiers for content components is valuable when content is assembled from different components, and components are reused in many contexts.
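
As a minimal sketch of what this might look like in web content, RDFa’s resource attribute can give a component its own stable identifier, so that statements about the component travel with it wherever it is reused. The urn:uuid value and the component itself are made up for illustration.

  <!-- Hypothetical example: a reusable content component with a persistent identifier -->
  <section vocab="http://schema.org/" typeof="CreativeWork"
           resource="urn:uuid:4f3b2a10-9c7e-4d7a-8b1f-2a6d5e0c9f11">
    <h2 property="name">Returns policy summary</h2>
    <p property="text">Items may be returned within 30 days of purchase.</p>
  </section>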

In the Matrix

Another inevitable dimension of content is that there can be many versions of a content item.  Sometimes this is unintended: organizations have generated duplicate content. But other times organizations have purposefully made different versions of the same underlying content to meet slightly different needs. Either way, it can be hard to sort out what is master content, and what is the derivative.

Distinguishing the original content is an old problem. Enthusiasts of early jazz recordings faced this problem when they wanted to trace the recordings of a famous musician such as Louis Armstrong. Early recordings on 78 rpm records didn’t supply much information about the full orchestra. And sometimes the masters of these recordings were rented to other record companies, who released the recording on their own label. Licensees sometimes even put false information on record labels to disguise that they were re-releasing an existing recording (occasionally done to get around labor contracts). To complicate matters even more, the same artist might release several versions of the same tune. Jazz is, after all, about improvisation, and each version can be interesting in its own right. So even knowing the song title and the artist wasn’t sufficient to know whether a recording was unique.

Fans who developed discographies of early jazz found a key to solving the problem of unreliable information on record labels. They tracked recordings according to their matrix number. Each matrix used to press records contained a hand-inscribed number indicating the master recording. No matter who subsequently used the master to release the recording, the same number was stamped into the record. As a result, one could see that a French record was the same recording as an American one, because they shared the same matrix number, while two records with the same title and performers were in fact different recordings.

Content variation is a phenomenon driven by the desire of audiences to have choice.  People want versions of content that match their needs: that are shorter or longer depending on their interests, or are formatted for a larger or smaller screen depending on their device. To track all these variations, organizations need identifiers that can let them know how content is being repurposed, and where.
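
One way such tracking could surface in metadata (a sketch under assumptions, not an established practice) is to relate a variant back to the master content it derives from, for example with schema.org’s isBasedOn and version properties. The URLs and version number below are invented.

  <!-- Illustrative sketch: a shortened variant pointing back to its master content -->
  <article vocab="http://schema.org/" typeof="Article"
           resource="http://www.example.com/articles/returns-policy-short">
    <h1 property="headline">Returns policy (summary)</h1>
    <meta property="version" content="2">
    <link property="isBasedOn" href="http://www.example.com/articles/returns-policy-full">
  </article>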

Tuning In

Broadcast radio stations often identify themselves by number.  They broadcast at a certain frequency, and use that frequency as an identifier: “101.3 FM” or whatever.  RFID is a different kind of radio broadcast, one specifically designed to identify objects. Identifiers have morphed into stickers that we can listen to.

Last year I visited an exhibit at Expo Milan featuring an MIT prototype of the supermarket of the future. The premise of the exhibit was that RFID tags can track produce and other food items, to give consumers information about where the products are from, when they were harvested, how they were shipped, and so forth.  What’s intriguing about this vision is that products can now have biographies. No longer does one need to talk about the product generically.  One can now talk about a specific instance of the product: this orange, or this batch of pesto. Products now have real stories that can be told.

RFID allows us to listen to things: to know what’s been going on with them. We are starting to move toward creating specific content that tells stories about specific instances of items. To do this, we will need the ability to be very specific about what we refer to.

Conclusion

Identifiers give us the ability to make statements about things. They allow us to distinguish what specifically we are saying, and about what specifically we are making a statement.  That capability will be important as content and products become more varied and customized.  Identifiers support accountability in the face of growing complexity.

— Michael Andrews