
Format Free Content and Format Agility

A core pillar supporting the goal of reusable modules of content is that the content should be “format free”.  Format free conveys a target for content to attain, but the phrase tends to gloss over the question of how readily content can actually be transformed from one state to another.  It can conceal how people need to receive content, and whether the underlying content can support those needs.

I want to bring the user perspective into the discussion of formats.  Rather than only think about the desirability of format neutrality, I believe we should broaden the objective to consider the concept of format readiness.  Instead of just trying to transcend formats, content engineers should also consider how to enable customized formats to support different scenarios of use.  Users need content to have format flexibility, a quality that doesn’t happen automatically.   Not all content is equally ready for different format needs.

The Promise and Tyranny of Formats

Formats promise us access to content where we want it, how we want it.  Consider two trends underway in the world of audio content.  First, there is growing emphasis on audio content for in-car experiences.  Since staring at a screen while driving is not recommended, auto makers are exploring how to make the driving experience more enriching with audio content.  A second trend goes in the opposite direction.  We see a renewed interest in a nearly dead format, the long-playing record, with its expressive analog sensuality.  Suddenly LPs are everywhere, even in the supermarket.  The natural progression of these trends is that people buy a record in the supermarket, and then play the record in their car as soon as they reach the parking lot.  An enveloping sonic experience awaits.

Playing records in your car may sound far-fetched.  But the idea has a long pedigree.  As Consumer Reports notes: “A new technology came on the market in the mid-1950s and early 1960s that freed drivers from commercials and unreliable broadcast signals, allowing them to be the masters of their motoring soundtrack with their favorite pressed vinyl spinning on a record player mounted under the dash.”

Highway Hi-Fi record player. Image via Wikipedia.

In 1956, Chrysler introduced Highway Hi-Fi, an in-dash record player that played specially sized discs that ran at 16 ⅔ rpm — half the speed of regular LPs, packing twice the playtime.  You could get a Dodge or DeSoto with a Highway Hi-Fi, and play records such as the musical “The Pajama Game.”  The Highway Hi-Fi came endorsed by the accordion-playing tastemaker, Lawrence Welk.

Sadly, playing records while driving didn’t turn out to be a good idea.  Surprise: the records skipped in real-world driving conditions.  Owners complained, and Chrysler discontinued the Highway Hi-Fi in 1959.  Some hapless people were stuck with discs of “The Pajama Game” that they couldn’t play in their cars, and few home stereos supported 16 ⅔ rpm playback.  The content was locked in a dead format.

Format Free and Transcending Limitations

Many people imagine we’ve escaped the straitjacket of formats in the digital era.  All content is now just a stream of zeros and ones.  Nearly any kind of digital content can be reduced to an XML representation.  Format free implies we can keep content in a raw state, unfettered by complicating configurations.

Format free content is a fantastic idea, worth pursuing as far as possible.  The prospect of freedom from formats can lead one to believe that formats are of secondary importance, and that content can maintain meaning completely independently of them.

The vexing reality is that content can never be completely output-agnostic.  Even when content is not stored in an audience-facing format, that doesn’t imply it can be successfully delivered to any audience-facing format.  Computer servers are happy to store zeros and ones, but humans need that content translated into a form that is meaningful to them.  And the form does ultimately influence the substance of the content.  The content is more than the file that stores it.

Four Types of Formats

In many cases when content strategists talk about format free content, they are referring to content that doesn’t contain styling.  But formats may refer to any one of four different dimensions:

  1. The file format, such as whether the content is HTML or PDF
  2. The media format, such as whether the content is audio, video, or image
  3. The output format, such as whether the content is a slide, an article, or a book
  4. The rendered formatting, or how the content is laid out and presented.

Each of these dimensions impacts how content is consumed, and each has implications for what information is conveyed.  Formats aren’t neutral.  One shouldn’t presume parity between formats.  Formats embody biases that skew how information is conveyed.  Content can’t simply be converted from one format to another and be expected to express the same thing in the same way.

Just Words: The Limitations of Fixed Wording

Let’s start with words.  Historically, the word has existed in two forms: the spoken word, and the written word.  People told stories or gave speeches to audiences.  Some of these stories and speeches were written down.  People also composed writings that were published.  These writings were sometimes read aloud, especially in the days when books were scarce.

Today, moving between text and audio is simple.  Text can be synthesized into speech, and speech can be digitally processed into text.  Words now seem free from the constraints of formats.

But converting words between writing and speech is more than a technical problem.  Our brains process words heard, and words read, differently.  When reading, we skim ahead, and reread text seen already.  When listening, we need to follow the pace of the spoken word, and require redundancy to make sure we’ve heard things correctly.

People who write for radio know that writing for the ear is different from writing for a reader.  The same text will not be equally effective as audio and as writing. National Public Radio, in their guidebook Sound Reporting, notes: “A reader who becomes confused at any point in [a] sentence or elsewhere in the story can just go back and reread it — or even jump ahead a few paragraphs to search for more details.  But if a listener doesn’t catch a fact the first time around, it’s lost.”  They go on to say that even the syntax, grammar and wording used may need to be different when writing for the ear.

The media involved changes what’s required of words.  Consider a recipe for a dish.  Presented in writing, the recipe follows a standard structure, listing ingredients and steps.  Presented on television, a recipe follows a different structure.  According to the Recipe Writer’s Handbook, a recipe for television is “a success when it works visually, not when it is well written in a literary, stylistic, or even culinary sense.”  The book notes that on television: “you must show, not tell; i.e., stir, fry, serve…usually under four minutes.”  Actions replace explicit words.  If one were to transcribe the audio of the TV show, it is unlikely the text would convey adequately how to prepare the dish.

The Hidden Semantics of Presentational Rendering

For written text, content strategists prudently advise content creators to separate the structure of content from how it is presented.  The advice is sensible for many reasons: it allows publishers to restyle content, and to change how it is rendered on different devices. Cascading Style Sheets (CSS), and Responsive Web Design (RWD) frameworks, allow the same content to appear in different ways on different devices.
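
To illustrate, here is a minimal sketch of restyling via a CSS media query; the markup, class names, and breakpoint are invented for illustration:

    <article class="recipe">
      <h1>Summer Salad</h1>
      <p>A quick dish for warm evenings.</p>
    </article>

    <style>
      /* Default (small screens): full width, compact type */
      .recipe { max-width: 100%; font-size: 1rem; }

      /* Wider screens: centered column with a larger heading */
      @media (min-width: 60em) {
        .recipe { max-width: 40em; margin: 0 auto; }
        .recipe h1 { font-size: 2.5rem; }
      }
    </style>

The content itself is untouched; only the rendering rules change with the device width.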

Restyling written content is generally easy to do, and can be sophisticated as well.  But the variety of CSS classes that can be created for styling can overshadow how rudimentary the underlying structures are that define the meaning of the text.  Most digital text relies on the basic structural elements available in HTML.  The major elements are headings at different levels, ordered and unordered lists, and tables.  Less common elements include block quotes and code blocks.  Syntaxes such as Markdown have emerged to specify text structure without presentational formatting.

While these structural elements are useful, for complex text they are not very sophisticated.  Consider the case of a multi-paragraph list.  I’m writing a book where I want to list items in a series of numbered statements.  Each numbered statement has an associated paragraph providing elaboration.  To associate the explanatory paragraph with the statement, I must use indenting to draw a connection between the two.  This is essentially a hack, because HTML does not have a concept of an ordered list item elaboration paragraph.  Instead, I rely on pseudo-structure.

When rendered visually, the connection between the statement and elaboration is clear.  But the connection is implicit rather than explicit.  To access only the statement without the elaboration paragraph, one would need to know the structure of the document beforehand, and filter it using an XPath query.
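
A minimal sketch of this pseudo-structure (the wording and class name are invented) shows how the elaboration is merely a styled paragraph inside the list item:

    <ol>
      <li>
        Content should be modular.
        <p class="elaboration">Modular content can be assembled
        into many different outputs.</p>
      </li>
      <li>
        Content should be described with metadata.
        <p class="elaboration">Metadata lets machines select the
        right module for the right context.</p>
      </li>
    </ol>

Nothing in the markup names the relationship.  To retrieve the statements alone, one must already know the convention and filter with something like the XPath query //ol/li/text()[normalize-space()].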

Output Containers May Be Inelastic

Output formats inform the structure of content needed.  In an ideal world, a body of structured content can be sent to many different forms of output.  There’s a nifty software program called Pandoc that lets you convert text between different output formats.  A file can become an HTML webpage, or an EPUB book, or a slide show using Slidy or DZSlides.

HTML content can be displayed in many containers.  But those containers may be of vastly different scales.  Web pages don’t roll up into a book without first planning a structure to match the target output format.  Books can’t be broken down into a slide show.  Because output formats inform the structure required, changing the output format can necessitate a restructuring of content.

The output format can affect the fidelity of the content.  The edges of a widescreen video are chopped off when displayed within the boxy frame of an in-flight entertainment screen.  We trust that this possibility was planned for, and that nothing important is lost in the truncated screen.  But information is lost.

The Challenges of Cross-Media Content Translation

If content could be genuinely format free, then content could easily morph between different kinds of media.  Yet the translational subtleties of switching between written text and spoken audio content demonstrate how the form of content carries implicit sensory and perceptual expectations.

Broadly speaking, five forms of digital media exist:

  1. Text
  2. Image
  3. Audio
  4. Video
  5. Interactive.

Video and interactive content are widely considered “richer” than text, images and audio.  Richer content conveys more information.  Switching between media formats involves either extracting content from a richer format into a simpler one, or compiling richer format content using simpler format inputs.

The transformation possibilities between media formats determine:

  • how much automation is possible
  • how usable the content will be.

From a technical perspective, content can be transformed between media as follows.

Media format conversion is possible between text and spoken audio.  While bi-directional, the conversion involves some potential loss of expressiveness and usability.  The issues become far more complex when there are several speakers, or when non-verbal audio is also involved.

Various content can be extracted from video.  Text (either on-screen text, or converted from spoken words in audio) can be extracted, as well as images (frames) and audio (soundtracks).  Machine learning technologies are making such extraction more sophisticated, as millions of us answer image recognition CAPTCHA quizzes on Google and elsewhere.  Because the extracted content is divorced from its context, its complete meaning is not always clear.

Transforming interactive content typically involves converting it into a linear time sequence.  A series of interactive content explorations can be recorded as a non-interactive animation (video).

Simple media formats can be assembled into richer ones.  Text, images and audio can be combined to feed into video  content.  Software exists that can “auto-create” a video by combining text with related images to produce a narrated slide show.  From a technical perspective, the instant video is impressive, because little pre-planning is required.  But the user experience of the video is poor, with the content feeling empty and wooden.

Interactive content is assembled from various inputs: video, text/data, images, and audio formats.  Because the user is defining what to view, the interaction between formats needs to be planned.  The possible combinations are determined by the modularity of the inputs, and how well-defined they are in terms of metadata description.

Translation of content between formats

Atomic Content Fidelity

Formats of all kinds (file, output, rendering, and media) together produce the form of the content that determines the content experience and the content’s usability.

  • File formats can influence the perceptual richness (e.g., a 4K video versus a YouTube-quality one).
  • Rendition formatting influences audience awareness of distinct content elements.
  • Output formats influence the pacing of how content gets delivered, and how immersive the content engagement will be.
  • Media formats influence how content is processed cognitively and emotionally by audiences and viewers.

Formats define the fidelity of the content that conveys the intent behind the communication.  Automation can convert formats, but conversion won’t necessarily preserve fidelity.

Format conversions are easy or complex according to how the conversion impacts the fidelity of the content.  Let’s consider each kind of content format in turn.

File format conversions are easy to do, and any loss in fidelity is generally manageable.

Rendition format conversions such as CSS changes or RWD alternative views are simple to implement.  In many cases the impact on users is minimal, though in some cases contextual content cues can be lost in the conversion, especially when  a change in emphasis occurs in what content is displayed or how it is prioritized.

Output format conversion is tricky to do.  Few people want to read an e-book novel on their Apple Watch.  The hurdles to automation are apparent when one looks at the auto-summarization of a text.  Can we trust the software to identify the most important points? An inherent tension exists between introducing structures to control machine prioritization of content, and creating a natural content flow necessary for a good content experience.  The first sentence of a paragraph will often introduce the topic and main point, but won’t always.

Media format conversion is typically lossy.  Extracting content from a rich media format to a simpler one generally involves a loss of information.  The automated assembly of rich media formats from content in simpler formats often feels less interesting and enjoyable than rich formats that were purposefully designed by humans.

Format Agility and Content as Objects

We want to transcend the limitations of specific formats to support different scenarios.  We also want to leverage the power of formats to deliver the best content experience possible across different scenarios.  One approach to achieve these goals would be to extend some of the scenario-driven, rules-based thinking that underpins CSS and RWD, and apply it more generally to scenarios beyond basic web content delivery.  Such an approach would consider how formats need to adjust based on contextual factors.

If content cannot always be free from the shaping influence of format, we can at least aim to make formats more agile.  A BBC research program is doing exciting work in this area, developing an approach called Object Based Media (OBM) or Object Based Broadcasting.  I will highlight some interesting ideas from the OBM program, based on my understanding from reading the BBC’s research blog.

Object-Based Media brings intelligence to content form.  Instead of considering formats as all equivalent, and independent of the content, OBM considers formats as part of the content hierarchy.  Object Based Media takes a core set of content, and then augments the content with auxiliary forms that might be useful in various scenarios.  Content form becomes a progressive enhancement opportunity.  Auxiliary content could be subtitles and audio transcripts that can be used in combination with, or in lieu of, the primary content in different scenarios.

During design explorations with the OBM concept, the BBC found that “stories can’t yet be fully portable across formats — the same story needed to be tailored differently on each prototype.” The notion of tailoring content to suit the format is one of the main areas under investigation.

A key concept in Object-Based Media is unbundling different inputs to allow them to be configured in different format variations on delivery.  The reconfiguration can be done automatically (adaptively), or via user selection.  For example, OBM can enable a video to be replaced with an image having text captions in a low bandwidth situation.  Video inputs (text, background graphics, motion overlays) are assembled on delivery, to accommodate different output formats and rendering requirements.  In another scenario, a presenter in a video can be replaced with a signer for someone who is hearing impaired.
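
The BBC has not published implementation details in the posts I have read, but the spirit of unbundled inputs can be sketched with standard HTML5 elements (the file names are invented):

    <video controls poster="scene-still.jpg">
      <!-- Alternate encodings for different bandwidth situations -->
      <source src="story-high.mp4" type="video/mp4">
      <source src="story-low.webm" type="video/webm">
      <!-- Auxiliary content: captions usable with, or in lieu of, the audio -->
      <track kind="captions" src="story-captions.vtt"
             srclang="en" label="English captions">
      <!-- Fallback when video cannot play at all -->
      <p>Read the <a href="story-transcript.html">transcript</a>.</p>
    </video>

OBM generalizes this idea well beyond what the video element offers, assembling many kinds of inputs on delivery.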

The BBC refers to OBM as “adjustable content.”  They are looking at ways to allow listeners to specify how long they want to listen to a program, and give audiences control over video and audio options during live events.

Format Intelligence

In recent years we’ve witnessed remarkable progress transcending the past limitations that formats pose to content.  File formats are more open, and metadata standards have introduced more consistency in how content is structured.  Technical progress has enabled basic translation of content between media formats.

Even as this progress tames the idiosyncrasies that formats pose, new challenges have emerged.  Output formats keep getting more diverse, from wearables to immersive environments such as virtual reality.  The fastest growing forms of content media are video and audio, which are less malleable than text.  Users increasingly want to personalize the content experience, which includes dimensions relating to the form of content.

We are in the early days of thinking about flexibility in formats that give users more control over their content experience — adjustable content.  The concept of content modularity should be broadened to consider not only chunks of information, but chunks of experience.  Users want the right content, at the right time, in the right format for their needs and preferences.

— Michael Andrews


Defining Meaning in a Post-Document World

Digital content is undergoing a metamorphosis. It is no longer about fixed documents. But neither is it just a collection of data. It is something in-between, yet we haven’t developed a vivid and shared way to conceive and discuss precisely what that is. We see evidence of this confusion in the vocabulary used to describe content meaning. We talk about content as structurally rich, as semantic, as containing structured data. Behind these labels are deeper convictions: whether content is fundamentally about documents or data.

Content has evolved into a complex experience, composed of many different pieces. We need new labels to express what these pieces mean.

“The moment it all changed for me was the moment when Google Maps first appeared. Because it was a software application—not a set of webpages, not a few clever dynamic calls, but almost aggressively anti-document. It allowed for zooming and panning, but it was once again opaque. And suddenly it became clear that the manifest destiny of the web was not accessibility. What was it? Then the people who advocated for a semantically structured web began to split off from the mainstream and the standards stopped coming so quickly.” — Paul Ford in The Manual

In the traditional world of documents, meaning is conveyed through document-centric metadata. Publishers govern the document with administrative metadata, situate sections of the document using structural metadata, and identify and classify the document with descriptive metadata. As long as we considered digital content as web pages, we could think of them as documents, and could rely on legacy concepts to express the meaning of the content.
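
As a rough sketch, all three kinds of document-centric metadata can live in the head of an HTML page (the values here are illustrative):

    <head>
      <title>A Field Guide to Formats</title>
      <!-- Descriptive: what the document is about -->
      <meta name="description" content="How formats shape content.">
      <meta name="keywords" content="formats, content strategy">
      <!-- Administrative: who published it, and when -->
      <meta name="author" content="Example Publisher">
      <meta name="dcterms.issued" content="2015-06-01">
      <!-- Structural: where the document sits in a larger work -->
      <link rel="next" href="chapter-2.html">
    </head>

The description applies to the document as a whole, which works as long as the document itself is the unit of meaning.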

But web pages should be more than documents. Documents are unwieldy. The World Wide Web’s creator, Tim Berners-Lee, started agitating for “Raw data now!” Developers considered web pages as “unstructured data” and advocated the creation and collection of structured data that machines could use. What is valuable in content got redefined as data that could be placed in a database table or graph. Where documents deliver a complete package of meaning, data structures define meaning on a more granular level as discrete facts. Meaningful data can be extracted, and inserted into apps when in a structured format. In the paradigm of structured data, the meaning of an entity should be available outside of the context with which it was associated. Rather than define what the parts of documents mean, structured data focuses on what fragments of information mean independently of context.

Promoters of structured data see possibilities to create new content by recombining fragments of information. Information boxes, maps, and charts are content forms that can dynamically refresh with structured data. These are clearly important developments. But these non-narrative content types are not the only forms of content reuse.

The Unique Needs of Component Content

A new form of content emerged that was neither a document nor a data element: the content component. In HTML5, component level content might be sections of text, videos, images and perhaps tables.[1] These items have meaning to humans like documents, but unlike documents, they can be recombined in different ways, and so carry meaning outside the context of a document, much the way structured data does.

Component content needs various kinds of descriptions to be used effectively. Traditional document metadata (administrative, structural, and descriptive) are useful for content components. It is also useful to know what specific entities are mentioned within a component; structured data is also nice to have. But content components have further needs. If we are moving around discrete components that carry meaning to audiences, we want to understand what specific meaning is involved, so we match up the components with each other appropriately. The component-specific metadata addresses the purpose of the component.

Component metadata allows content to be adaptable: to match the needs of the user according to the specific circumstances they are in. We don’t have well-accepted terms to describe this metadata, so its importance tends to get overlooked. Various kinds of component metadata can characterize the purpose of a component. Though metadata relating to these facets aren’t yet well-established, there are signs of interest as content creators think about how to curate an experience for audiences using different content components.

Contextual metadata indicates the context in which a component should be used. This might be the device the component is optimized for, the geolocation it is intended for, the specific audience variation, or the intended sequencing of the component relative to other components.

Performance metadata addresses the intended lifecycle of the component. It indicates whether the component is meant to be evergreen, seasonal or ephemeral, and if it has a mass or niche use. It helps authors answer how the component should be used, and what kind of lifting it is expected to do.

Sentiment metadata describes the mood or the metaphor associated with the component. It answers what kind of impression on the audience the component is expected to make.
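
Since no standard vocabulary for these facets exists yet, any markup is necessarily speculative. As a hypothetical sketch only, the three facets might annotate a component using invented data-* attribute names:

    <section id="promo-hero-042"
             data-context-device="tablet"
             data-context-audience="returning-customer"
             data-performance-lifecycle="seasonal"
             data-performance-reach="mass"
             data-sentiment-mood="reassuring">
      <h2>Welcome back</h2>
      <p>Pick up right where you left off.</p>
    </section>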

We can see how component metadata can matter by looking at a fairly simple example: using a photographic image. We might use different images together with the same general content according to different circumstances. Different images might express different metaphors presented to different audience segments. We might want to restrict the use of certain images to ensure they are not overused. We need to have different image sizes to optimize the display of the image on different devices. While structured data specialists might be preoccupied with what entities are shown in an image, in this example we don’t really care about who the models are appearing in the stock image. We are more concerned about the implicit meaning of the image in different contexts, rather than its explicit meaning.

The Challenges of Context-free Metadata

Metadata has a problem: it hasn’t yet evolved to address the changing context in which a content component might appear. We still talk about metadata as appearing in the head of a document, or in the body of a document, without considering that the body of the document is changing. We run the risk that the head and the body get out of alignment.

The rise of component variation is a key feature of the approach that’s commonly referred to as intelligent content. Intelligent content, according to Ann Rockley’s definition, involves structurally rich and semantically categorized content. Intelligent content is focused on making content components interchangeable.

Discussions of intelligent content rarely get too explicit about what metadata is needed. Marcia Riefer Johnston addressed the topic in an article entitled Intelligent Content: What Does ‘Semantically Categorized’ Mean? She says: “Semantic categories enable content managers to organize digital information in nearly limitless ways.” It’s a promising vision, but we still don’t have a sense of where the semantic categories come from, and what precisely they consist of. The inspiration for intelligent content, DITA, is an XML-based approach that allows publishers to choose their own metadata. DITA is a document-centric way of managing content, and accordingly assumes that the basic structure of the document is fixed, and only specific elements can be changed within that stable structure. Intelligent content, in contrast, suggests a post-document paradigm. Again, we don’t get a sense of what structurally rich means outside of a fixed document structure. How can one piece together items in “limitless ways?” What is the glue making sure these pieces fit together appropriately?

Content intelligence involves not only how components are interchangeable, but also how they are interoperable — intelligible to others. Intelligent content discussions often take a walled-garden approach. They focus on the desirability of publishers providing different combinations of content, but don’t discuss how these components might be discovered by audiences.[2] Intelligent content discussions tend to assume that the audience discovers the publisher (or that the publisher identifies the audience via targeting), and then the publisher assembles the right content for the audience. But the process could be reversed, where the audience discovers the content first, prior to any assembly by the publisher. How do the principles of semantically categorized and structurally rich content relate to SEO or Linked Data? Here, we start to see the collision between the document-centric view of content and the structured data view of it. Does intelligent content require publisher-defined and controlled metadata to provide its capabilities, or can it utilize existing, commonly used metadata vocabularies to achieve these goals?

Document-centric Thinking Hurts Metadata Description

Content components already exist in the wild. Publishers are recombining components all the time, even if they don’t have a robust process governing this. Whether or not publishers talk about intelligent content, the post-document era has already started.

But we continue to talk about web pages as enduring entities that we can describe. We see this in discussions of metadata. Two styles of metadata compete with each other: metadata in the document head of a page, and metadata that is in-line, in the body of a page. Both these styles assume there is a stable, persistent page to describe. Both approaches fail because this assumption isn’t true in many cases.

The first approach involves putting descriptive metadata outside of the content. On a web page, it involves putting the description in the head, rather than the body. This is a classic document-centric style. It is similar to how librarians catalog books: the description of the book is on a card (or in a database) that is separate from the actual book. Books are fixed content, so this approach works fine.

The second approach involves putting the description in the body of the text. Think of it as an annotation. It is most commonly done to identify entities mentioned in the text. It is similar to an index of a book. As long as the content of the book doesn’t change, the index should be stable.

Yet web pages aren’t books. They change all the time. There may be no web page: just a wrapper for presenting a stream of content. What do we need to describe here, and how do we need to do that?

Structured Data’s Lost Bearings

When people want to identify entities mentioned in content, they need a way to associate a description of the entity with the content where it appears. Entity-centric metadata is often called structured data, a confusing term given the existence of other similar-sounding terms such as structured content, and semantic structure. While structured data was originally a term used by data architects, the SEO community uses it to refer more specifically to search-engine markup using vocabulary standards such as Schema.org. The structure referred to in the term “structured data” is the structure of the vocabulary indicating the relationships associated with the description. It doesn’t refer to the structure of the content, and here is where problems arise.

While structured data excels at describing entities, it struggles to locate these entities in the content. The question SEO consultants wrestle with is what precisely to index: a web page, or a sentence fragment where the item is mentioned? There are two rival approaches for doing this. One can index entities appearing on a web page using a format called JSON-LD, which is typically placed in the document head of the page (though it does not have to be). Or one can index entities where they appear in the content using a format called RDFa, which is placed in-line in the body of the HTML markup.
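
A minimal sketch contrasts the two formats, using an invented entity; the JSON-LD sits apart from the text, while the RDFa is woven into it:

    <head>
      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": "Ada Lovelace",
        "jobTitle": "Mathematician"
      }
      </script>
    </head>
    <body>
      <p>The program was written by
        <span vocab="https://schema.org/" typeof="Person">
          <span property="name">Ada Lovelace</span>,
          a <span property="jobTitle">mathematician</span>.
        </span>
      </p>
    </body>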

Both these approaches presume that the content itself is stable. But content changes continually, and both approaches founder because they are based on a page-centric view of content instead of a component-centric view.

Disemboweled Data

First, consider the use of RDFa to describe the entities mentioned in a sentence. The metadata is embedded in the body of the page: it’s embodied metadata. It’s an appealing approach: one just needs to annotate what these entities are, so a search engine can identify them. But embedded in-line metadata turns out to be rather fragile. Such annotation works only insofar as every relevant associated entity is explicitly mentioned in the text. And if the text mentions several different kinds of entities in a single paragraph, the markup gets complicated, because one needs to disambiguate the different entities so as not to confuse the search robots.

The big trouble starts when one changes the wording of texts containing embedded structured data. The entities mentioned change, which has a cascading impact on how the metadata used to describe these entities must be presented. What seemed a unified description of related entities can become disemboweled with even a minor change in a sentence. The structured data didn’t have a stable context with which to associate itself.

Decapitated Data

Given the hassles of RDFa, many SEO consultants lately are promoting the virtues of putting the structured data in the head of a page using JSON-LD. The head of the description is separate from the body of the content, much like the library catalog card describing a book is separate from the book and its contents. The description is separate from the context in which it appears.

Supporters of JSON-LD note that the markup is simpler than RDFa, and less prone to glitchiness. That is true. But the cost of this approach is that the structured data loses its context. It too is fragile, in some ways more so than RDFa.

Putting data in the document head, outside of the body of the content, is to decapitate the data. We now have data that is vaguely associated with a page, though we don’t know exactly how. Consider Paul Ford’s recent 32,000-word article for Businessweek on programming. He mentioned countless entities in the article, all of which would be placed in the head. You might know the entity was mentioned somewhere, but you can’t be sure where.

What’s efficient for one party may not be so for another. (original image via Wikipedia)

With decapitated data, we risk having the description of the content get out of alignment with what the content is actually discussing. Since the data is not associated with a context, it can be hard to see that the data is wrong. You might revise the content, adding and deleting entities, and not revise the document head data accurately.

The management problem becomes greater when one thinks about content as components rather than pages. We want to change content components, but the metadata is tied to a page, rather than a component. So every variation of a page requires a new JSON-LD profile in the document head that will match the contents of the variation. As a practical matter this approach is untenable. A dynamically-generated page might have dozens or hundreds of variations based on different combinations of components.

Structured data largely exists to serve the needs of search engines. Its practices tend to define content in terms of web pages. Structured data can describe a rendered page, but isn’t geared to describe content components independently of a rendered page. To indicate the main theme of a piece of content, Schema.org offers a property called mainContentOfPage, reflecting an expectation that there is one webpage with an overriding theme. Even if a webpage exists for a desktop browser, it may be a series of short sections when viewed on a mobile device, and won’t have a single persistent “main content” theme. Current structured data practices don’t focus on how to describe entities in unbundled content — entities associated with discrete components such as a section of text. Each reuse of content involves a re-creation of structured data in the document head.

It is important not to confuse structured data with structured content. Structured data needs to work in concert with structured content delivered through content management systems, instead of operating independently of it.

When structured data gets separated from the content it represents, it creates confusion for content teams about what’s important. Decapitated data can foster an attitude that audience-facing content is a second class citizen. One presentation on the benefits of JSON-LD for SEO advised: “Keep the Data and Presentation layer separate.” Content in HTML gets reduced to presentation: a mere decoration. Such advocates talk about supplying a data “payload” to Google. It is true that structured data can be used in apps, but some structured data advocates create a false dichotomy between web pages and data-centric apps, because they are stuck in a paradigm that content equals web pages.

This perspective can lead to content reductionism: only the facts mentioned in the content matter. The primary goal is to free the facts from the content, so the facts can be used elsewhere by Google and others. Content-free data works fine for discussing commodities such as gas prices. But for topics that matter most to people, having context around the data is important. Decapitated data doesn’t support context: it works against it, by making it harder to provide more contextually appropriate information. Either the information is yanked out of its context entirely, or the reader is forced to locate it within the body of the content on her own.

The ultimate failure of decapitated data occurs when the data bears no relationship to the content. This is a known bug of the approach, and one no one seems to have a solution for. According to the W3C, “it is more difficult for search engines to verify that the JSON-LD structured data is consistent with the visible human-readable information.” When what’s important gets defined as what’s put in a payload for Google, the temptation exists to load things in the document head that aren’t discussed. Just as black hat operators loaded fake keywords into the meta description in the document head years ago to game search engines, there exists a real possibility that once JSON-LD becomes more popular, unscrupulous operators will put black hat structured data in the document head that’s unrelated to the content. No one, not least the people who have been developing the JSON-LD format, wants to see this happen.

Unbundling Meaning for Unbundled Content

The intelligent content approach stresses the importance of unbundling content. The web page as a unit of content is dying. Unbundled content can adapt to the display and interactive needs of mobile devices, and allow for content customization.

Metadata needs to describe content components, not just pages of content. Some of this metadata will describe the purpose of the component. Other metadata will describe the entities discussed in the component.

There are arguments whether to annotate entities in content with metadata, or whether to re-create the entities in a supplemental file. Part of the debate concerns the effort involved: the effort for inputting the content structure, versus the effort involved re-entering the data described by the structure. One expert, Alex Miłowski at the University of California Berkeley, suggests a hybrid approach could be most efficient and accurate. Regardless of format, structured content will be more precise and accurate if it refers to a reusable content component, rather than a changeable sentence or changeable web page.[3] Components are swappable and connectable by design. They are units of communication expressing a unified purpose, which can be described in an integrated way with less worry that something will change that will render the description inaccurate. It is easier to verify the accuracy of the structured data when it is closely associated with the content. Since content components are designed for reuse, one can reuse the structured data linked to the component.
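
One possible arrangement, sketched per the hybrid idea in footnote 3 rather than any established standard, would let the structured data travel inside the component it describes:

    <section class="component" id="bio-lovelace">
      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": "Ada Lovelace"
      }
      </script>
      <h2>Ada Lovelace</h2>
      <p>Wrote what is often called the first computer program.</p>
    </section>

Reusing the section then reuses its description, and the two stay in sight of each other for verification.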

While the idea of content components is not new, it still is not widely embraced as the default way of thinking about content. People still think about pages, or fragments. Even content strategists talk suggestively about chunks of content, instead of trying to define what a chunk would be in practice. As a first step, I would like to see discussion of chunks disappear, to be replaced by discussion of components. Thinking about reusable components does not preclude the reuse of more granular elements such as variables and standardized copy. But the concept of a component provides a way to discuss pieces of content based around a common theme.

Components need to be defined as units to manage internally in content management systems before they will be recognized as a unit that matters externally. A section of content in HTML may not map to standard templates in a CMS right now, but that can change — if we define a component as a section. A section of content in HTML may not mean much to a search engine right now, but that can change — if search engines perceive such a unit as having a coherent meaning. The case for both intelligent content and semantic search will be more compelling if we can make such changes.

Final note

More dialog is needed between the semantic search community and the intelligent content community about how to integrate each approach. Both these approaches involve significant complexity, and understanding by each side of the other seems limited. I’ve discovered that some ideas about structured data and the semantic representation of entities have political sensitivities and a stormy past, which can make exploration of these topics challenging for outsiders. In this post I have questioned a current idea in structured data best practice, separating data from content, a practice that wasn’t common, or even widely practical, a year ago. Practices used in semantic search (such as favored formats and vocabulary terms) seem to fluctuate noticeably, compared to the long established principles guiding content strategy. The cause of structured data will benefit when it is discussed in the wider context of content production, management and governance, instead of in isolation from these issues. For its part, content strategy should become more specific with how to implement principles, especially as adaptive content becomes more common. I foresee possibilities to refine concepts in intelligent content through dialog with semantic search experts.

— Michael Andrews


  1. I am merely suggesting kinds of HTML structures that correspond to content components, rather than attempting to provide a formal definition. HTML5 has its quirks and nuances, and the topic deserves a wider discussion.  ↩
  2. A notable exception is Joe Pairman’s article, “Connecting with real-world entities: is structured content missing a trick?”.  ↩
  3. Embedding JSON-LD in components seems like it could offer benefits, though I hesitate to casually suggest standards on such a multifaceted issue. I don’t want the merits of a particular solution to detract attention from a thorough examination of the core issues associated with the problem.  ↩