
Identifiers in Content

One of the central challenges of content strategy is tracking all the content being created.  So much content is available about so many different things.  If you’ve ever done a content inventory, you know that different URLs may refer to the same content. It’s even possible for the same content to exist with two different titles.  And sometimes it isn’t clear if two items of content are talking about the same thing, or simply talking about things that sound similar.

Identifiers are the solution to this chaos. Identifiers are alphanumeric strings associated with an item. They don’t seem very exciting, but they will play an increasingly important role in content moving forward. We are finding that relying on titles and URLs to identify content is not enough. We need something more robust.

It’s hard to relate to something as abstract as an alphanumeric string. Fortunately, some real-world examples show how identifiers can support content, indicating such important things as:

  • The provenance of an item
  • A persistent way to refer to something
  • Whether something is unique or a copy
  • A way to listen for changes to the thing described.

Who Moved My Cheese?

One basic need is to know where content comes from.  There is much pilfering of content online these days: it’s become a big industry to rip off other people’s content and republish it as one’s own.

The problem of impostors and lookalikes is not limited to web content. People who produce cheese worry about the confusion that can arise from similar looking and sounding products. Parmigiano Reggiano is a famous Italian cheese, colloquially known in English as parmesan. It can be very expensive: a wheel of Parmigiano Reggiano typically weighs 38 kilos and will cost several hundred dollars. Parmigiano Reggiano is similar to another Italian cheese called Grana Padano, and is the original inspiration for various cheeses called parmesan made outside Italy. The makers of Parmigiano Reggiano work to distinguish their cheese from the rest through identifiers. Each cheese house (caseificio) has a unique number that they apply to the outside rind of a cheese wheel, together with the month and year of production. These identifiers let the consumer know the provenance of the cheese.

A wheel of Parmigiano-Reggiano with identifiers indicating cheese house and production date. Image via Wikipedia.

At the supermarket it can be hard to figure out where products come from. Online it can be hard to know where content comes from. Increasingly people get content not from the producer, but indirectly through a channel like Facebook. As content gets promoted and aggregated across a growing range of platforms and channels, the provenance of the content will be increasingly important to track. Content requires identifiers that can reveal the originator of the content. The Federal Trade Commission recently issued guidance rejecting vague statements that content is “sponsored”. Publishers need a process that can track and identify who that sponsor is.

Deposed Content

Another challenge for content arises when it is remixed.  Titles and URLs are designed to identify pages, not content components that might show up in a multitude of delivered content.

The challenge of remixed content is similar to a situation facing trial lawyers. As part of the pretrial discovery process, lawyers collect volumes of information. This information needs to be shared between opposing parties, and may not have any intrinsic order to it. Lawyers solved the problem of identifying all these random bits with something called a Bates number. Originally a Bates number was produced by an elaborate mechanical ink stamp that would sequentially number each page of documentation with a unique alphanumeric string. Today, lawyers scan documents into PDFs, which can render Bates numbers for each page automatically.

A Bates Numbering Machine. Image via US Patent Office.

The elegance of the Bates number is that it provides a persistent identifier for a piece of information that is independent of its source and its context.  No matter how different items of content are shuffled around, a specific item can be located by any party according to its unique Bates number.

Having persistent identifiers for content components is valuable when content is assembled from different components, and components are reused in many contexts.
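
To make this concrete, here is a minimal sketch in TypeScript of how a persistent identifier for a content component might work, in the spirit of a Bates number. The component shape and field names are my own illustration, not a standard:

```typescript
// A minimal sketch of a persistent component identifier, analogous to a
// Bates number: the id stays the same no matter where the component is
// delivered or what title it travels under. Names are illustrative only.
import { randomUUID } from "crypto";

interface ContentComponent {
  id: string;        // persistent identifier, independent of title or URL
  title: string;     // may change or vary across channels
  body: string;
}

// Mint an identifier once, when the component is created.
const warrantyNote: ContentComponent = {
  id: randomUUID(), // e.g. "1f6c2c9e-..."
  title: "Warranty coverage",
  body: "This product is covered for two years from the date of purchase.",
};

// Two delivered pages reuse the same component under different URLs;
// the shared id is what tells us they are the same content.
const pageA = { url: "/support/warranty", componentIds: [warrantyNote.id] };
const pageB = { url: "/products/blender/faq", componentIds: [warrantyNote.id] };

console.log(pageA.componentIds[0] === pageB.componentIds[0]); // true
```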

In the Matrix

Another inevitable dimension of content is that there can be many versions of a content item. Sometimes this is unintended: organizations have generated duplicate content. But other times organizations have purposefully made different versions of the same underlying content to meet slightly different needs. Either way, it can be hard to sort out which is the master content, and which is the derivative.

Distinguishing what’s the original content is an old problem. Enthusiasts of early jazz recordings faced this problem when they wanted to trace the recordings of a famous musician such as Louis Armstrong. Early recordings on 78 records didn’t supply much information about the full orchestra.  And sometimes the masters of these recordings were rented to other record companies, who released the recording on their own label.  Licensees even sometimes put false information on record labels to disguise that they were re-releasing an existing recording (done sometimes to get around labor contracts).  To complicate matters even more, the same artist might release several versions of the same tune. Jazz is after all about improvisation, and each different version can be interesting in its own right.  So even knowing the song title and the artist wasn’t sufficient to know if the recording was unique or not.

Fans who developed discographies of early jazz found a key to solving the problem of unreliable information on the labels on records. They tracked recordings according to their matrix number.  Each matrix used to press records contained a hand inscribed number indicating the master recording.  No matter who subsequently used the master to release the recording, the same number was stamped into the record.  As a result, one could see that a French record was the same recording as an American one, because they shared the same matrix number, while two records with the same title and performers were in fact different recordings.

Content variation is a phenomenon driven by the desire of audiences to have choice.  People want versions of content that match their needs: that are shorter or longer depending on their interests, or are formatted for a larger or smaller screen depending on their device. To track all these variations, organizations need identifiers that can let them know how content is being repurposed, and where.

Tuning In

Broadcast radio stations often identify themselves by number.  They broadcast at a certain frequency, and use that frequency as an identifier: “101.3 FM” or whatever.  RFID is a different kind of radio broadcast, one specifically designed to identify objects. Identifiers have morphed into stickers that we can listen to.

Last year I visited an exhibit at Expo Milan featuring an MIT prototype of the supermarket of the future. The premise of the exhibit was that RFID tags can track produce and other food items, to give consumers information about where the products are from, when they were harvested, how they were shipped, and so forth.  What’s intriguing about this vision is that products can now have biographies. No longer does one need to talk about the product generically.  One can now talk about a specific instance of the product: this orange, or this batch of pesto. Products now have real stories that can be told.

RFID allows us to listen to things: to know what’s been going on with them. We are starting to move toward creating specific content that tells stories about specific instances of items. To do this, we will need the ability to be very specific about what we refer to.

Conclusion

Identifiers give us the ability to make statements about things. They allow us to distinguish what specifically we are saying, and about what specifically we are making a statement.  That capability will be important as content and products become more varied and customized.  Identifiers support accountability in the face of growing complexity.

— Michael Andrews


When is Adaptive Content Appropriate?

Publishers want their content to be appropriate for their audiences.  They need to know when it is appropriate to adapt their content to specific situations.

Until recently, publishers presumed audiences would adapt to their content.  They supplied the same content to everyone, and people were expected to find what interested them in that content.  In some circumstances, they created different versions of the same content targeted for different segments of readers, perhaps people in different countries.  But audiences still needed to find what was relevant to them in that version.

What happens if we reverse the equation, so that the content adapts to the individual, rather than the individual adapting to the content? On an intuitive level it sounds great, but how is it done in practice? Does it mean that not everyone gets the same content?

Discussion of adaptive content has increased noticeably in the past year. The motivation behind adaptive content is to give people precisely what they want, when they want it, how they want it. Marketers imagine that if their brand can satisfy the egocentric needs of their customers, they will cement their relationship with them.

Now a buzzy topic: Sample headlines of recent posts about adaptive content.

Adaptive content is attractive as an ideal.  But much recent discussion of the approach is short on specifics.  Karen McGrane, who introduced the concept several years ago to the wider content strategy community, recently wrote: “I am really, really annoyed with hearing adaptive solutions presented as some kind of magical panacea.”  We need less discussion about adaptive content as an abstract concept, and more focus on how it is implemented.  The critical question is not, “Why adaptive content?” but “How?”  Until we understand more of the how, its value can’t be judged.

What Adaptive Content Means

Adaptive content is difficult to define precisely.  It has various properties, a number of which are also associated with other content concepts, such as personalization, dynamic content, and intelligent content. Those who discuss adaptive content may emphasize different aspects of it.  Perhaps the biggest difference is between those who emphasize the production side of adaptive content (What do producers need to do to deliver content adaptively?) and those who talk about the consumption side (Why do consumers care and what do they notice that’s different?)

Adaptive content is a topic of growing interest in large part due to the smartphone.  The significance of the smartphone goes beyond the difference between a smaller touchscreen and a larger screen with a keyboard.  Smartphones are used in diverse situations and offer many capabilities.  They have cameras, microphones, GPS, a unique ID tied to an individual, and sensors such as gyroscopes.  These features can capture different information to support interaction with content and influence what content is provided to the user.  They’ve changed our assumptions about when and where users might need information.  We can no longer assume users will be making a simple explicit request, and getting content matching that request.

The adjective adaptive implies the user can somehow direct the content.  An adaptive approach involves various possibilities.  It’s an approach in the early stage of its adoption.  Its benefits and limitations at this point aren’t yet well understood.

I’ll pass here on trying to define precisely what adaptive content is. Others such as Karen McGrane, Joe Gollner and Noz Urbina have valuable things to say on this topic. I want to focus on what is genuinely useful in the approach. Understanding in more detail what adaptive content could represent helps us assess both its application and the effort involved.

For me, the core idea of adaptive content is that content variations are available to provide a better, more relevant experience for users.  The key phrases are content variations (production side) and experiences (consumption side).

Many discussions of adaptive content look at the numerous variables relating to people, devices, locations, and so forth.  The number of permutations can seem enormous, and would imply a need for omniscient engineering.

It may be more valuable to focus on variations, which link the content to scenarios of use, and to whoever is responsible for it.

Two key questions of adaptive content are:

  1. How much variation is necessary?
  2. How much variation is possible?

The first question speaks to what audiences need, and the second to what businesses can realistically do to meet those needs.

One point needs clarifying.  Adaptive content is not about mind-reading.  There is a big push in the world of big data around predictive analytics.  While predictive analytics might occasionally play a role in determining what content variation to show, it generally will not.  In most cases the intent and needs of the individual user will be clear, and conjecture isn’t necessary.

Examples of Adaptive Content

The best way to illustrate content variation is through examples, looking at use cases where individuals receive different variations of content depending on their situation. These examples may not be relevant to all organizations, but they offer alternative perspectives on the value of content adaptation. We might even consider these as adaptive archetypes.

One popular archetype is context aware content.  The best known example is the card UI provided by Google.  A Google card might combine information relating to time, location, and the user’s calendar with status information from elsewhere.  The context is often event-focused.  Different people receive different variations of structured information.  People know the structure of the information they will receive, but not the precise information they will be getting.

A related archetype is situationally aware content. Here, the context is not predefined, but is fluid. The situation is defined by preferences set by the user relating to variables in their environment. Wearable devices may offer situationally aware content. You may be at work and can’t watch a football match, but perhaps your wrist will buzz when your team scores a goal. The focus is less on the structure of the information, and more on what specific content to receive, and how to receive it. In the future, wearables may have sensors that trigger health advice, possibly on a different platform. So we have a possibility of trans-device content.

Another kind of adaptive content is omnichannel content, a favorite of the retail sector.  Macy’s, the U.S. department store chain, needs to adapt content to various shopping scenarios.  Some people will go to the store to browse, but others want to know what’s available before going to the store.  A shopper may be looking for a sweater that’s been advertised in a specific color and size, and wants to know if it is in stock at her local store.  The content needs to display the stock availability of the item according to location.  There will be countless variations of content about the sweater depending on the size, color and store location.

A different sort of adaptive content is possible in e-learning.  Pearson, a large educational publisher, provides students with materials that adapt to their understanding of subject matter.  It compares what learning outcomes they need to achieve for different proficiencies with the student’s mastery of these topics, and provides an individualized learning path based on their knowledge of concepts.  Each student will see a different sequence of content, and different students may see different content items.  This is an example of outcome driven content variation based on inference.

In some of these examples, users imagine they are getting unique content. But we are discussing content for an audience of many people, not personal information such as your fitness tracker data. Individuals may just be seeing a variation tailored to them, and others matching their circumstances will see similar variations.

Back to the Future: Adaptive Content’s Origins

Adaptive content may seem like a new approach, but much of the thinking around it has been years in the making. The W3C defined core aspects of adaptive content over ten years ago, in 2004.  The proliferation of internet-connected devices with different characteristics and purposes has been evident for a long while, and with that, questions about how to provide content to increasingly diverse users.

The W3C uses the phrase “content adaptation” rather than “adaptive content,” but the two terms refer to the same general topic.  Here’s the W3C definition:

“Content Adaptation is a process that based on factors such as the capabilities of the displaying device or network, or the user’s preferences, adapts the content that has been requested to provide an optimized user experience. This adaptation can occur in a number of places in the content delivery chain: the author may make choices when writing the content, or intermediary automated content transformation proxies could adapt the content based on heuristics and knowledge of the user, or the adaptation could occur within the browser itself.”

This definition is slightly different from how adaptive content is commonly discussed. Yet it highlights some important issues. First, there are technical considerations (hardware and network) but also human considerations (preferences). The goal is to deliver a good user experience, not conversions or network optimization. And there are multiple ways to accomplish this: through content planning, technical transformation of content according to specific user needs, and using browser technology.
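
As a rough illustration of that definition, here is a minimal sketch of variant selection driven by a delivery context. The context fields, thresholds, and variant names are invented for the example; a real implementation would draw on whatever signals and preferences are actually available:

```typescript
// A minimal sketch of the W3C notion of content adaptation: pick a content
// variant using the delivery context (device, network) and user preferences.
// The variant names and context fields are illustrative assumptions.

interface DeliveryContext {
  screenWidthPx: number;
  networkKbps: number;
  prefersVideo: boolean; // a user preference, e.g. from a profile or setting
}

type Variant = "full-video" | "illustrated-summary" | "text-only";

function selectVariant(ctx: DeliveryContext): Variant {
  // Respect the user's explicit preference first, then technical constraints.
  if (ctx.prefersVideo && ctx.networkKbps > 2000) return "full-video";
  if (ctx.screenWidthPx >= 600) return "illustrated-summary";
  return "text-only";
}

// A phone on a slow connection gets the lightest variant.
console.log(selectVariant({ screenWidthPx: 360, networkKbps: 400, prefersVideo: true }));
// -> "text-only"
```

Even this toy example reflects the definition's three ingredients: an explicit user preference, technical constraints, and a decision that could run in the authoring process, on a server, or in the browser.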

Delivery Context

Over a decade ago a W3C working group documented issues relating to device-independent content: how to provide different versions of the same core content, irrespective of platform. They looked at the relationship between what is created and what is presented, and also the different dimensions of how content is received and manipulated by users. A major focus was what they called the delivery context.

Schematic of W3C terms relating to device independence and content adaptation.

The W3C working group believed that users will often need to interact with units of content that are different from the units created by authors.  Authors may create larger content units that are broken down when presented to users (the perceivable unit).   The decomposition approach contrasts with the infinite scrolling people commonly experience these days, regardless of device.  The notion of decomposition also contrasts with some newer ideas of writing small atomic units of content, although the W3C also considered the possibility of aggregating units of content.

The most significant idea was the possibility of variations in content created.  Users weren’t just seeing different presentational views of a single version of content, they were seeing different variants.

The W3C considered how the delivery context shapes the user’s focus of attention: what users notice, and how they need to interact. They noted interaction might not only be visual, but also gestural or based on speech. They considered adaptation preferences — how the user indicates they want to experience the content, such as alert preferences. And they reviewed the impacts of application personalization — things like settings for video playback, or whether sound or location tracking is on. These variables are already important considerations for content on smartphones.

The delivery context is often overlooked. Some recent adaptive content discussions have focused on predicting implicit user desires and delivering variations based on those predictions. But the other, less explored aspect of adaptive content is making sure users can get content that matches their explicit preferences — especially when they don’t want to use a feature. Many applications assume users will use certain features: to take a selfie, use beacons, talk to a virtual assistant, or something else that designers think would be fun. A growing number of applications assume people will use their smartphones to do things, including producing content such as bar code IDs or social media check-ins, for use by the brand. Except it might not be fun for everyone. Content needs to adapt when people opt out of such experiences.

Adaptive Content Delivery

Before the rise of today’s popular techniques like AJAX, responsive web design and APIs, the W3C identified techniques that can enable content adaptation. They described processes to support content adaptation, and listed various client- and server-side processors to deliver the content. While the specific recommendation details are dated, the range of approaches remains interesting because they are not limited by current conceptions of how content is delivered.

Adaptation Processes refer to how to change the content itself. Examples the working group identified included:

  • Select/Remove
    • Selection via URL redirect
    • In-document Decision Tags (conditional or switch selection)
    • Layout decisions
    • Style conditions
    • Relevancy
  • Navigation
  • Adaptation via Substitution
  • Adaptation via Transformation

Many of these techniques involved markup and other instructions embedded in the content.  A tremendous amount of variation is possible using these techniques in combination.
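
To give a flavor of how in-document decision tags and selection might work, here is a minimal sketch of conditional blocks filtered against delivery attributes. The attribute names are invented, and this is only a loose analogy to conditional processing in systems like DITA, not a reproduction of the W3C techniques:

```typescript
// A minimal sketch of "in-document decision tags": each block of content
// carries conditions, and a processor keeps only the blocks whose conditions
// match the delivery attributes. The attribute names are illustrative.

interface ConditionalBlock {
  text: string;
  audience?: "consumer" | "reseller";
  region?: "us" | "eu";
}

interface DeliveryAttributes {
  audience: "consumer" | "reseller";
  region: "us" | "eu";
}

function assemble(blocks: ConditionalBlock[], attrs: DeliveryAttributes): string {
  return blocks
    .filter(b => (b.audience ?? attrs.audience) === attrs.audience)
    .filter(b => (b.region ?? attrs.region) === attrs.region)
    .map(b => b.text)
    .join(" ");
}

const blocks: ConditionalBlock[] = [
  { text: "Thanks for buying the X100." },
  { text: "Wholesale pricing is available on request.", audience: "reseller" },
  { text: "A CE declaration of conformity is included.", region: "eu" },
];

console.log(assemble(blocks, { audience: "consumer", region: "us" }));
// -> "Thanks for buying the X100."
```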

Adaptation Processors, in the W3C working group’s terminology, refer to the technical means for enabling content adaptation — from the server side, client side, or some combination.  The working group identified:

  • Server-side Adaptation
    • Variant Selection
    • Structural Transformation
    • Media Adaptation
    • Using Meta-information
    • Decomposition
  • Client-side Adaptation
    • Image Resizing
    • Font Substitution
    • Transcoding
    • Contextual Selection

While most of the client-side adaptation techniques focused on alternative renderings of content, the server side techniques focused more on generating substantive variations in the content.  For example, one possibility mentioned for structural transformation is providing auto-summarization of content.

Today’s web environment places a strong emphasis on the client side. Responsive web design provides many of the client-side capabilities identified by the working group.  The extensive use of JavaScript libraries emphasizes user-screen interaction.  Conditional loading helps to manage when content appears on screen.

Much of the substantive variation in content needs to come from the server side. Server-side data repositories are becoming more flexible at delivering mixed types of content from different sources. The lagginess of server-provided content should improve with true 4G network speeds. The other major server-side factor, which was not mentioned at all by the working group, is the use of analytics data to shape the content adaptation. Using data to guide the display of content has been a significant transformation in the past five years. Tracking user behavior over time can provide useful information for providing the right content variant, as the Pearson example shows.

The tools available to adapt content vary in what they accomplish and the effort they entail. Server-side approaches will generally be more complex to implement, though they can potentially offer the most value if they provide content that would otherwise be unavailable or not accessed. We can see this with Macy’s approach. Having specific inventory information could be a decisive factor for a person making a purchase. It is an example where the content variation is both high value to the user and high value to the brand.

Design Parameters for Adaptive Content

What should publishers focus on, given that there are many approaches to adapting content?  Adaptive content can be challenging to implement, given the many factors that influence its success.

The success of adaptive content depends on the alignment of three factors:

  • The profile of the individual user
  • The opportunity that a variation offers the user and the brand
  • The constraints on the ability to execute the variation in a manner that offers value to both parties

The individual user profile is a mix of their current and past behavior (typically clicks, perhaps purchases), together with any preferences they have provided (opt-ins, default settings, etc.)   Brands with loyalty programs may have a range of indicators about a user.  A person who is a frequent patron of a hotel would expect content more adapted to their needs than someone who doesn’t use the hotel often.  This suggests that the opportunity to implement adaptive content is strongest in cases where a relationship already exists.  Adaptive content may be more effective at keeping a customer than it is at creating one.

The opportunities for content variations will often relate to timing and location: when and where users most need specific content. They may also be based on the varying needs of different segments. Location and segmentation could even be related in the case of regional segments.

Constraints can be technical or human:

  • Technical constraints: device capabilities, network connection, ability to offer desired content
  • Human constraints: motivation to engage, attention and distraction

Sometimes constraints interact.  Many retailers show an option to pick up merchandise at the nearest store, but not everyone lives near a store.  That information, while useful to those near stores, may seem punishing to those far away.  Ideally, the adaptation needs to account for the possibility that not everyone can take advantage of the variant content, so that the content can “gracefully degrade” to a state where the variant is not in the foreground.
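
Here is a minimal sketch of that kind of graceful degradation for the store-pickup case. The store data, distance threshold, and fallback message are all invented for illustration:

```typescript
// A minimal sketch of "graceful degradation": the pickup variant is only
// foregrounded when a store is actually nearby and has the item in stock;
// otherwise the content falls back to a neutral delivery message.

interface Store {
  name: string;
  distanceKm: number;
  inStock: boolean;
}

function pickupMessage(stores: Store[], maxDistanceKm = 25): string {
  const nearby = stores.find(s => s.distanceKm <= maxDistanceKm && s.inStock);
  if (nearby) {
    return `Pick up today at ${nearby.name} (${nearby.distanceKm} km away).`;
  }
  // No nearby store: don't "punish" the user with an irrelevant variant.
  return "Free shipping on orders over $50.";
}

console.log(pickupMessage([{ name: "Midtown", distanceKm: 3, inStock: true }]));
// -> "Pick up today at Midtown (3 km away)."
console.log(pickupMessage([{ name: "Midtown", distanceKm: 320, inStock: true }]));
// -> "Free shipping on orders over $50."
```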

A critical implementation dimension involves timing: how anticipatory the adaptation is. Some adaptations are real-time, responding to uncertain user interactions. Others are event-triggered, where the event is already known and being monitored. Still others involve scripts based on knowable interaction pathways. Here adaptive content overlaps with dynamic content (user-initiated requests) and some forms of personalization (remembering information across sessions).

Content adapts to what is known within different time horizons, as the sketch after this list illustrates:

  • Path-based adaptation, which serves different variations according either to prior actions from past sessions, or the immediate preceding actions of the current session
  • Forecast-based adaptation, which serves variations based on known variables such as calendar information or stages of a lifecycle
  • Real-time adaptation, which provides variations based on matching current behaviors with user profiles or task outcome goals.
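
Here is the sketch: a simple dispatch among the three time horizons. The inputs and variant names are invented, and a real system would weigh these signals rather than check them in a fixed order:

```typescript
// A minimal sketch of the three time horizons as a dispatch: each mode uses
// a different slice of what is known about the user. The inputs and the
// variant names are illustrative assumptions, not a prescribed model.

type Variation = string;

interface KnownState {
  pastActions: string[];                            // path-based: prior sessions, recent clicks
  calendar?: { event: string; daysAway: number };   // forecast-based: known future events
  currentBehavior?: string;                         // real-time: what the user is doing right now
}

function adapt(state: KnownState): Variation {
  if (state.currentBehavior === "comparing-prices") {
    return "show-price-match-guarantee";                  // real-time adaptation
  }
  if (state.calendar && state.calendar.daysAway <= 7) {
    return `show-checklist-for-${state.calendar.event}`;  // forecast-based adaptation
  }
  if (state.pastActions.includes("viewed-winter-tires")) {
    return "show-tire-fitting-offer";                     // path-based adaptation
  }
  return "show-default-content";
}

console.log(adapt({ pastActions: [], calendar: { event: "flight", daysAway: 2 } }));
// -> "show-checklist-for-flight"
```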

Real-time adaptation is a data and algorithmically intensive approach. It requires fast decisions using multiple variables, some of which may lack data. The more inputs into the decision, and the more outputs of the decision (different content variations), the more challenging it is. A widely encountered example of real-time adaptation is the ad exchange, where display ads are shown according to user profile characteristics and advertiser bids. An impressive amount of computing power is marshaled to deliver display ads, a cost justified by the big stakes involved.

When is adaptation appropriate?

If done properly adaptive content can benefit audiences.  So should brands implement adaptive content?   The answer depends on many factors.   Brands need to evaluate how important content variants are to the audience, and to the brand.  Brands need to understand how much complexity is involved: the inputs needed to decide on the variant, and the number of variants needed to deliver the expected experience.

Adaptive content will often have the strongest business case when supporting transactions, such as sales.  The stronger the business rationale, the larger the potential investment and sophistication.

Adaptive content encompasses a range of approaches.  Not all require state-of-the-art back-end systems.  Some implementations may be small enhancements that improve the experience of using content without involving complex implementations.

What’s appropriate depends on user needs analysis, an assessment of available technical capabilities, and a development of a business case.

— Michael Andrews


Defining Meaning in a Post-Document World

Digital content is undergoing a metamorphosis. It is no longer about fixed documents. But neither is it just a collection of data. It is something in-between, yet we haven’t developed a vivid and shared way to conceive and discuss precisely what that is. We see evidence of this confusion in the vocabulary used to describe content meaning. We talk about content as structurally rich, as semantic, as containing structured data. Behind these labels are deeper convictions: whether content is fundamentally about documents or data.

Content has evolved into a complex experience, composed of many different pieces. We need new labels to express what these pieces mean.

“The moment it all changed for me was the moment when Google Maps first appeared. Because it was a software application—not a set of webpages, not a few clever dynamic calls, but almost aggressively anti-document. It allowed for zooming and panning, but it was once again opaque. And suddenly it became clear that the manifest destiny of the web was not accessibility. What was it? Then the people who advocated for a semantically structured web began to split off from the mainstream and the standards stopped coming so quickly.” — Paul Ford in The Manual

In the traditional world of documents, meaning is conveyed through document-centric metadata. Publishers govern the document with administrative metadata, situate sections of the document using structural metadata, and identify and classify the document with descriptive metadata. As long as we considered digital content as web pages, we could think of them as documents, and could rely on legacy concepts to express the meaning of the content.

But web pages should be more than documents. Documents are unwieldy. The World Wide Web’s creator, Tim Berners-Lee, started agitating for “Raw data now!” Developers considered web pages as “unstructured data” and advocated the creation and collection of structured data that machines could use. What is valuable in content got redefined as data that could be placed in a database table or graph. Where documents deliver a complete package of meaning, data structures define meaning on a more granular level as discrete facts. When in a structured format, meaningful data can be extracted and inserted into apps. In the paradigm of structured data, the meaning of an entity should be available outside of the context with which it was associated. Rather than define what the parts of documents mean, structured data focuses on what fragments of information mean independently of context.

Promoters of structured data see possibilities to create new content by recombining fragments of information. Information boxes, maps, and charts are content forms that can dynamically refresh with structured data. These are clearly important developments. But these non-narrative content types are not the only forms of content reuse.

The Unique Needs of Component Content

A new form of content emerged that was neither a document nor a data element: the content component. In HTML5, component level content might be sections of text, videos, images and perhaps tables.[1] These items have meaning to humans like documents, but unlike documents, they can be recombined in different ways, and so carry meaning outside the context of a document, much the way structured data does.

Component content needs various kinds of descriptions to be used effectively. Traditional document metadata (administrative, structural, and descriptive) are useful for content components. It is also useful to know what specific entities are mentioned within a component; structured data is also nice to have. But content components have further needs. If we are moving around discrete components that carry meaning to audiences, we want to understand what specific meaning is involved, so we match up the components with each other appropriately. The component-specific metadata addresses the purpose of the component.

Component metadata allows content to be adaptable: to match the needs of the user according to the specific circumstances they are in. We don’t have well-accepted terms to describe this metadata, so its importance tends to get overlooked. Various kinds of component metadata can characterize the purpose of a component. Though metadata relating to these facets aren’t yet well-established, there are signs of interest as content creators think about how to curate an experience for audiences using different content components.

Contextual metadata indicates the context in which a component should be used. This might be the device the component is optimized for, the geolocation it is intended for, the specific audience variation, or the intended sequencing of the component relative to other components.

Performance metadata addresses the intended lifecycle of the component. It indicates whether the component is meant to be evergreen, seasonal or ephemeral, and if it has a mass or niche use. It helps authors answer how the component should be used, and what kind of lifting it is expected to do.

Sentiment metadata describes the mood or the metaphor associated with the component. It answers what kind of impression on the audience the component is expected to make.

We can see how component metadata can matter by looking at a fairly simple example: using a photographic image. We might use different images together with the same general content according to different circumstances. Different images might express different metaphors presented to different audience segments. We might want to restrict the use of certain images to ensure they are not overused. We need to have different image sizes to optimize the display of the image on different devices. While structured data specialists might be preoccupied with what entities are shown in an image, in this example we don’t really care who the models appearing in the stock image are. We are more concerned with the implicit meaning of the image in different contexts than with its explicit meaning.
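
A rough sketch of what such component metadata might look like for an image, expressed as a TypeScript interface. The facet and field names are illustrative; as noted above, there is no well-established vocabulary for this yet:

```typescript
// A minimal sketch of component-level metadata for the image example:
// contextual, performance, and sentiment facets describe the purpose of the
// component, separate from the entities it depicts. Field names are
// illustrative, not an established vocabulary.

interface ImageComponent {
  id: string;
  renditions: { width: number; url: string }[]; // sizes for different devices
  contextual: {
    devices: ("phone" | "tablet" | "desktop")[];
    regions: string[];                          // geolocations it is intended for
    segment?: string;                           // audience variation
  };
  performance: {
    lifecycle: "evergreen" | "seasonal" | "ephemeral";
    maxUsesPerQuarter?: number;                 // guard against overuse
  };
  sentiment: {
    metaphor: string;                           // e.g. "fresh start"
    mood: "upbeat" | "calm" | "urgent";
  };
}

const heroImage: ImageComponent = {
  id: "img-2041",
  renditions: [
    { width: 480, url: "/img/2041-480.jpg" },
    { width: 1600, url: "/img/2041-1600.jpg" },
  ],
  contextual: { devices: ["phone", "desktop"], regions: ["us", "ca"], segment: "new-customers" },
  performance: { lifecycle: "seasonal", maxUsesPerQuarter: 3 },
  sentiment: { metaphor: "fresh start", mood: "upbeat" },
};
```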

The Challenges of Context-free Metadata

Metadata has a problem: it hasn’t yet evolved to address the changing context in which a content component might appear. We still talk about metadata as appearing in the head of a document, or in the body of a document, without considering that the body of the document is changing. We run the risk that the head and the body get out of alignment.

The rise of component variation is a key feature of the approach that’s commonly referred to as intelligent content. Intelligent content, according to Ann Rockley’s definition, involves structurally rich and semantically categorized content. Intelligent content is focused on making content components interchangeable.

Discussions of intelligent content rarely get too explicit about what metadata is needed. Marcia Riefer Johnston addressed the topic in an article entitled Intelligent Content: What Does ‘Semantically Categorized’ Mean? She says: “Semantic categories enable content managers to organize digital information in nearly limitless ways.” It’s a promising vision, but we still don’t have a sense of where the semantic categories come from, and what precisely they consist of. The inspiration for intelligent content, DITA, is an XML-based approach that allows publishers to choose their own metadata. DITA is a document-centric way of managing content, and accordingly assumes that the basic structure of the document is fixed, and only specific elements can be changed within that stable structure. Intelligent content, in contrast, suggests a post-document paradigm. Again, we don’t get a sense of what structurally rich means outside of a fixed document structure. How can one piece together items in “limitless ways?” What is the glue making sure these pieces fit together appropriately?

Content intelligence involves not only how components are interchangeable, but also how they are interoperable — intelligible to others. Intelligent content discussions often take a walled-garden approach. They focus on the desirability of publishers providing different combinations of content, but don’t discuss how these components might be discovered by audiences.[2] Intelligent content discussions tend to assume that the audience discovers the publisher (or that the publisher identifies the audience via targeting), and then the publisher assembles the right content for the audience. But the process could be reversed, where the audience discovers the content first, prior to any assembly by the publisher. How do the principles of semantically categorized and structurally rich content relate to SEO or Linked Data? Here, we start to see the collision between the document-centric view of content and the structured data view of it. Does intelligent content require publisher-defined and controlled metadata to provide its capabilities, or can it utilize existing, commonly used metadata vocabularies to achieve these goals?

Document-centric Thinking Hurts Metadata Description

Content components already exist in the wild. Publishers are recombining components all the time, even if they don’t have a robust process governing this. Whether or not publishers talk about intelligent content, the post-document era has already started.

But we continue to talk about web pages as enduring entities that we can describe. We see this in discussions of metadata. Two styles of metadata compete with each other: metadata in the document head of a page, and metadata that is in-line, in the body of a page. Both these styles assume there is a stable, persistent page to describe. Both approaches fail because this assumption isn’t true in many cases.

The first approach involves putting descriptive metadata outside of the content. On a web page, it involves putting the description in the head, rather than the body. This is a classic document-centric style. It is similar to how librarians catalog books: the description of the book is on a card (or in a database) that is separate from the actual book. Books are fixed content, so this approach works fine.

The second approach involves putting the description in the body of the text. Think of it as an annotation. It is most commonly done to identify entities mentioned in the text. It is similar to an index of a book. As long as the content of the book doesn’t change, the index should be stable.

Yet web pages aren’t books. They change all the time. There may be no web page: just a wrapper for presenting a stream of content. What do we need to describe here, and how do we need to do that?

Structured Data’s Lost Bearings

When people want to identify entities mentioned in content, they need a way to associate a description of the entity with the content where it appears. Entity-centric metadata is often called structured data, a confusing term given the existence of other similar sounding terms such as structured content, and semantic structure. While structured data was originally a term used by data architects, the SEO community uses it to refer more specifically to search-engine markup using vocabulary standards such as Schema.org. The structure referred to in the term “structured data” is the structure of the vocabulary indicating the relationships associated with the description. It doesn’t refer to the structure of the content, and here is where problems arise.

While structured data excels at describing entities, it struggles to locate these entities in the content. The question SEO consultants wrestle with is what precisely to index: a web page, or a sentence fragment where the item is mentioned? There are two rival approaches for doing this. One can index entities appearing on a web page using a format called JSON-LD, which is typically placed in the document head of the page (though it does not have to be). Or one can index entities where they appear in the content using a format called RDFa, which is placed in-line in the body of the HTML markup.
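
To make the contrast concrete, here is a small sketch showing the same entity described both ways, with the markup built as strings in TypeScript. The page content is invented; the Schema.org properties (name, jobTitle) are standard, but everything else is for illustration:

```typescript
// A minimal sketch contrasting the two approaches, using a single Person entity.

// 1. JSON-LD: the description sits apart from the content, typically in the
//    document head, and says nothing about where in the body the entity appears.
const jsonLd = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Louis Armstrong",
  jobTitle: "Trumpeter",
};
const headMarkup =
  `<script type="application/ld+json">${JSON.stringify(jsonLd)}</script>`;

// 2. RDFa: the same description is woven into the body markup, attribute by
//    attribute, at the exact spot where the entity is mentioned.
const bodyMarkup = `
  <p vocab="https://schema.org/" typeof="Person">
    <span property="name">Louis Armstrong</span> was a celebrated
    <span property="jobTitle">trumpeter</span>.
  </p>`;

console.log(headMarkup);
console.log(bodyMarkup);
```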

Both these approaches presume that the content itself is stable. But content changes continually, and both approaches founder because they are based on a page-centric view of content instead of a component-centric view.

Disemboweled Data

First, consider the use of RDFa to describe the entities mentioned in a sentence. The metadata is embedded in the body of the page: it’s embodied metadata. It’s an appealing approach: one just needs to annotate what these entities are, so a search engine can identify them. But embedded in-line metadata turns out to be rather fragile. Such annotation works only so far as every relevant associated entity is explicitly mentioned in the text. And if the text mentions several different kinds of entities in a single paragraph, the markup gets complicated, because one needs to disambiguate the different entities so as not to confuse the search robots.

The big trouble starts when one changes the wording of texts containing embedded structured data. The entities mentioned change, which has a cascading impact on how the metadata used to describe these entities must be presented. What seemed a unified description of related entities can become disemboweled with even a minor change in a sentence. The structured data didn’t have a stable context with which to associate itself.

Decapitated Data

Given the hassles of RDFa, many SEO consultants lately are promoting the virtues of putting the structured data in the head of a page using JSON-LD. The head of the description is separate from the body of the content, much like the library catalog card describing a book is separate from the book and its contents. The description is separate from the context in which it appears.

Supporters of JSON-LD note that the markup is simpler than RDFa, and less prone to glitchiness. That is true. But the cost of this approach is that the structured data loses its context. It too is fragile, in some ways more so than RDFa.

Putting data in the document head, outside of the body of the content, is to decapitate the data. We now have data that is vaguely associated with a page, though we don’t know exactly how. Consider Paul Ford’s recent 32,000-word article for Businessweek on programming. He mentioned countless entities in the article, all of which would be placed in the head. You might know the entity was mentioned somewhere, but you can’t be sure where.

What’s efficient for one party may not be so for another. (original image via Wikipedia)

With decapitated data, we risk having the description of the content get out of alignment with what the content is actually discussing. Since the data is not associated with a context, it can be hard to see that the data is wrong. You might revise the content, adding and deleting entities, and not revise the document head data accurately.

The management problem becomes greater when one thinks about content as components rather than pages. We want to change content components, but the metadata is tied to a page, rather than a component. So every variation of a page requires a new JSON-LD profile in the document head that will match the contents of the variation. As a practical matter this approach is untenable. A dynamically-generated page might have dozens or hundreds of variations based on different combinations of components.

Structured data largely exists to serve the needs of search engines. Its practices tend to define content in terms of web pages. Structured data can describe a rendered page, but isn’t geared to describe content components independently of a rendered page. To indicate the main theme of a piece of content, Schema.org offers a property called mainContentOfPage, reflecting an expectation that there is one webpage with an overriding theme. Even if a webpage exists for a desktop browser, it may be a series of short sections when viewed on a mobile device, and won’t have a single persistent “main content” theme. Current structured data practices don’t focus on how to describe entities in unbundled content — entities associated with discrete components such as a section of text. Each reuse of content involves a re-creation of structured data in the document head.

It is important not to confuse structured data with structured content. Structured data needs to work in concert with structured content delivered through content management systems, instead of operating independently of it.

When structured data gets separated from the content it represents, it creates confusion for content teams about what’s important. Decapitated data can foster an attitude that audience-facing content is a second class citizen. One presentation on the benefits of JSON-LD for SEO advised: “Keep the Data and Presentation layer separate.” Content in HTML gets reduced to presentation: a mere decoration. Such advocates talk about supplying a data “payload” to Google. It is true that structured data can be used in apps, but some structured data advocates create a false dichotomy between web pages and data-centric apps, because they are stuck in a paradigm that content equals web pages.

This perspective can lead to content reductionism: only the facts mentioned in the content matter. The primary goal is to free the facts from the content, so the facts can be used elsewhere by Google and others. Content-free data works fine for discussing commodities such as gas prices. But for topics that matter most to people, having context around the data is important. Decapitated data doesn’t support context: it works against it, by making it harder to provide more contextually appropriate information. Either the information is yanked out of its context entirely, or the reader is forced to locate it within the body of the content on her own.

The ultimate failure of decapitated data occurs when the data bears no relationship to the content. This is a known bug of the approach, and one no one seems to have a solution for. According to the W3C, “it is more difficult for search engines to verify that the JSON-LD structured data is consistent with the visible human-readable information.” When what’s important gets defined as what’s put in a payload for Google, the temptation exists to load things in the document head that aren’t discussed. Just as black hat operators loaded fake keywords in the document head of the meta description years ago to game search engines, there exists a real possibility that once JSON-LD becomes more popular, unscrupulous operators will put black hat structured data in the document head that’s unrelated to the content. No one, not least the people who have been developing the JSON-LD format, wants to see this happen.

Unbundling Meaning for Unbundled Content

The intelligent content approach stresses the importance of unbundling content. The web page as a unit of content is dying. Unbundled content can adapt to the display and interactive needs of mobile devices, and allow for content customization.

Metadata needs to describe content components, not just pages of content. Some of this metadata will describe the purpose of the component. Other metadata will describe the entities discussed in the component.

There are arguments about whether to annotate entities in content with metadata, or whether to re-create the entities in a supplemental file. Part of the debate concerns the effort involved: the effort of inputting the content structure, versus the effort involved in re-entering the data described by the structure. One expert, Alex Miłowski at the University of California Berkeley, suggests a hybrid approach could be most efficient and accurate. Regardless of format, structured data will be more precise and accurate if it refers to a reusable content component, rather than a changeable sentence or changeable web page.[3] Components are swappable and connectable by design. They are units of communication expressing a unified purpose, which can be described in an integrated way with less worry that something will change that will render the description inaccurate. It is easier to verify the accuracy of the structured data when it is closely associated with the content. Since content components are designed for reuse, one can reuse the structured data linked to the component.
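
Sketching the idea floated in footnote 3, here is one way structured data might be anchored to a reusable component and aggregated when a page is assembled. The component shape and rendering helper are hypothetical, not an established practice:

```typescript
// A minimal sketch of structured data anchored to a reusable component rather
// than to a page: each component carries its own JSON-LD fragment, and a page
// assembled from components aggregates those fragments at render time.

interface Component {
  id: string;
  html: string;                              // the human-readable content
  structuredData?: Record<string, unknown>;  // JSON-LD describing this component
}

const recipeIntro: Component = {
  id: "cmp-recipe-001",
  html: "<section><h2>Pesto alla Genovese</h2><p>A quick basil pesto.</p></section>",
  structuredData: {
    "@context": "https://schema.org",
    "@type": "Recipe",
    name: "Pesto alla Genovese",
  },
};

function renderPage(components: Component[]): string {
  // Reusing a component automatically reuses its structured data, so the
  // description cannot drift away from the content it describes.
  const graph = components.flatMap(c => (c.structuredData ? [c.structuredData] : []));
  const head = `<script type="application/ld+json">${JSON.stringify({ "@graph": graph })}</script>`;
  const body = components.map(c => c.html).join("\n");
  return `<html><head>${head}</head><body>${body}</body></html>`;
}

console.log(renderPage([recipeIntro]));
```

Because the structured data travels with the component, every page variation assembled from components gets a head that matches its body, which addresses the drift problem described above.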

While the idea of content components is not new, it still is not widely embraced as the default way of thinking about content. People still think about pages, or fragments. Even content strategists talk suggestively about chunks of content, instead of trying to define what a chunk would be in practice. As a first step, I would like to see discussion of chunks disappear, to be replaced by discussion of components. Thinking about reusable components does not preclude the reuse of more granular elements such as variables and standardized copy. But the concept of a component provides a way to discuss pieces of content based around a common theme.

Components need to be defined as units to manage internally in content management systems before they will be recognized as a unit that matters externally. A section of content in HTML may not map to standard templates in a CMS right now, but that can change — if we define a component as a section. A section of content in HTML may not mean much to a search engine right now, but that can change — if search engines perceive such a unit as having a coherent meaning. The case for both intelligent content and semantic search will be more compelling if we can make such changes.

Final note

More dialog is needed between the semantic search community and the intelligent content community about how to integrate each approach. Both these approaches involve significant complexity, and understanding by each side of the other seems limited. I’ve discovered that some ideas about structured data and the semantic representation of entities have political sensitivities and a stormy past, which can make exploration of these topics challenging for outsiders. In this post I have questioned a current idea in structured data best practice, separating data from content, even though this practice wasn’t common a year ago, or even widely practical. Practices used in semantic search (such as favored formats and vocabulary terms) seem to fluctuate noticeably, compared to the long established principles guiding content strategy. The cause of structured data will benefit when it is discussed in the wider context of content production, management and governance, instead of in isolation from these issues. For its part, content strategy should become more specific with how to implement principles, especially as adaptive content becomes more common. I foresee possibilities to refine concepts in intelligent content through dialog with semantic search experts.

— Michael Andrews


  1. I am merely suggesting kinds of HTML structures that correspond to content components, rather than attempting to provide a formal definition. HTML5 has its quirks and nuances, and the topic deserves a wider discussion.  ↩
  2. A notable exception is Joe Pairman’s article, “Connecting with real-world entities: is structured content missing a trick?”.  ↩
  3. Embedding JSON-LD in components seems like it could offer benefits, though I hesitate to casually suggest standards on such a multifaceted issue. I don’t want the merits of a particular solution to detract attention from a thorough examination of the core issues associated with the problem.  ↩