If you put two things together side-by-side, what do they have in common? The answer depends on the point of view. Alternative viewpoints mold content identity differently. Designers of content experiences, such as content strategists and information architects, can use these viewpoints to surface different kinds of content relationships.
Three actors shape the identity of content: the author or curator; the audience; and the thing or things discussed in the content. Each brings its own perspective to what content is about:
Content identity as interpreted by an author or curator
Content identity as interpreted by the audience
Content about things that reveal dimensions of themselves
Each perspective plays a different role in framing the content experience.
Scene setting: the Curatorial Perspective
Scene setting lets people understand common themes in content that aren't obvious. An author or curator draws on their unique knowledge to construct a theme that unifies different content items. Such themes set expectations about how content relates to other content. This perspective is didactic in orientation.
A common label used to announce a theme is the series: a TV series, for instance, or a narrative trilogy. Sometimes a series is simply a way to divide something into smaller parts while keeping them connected: an article becomes a two-part article. A content series can express how different items are related according to the intentions of the author or the interpretation of a curator. It can be a sequence of items presented on a common theme. A series may also present the evolution of an item over time, such as successive versions. A building architect might show a series of images starting with a sketch, then a foam model, and finally a photo of the finished building.
A series presents a collection of items and shows how they belong together. The author or curator draws on their intimate knowledge of the content to point out connections between items that may not be self-evident. We find this in the museum world: an item on display is said to have originally belonged with other items that have since been dispersed. A curator might also indicate how several items embody a common theme, such as when similar paintings express a recurrent motif.
Any time items are defined by the values and judgments of the author (or curator), the audience must be willing to accept that valuation as relevant. So if a curator identifies items as “new and notable,” then the intended audience needs to buy that labeling.
Mirroring: the Audience Perspective
When mirroring, content reflects themes as the audience sees them. It represents concepts the way audiences think about them in order to attract people to the content. Mirroring differs from the authorial perspective, which expresses the author's intentions for the content. The audience perspective expresses how content is imagined by those who encounter it.
Brand names are perhaps the purest example of imagined content. Brands have no intrinsic identity: they depend entirely on the perceptions of customers to define what they mean. Even a conglomerate that sells many brand products can't dictate how consumers view these brands. The French brand house LVMH, which sells numerous luxury brand products, can't control whether consumers consider Dior more similar to Givenchy or to Louis Vuitton, even though it owns all three brands. In practice, Chinese consumers may hold different opinions about these relationships than Italian consumers do.
High-level concepts that are meaningful to audiences should reflect how audiences perceive them. For example, people associate different kinds of experiences with different vacation activities. Is bungee jumping active-fun, adventurous, or extreme? It is best to work with the audience's framework of values, rather than trying to impose one on them. Card sorting is useful for eliciting subjective perceptions about the identity of things. Yet card sorting is less reliable for defining the identity of concrete things, since it shifts attention away from an object's specific properties. Better, more empirical approaches are available for classifying concrete items.
Discovery: Perspectives based on Item Properties
Features of items can suggest themes. Object-defined themes let the things featured in the content speak for themselves. This involves more showing and less telling. Properties can define identities and reveal commonalities between different items. This approach promotes discovery of content relationships.
Faceted search interfaces, such as those found on e-commerce sites, are the most familiar implementation of property-driven identification. People choose values for various facets (properties) of items and get a list of items matching those values. Using properties to identify items is especially valuable for non-text content. Some digital asset management systems let people find images that match a certain shade of a color, regardless of the subject of the image. Properties can identify similarities and relationships that might not be expected from a higher-level label, and they support a more criteria-based consideration of identity. For example, when we think of travel items (things to pack) we generally have standard things in mind: toiletries, articles of clothing, and so on. But if we start with properties, the universe of travel items expands. We might define travel items as things that are both small and lightweight. We then discover small, lightweight versions of things we might not ordinarily pack for travel, but might enjoy having once we become aware of the option.
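As a rough sketch of how property-driven identification works, the following Python filters a toy catalog by facet values. The items, facet names, and values are invented for illustration, not drawn from any real system.

```python
# A minimal sketch of faceted (property-driven) filtering.
items = [
    {"name": "travel toothbrush", "size": "small", "weight": "light"},
    {"name": "cast-iron skillet", "size": "large", "weight": "heavy"},
    {"name": "folding chess set", "size": "small", "weight": "light"},
]

def match_facets(items, **facets):
    """Return items whose properties match every requested facet value."""
    return [item for item in items
            if all(item.get(facet) == value for facet, value in facets.items())]

# Defining "travel items" by properties rather than by category surfaces
# the chess set, which a category-based list of travel items might miss.
travel_friendly = match_facets(items, size="small", weight="light")
for item in travel_friendly:
    print(item["name"])
```

Starting from the facet values rather than a predefined category is what lets unexpected items surface in the results.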
Leveraging Diverse Viewpoints
There’s more than one way to define the relationship between items of content. I sometimes see people try to make a single hierarchical taxonomy serve two roles at once: an authoritative or objective classification of content, and a user-centric classification that reflects the subjective perceptions of users. They don’t realize they are forcing together different kinds of content identity, one relatively stable, the other contextual and subject to change.
Content can be considered objectively as it is; authoritatively as it is intended; and subjectively as it seems to various audiences. These differences offer thematic lenses for looking at content. They can be used to help audiences connect different items of content together in different ways: setting the scene for audiences so they understand relationships better, reflecting their existing attitudes to promote attraction to items of interest, and helping them discover things they didn’t know.
Metadata is the foundation of a digitally-driven organization. Good data and analytics depend on solid metadata. Executional agility depends on solid metadata. Yet few organizations manage metadata comprehensively. They act as if they can improvise their way forward, without understanding how all the pieces fit together. Organizational silos think about content and information in different ways, and are unable to trace the impact of content on organizational performance, or fully influence that performance through content. They need metadata that connects all their activities to achieve maximum benefit.
Babel in the Office
Let’s imagine an organization that sells a kitchen gadget.
The copywriter is concerned with how to attract interest from key groups. She thinks about the audience in terms of personas, and constructs messages around tasks and topics of interest to these people.
The product manager is concerned with how different customer segments might react to different combinations of features. She also tracks the features and price points of competitors.
The data analyst pores over shipment data for product stock keeping units (SKUs) to see which ZIP codes buy the most, and which return the product most often.
Each of these people supports the sales process. Each, however, thinks about the customer in a different way. And each defines the product differently as well. They lack a shared vocabulary for exchanging insights.
A System-generated Problem
The different ways of considering metadata are often embedded in the various IT systems of an organization. Systems are supposed to support people. Sometimes they trap people instead. How an organization implements metadata too often reveals how bad systems create suboptimal outcomes.
Organizations generate content and data to support a growing range of purposes. Data is everywhere, but understanding is stove-piped. Insights based on metadata are not easy to access.
We can broadly group the kinds of content that audiences encounter into three main areas: media, data, and service information.
Media includes articles, videos and graphics designed to attract and retain customers and encourage behaviors such as sharing, sign-ups, inquiries, and purchases. Such persuasive media is typically the responsibility of marketing.
Customer-facing data and packaged information support pre- and post-sales operations. This information is diverse and reflects the purpose of the organization. E-commerce firms have online product catalogs. Membership organizations such as associations or professional groups provide events information relating to conferences, and may offer modular training materials to support accreditation. Financial, insurance, and health maintenance organizations supply data relating to a customer’s account and activities. Product managers specify and supply this information, which is often the core of the product.
Service-related information centers on communicating and structuring tasks, and indicating status details. Often this dimension has a big impact on the customer experience, such as when the customer is undergoing a transition such as learning how to operate something new, or resolving a problem. Customer service and IT staff structure how tasks are defined and delivered in automated and human support.
Navigating between these realms is the user. He or she is an individual with a unique set of preferences and needs. This individual seeks a seamless experience, and at times, a differentiated one that reflects specific requirements.
Numerous systems and databases supply bits of content and information to the user, and track what the user does and requests. Marketing uses content management and digital asset management systems. Product managers feed into a range of databases, such as product information systems or event management systems. Customer service staff design and maintain their own systems to support training and problem resolution, and to diagnose issues. Customer relationship management software centralizes information about the customer to track their actions and identify cross-selling and up-selling opportunities. Customer experience engines can draw on external data sources to monitor and shape online behaviors.
All these systems are potential silos. They may “talk” to the other systems, but they don’t all talk in a language that all the human stakeholders can understand. The stakeholders instead need to learn the language of a specific ERP or CRM application made by SAP, Oracle or Salesforce.
Metadata is Too Important for IT to Own
Data grows organically. Business owners ask to add a field, and it gets added. Data can be rolled up and cross tabulated, but only to an extent. Different systems may have different definitions of items, and coordination relies on the matching of IDs between systems.
To their credit, IT staff can be masterful in pulling data from one system and pushing it into another. Data exchange — moving data between systems — has been the solution to de-siloing. APIs have made the task easier, as tight integration is not necessary. But just because data are exchanged, does not mean data are unified.
The answer to inconsistent descriptions of customers and content has been data warehousing. Everything gets dumped in the warehouse, and then a team sorts through the dump to try to figure out patterns. Data mining has its uses, but it is not a helpful solution for people trying to understand the relationships between users and items of content. It is often selective in what it looks at, and may be at a level of aggregation that individual employees can’t use.
Employees want visibility into the content they define and create, and to know how customers are using it. They want to track how content is performing, and change content to improve performance. Unfortunately, the perspectives of data architects and data scientists are not well aligned with those of operational staff. An analyst at Gartner noted that businesses “struggle to govern properly the actual data (and its business metadata) in the core business systems.”
A Common Language to Address Common Concerns
Too much measurement today concerns vaguely defined “stuff”: page views, sessions, or short-lived campaigns.
Often people compare variants A and B without defining precisely what differs between them. If the A and B variants differ in several properties, one doesn’t learn which attributes made the winning variant perform better, only which variant won. It’s like watching a horse race: you see which horse won, but not why it won.
A lot of A/B testing is done because good metadata isn’t in place, so variations need to be consciously planned and crafted in an experiment. If you don’t have good metadata, it is difficult to look retrospectively to see what had an impact.
In the absence of shared metadata, the impact of various elements isn’t clear. Suppose someone wanted to know how important the color of the gadget shown in a promotional video is on sales. Did featuring the kitchen gadget in the color red in a how-to promotional video increase sales compared to other colors? Do content creators know which color to feature in a video, based on past viewing stats, or past sales? Some organizations can’t answer these questions. Others can, but have to tease out the answer. That’s because the metadata of the media asset, the digital platform, and the ordering system aren’t coordinated.
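A small sketch can show the kind of retrospective analysis that becomes possible once the media system and the ordering system describe assets with shared metadata. All field names and figures below are invented for illustration.

```python
# Views come from the digital platform; orders come from the ordering system.
# Both describe the asset via the same "video_id" and "featured_color"
# metadata, which is the coordination the text says is often missing.
video_views = [
    {"video_id": "v1", "featured_color": "red", "views": 1000},
    {"video_id": "v2", "featured_color": "black", "views": 1200},
]
orders = [
    {"video_id": "v1", "units": 80},
    {"video_id": "v2", "units": 60},
]

units_by_video = {o["video_id"]: o["units"] for o in orders}

# Conversion rate per featured color: only computable because the two
# systems share the asset metadata.
conversion = {
    v["featured_color"]: units_by_video[v["video_id"]] / v["views"]
    for v in video_views
}
print(conversion)
```

With this coordination in place, the question "which color to feature" becomes a lookup over past data rather than a freshly designed experiment.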
Metadata lets you do some forensics: to explore relationships between things and actions. It can help with root cause analysis. Organizations are concerned with churn: customers who decide not to renew a service or membership, or stop buying a product they had purchased regularly. While it is hard to trace all the customer interactions with an organization, one can at least link different encounters together to explore relationships. For example, do the customers who leave tend to have certain characteristics? Do they rely on certain content — perhaps help or instructional content? What topics were people who leave most interested in? Is there any relationship between usage of marketing content about a topic, and subsequent usage of self-service content on that topic?
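The churn questions above amount to joining content-usage records to customer outcomes through a shared customer ID. The following toy sketch, with invented data, shows the shape of that analysis.

```python
# Customer outcomes and content views, linked by a shared customer ID.
customers = [
    {"id": "c1", "churned": True},
    {"id": "c2", "churned": False},
    {"id": "c3", "churned": True},
]
content_views = [
    {"customer_id": "c1", "topic": "troubleshooting"},
    {"customer_id": "c1", "topic": "troubleshooting"},
    {"customer_id": "c2", "topic": "recipes"},
    {"customer_id": "c3", "topic": "troubleshooting"},
]

def topic_rate(topic, churned):
    """Share of customers with the given churn status who viewed the topic."""
    group = [c["id"] for c in customers if c["churned"] == churned]
    viewers = {v["customer_id"] for v in content_views if v["topic"] == topic}
    return sum(1 for cid in group if cid in viewers) / len(group)

# Compare topic usage between churned and retained customers.
print(topic_rate("troubleshooting", churned=True))
print(topic_rate("troubleshooting", churned=False))
```

A gap between the two rates is a starting point for root cause analysis, not proof of causation, but it is only visible at all when encounters can be linked.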
There is a growing awareness that how things are described internally within an organization needs to relate to how they are encountered outside the organization. Online retailers are grappling with how to synchronize the metadata in product information management systems with the metadata they must publish online for SEO. These areas are starting to converge, but not all organizations are ready.
Metadata’s Connecting Role
Metadata provides meaningful descriptions of elements and actions. Connecting people and content through metadata entails identifying the attributes of both the people and the content, and the relationships between them. Diverse business functions need uniform ways to describe important attributes of people and content, using a common vocabulary to indicate values.
The end goal is a unified description that both provides a single view of the customer and gives the customer a single view of the organization.
Different stakeholders need different levels of detail. These differences involve both the granularity of facets covered, and whether information is collected and provided at the instance level or in aggregation. One stakeholder wants to know about general patterns relating to a specific facet of content or type of user. Another stakeholder wants precise metrics about a broad category of content or user. Brands need to map the interests of different stakeholders onto one another to provide a common basis for tracing information.
Much business metadata is item-centric. Customers and products have IDs, which form the basis of what is tracked operationally. Meanwhile, much content is described rather than ID’d. These descriptions may not map directly to operational business metadata. Operational business classifications such as product lines and sales and distribution territories don’t align with content description categories involving lifestyle-oriented product descriptions and personas. Content metadata sometimes describes high-level concepts that are absent from business metadata, which typically focuses on concrete properties.
The internal language an enterprise uses to describe things doesn’t match the external language of users. We can see how terminology and focus differ in the diagram below.
Not only do the terminologies not match, the descriptors often address different realms. Audience-centric descriptions are often associated with outside sources such as user-generated content, social media interactions, and external research. Business-centric metadata, in contrast, reflects information captured on forms, or is based on internal implicit behavioral data.
Brands need a unified taxonomy that the entire business can use. They need to become more audience-centric in how they think about and describe people and products. Consider the style of products. Some people might choose products based on how they look: after they buy one modern-style stainless product, they are more inclined to buy an unrelated product that happens to have the same modern stainless style, because the two seem to go together in their home. While some marketing copy and imagery might feature these items together, they aren’t associated in the business systems, since they represent different product categories. From the perspective of sales data, any follow-on sales appear as statistical anomalies, rather than as opportune cross-selling. The business doesn’t track products according to style in any detail, which limits its ability to curate how products are featured in marketing content.
The gap between the business’s definition of the customer and the audience’s self-definition can be even wider. Firms have solid data about what a customer has done, but may not manage information relating to people’s preferences. Admittedly it is difficult to know the preferences of individuals precisely and in detail, but there are opportunities to infer them. By considering content as an expression of individual preferences and values, one can infer some preferences of individuals based on the content they look at. For example, for people who look at information on the environmental impact of the product, how likely are they to buy the product compared with people who don’t view this content?
Steps toward a Common Language
Weaving together different descriptions is not a simple task. I will suggest four approaches that can help to connect metadata across different business functions.
First, the entire business should use the same descriptive vocabulary wherever possible. Mutual understanding increases the less jargon is used. If business units need to use precise, technical terminology that isn’t audience friendly, then a synonym list can provide a one-to-one mapping of terms. Avoid having different parties talk in different ways about things that are related and similar, but not identical. Saying something is “kind of close” to something else doesn’t help people connect different domains of content easily.
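A synonym list of this kind can be as simple as a one-to-one lookup. The following sketch uses invented kitchen-product terms to show the idea.

```python
# A one-to-one synonym list mapping precise internal terminology to
# audience-friendly terms. The terms here are hypothetical examples.
synonyms = {
    "induction-compatible cookware": "works on induction stoves",
    "brushed austenitic steel": "stainless steel",
}

def audience_term(internal_term):
    """Translate an internal term; fall back to the term itself."""
    return synonyms.get(internal_term, internal_term)

print(audience_term("brushed austenitic steel"))
```

The one-to-one constraint matters: if a term maps to several "kind of close" alternatives, the mapping no longer connects the two vocabularies reliably.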
Second, one should cross-map different levels of detail of concern to various business units. Copywriters would be overwhelmed having to think about 30 customer segments, though that number might be right for various marketing analysis purposes. One should map the 30 segments to the six personas the copywriter relies on. Figure out how to roll up items into larger conceptual categories, or break down things into subcategories according to different metadata properties.
Third, identify crosscutting metadata topics that aren’t the primary attributes of products and people, but can play a role in the interaction between them. These might be secondary attributes such as the finish of a product, or more intangible attributes such as environmental friendliness. Think about themes that connect unrelated products, or values that people have that products might embody. Too few businesses think about the possibility that unrelated things might share common properties that connect them.
Fourth, brands should try to capture and reflect the audience-centric perspective as much as possible in their metadata. One probably doesn’t have explicit data on whether someone enjoys preparing elaborate meals in the kitchen, but there could be scattered indications relating to this. People might view pages about fancy or quick recipes — the metadata about the content combined with viewing behavior provides a signal of audience interest. Visitors might post questions about a product suggesting concern about the complexity of a device — which indicate perceptions audiences have about things discussed in content, and suggest additional content and metadata to offer. Behavioral data can combine with metadata to provide another layer of metadata. These kinds of approaches are used in recommender systems for users, but could be adapted to provide recommendations to brands about how to change content.
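A sketch of the recipe-viewing example: content metadata combined with viewing behavior yields an inferred preference. The page paths, metadata values, and the simple majority-count rule are all illustrative assumptions.

```python
# Each page carries metadata about the kind of cooking it describes.
page_metadata = {
    "/recipes/souffle": {"cooking_style": "elaborate"},
    "/recipes/5-minute-omelette": {"cooking_style": "quick"},
    "/recipes/croquembouche": {"cooking_style": "elaborate"},
}

def infer_style(pages_viewed):
    """Return the cooking style a visitor's views most often signal."""
    counts = {}
    for page in pages_viewed:
        style = page_metadata[page]["cooking_style"]
        counts[style] = counts.get(style, 0) + 1
    return max(counts, key=counts.get)

# Viewing behavior becomes a new layer of audience metadata.
print(infer_style(["/recipes/souffle", "/recipes/croquembouche",
                   "/recipes/5-minute-omelette"]))
```

Real recommender systems use far richer signals and models, but the principle is the same: behavior plus content metadata produces metadata about the audience.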
An Ambitious Possibility
Metadata is a connective tissue in an organization, describing items of content, as well as products and people in contexts not related to content. As important as metadata is for content, it will not realize its full potential until content metadata is connected to and consistent with metadata used elsewhere in the organization. Achieving such harmonization represents a huge challenge, but it will become more compelling as organizations seek to understand how content impacts their overall performance.
Digital content is undergoing a metamorphosis. It is no longer about fixed documents. But neither is it just a collection of data. It is something in-between, yet we haven’t developed a vivid and shared way to conceive and discuss precisely what that is. We see evidence of this confusion in the vocabulary used to describe content meaning. We talk about content as structurally rich, as semantic, as containing structured data. Behind these labels are deeper convictions: whether content is fundamentally about documents or data.
Content has evolved into a complex experience, composed of many different pieces. We need new labels to express what these pieces mean.
“The moment it all changed for me was the moment when Google Maps first appeared. Because it was a software application—not a set of webpages, not a few clever dynamic calls, but almost aggressively anti-document. It allowed for zooming and panning, but it was once again opaque. And suddenly it became clear that the manifest destiny of the web was not accessibility. What was it? Then the people who advocated for a semantically structured web began to split off from the mainstream and the standards stopped coming so quickly.” — Paul Ford in The Manual
In the traditional world of documents, meaning is conveyed through document-centric metadata. Publishers govern the document with administrative metadata, situate sections of the document using structural metadata, and identify and classify the document with descriptive metadata. As long as we considered digital content as web pages, we could think of them as documents, and could rely on legacy concepts to express the meaning of the content.
But web pages should be more than documents. Documents are unwieldy. The World Wide Web’s creator, Tim Berners-Lee, started agitating for “Raw data now!” Developers considered web pages as “unstructured data” and advocated the creation and collection of structured data that machines could use. What is valuable in content got redefined as data that could be placed in a database table or graph. Where documents deliver a complete package of meaning, data structures define meaning on a more granular level as discrete facts. Meaningful data in a structured format can be extracted and inserted into apps. In the paradigm of structured data, the meaning of an entity should be available outside of the context with which it was associated. Rather than define what the parts of documents mean, structured data focuses on what fragments of information mean independently of context.
Promoters of structured data see possibilities to create new content by recombining fragments of information. Information boxes, maps, and charts are content forms that can dynamically refresh with structured data. These are clearly important developments. But these non-narrative content types are not the only forms of content reuse.
The Unique Needs of Component Content
A new form of content emerged that was neither a document nor a data element: the content component. In HTML5, component-level content might be sections of text, videos, images, and perhaps tables. These items have meaning to humans like documents, but unlike documents, they can be recombined in different ways, and so carry meaning outside the context of a document, much the way structured data does.
Component content needs various kinds of descriptions to be used effectively. Traditional document metadata (administrative, structural, and descriptive) is useful for content components. It is also useful to know what specific entities are mentioned within a component, so structured data is nice to have. But content components have further needs. If we are moving around discrete components that carry meaning to audiences, we want to understand what specific meaning is involved, so we can match components with each other appropriately. Component-specific metadata addresses the purpose of the component.
Component metadata allows content to be adaptable: to match the needs of the user according to the specific circumstances they are in. We don’t have well-accepted terms to describe this metadata, so its importance tends to get overlooked. Various kinds of component metadata can characterize the purpose of a component. Though metadata relating to these facets aren’t yet well-established, there are signs of interest as content creators think about how to curate an experience for audiences using different content components.
Contextual metadata indicates the context in which a component should be used. This might be the device the component is optimized for, the geolocation it is intended for, the specific audience variation, or the intended sequencing of the component relative to other components.
Performance metadata addresses the intended lifecycle of the component. It indicates whether the component is meant to be evergreen, seasonal, or ephemeral, and whether it has a mass or niche use. It helps authors answer how the component should be used, and what kind of heavy lifting it is expected to do.
Sentiment metadata describes the mood or the metaphor associated with the component. It answers what kind of impression on the audience the component is expected to make.
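To make these three facets concrete, here is a sketch of what a component metadata record might look like. The field names and vocabulary values are hypothetical, not an established standard.

```python
# A hypothetical metadata record for a single content component,
# expressing the contextual, performance, and sentiment facets.
component = {
    "id": "hero-image-034",
    "type": "image",
    "contextual": {
        "device": ["tablet", "desktop"],
        "geolocation": "EU",
        "audience_variation": "first-time visitor",
    },
    "performance": {
        "lifecycle": "seasonal",   # evergreen | seasonal | ephemeral
        "reach": "mass",           # mass | niche
    },
    "sentiment": {
        "mood": "aspirational",
        "metaphor": "fresh start",
    },
}

def usable_in(component, device, geolocation):
    """Check whether a component's contextual metadata fits a delivery context."""
    ctx = component["contextual"]
    return device in ctx["device"] and geolocation == ctx["geolocation"]

print(usable_in(component, "tablet", "EU"))
```

Metadata like this is the "glue" an assembly process would consult when deciding which component variant to deliver in a given situation.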
We can see how component metadata can matter by looking at a fairly simple example: using a photographic image. We might use different images together with the same general content according to different circumstances. Different images might express different metaphors presented to different audience segments. We might want to restrict the use of certain images to ensure they are not overused. We need different image sizes to optimize the display of the image on different devices. While structured data specialists might be preoccupied with what entities are shown in an image, in this example we don’t really care who the models appearing in the stock image are. We are concerned with the implicit meaning of the image in different contexts, rather than its explicit meaning.
The Challenges of Context-free Metadata
Metadata has a problem: it hasn’t yet evolved to address the changing context in which a content component might appear. We still talk about metadata as appearing in the head of a document, or in the body of a document, without considering that the body of the document is changing. We run the risk that the head and the body get out of alignment.
The rise of component variation is a key feature of the approach that’s commonly referred to as intelligent content. Intelligent content, according to Ann Rockley’s definition, involves structurally rich and semantically categorized content. Intelligent content is focused on making content components interchangeable.
Discussions of intelligent content rarely get too explicit about what metadata is needed. Marcia Riefer Johnston addressed the topic in an article entitled Intelligent Content: What Does ‘Semantically Categorized’ Mean? She says: “Semantic categories enable content managers to organize digital information in nearly limitless ways.” It’s a promising vision, but we still don’t have a sense of where the semantic categories come from, and what precisely they consist of. The inspiration for intelligent content, DITA, is an XML-based approach that allows publishers to choose their own metadata. DITA is a document-centric way of managing content, and accordingly assumes that the basic structure of the document is fixed, and only specific elements can be changed within that stable structure. Intelligent content, in contrast, suggests a post-document paradigm. Again, we don’t get a sense of what structurally rich means outside of a fixed document structure. How can one piece together items in “limitless ways?” What is the glue making sure these pieces fit together appropriately?
Content intelligence involves not only how components are interchangeable, but also how they are interoperable — intelligible to others. Intelligent content discussions often take a walled-garden approach. They focus on the desirability of publishers providing different combinations of content, but don’t discuss how these components might be discovered by audiences. Intelligent content discussions tend to assume that the audience discovers the publisher (or that the publisher identifies the audience via targeting), and then the publisher assembles the right content for the audience. But the process could be reversed, where the audience discovers the content first, prior to any assembly by the publisher. How do the principles of semantically categorized and structurally rich content relate to SEO or Linked Data? Here, we start to see the collision between the document-centric view of content and the structured data view of it. Does intelligent content require publisher-defined and controlled metadata to provide its capabilities, or can it utilize existing, commonly used metadata vocabularies to achieve these goals?
Content components already exist in the wild. Publishers are recombining components all the time, even if they don’t have a robust process governing this. Whether or not publishers talk about intelligent content, the post-document era has already started.
But we continue to talk about web pages as enduring entities that we can describe. We see this in discussions of metadata. Two styles of metadata compete with each other: metadata in the document head of a page, and metadata that is in-line, in the body of a page. Both these styles assume there is a stable, persistent page to describe. Both approaches fail because this assumption isn’t true in many cases.
The first approach involves putting descriptive metadata outside of the content. On a web page, it involves putting the description in the head, rather than the body. This is a classic document-centric style. It is similar to how librarians catalog books: the description of the book is on a card (or in a database) that is separate from the actual book. Books are fixed content, so this approach works fine.
The second approach involves putting the description in the body of the text. Think of it as an annotation. It is most commonly done to identify entities mentioned in the text. It is similar to an index of a book. As long as the content of the book doesn’t change, the index should be stable.
Yet web pages aren’t books. They change all the time. There may be no web page: just a wrapper for presenting a stream of content. What do we need to describe here, and how do we need to do that?
Structured Data’s Lost Bearings
When people want to identify entities mentioned in content, they need a way to associate a description of the entity with the content where it appears. Entity-centric metadata is often called structured data, a confusing term given the existence of other similar-sounding terms such as structured content and semantic structure. While structured data was originally a term used by data architects, the SEO community uses it to refer more specifically to search-engine markup using vocabulary standards such as Schema.org. The structure referred to in the term “structured data” is the structure of the vocabulary indicating the relationships associated with the description. It doesn’t refer to the structure of the content, and here is where problems arise.
While structured data excels at describing entities, it struggles to locate these entities in the content. The question SEO consultants wrestle with is what precisely to index: a web page, or a sentence fragment where the item is mentioned? There are two rival approaches for doing this. One can index entities appearing on a web page using a format called JSON-LD, which is typically placed in the document head of the page (though it does not have to be). Or one can index entities where they appear in the content using a format called RDFa, which is placed in-line in the body of the HTML markup.
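To make the contrast concrete, here is a minimal sketch of the two formats describing the same entity. The entity and its values are invented for illustration; only the general shape of the markup matters.

```html
<!-- Approach 1: JSON-LD in the document head. The description sits
     apart from wherever the entity is actually mentioned. -->
<head>
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Ada Lovelace",
    "jobTitle": "Mathematician"
  }
  </script>
</head>

<!-- Approach 2: RDFa in-line in the body, annotating the entity at
     the exact spot where it is mentioned. -->
<p vocab="https://schema.org/" typeof="Person">
  The program was written by <span property="name">Ada Lovelace</span>,
  a <span property="jobTitle">mathematician</span>.
</p>
```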
Both these approaches presume that the content itself is stable. But content changes continually, and both approaches founder because they are based on a page-centric view of content instead of a component-centric view.
First, consider the use of RDFa to describe the entities mentioned in a sentence. The metadata is embedded in the body of the page: it’s embodied metadata. It’s an appealing approach: one just needs to annotate what these entities are, so a search engine can identify them. But embedded in-line metadata turns out to be rather fragile. Such annotation works only as far as every relevant associated entity is explicitly mentioned in the text. And if the text mentions several different kinds of entities in a single paragraph, the markup gets complicated, because one needs to disambiguate the different entities so as not to confuse the search robots.
The big trouble starts when one changes the wording of texts containing embedded structured data. The entities mentioned change, which has a cascading impact on how the metadata used to describe these entities must be presented. What seemed a unified description of related entities can become disemboweled with even a minor change in a sentence. The structured data didn’t have a stable context with which to associate itself.
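A sketch of this fragility, using an invented sentence marked up with RDFa: a routine rewording removes one of the entity mentions, and the in-line description silently degrades with it.

```html
<!-- Before the edit: both people are explicitly named and annotated. -->
<p vocab="https://schema.org/">
  <span typeof="Person"><span property="name">Ada Lovelace</span></span>
  collaborated with
  <span typeof="Person"><span property="name">Charles Babbage</span></span>.
</p>

<!-- After a routine rewording, the second entity is no longer named
     in the text, so there is nothing left to annotate: the extracted
     data now describes one person instead of two. -->
<p vocab="https://schema.org/">
  <span typeof="Person"><span property="name">Ada Lovelace</span></span>
  collaborated with her mentor.
</p>
```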
Given the hassles of RDFa, many SEO consultants lately are promoting the virtues of putting the structured data in the head of a page using JSON-LD. The description in the head is separate from the body of the content, much like the library catalog card describing a book is separate from the book and its contents. The description is separate from the context in which it appears.
Supporters of JSON-LD note that the markup is simpler than RDFa, and less prone to glitchiness. That is true. But the cost of this approach is that the structured data loses its context. It too is fragile, in some ways more so than RDFa.
Putting data in the document head, outside of the body of the content, is to decapitate the data. We now have data that is vaguely associated with a page, though we don’t know exactly how. Consider Paul Ford’s recent 32,000-word article for Business Week on programming. He mentioned countless entities in the article; with this approach, descriptions of all of them would be placed in the head. You might know an entity was mentioned somewhere, but you can’t be sure where.
With decapitated data, we risk having the description of the content get out of alignment with what the content is actually discussing. Since the data is not associated with a context, it can be hard to see that the data is wrong. You might revise the content, adding and deleting entities, and not revise the document head data accurately.
The management problem becomes greater when one thinks about content as components rather than pages. We want to change content components, but the metadata is tied to a page, rather than a component. So every variation of a page requires a new JSON-LD profile in the document head that will match the contents of the variation. As a practical matter this approach is untenable. A dynamically-generated page might have dozens or hundreds of variations based on different combinations of components.
Structured data largely exists to serve the needs of search engines. Its practices tend to define content in terms of web pages. Structured data can describe a rendered page, but isn’t geared to describe content components independently of a rendered page. To indicate the main theme of a piece of content, Schema.org offers a property called “mainContentOfPage”, reflecting an expectation that there is one webpage with an overriding theme. Even if a webpage exists for a desktop browser, it may be a series of short sections when viewed on a mobile device, and won’t have a single persistent “main content” theme. Current structured data practices don’t focus on how to describe entities in unbundled content — entities associated with discrete components such as a section of text. Each reuse of content involves a re-creation of structured data in the document head.
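The page-centric assumption shows up directly in the markup. A rough sketch of how Schema.org’s mainContentOfPage property is meant to be used (the surrounding elements are invented for illustration):

```html
<!-- WebPage and WebPageElement are Schema.org types; the model
     presumes one page with one overriding "main content" block. -->
<body vocab="https://schema.org/" typeof="WebPage">
  <main property="mainContentOfPage" typeof="WebPageElement">
    <!-- the single main theme this vocabulary expects -->
  </main>
</body>
```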
It is important not to confuse structured data with structured content. Structured data needs to work in concert with structured content delivered through content management systems, instead of operating independently of it.
When structured data gets separated from the content it represents, it creates confusion for content teams about what’s important. Decapitated data can foster an attitude that audience-facing content is a second class citizen. One presentation on the benefits of JSON-LD for SEO advised: “Keep the Data and Presentation layer separate.” Content in HTML gets reduced to presentation: a mere decoration. Such advocates talk about supplying a data “payload” to Google. It is true that structured data can be used in apps, but some structured data advocates create a false dichotomy between web pages and data-centric apps, because they are stuck in a paradigm that content equals web pages.
This perspective can lead to content reductionism: only the facts mentioned in the content matter. The primary goal is to free the facts from the content, so the facts can be used elsewhere by Google and others. Content-free data works fine for discussing commodities such as gas prices. But for topics that matter most to people, having context around the data is important. Decapitated data doesn’t support context: it works against it, by making it harder to provide more contextually appropriate information. Either the information is yanked out of its context entirely, or the reader is forced to locate it within the body of the content on her own.
The ultimate failure of decapitated data occurs when the data bears no relationship to the content. This is a known bug of the approach, and one no one seems to have a solution for. According to the W3C, “it is more difficult for search engines to verify that the JSON-LD structured data is consistent with the visible human-readable information.” When what’s important gets defined as what’s put in a payload for Google, the temptation exists to load things in the document head that aren’t discussed. Just as black hat operators years ago stuffed fake keywords into meta tags in the document head to game search engines, there exists a real possibility that once JSON-LD becomes more popular, unscrupulous operators will put black hat structured data in the document head that’s unrelated to the content. No one, not least the people who have been developing the JSON-LD format, wants to see this happen.
Unbundling Meaning for Unbundled Content
The intelligent content approach stresses the importance of unbundling content. The web page as a unit of content is dying. Unbundled content can adapt to the display and interactive needs of mobile devices, and allow for content customization.
Metadata needs to describe content components, not just pages of content. Some of this metadata will describe the purpose of the component. Other metadata will describe the entities discussed in the component.
There are arguments over whether to annotate entities in content with metadata, or whether to re-create the entities in a supplemental file. Part of the debate concerns the effort involved: the effort of inputting the content structure, versus the effort involved in re-entering the data described by the structure. One expert, Alex Miłowski at the University of California Berkeley, suggests a hybrid approach could be most efficient and accurate. Regardless of format, structured data will be more precise and accurate if it refers to a reusable content component, rather than a changeable sentence or changeable web page. Components are swappable and connectable by design. They are units of communication expressing a unified purpose, which can be described in an integrated way with less worry that something will change that will render the description inaccurate. It is easier to verify the accuracy of the structured data when it is closely associated with the content. Since content components are designed for reuse, one can reuse the structured data linked to the component.
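One way to picture component-scoped structured data: the description travels with the component and names it via an identifier, so reusing the component reuses its description. This is a sketch of a possible direction, not an established practice; the identifiers and values are invented.

```html
<!-- A self-describing component: the JSON-LD block is stored and
     reused together with the section it describes, and the @id ties
     the description to this specific component rather than to a page. -->
<section id="lovelace-bio">
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://example.com/components/lovelace-bio#person",
    "name": "Ada Lovelace"
  }
  </script>
  <h2>About Ada Lovelace</h2>
  <p>…</p>
</section>
```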
While the idea of content components is not new, it still is not widely embraced as the default way of thinking about content. People still think about pages, or fragments. Even content strategists talk suggestively about chunks of content, instead of trying to define what a chunk would be in practice. As a first step, I would like to see discussion of chunks disappear, to be replaced by discussion of components. Thinking about reusable components does not preclude the reuse of more granular elements such as variables and standardized copy. But the concept of a component provides a way to discuss pieces of content based around a common theme.
Components need to be defined as units to manage internally in content management systems before they will be recognized as a unit that matters externally. A section of content in HTML may not map to standard templates in a CMS right now, but that can change — if we define a component as a section. A section of content in HTML may not mean much to a search engine right now, but that can change — if search engines perceive such a unit as having a coherent meaning. The case for both intelligent content and semantic search will be more compelling if we can make such changes.
More dialog is needed between the semantic search community and the intelligent content community about how to integrate each approach. Both these approaches involve significant complexity, and understanding by each side of the other seems limited. I’ve discovered that some ideas about structured data and the semantic representation of entities have political sensitivities and a stormy past, which can make exploration of these topics challenging for outsiders. In this post I have questioned a current idea in structured data best practice, separating data from content, even though this practice wasn’t common a year ago, or even widely practical. Practices used in semantic search (such as favored formats and vocabulary terms) seem to fluctuate noticeably, compared to the long established principles guiding content strategy. The cause of structured data will benefit when it is discussed in the wider context of content production, management and governance, instead of in isolation from these issues. For its part, content strategy should become more specific with how to implement principles, especially as adaptive content becomes more common. I foresee possibilities to refine concepts in intelligent content through dialog with semantic search experts.
— Michael Andrews
I am merely suggesting kinds of HTML structures that correspond to content components, rather than attempting to provide a formal definition. HTML5 has its quirks and nuances, and the topic deserves a wider discussion. ↩
Embedding JSON-LD in components seems like it could offer benefits, though I hesitate to casually suggest standards on such a multifaceted issue. I don’t want the merits of a particular solution to detract attention from a thorough examination of the core issues associated with the problem. ↩