Publishers understandably want to leverage what they’ve already produced when creating new content. They need to decide how to best manage and deliver new content that’s related to — but different from — existing content. To create different versions of content, they have three options, which I will refer to as the template-based, compositional, and elastic approaches.
To understand how the three approaches differ, it is useful to consider a critical distinction: how content is expressed, as distinct from the details the content addresses.
When creating new content, publishers face a choice of what existing material to reuse, and what to change. Should they change the expression of existing content, or the details of that content? The answer depends on whether they are seeking to amplify an existing core message, or to extend that message to cover additional material. A core message straddles expression (how something is said) and details (specifics), which is one reason these two aspects, the style and the substance, get lumped together into a generic idea of “content”. Telling an author simply to “change the content” does not indicate whether to change the connotation or the denotation of the content. Authors need more clarity about the goal of the change.
Content variation results from the interaction of the two dimensions:
The content expression (the manner of presentation, whether written prose or other manifestations such as video)
The details (facts and concrete information).
Both expression and details can vary. Publishers can change both the expression and the details of content, or they can focus on just one of the dimensions.
The interplay of content expression and details can explain a broad range of content variation. Content management professionals commonly explain content variation by referring to a more limited concept: content structure — the inclusion and arrangement of chunk-size components or sections. Content structure does influence content variation in many cases, but not in all cases. Expressive variation can result when content is made up of different structural components. Variation in detail can take place within a common structural component. But rearranging content structure is not the only, or even necessarily the preferred, way to manage content variation. Much content lacks formal structure, even though the content follows distinguishable variations that are planned and managed.
The expression of content (for example, the wording used) can be either fixed (static, consistent or definitive) or fluid (changeable or adaptable). A fixed expression is present when all content sounds alike, even if the particulars of the content are different. As an example, a “form” email is a fixed expression, where the only variation is whether the email is addressed to Jack or to Jill. When the expression of content is fluid, in contrast, the same basic content can exist in many forms. For example, an anecdote could be expressed as a written short story, as a dramatized video clip, or as a comic book.
Details in content can also be either fixed, or they can vary. Some details are fixed, such as when all webpages include the same contact details. Other content is entirely about the variation of the details. For example, tables often look similar (their expression is fixed), though their details vary considerably.
Diagram showing how both expression and details in content can vary (revised). NB: elastic content can also fluidly address a diverse range of details, but its unique power comes from its ability to express the same fixed details different ways.
Now let’s look at three approaches for varying content. Only one relies on leveraging structures within content; the other two work without structure.
Template-based content has a fixed expression. Think of a form letter, where details are merged into a fixed body of text. With template-based content, the details vary, and are frequently what’s most significant about the content. Template-based content resembles a “mad libs” style of writing, where the basic sentence structure is already in place, and only certain blanks get filled in with information. Much of the automated writing referred to as robo-journalism relies on templates. The Associated Press will, for example, feed variables into a template to generate thousands of canned sports and financial earnings reports. Needless to say, the rigid, fixed expression of template-based writing rates low on the creativity scale. On the other hand, fixed expression is valuable when even subtle changes in wording might cause problems, such as in legal disclaimers.
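The fill-in-the-blank mechanics of template-based content can be sketched in a few lines of code. This is only an illustration: the report wording, company name, and figures are all hypothetical.

```python
from string import Template

# Template-based content: the expression is fixed, and only the
# informational details (the merged variables) change.
# The wording and example values below are hypothetical.
earnings_template = Template(
    "$company reported quarterly revenue of $revenue million, "
    "$direction $percent percent from a year earlier."
)

def render_report(details: dict) -> str:
    """Merge variable details into the fixed expression."""
    return earnings_template.substitute(details)

print(render_report({
    "company": "Acme Corp",
    "revenue": "120",
    "direction": "up",
    "percent": "8",
}))
# Acme Corp reported quarterly revenue of 120 million, up 8 percent from a year earlier.
```

Every report generated this way sounds identical; only the blanks differ, which is precisely why the approach scales to thousands of canned reports but rates low on creativity.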
Compositional content relies on structural components. It is composed from different fixed components, through a process known as transclusion. These components may include informational variables, but most often do not. The expression of the content varies according to which components are selected and included in the delivered content. Compositional content allows some degree of customization, to reflect variations in the interests and level of detail desired. Content composed from different components can offer both expressive variation and consistency to some degree, though there is ultimately an intrinsic tradeoff between those goals. The biggest limitation of compositional content is that its range of variation is limited. Compositional variation increases complexity, and managing that complexity tends to push publishers toward consistency rather than variation. Compositional content can’t generate novel variation, since it must rely on existing structures to create new variants.
Elastic content is content that can be expressed in a multitude of ways. With elastic content, the core informational details stay constant, but how these details are expressed will change. None of the content is fixed, except for the details. In fact, so much variation in expression is possible that publishers may not notice how they can reuse existing informational details in new contexts. Elastic content can even morph in form, by changing media.
Authors tend to repeat facts in content they create. They may want to keep mentioning the performance characteristic of a product, or an award that it has won. Such proof points may appeal to the rational mind, but don’t by themselves stimulate much interest. To engage the reader’s imagination, the author creates various stories and narratives that can illustrate or reinforce facts they want to convey. Each narrative is a different expression, but the core facts stay constant. Authors rely on this tactic frequently, but sometimes unconsciously. They don’t track how many separate narratives draw on the same facts. They can’t tell if a story failed to engage audiences because its expression was dull, or because the factual premise accompanying the narrative had become tired, and needs changing. When authors track these informational details with metadata, they can monitor which stories mention which facts, and are in a better position to understand the relationships between content details and expression.
Machines can generate elastic content as well. When information details are defined by metadata, machines can use the metadata to express the details in various ways. Consider content indicating the location of a store or an event. The same information, captured as a geo-coordinate value in metadata, can be expressed multiple ways. It can be expressed as a text address, or as a map. The information can also be augmented, by showing a photo of the location, or with a list of related venues that are close by. The metadata allows the content to become versatile.
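A minimal sketch of this idea: a single geo-coordinate, captured as metadata, expressed two different ways. The field names and the map URL pattern are illustrative assumptions, not any particular service’s API.

```python
# One fixed detail (a geo-coordinate held as metadata), expressed in
# multiple ways. Field names and the map URL pattern are hypothetical.
location = {"name": "Main Street Store", "lat": 38.7223, "lon": -9.1393}

def as_sentence(loc: dict) -> str:
    """Express the coordinate as readable text."""
    return f"{loc['name']} is located at {loc['lat']}, {loc['lon']}."

def as_map_link(loc: dict) -> str:
    """Express the same coordinate as a link to a map view."""
    return f"https://maps.example.com/?lat={loc['lat']}&lon={loc['lon']}"

print(as_sentence(location))
print(as_map_link(location))
```

The detail never changes; only the expression does. Adding a third expression (a photo, a list of nearby venues) requires no change to the underlying metadata.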
As real-time information becomes more important in the workplace, individuals are discovering they want that information in different ways. Some people want spreadsheet-like tools they can use to process and refine the raw alphanumeric values. Others want data summarized in graphic dashboards. And a growing number want the numbers and facts translated into narrative reports that highlight, in sentences, what is significant about the information. Companies are now offering software that assesses information, contextualizes it, and writes narratives discussing the information. In contrast to the fill-in-the-blank feeding of values into a template, this content is not fixed. The content relies on metadata (rather than the blind feed used by templates); the description changes according to the information involved. The details of the information influence how the software creates the narrative. By capturing key information as metadata, publishers can amplify how they express that information in content. Readers can choose the medium through which to access the information.
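The contrast between blind template filling and data-aware narrative generation can be sketched as follows. The thresholds and phrasing here are invented purely for illustration.

```python
def describe_change(metric: str, old: float, new: float) -> str:
    """A toy data-to-text sketch: unlike a template, the wording itself
    adapts to what the data says. Thresholds are arbitrary examples."""
    change = (new - old) / old * 100
    if abs(change) < 1:
        return f"{metric} was essentially flat."
    direction = "climbed" if change > 0 else "fell"
    qualifier = "sharply " if abs(change) > 10 else ""
    return f"{metric} {direction} {qualifier}{abs(change):.0f} percent."

print(describe_change("Revenue", 100.0, 120.0))
# Revenue climbed sharply 20 percent.
print(describe_change("Headcount", 100.0, 100.5))
# Headcount was essentially flat.
```

A template would force both results into the same sentence frame; here the shape of the sentence, not just its blanks, depends on the details.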
The next frontier in elastic content will be conversational interfaces, where natural language generation software will use informational details described with metadata, to generate a range of expressive statements on topics. The success of conversational interfaces will depend on the ability of machines to break free from robotic, canned, template-based speech, and toward more spontaneous and natural sounding language that adapts to the context.
Weighing Options
How can publishers leverage existing content, so they don’t have to start from scratch? They need to understand which dimensions of their content might change. They also need to be realistic about which future needs can be anticipated and planned for. Sometimes publishers over-estimate how much of their content will stay consistent, because they don’t anticipate the circumstantial need for variation.
Information details that don’t change often, or may be needed in the future, should be characterized with metadata. In contrast, frequently changing and ephemeral details could be handled by a feed.
Standardized communications lend themselves to templates, while communications that require customization lend themselves to compositional approaches using different structural components. Any approach that relies on a fixed expression of content can be rendered ineffective when the essence of the communication needs to change.
The most flexible and responsive content, with the greatest creative possibilities, is elastic content that draws on a well-described body of facts. Publishers will want to consider how they can reuse information and facts to compose new content that will engage audiences.
Most organizations that create web content primarily focus on how to publish and deliver the content to audiences directly. In this age where “everyone is a publisher,” organizations have become engrossed in how to form a direct relationship with audiences, without a third party intermediary. As publishers try to cultivate audiences, some are noticing that audience attention is drifting away from their website. Increasingly, content delivery platforms are collecting and combining content from multiple sources, and presenting such integrated content to audiences to provide a more customer-centric experience. Publishers need to consider, and plan for, how their content will fit in an emerging framework of integrated, multi-source publishing.
The Changing Behaviors of Content Consumption: from bookmarks to snippets and cards
Bookmarks were once an important tool to access websites. People wanted to remember great sources of content, and how to get to them. A poster child for the Web 2.0 era was a site called Delicious, which combined bookmarking with a quaint labelling approach called a folksonomy. Earlier this year, Delicious, abandoned and forgotten, was sold at a fire-sale price of a few thousand dollars, for the scrap value of its legacy data.
People have largely stopped bookmarking sites. I don’t even know how to use them on my smartphone. It seems unnecessary to track websites anymore. People expect information they need to come to them. They’ve become accustomed to seeing snippets and cards that surface in lists and timelines within their favorite applications.
Delicious represents the apex of the publisher-centric era for content. Websites were king, and audiences collected links to them.
Single Source Publishing: a publisher-centric approach to targeting information
In the race to become the best source of information — the top bookmarked website — publishers have struggled with how a single website can successfully please a diverse range of audience needs. As audience expectations grew, publishers sought to create more specific web pages that would address the precise informational needs of individuals. Some publishers embraced single source publishing. Single source publishing assembles many different “bundles” of content that all come from the same publisher. The publisher uses a common content repository (a single source) to create numerous content variations. Audiences benefit when able to read custom webpages that address their precise needs. Provided the audience locates the exact variant of information they need, they can bookmark it for later retrieval.
By using single source publishing, publishers have been able to dramatically increase the volume of webpages they produce. That content, in theory, is much more targeted. But the escalating volume of content has created new problems. Locating specific webpages with relevant information in a large website can be as challenging as finding relevant information on more generic webpages within a smaller website. Single source publishing, by itself, doesn’t solve the information hunting problem.
The Rise of Content Distribution Platforms: curated content
As publishers focused on making their websites king of the hill, audiences were finding new ways to avoid visiting websites altogether. Over the past decade, content aggregation and distribution platforms have become the first port of call for audiences seeking information. Such platforms include social media such as Facebook, Snapchat, Instagram and Pinterest, aggregation apps such as Flipboard and Apple News, and a range of Google products and apps. In many cases, audiences get all the information they need while within the distribution or aggregation platform, with no need to visit the website hosting the original content.
Hipmunk aggregates content from other websites, as well as from other aggregators.
The rise of distribution platforms mirrors broader trends toward customer-driven content consumption. Audiences are reluctant to believe that any single source of content provides comprehensive and fully credible information. They want easy access to content from many sources. Early examples of this trend were the travel aggregators that allow shoppers to compare airfares and hotel rates from different vendor websites. The travel industry has fought hard to counter this trend, with limited success. Audiences are reluctant to rely on a single source, such as an airline or hotel website, to make choices about their plans. They want options. They want to know what different websites are offering, and to compare those options. They also want to know the range of perspectives on a topic. Various review and opinion websites, such as Rotten Tomatoes, present the judgments of different websites.
The movie review site Rotten Tomatoes republishes snippets of reviews from many websites.
Another harbinger of the future has been the evolution of Google search away from its original purpose of presenting links to websites, and toward providing answers. Consider Google’s “featured snippets,” which interpret user queries and provide a list of related questions and answers. Featured snippets are significant in two respects:
They present answers on the Google platform, instead of taking the user to the publisher’s website.
They show different related questions and answers, meaning the publisher has less control over framing how users consider a topic.
Google’s “featured snippets” present related questions together, with answers using content extracted directly from different websites.
Google draws on content from many different websites, and combines the content together. Google scrapes the content from different webpages, and reuses it however it decides is in the best interest of Google searchers. Website publishers can’t ask Google to include them in a featured snippet. They can only opt out, with a <meta name="googlebot" content="nosnippet"> tag, if they don’t want their content used by Google in such snippets. These developments illustrate how publishers no longer control exactly how their content is viewed.
A Copernican Revolution Comes to Publishing
Despite lip service to the importance of the customer, many publishers still have a publisher-centric mentality that imagines customers orbiting around them. The publisher considers itself the center of the customer’s universe. In this view, nothing has changed: customers seek out the publisher’s content and visit the publisher’s website. Publishers still expect customers to come to them. The customer is not at the center of the process.
Publishers do acknowledge the role of Facebook and Google in driving traffic, and more now publish directly on these platforms. Yet such measures fall short of genuine customer-centricity. Publishers still want to talk uninterrupted, instead of contributing information that will fill in the gaps in the audience’s knowledge and understanding. They expect audiences to read or view an entire article or presentation, even if that content contains information the audience already knows.
A publisher-centric mentality assumes the publisher can be, and will be, the one best source of information, covering everything important about the topic. The publisher decides what it believes the audience needs to know, then proceeds to tell the audience about all those things.
A customer-centric approach to content, in contrast, expects and accepts that audiences will be viewing many sources of content. It recognizes that no one source of content will be complete or definitive. It assumes that the customer already has prior knowledge about a topic, which may have been acquired from other sources. It also assumes that audiences don’t want to view redundant information.
Let’s consider content needs from an audience perspective. Earlier this month I was on holiday in Lisbon. I naturally consulted travel guides to the city from various sources such as Lonely Planet, Rough Guides and Time Out. Which source was best? While each source did certain things slightly better than their rivals, there wasn’t a big difference in the quality of the content. Travel content is fairly generic: major sources approach information in much the same way. But while each source was similar, they weren’t identical. Lisbon is a large enough city that no one guide could cover it comprehensively. Each guide made its own choices about what specific highlights of the city to include.
As a consumer of this information, I wanted the ability to merge and compare the different entries from each source. Each source has a list of “must see” attractions. Which attractions are common to all sources (the standards), and which are unique to one source (perhaps more special)? For the specific neighborhood where I was staying, each guide could only list a few restaurants. Did any restaurants get multiple mentions, which perhaps indicated exquisite food, but also possibly signaled a high concentration of tourists? As a visitor to a new city, I want to know about what I don’t know, but also want to know about what others know (and plan to do), so I can plan with that in mind. Some experiences are worth dealing with crowds; others aren’t.
The situation with travel content applies to many content areas. No one publisher has comprehensive and definitive information, generally speaking. People by and large want to compare perspectives from different sources. They find it inconvenient to bounce between different sources. As the Google featured snippets example shows, audiences gravitate toward sources that provide convenient access to content drawing on multiple sources.
A publisher-centric attitude is no longer viable. Publishers that expect audiences to read through monolithic articles on their websites will find audiences less inclined to make that effort. The publishers that win audience attention will be those who can unbundle their content, so that audiences can get precisely what they want and need (perhaps as a snippet on a card on their smartphone).
Platforms have re-intermediated the publishing process, inserting themselves between the publisher and the audience. Audiences are now more loyal to a channel that distributes content than they are loyal to the source creating the content. They value the convenience of one-stop access to content. Nonetheless, the role of publishers remains important. Customer-centric content depends on publishers. To navigate these changes, publishers need to understand the benefit of unbundling content, and how it is done.
Content Unbundling, and playing well with others
Audiences face a rich menu of choices for content. For most publishers, it is unrealistic to aspire to be the single best source of content, with the notable exception of when you are discussing your own organization and products. Even in these cases, audiences will often be considering content from other organizations that competes with your own.
CNN’s view of different content platforms where their audiences may be spending time. Screenshot via Tow Center report on the Platform Press.
Single source publishing is best suited for captive audiences, when you know the audience is looking for something specific, from you specifically. Enterprise content such as technical specifications or financial results is a good candidate for single source publishing. Publishers face a more challenging task when seeking to participate in the larger “dialog” the audience is having about a topic not “owned” by a brand. For most topics, audiences consult many sources of information, and often discuss this information among themselves. Businesses rely on social media, for example: finding forums where different perspectives are discussed, and inserting teasers with links to articles. But much content consumption happens outside of active social media discussions, where audiences explicitly express their interests. Publishers need more robust ways to deliver relevant information when people are scanning content from multiple sources.
Consumers want all relevant content in one place. Publishers must decide where that one place might be for their audiences. Sometimes consumers will look to topic-specific portals that aggregate perspectives from different sources. Other times consumers will rely on generic content delivery platforms to gather preliminary information. Publishers need their content to be prepared for both scenarios.
To participate in multi-source publishing, publishers need to prepare their content so it can be used by others. They need to follow the Golden Rule: make it easy for others to incorporate your content in other content. Part of that task is technical: providing the technical foundation for sharing content between different organizations. The other part of the task is shifting perspective, by letting go of possessiveness about content, and fears of loss of control.
Rewards and Risks of Multi-source publishing
Multi-source content involves a different set of risks and rewards than when distributing content directly. Publishers must answer two key questions:
How can publishers maximize the use of their content across platforms? (Pursue rewards)
What conditions, if any, do they want to place on that use? (Manage risks)
More fundamentally, why would publishers want other platforms to display their content? The benefits are manifold. Other platforms:
Can increase reach, since these platforms will often get more traffic than one’s own website, and will generally offer incrementally more views of one’s content
May have better authority on a topic, since they combine information from multiple sources
May have superior algorithms that understand the importance of different informational elements
Can make it easier for audiences to locate specific content of interest
May have better contextual or other data about audiences, which can be leveraged to provide more precise targeting.
In short, multi-source publishing can reduce the information hunting problem that audiences face. Publishers can increase the likelihood that their content will be seen at opportune moments.
Publishers have a choice about what content to limit sharing, and what content to make easy to share. If left unmanaged, some of their content will be used by other parties regardless, and not necessarily in ways the publisher would like. If actively managed, the publisher can facilitate the sharing of specific content, or actively discourage use of certain content by others. We will discuss the technical dimensions shortly. First, let’s consider the strategic dimensions.
When deciding how to position their content with respect to third party publishing and distribution, publishers need to be clear on the ultimate purpose of their content. Is the content primarily about a message intended to influence a behavior? Is the content primarily about forming a relationship with an audience and measuring audience interests? Or is the content intended to produce revenues through subscriptions or advertising?
Publishers will want to control access to revenue-producing content, to ensure they capture the subscription or advertising revenues of that content, and not allow the revenue value to benefit a free rider. They want to avoid unmanaged content reuse.
In the other two cases, more permissive access can make business sense. Let’s call the first case the selective exposure of content highlights — for example, short tips related to the broader category of product you offer. If the purpose of the content is to form a relationship, then it is important to attract interest in your perspectives, and to demonstrate the brand’s expertise and helpfulness. Some information and messages can be highlighted by third party platforms, and audiences can see that your brand is trying to be helpful. Some of these viewers, who may not have been aware of your brand or website, may decide to click through to see the complete article. Exposure through a platform to new audiences can be the start of new customer relationships.
The second case of promoted content relates to content about a brand, product or company. It might be a specification about a forthcoming product, a troubleshooting issue, or news about a store opening. In cases where people are actively seeking out these details, or would be expected to want to be alerted to news about these issues, it makes sense to provide this information on whatever platform they are using directly. Get their questions answered and keep them happy. Don’t worry about trying to cross-sell them on viewing content about other things. They know where to find your website if they need greater details. The key metric to measure is customer satisfaction, not volume of articles read by customers. In this case, exposure through a platform to an existing audience can improve the customer relationship.
How to Enable Content to be Integrated Anywhere
Many pioneering examples of multi-source publishing, such as price comparison aggregators, job search websites, and Google’s featured snippets, have relied on a brute-force method of mining content from other websites. They crawl websites, look for patterns in the content, and extract relevant information programmatically. Now, the rise of metadata standards for content, and their increased implementation by publishers, makes it easier to assemble content derived from different sources. Standards-based metadata can connect a publisher’s content to content elsewhere.
No one knows what new content distribution or aggregation platform will become the next Hipmunk or Flipboard. But we can expect aggregation platforms will continue to evolve and expand. Data on content consumption behavior (e.g., hours spent each week by website, channel and platform) indicates customers more and more favor consolidated and integrated content. The technical effort needed to deliver content sourced from multiple websites is decreasing. Platforms have a range of financial incentives to assemble content from other sources, including ad revenues, the development of comparative data metrics on customer interest in different products, and the opportunity to present complementary content about topics related to the content that’s being republished. Provided your content is useful in some form to audiences, other parties will find opportunities to make money featuring your content. Price comparison sites make money from vendors who pay for the privilege of appearing on their site.
To get in front of audiences as they browse content from different sources, a publisher needs to be able to merge content into the audience’s feed or stream, whether it is a timeline, a list of search results, or a series of recommendations that appear as audiences scroll down their screen. Two options are available to facilitate content merging:
Planned syndication
Discoverable reuse
Planned Syndication
Publishers can syndicate their content, and plan how they want others to use it. The integration of content between different publishers can be either tightly coupled, or loosely coupled. For publishers who follow a single sourcing process, such as DITA, it is possible to integrate their content with content from other publishers, provided the other publishers follow the same DITA approach. Seth Earley, a leading expert on content metadata, describes a use case for syndication of content using DITA:
“Manufacturers of mobile devices work through carriers like Verizon who are the distribution channels. Content from an engineering group can be syndicated through to support who can in turn syndicate their content through marketing and through distribution partners. In other words, a change in product support or technical specifications or troubleshooting content can be pushed off through channels within hours through automated and semi-automated updates instead of days or weeks with manual conversions and refactoring of content.”
While such tightly coupled approaches can be effective, they aren’t flexible, as they require all partners to follow a common, publisher-defined content architecture. A more flexible approach is available when publisher systems are decoupled, and content is exchanged via APIs. Content integration via APIs embraces a very different philosophy than the single sourcing approach. APIs define chunks of content to exchange flexibly, whereas single-sourcing approaches like DITA define chunks more formally and rigidly. While APIs can accommodate a wide range of source content based on any content architecture, single sourcing only allows content that conforms to a publisher’s existing content architecture. Developers are increasingly using flexible microservices to make content available to different parties and platforms.
In the API model, publishers can expand the reach of their content two ways. They can submit their content to other parties, and/or permit other parties to access and use their content. The precise content they exchange, and the conditions under which it is exchanged, is defined by the API. Publishers can define their content idiosyncratically when using an API, but if they follow metadata standards, the API will be easier to adopt and use. The use of metadata standards in APIs can reduce the amount of special API documentation required.
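As a sketch, a content API payload that reuses a shared vocabulary might look like the following. The endpoint and values are hypothetical, but the property names (`headline`, `datePublished`, `license`) come from the schema.org Article type.

```python
import json

# A hypothetical content API payload using schema.org vocabulary.
# Because the property names follow a shared standard, consuming
# parties need less bespoke documentation to interpret the fields.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Store Opening in Lisbon",
    "datePublished": "2017-09-01",
    "author": {"@type": "Organization", "name": "Example Publisher"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

payload = json.dumps(article)  # what the API would return
print(payload)
```

Any consumer that already understands schema.org can interpret this response without reading the publisher’s API documentation field by field.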
Discoverable Reuse
Many examples cited earlier involve the efforts of a single party, rather than the cooperation of two parties. Platforms often acquire content from many sources without the active involvement of the original publishers. When the original publisher of the content does not need to be involved with the reuse of their content, the content has the capacity to reach a wider audience, and be discovered in unplanned, serendipitous ways.
Aggregators and delivery platforms can bypass the original publisher two ways. First, they can rely on crowdsourcing. Audiences might submit content to the platform, such as Pinterest’s “pins”. Users can pin images to Pinterest because these images contain Open Graph or schema.org metadata.
Second, platforms and aggregators can discover content algorithmically. Programs can crawl websites to find interesting content to extract. Web scraping, which was once done solely by search engines such as Google, has become easier and more widely available, due to the emergence of services such as Import.IO. Aided by advances in machine learning, some web scraping tools don’t require any coding at all, though achieving greater precision requires some coding. The content most easily discovered by crawlers is content described by metadata standards such as schema.org. Tools can use simple regex or XPath expressions to extract specific content that is defined by metadata.
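A minimal sketch of such extraction, using only the standard library: pull a JSON-LD metadata block out of a page with a regular expression, then parse it. Real crawlers use proper HTML parsers, and the page content here is invented.

```python
import json
import re

# A fragment of a hypothetical webpage carrying schema.org JSON-LD.
html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Recipe", "name": "Pastel de Nata"}
</script>
</head><body>...</body></html>"""

# Content described by standard metadata is easy to find and extract.
# A production crawler would use an HTML parser; the regex keeps this
# sketch dependency-free.
match = re.search(
    r'<script type="application/ld\+json">\s*(\{.*?\})\s*</script>',
    html, re.DOTALL)
metadata = json.loads(match.group(1)) if match else {}
print(metadata["name"])
# Pastel de Nata
```

Because the metadata follows a published standard, the extractor doesn’t need to know anything about the page’s layout, only about the standard.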
Influencing Third-party Re-use
Publishers can benefit when other parties want to re-publish their content, but they will also want to influence how their content is used by others. Whether they actively manage this process by creating or accessing an API, or they choose not to directly coordinate with other parties, publishers can influence how others use their content through various measures:
They can choose what content elements to describe with metadata, which facilitates use of that content elsewhere
They can assert their authorship and copyright ownership of the content using metadata, to ensure that appropriate credit is given to the original source
They can indicate, using metadata, any content licensing requirements.
For publishers using APIs, they can control access via API keys, and limit the usage allowed to a party
When the volume of re-use justifies it, publishers can explore revenue-sharing agreements with platforms, as newspapers are doing with Facebook.
Readers interested in these issues can consult my book, Metadata Basics for Web Content, for a discussion of rights and permissions metadata, which covers issues such as content attribution and licensing.
Where is Content Sourcing heading?
Digital web content in some ways is starting to resemble electronic dance music, where content gets “sampled” and “remixed” by others. The rise of content microservices, and of customer expectations for multi-sourced, integrated content experiences, are undermining the supremacy of the article as the defining unit of content.
For publishers accustomed to being in control, the rise of multi-source publishing represents a “who moved my cheese” moment. Publishers need to adapt to a changing reality that is uncertain and diffuse. Unlike the parable about the cheese, publishers have choices about how they respond. New opportunities also beckon. This area is still very fluid, and eludes any simple list of best practices. Publishers would be foolish, however, to ignore the many signals that collectively suggest a shift away from individual websites and toward more integrated content destinations. They need to engage with these trends to capitalize on them effectively.