When is Adaptive Content Appropriate?

Publishers want their content to be appropriate for their audiences. They need to know when it makes sense to adapt their content to specific situations.

Until recently, publishers presumed audiences would adapt to their content.  They supplied the same content to everyone, and people were expected to find what interested them in that content.  In some circumstances, they created different versions of the same content targeted for different segments of readers, perhaps people in different countries.  But audiences still needed to find what was relevant to them in that version.

What happens if we reverse the equation, so that the content adapts to the individual, rather than the individual adapting to the content? On an intuitive level it sounds great, but how is it done in practice? And does it mean that different people no longer get the same content?

Discussion of adaptive content has increased noticeably in the past year. The motivation behind adaptive content is to give people precisely what they want, when they want it, how they want it. Marketers imagine that if their brand can satisfy the egocentric needs of its customers, it will cement its relationship with them.

Now a buzzy topic: sample headlines of recent posts about adaptive content.

Adaptive content is attractive as an ideal.  But much recent discussion of the approach is short on specifics.  Karen McGrane, who introduced the concept several years ago to the wider content strategy community, recently wrote: “I am really, really annoyed with hearing adaptive solutions presented as some kind of magical panacea.”  We need less discussion about adaptive content as an abstract concept, and more focus on how it is implemented.  The critical question is not, “Why adaptive content?” but “How?”  Until we understand more of the how, its value can’t be judged.

What Adaptive Content Means

Adaptive content is difficult to define precisely. It has various properties, a number of which are also associated with other content concepts, such as personalization, dynamic content, and intelligent content. Those who discuss adaptive content may emphasize different aspects of it. Perhaps the biggest difference is between those who emphasize the production side of adaptive content (what do producers need to do to deliver content adaptively?) and those who talk about the consumption side (why do consumers care, and what do they notice that’s different?).

Adaptive content is a topic of growing interest in large part due to the smartphone.  The significance of the smartphone goes beyond the difference between a smaller touchscreen and a larger screen with a keyboard.  Smartphones are used in diverse situations and offer many capabilities.  They have cameras, microphones, GPS, a unique ID tied to an individual, and sensors such as gyroscopes.  These features can capture different information to support interaction with content and influence what content is provided to the user.  They’ve changed our assumptions about when and where users might need information.  We can no longer assume users will be making a simple explicit request, and getting content matching that request.

The adjective adaptive implies the user can somehow direct the content.  An adaptive approach involves various possibilities.  It’s an approach in the early stage of its adoption.  Its benefits and limitations at this point aren’t yet well understood.

I’ll pass here on trying to define precisely what adaptive content is. Others such as Karen McGrane, Joe Gollner and Noz Urbina have valuable things to say on this topic. I want to focus on what is genuinely useful in the approach. Understanding in more detail what adaptive content could represent helps us assess both its application and the effort involved.

For me, the core idea of adaptive content is that content variations are available to provide a better, more relevant experience for users.  The key phrases are content variations (production side) and experiences (consumption side).

Many discussions of adaptive content look at the numerous variables relating to people, devices, locations, and so forth.  The number of permutations can seem enormous, and would imply a need for omniscient engineering.

It may be more valuable to focus on variations, which link the content to scenarios of use and to the question of who is responsible for it.

Two key questions of adaptive content are:

  1. How much variation is necessary?
  2. How much variation is possible?

The first question speaks to what audiences need, and the second to what businesses can realistically do to meet those needs.

One point needs clarifying.  Adaptive content is not about mind-reading.  There is a big push in the world of big data around predictive analytics.  While predictive analytics might occasionally play a role in determining what content variation to show, it generally will not.  In most cases the intent and needs of the individual user will be clear, and conjecture isn’t necessary.

Examples of Adaptive Content

The best way to illustrate content variation is through examples, looking at use cases where individuals receive different variations of content depending on their situation.  These examples may not be relevant to all organizations, but they offer alternative perspectives on the value of content adaptation.  We might even consider these as adaptive archetypes.

One popular archetype is context aware content.  The best known example is the card UI provided by Google.  A Google card might combine information relating to time, location, and the user’s calendar with status information from elsewhere.  The context is often event-focused.  Different people receive different variations of structured information.  People know the structure of the information they will receive, but not the precise information they will be getting.
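As a rough illustration of how such a card might be assembled (the types and names below are hypothetical, not Google’s actual API), the structure is fixed while the values vary with each user’s context:

```typescript
// Hypothetical context-aware card: every user gets the same structure,
// but the contents are filled in from their own time, location, and calendar.
interface ContextSignals {
  now: Date;
  location: { lat: number; lon: number };
  nextEvent?: { title: string; start: Date; venue: string };
}

interface Card {
  heading: string;
  body: string;
}

// Build a departure-reminder card from whatever signals are available.
function buildDepartureCard(ctx: ContextSignals, travelMinutes: number): Card | null {
  if (!ctx.nextEvent) return null; // nothing to adapt to: show no card
  const leaveBy = new Date(ctx.nextEvent.start.getTime() - travelMinutes * 60_000);
  return {
    heading: `Time to leave for ${ctx.nextEvent.title}`,
    body: `Leave by ${leaveBy.toLocaleTimeString()} to reach ${ctx.nextEvent.venue}.`,
  };
}
```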

A related archetype is situationally aware content.  Here, the context is not predefined, but is fluid. The situation is defined by preferences set by the user relating to variables in their environment.  Wearable devices may offer situationally aware content.  You may be at work and unable to watch a football match, but perhaps your wrist will buzz when your team scores a goal.  The focus is less on the structure of the information, and more on what specific content to receive, and how to receive it.  In the future, wearables may have sensors that trigger health advice, possibly on a different platform.  So we have a possibility of trans-device content.

Another kind of adaptive content is omnichannel content, a favorite of the retail sector.  Macy’s, the U.S. department store chain, needs to adapt content to various shopping scenarios.  Some people will go to the store to browse, but others want to know what’s available before going to the store.  A shopper may be looking for a sweater that’s been advertised in a specific color and size, and wants to know if it is in stock at her local store.  The content needs to display the stock availability of the item according to location.  There will be countless variations of content about the sweater depending on the size, color and store location.

A different sort of adaptive content is possible in e-learning.  Pearson, a large educational publisher, provides students with materials that adapt to their understanding of subject matter.  It compares what learning outcomes they need to achieve for different proficiencies with the student’s mastery of these topics, and provides an individualized learning path based on their knowledge of concepts.  Each student will see a different sequence of content, and different students may see different content items.  This is an example of outcome driven content variation based on inference.
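A minimal sketch of that outcome-driven logic (the data model is invented for illustration; it is not Pearson’s system): compare required proficiencies against a student’s demonstrated mastery, and sequence only the content covering the gap.

```typescript
// Hypothetical outcome-driven sequencing: each learning item teaches one
// concept; a student only sees items for concepts below the target mastery.
interface LearningItem {
  concept: string;
  difficulty: number; // 1 (introductory) .. 5 (advanced)
}

function individualLearningPath(
  items: LearningItem[],
  requiredMastery: Map<string, number>, // concept -> level needed for the outcome
  studentMastery: Map<string, number>,  // concept -> level the student has shown
): LearningItem[] {
  return items
    .filter(item => {
      const required = requiredMastery.get(item.concept) ?? 0;
      const current = studentMastery.get(item.concept) ?? 0;
      return current < required; // a gap: include this concept's content
    })
    .sort((a, b) => a.difficulty - b.difficulty); // easiest material first
}
```

Two students with different mastery maps receive different sequences, and possibly different items altogether, which is the sense in which the content adapts.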

In some of these examples, users imagine they are getting unique content.  But we are discussing content for an audience of many people, not personal information such as your fitness tracker data.  Individuals may just be seeing a variation tailored to them, and others matching their circumstances will see similar variations.

Back to the Future: Adaptive Content’s Origins

Adaptive content may seem like a new approach, but much of the thinking around it has been years in the making. The W3C defined core aspects of adaptive content over ten years ago, in 2004.  The proliferation of internet-connected devices with different characteristics and purposes has been evident for a long while, and with that, questions about how to provide content to increasingly diverse users.

The W3C uses the phrase “content adaptation” rather than “adaptive content,” but the two terms refer to the same general topic.  Here’s the W3C definition:

“Content Adaptation is a process that based on factors such as the capabilities of the displaying device or network, or the user’s preferences, adapts the content that has been requested to provide an optimized user experience. This adaptation can occur in a number of places in the content delivery chain: the author may make choices when writing the content, or intermediary automated content transformation proxies could adapt the content based on heuristics and knowledge of the user, or the adaptation could occur within the browser itself.”

This definition is slightly different from how adaptive content is commonly discussed.  Yet it highlights some important issues.  First, there are technical considerations (hardware and network) but also human considerations (preferences).  The goal is to deliver a good user experience, not conversions or network optimization. And there are multiple ways to accomplish this: through content planning, through technical transformation of content according to specific user needs, and through browser technology.

Delivery Context

Over a decade ago a W3C working group documented issues relating to device-independent content: how to provide different versions of the same core content, irrespective of platform.  They looked at the relationship between what is created and what is presented, and also the different dimensions of how content is received and manipulated by users.  A major focus was what they called the delivery context.

Schematic of W3C terms relating to device independence and content adaptation.

The W3C working group believed that users will often need to interact with units of content that are different from the units created by authors.  Authors may create larger content units that are broken down when presented to users (the perceivable unit).   The decomposition approach contrasts with the infinite scrolling people commonly experience these days, regardless of device.  The notion of decomposition also contrasts with some newer ideas of writing small atomic units of content, although the W3C also considered the possibility of aggregating units of content.
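A rough sketch of decomposition (the splitting rule is invented for illustration): a large authored unit is broken into perceivable units at section boundaries.

```typescript
// Illustrative decomposition: one authored unit becomes several
// "perceivable units", split wherever a section heading begins.
function decompose(authoredUnit: string): string[] {
  return authoredUnit
    .split(/\n(?=## )/) // treat lines starting with "## " as boundaries
    .map(section => section.trim())
    .filter(section => section.length > 0);
}

const article = "Intro text.\n## Setup\nSteps...\n## Troubleshooting\nFixes...";
console.log(decompose(article)); // three perceivable units from one authored unit
```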

The most significant idea was the possibility of variations in content created.  Users weren’t just seeing different presentational views of a single version of content, they were seeing different variants.

The W3C considered how the delivery context shapes the user’s focus of attention: what users notice, and how they need to interact.  They noted interaction might not only be visual, but also gestural or based on speech.  They considered adaptation preferences — how the user indicates they want to experience the content, such as alert preferences.  And they reviewed the impacts of application personalization — things like settings for video playback, or whether sound or location tracking is on. These variables are already important considerations for content on smartphones.

The delivery context is often overlooked. Some recent adaptive content discussions have focused on predicting implicit user desires and delivering variations based on those predictions. But the other, less explored aspect of adaptive content is making sure users can get content that matches their explicit preferences — especially when they don’t want to use a feature. Many applications assume users will use certain features: take a selfie, use beacons, talk to a virtual assistant, or something else that designers think would be fun. A growing number of applications assume people will use their smartphones to produce content, such as bar code IDs or social media check-ins, for use by the brand. Except it might not be fun for everyone. Content needs to adapt when people opt out of such experiences.

Adaptive Content Delivery

Before the rise of today’s popular techniques like AJAX, responsive web design, and APIs, the W3C identified techniques that can enable content adaptation. It described processes to support content adaptation, and listed various client-side and server-side processors to deliver the content. While the specific recommendation details are dated, the range of approaches remains interesting because they are not limited by current conceptions about how content is delivered.

Adaptation Processes refer to how the content itself is changed.  Examples the working group identified included:

  • Select/Remove
    • Selection via URL redirect
    • In-document Decision Tags (conditional or switch selection)
    • Layout decisions
    • Style conditions
    • Relevancy
  • Navigation
  • Adaptation via Substitution
  • Adaptation via Transformation

Many of these techniques involved markup and other instructions embedded in the content.  A tremendous amount of variation is possible using these techniques in combination.
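As a toy illustration of the select/remove technique with in-document decision tags (the convention below is invented, not the W3C’s actual markup), blocks carry a condition and a processor keeps or removes them for a given delivery context:

```typescript
// Toy "in-document decision tag" processor: each block of content carries
// an optional condition; select/remove keeps only the blocks whose
// condition matches the delivery context.
interface ContentBlock {
  text: string;
  condition?: { key: string; value: string }; // e.g. { key: "device", value: "mobile" }
}

type DeliveryContext = Record<string, string>;

function selectBlocks(blocks: ContentBlock[], ctx: DeliveryContext): string {
  return blocks
    .filter(b => !b.condition || ctx[b.condition.key] === b.condition.value)
    .map(b => b.text)
    .join("\n");
}

// The same source yields different variants for different contexts.
const source: ContentBlock[] = [
  { text: "Welcome to our service." },
  { text: "Tap to call us.", condition: { key: "device", value: "mobile" } },
  { text: "Download the full brochure.", condition: { key: "device", value: "desktop" } },
];
console.log(selectBlocks(source, { device: "mobile" }));
```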

Adaptation Processors, in the W3C working group’s terminology, refer to the technical means for enabling content adaptation — from the server side, client side, or some combination.  The working group identified:

  • Server-side Adaptation
    • Variant Selection
    • Structural Transformation
    • Media Adaptation
    • Using Meta-information
    • Decomposition
  • Client-side Adaptation
    • Image Resizing
    • Font Substitution
    • Transcoding
    • Contextual Selection

While most of the client-side adaptation techniques focused on alternative renderings of content, the server side techniques focused more on generating substantive variations in the content.  For example, one possibility mentioned for structural transformation is providing auto-summarization of content.

Today’s web environment places a strong emphasis on the client side. Responsive web design provides many of the client-side capabilities identified by the working group.  The extensive use of JavaScript libraries emphasizes user-screen interaction.  Conditional loading helps to manage when content appears on screen.
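A minimal client-side sketch of conditional loading (the endpoint and element id are placeholders): supplementary content is fetched only when the viewport suggests there is room to show it.

```typescript
// Conditional loading sketch: fetch and inject secondary content only on
// wider viewports, so narrow screens never download content they won't show.
async function loadRelatedArticles(containerId: string): Promise<void> {
  if (!window.matchMedia("(min-width: 48em)").matches) return; // small screen: skip
  const container = document.getElementById(containerId);
  if (!container) return;
  const response = await fetch("/fragments/related-articles"); // placeholder endpoint
  container.innerHTML = await response.text();
}

void loadRelatedArticles("related"); // placeholder element id
```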

Much of the substantive variation in content needs to come from the server side.  Server-side data repositories are becoming more flexible at delivering mixed types of content from different sources.  The lagginess of server-provided content should improve with true 4G network speeds.  The other major server-side factor, which was not mentioned at all by the working group, is the use of analytics data to shape the content adaptation. Using data to guide the display of content has been a significant transformation of the past five years.  Tracking user behavior over time can provide useful information for providing the right content variant, as the Pearson example shows.

The tools available to adapt content vary in what they accomplish and the effort they entail.  Server-side approaches will generally be more complex to implement, though they can potentially offer the most value if they provide content that would otherwise be unavailable or not accessed.  We can see this with Macy’s approach.  Having specific inventory information could be a decisive factor for a person making a purchase.  It is an example where the content variation is of high value both to the user and to the brand.

Design Parameters for Adaptive Content

What should publishers focus on, given that there are many approaches to adapting content?  Adaptive content can be challenging to implement, given the many factors that influence its success.

The success of adaptive content depends on the alignment of three factors:

  • The profile of the individual user
  • The opportunity that a variation offers the user and the brand
  • The constraints on the ability to execute the variation in a manner that offers value to both parties

The individual user profile is a mix of their current and past behavior (typically clicks, perhaps purchases), together with any preferences they have provided (opt-ins, default settings, etc.)   Brands with loyalty programs may have a range of indicators about a user.  A person who is a frequent patron of a hotel would expect content more adapted to their needs than someone who doesn’t use the hotel often.  This suggests that the opportunity to implement adaptive content is strongest in cases where a relationship already exists.  Adaptive content may be more effective at keeping a customer than it is at creating one.

The opportunities for content variations will often relate to timing and location: when and where users most need specific content.  Variations may also be based on the needs of different segments. Location and segmentation could even be related, in the case of regional segments.

Constraints can be technical or human:

  • Technical constraints: device capabilities, network connection, ability to offer desired content
  • Human constraints: motivation to engage, attention and distraction

Sometimes constraints interact.  Many retailers show an option to pick up merchandise at the nearest store, but not everyone lives near a store.  That information, while useful to those near stores, may seem punishing to those far away.  Ideally, the adaptation needs to account for the possibility that not everyone can take advantage of the variant content, so that the content can “gracefully degrade” to a state where the variant is not in the foreground.
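A sketch of that graceful degradation, with hypothetical names: the store-pickup variant is foregrounded only when a store is actually nearby, and otherwise the content falls back to a neutral default.

```typescript
// Hypothetical graceful degradation: only show the pickup variant when it
// is usable; otherwise fall back to a message everyone can act on.
interface Store { name: string; distanceKm: number }

function fulfillmentMessage(nearbyStores: Store[], maxKm = 25): string {
  const usable = nearbyStores.filter(s => s.distanceKm <= maxKm);
  if (usable.length === 0) {
    return "Free shipping on orders over $50."; // neutral default variant
  }
  const closest = [...usable].sort((a, b) => a.distanceKm - b.distanceKm)[0];
  return `Pick up today at ${closest.name} (${closest.distanceKm} km away).`;
}
```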

A critical implementation dimension involves timing: how anticipatory the adaptation is.  Some adaptations are real-time, responding to uncertain user interactions.  Others are event-triggered, where the event is already known and being monitored. Still others involve scripts based on knowable interaction pathways.  Here adaptive content overlaps with dynamic content (user-initiated requests) and some forms of personalization (remembering information across sessions).

Content adapts to what is known within different time horizons (sketched in code after the list):

  • Path-based adaptation, which serves different variations according either to prior actions from past sessions or to the immediately preceding actions of the current session
  • Forecast-based adaptation, which serves variations based on known variables such as calendar information or stages of a lifecycle
  • Real-time adaptation, which provides variations based on matching current behaviors with user profiles or task outcome goals.
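As a hypothetical sketch (the inputs and precedence are illustrative assumptions), the three horizons can be read as distinct inputs to a single variant decision:

```typescript
// Illustrative variant selection across the three time horizons. The
// precedence here (real time, then forecast, then path) is one possible
// design choice, not a rule.
interface AdaptationInputs {
  pathHistory: string[];   // prior actions, past sessions or the current one
  upcomingEvent?: string;  // known variable, e.g. calendar item or lifecycle stage
  liveBehavior?: string;   // current behavior matched against a profile or goal
}

function chooseVariant(inputs: AdaptationInputs): string {
  if (inputs.liveBehavior) return `realtime:${inputs.liveBehavior}`;
  if (inputs.upcomingEvent) return `forecast:${inputs.upcomingEvent}`;
  if (inputs.pathHistory.length > 0) {
    return `path:${inputs.pathHistory[inputs.pathHistory.length - 1]}`;
  }
  return "default"; // no signals: serve the unadapted content
}
```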

Real-time adaptation is a data- and algorithmically intensive approach.  It requires fast decisions using multiple variables, some of which may lack data.  The more inputs into the decision, and the more outputs of the decision (different content variations), the more challenging it is. A widely encountered example of real-time adaptation is ad exchanges, where display ads are shown according to user profile characteristics and advertiser bids.  An impressive amount of computing power is marshaled to deliver display ads, a cost justified by the big stakes involved.

When is adaptation appropriate?

If done properly, adaptive content can benefit audiences.  So should brands implement adaptive content?  The answer depends on many factors.  Brands need to evaluate how important content variants are to the audience, and to the brand.  They need to understand how much complexity is involved: the inputs needed to decide on the variant, and the number of variants needed to deliver the expected experience.

Adaptive content will often have the strongest business case when supporting transactions, such as sales.  The stronger the business rationale, the larger the potential investment and sophistication.

Adaptive content encompasses a range of approaches.  Not all require state-of-the-art back-end systems.  Some implementations may be small enhancements that improve the experience of using content without involving complex implementations.

What’s appropriate depends on user needs analysis, an assessment of available technical capabilities, and the development of a business case.

— Michael Andrews


Four approaches to content reuse

How organizations approach reusing content affects their publishing efficiency and their ability to serve audience needs. Four distinct approaches to content reuse exist, each focused on different goals. Due to specialization in the content profession, content professionals may be familiar with only some of them. To support broader organizational objectives effectively, content strategists should become familiar with all four approaches to reuse, since each offers unique benefits.

Why content reuse matters

While content reuse is a topic of active discussion in the content profession, no one definition for content reuse adequately captures its various meanings. In practice, there are four distinct types of content reuse:

  • Ad hoc reuse of assets
  • The planned reuse of content components
  • Enabling reuse of content across channels
  • Selective reuse through adaptive content

Nearly everyone agrees reusing content is a good thing. Content professionals sometimes invoke the phrase “single sourcing” to suggest that one “source” can serve all needs, both internally and for audiences. But what is being reused, exactly? Is the source a database? A file? A finished piece of content?

Many different specialties work with content. Each specialty is working to solve an aspect of reuse, and will tend to promote its approach as a solution to the core problems associated with poor content reuse. But specialists are not always aware of the larger-picture needs of complex organizations or multidimensional audiences. Solution advocacy can sometimes create its own silo problems!

When discussing content reuse, it is important to distinguish between reusing as-is content, recycling (repurposing) content, and providing on-demand, customized content. Is the source granular or whole? For example, is the source a whole video recording, or a collection of video snippets? Is the source a document, or a library of documents?

Different reuse approaches reflect different goals. All are valid, but none are complete. At present, no one approach will address all needs faced by enterprise scale publishers.

Specifying content

The term content is abstract and fuzzy, open to various interpretations. Content may be raw or finished, partial or complete. We need to understand different levels or states of content. Fortunately, we can draw on insights from library science to distinguish different levels of specificity by using a concept called FRBR. [1]

The FRBR model provides levels to analyze content, divided according to how explicit the description of the content is. The key levels of concern to us are work, expression, and manifestation. If the content item is a book, it might be described as follows:

  • Work (Bible)
  • Expression (King James translation)
  • Manifestation (1994 Oxford University Press edition)

The work is the raw content, the underlying intellectual property. It might be a class of content such as a novel or symphony. The work level describes the content or asset itself.

The expression identifies a version of the content.

The manifestation specifies the content’s particular revision or rendition: for example, the edition, format, mode of access, or date of publication.

The table below illustrates the hierarchy, with rough equivalents in content strategy.

FRBR Concept | Level of Identification | Rough Equivalent in Content Strategy | Example
Work | Described by a title | Assets relating to a topic | Long, unedited video file
Expression | Uniquely ID’ed | Collection of content components relating to a topic | Tagged video clip highlights
Manifestation | Versioned | Finished content about a topic | Linked series of transcript-captioned video segments

Different levels of content reflect different frequencies of change and target audiences. Assets don’t change; they are repurposed. Components can be revised, but there will only be one version of a component at a given time. Content composites seen by audiences may come in multiple versions, which can exist simultaneously.

Rather than describe everything as content, it is more helpful to separate three notions (modeled in a short sketch after the list):

  • content (items audiences consume)
  • content components (recurring elements incorporated in audience-facing content)
  • assets (intellectual property used to create finished content)
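Read as a data model, the separation might look like this rough sketch (field names are illustrative), with the FRBR parallels noted in comments:

```typescript
// Illustrative model of the three notions and their rough FRBR parallels.
interface Asset {            // ~ work: raw intellectual property
  id: string;
  title: string;             // described by a title
}

interface ContentComponent { // ~ expression: an identified version
  id: string;                // uniquely ID'ed
  assetId: string;           // drawn from an underlying asset
  version: number;           // only one version exists at a given time
}

interface ContentItem {      // ~ manifestation: what audiences consume
  componentIds: string[];    // composed from recurring components
  edition: string;           // versioned; multiple versions can coexist
  published: Date;
}
```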

Delivering equivalent content to different platforms: COPE

As content channels have multiplied, publishers have needed to make their content available to different devices and different kinds of content customers. The approach known as COPE (Create Once, Publish Everywhere) addresses the issue. Rather than recreate multiple versions of the same content for different devices or platforms, publishers can use standards and structure to provide the same content through an API that can be accessed by a variety of applications. The same content is used in multiple contexts, often distributed simultaneously. Since reuse can imply using the same content at different points in time, the notion of content created once and published everywhere may be better thought of as multi-use content distribution.

One goal of COPE is the wide dissemination of content across different channels. COPE started as a technology solution to address point-of-failure concerns when publishing to multiple parties from a single database of content. Over time, it has evolved into an approach to syndicate content to other parties.

What COPE does

In the COPE approach, a central content database provides multiple versions of the same content to different people and devices. The original idea didn’t foresee revisions to the content (hence: create once), and also presumed that the core essence of content items pushed to different endpoints would be essentially the same. Different technical packages (formats and associated metadata) allow end users to consume the version of content they want. Technical end users (content partners and third-party app developers) are able to choose which content items they want, but generally lack the ability to request specific components of content from within an item. The API disseminates a large, structured chunk, but not finely defined, reconfigurable chunks. Content consumers choose which content host to use to access the content. They might use their local radio station’s website, or NPR’s own app, to access the same content.
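A minimal sketch of the pattern (the endpoint and data shapes are invented for illustration, not NPR’s actual API): one canonical item is stored once, and the API renders whichever packaging a requester asks for.

```typescript
// COPE sketch: a single stored content item, rendered per request.
interface Article {
  id: string;
  headline: string;
  body: string;
  published: string; // ISO date
}

const contentStore = new Map<string, Article>([
  ["1", { id: "1", headline: "Hello", body: "Same content everywhere.", published: "2015-01-01" }],
]);

// Hypothetical handler for GET /content/:id?format=json|html
function serveContent(id: string, format: "json" | "html"): string {
  const item = contentStore.get(id);
  if (!item) return format === "json" ? JSON.stringify({ error: "not found" }) : "<p>Not found</p>";
  return format === "json"
    ? JSON.stringify(item) // packaging for partners and app developers
    : `<article><h1>${item.headline}</h1><p>${item.body}</p></article>`; // packaging for web pages
}
```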

Benefits and limitations of COPE

COPE is an effective approach to disseminate articles to multiple partners and platforms. Because of its push orientation, it is not optimized to offer personalized content that responds to specific requests from content consumers. As originally conceived, the body of the content is static.

Reusing common elements in different content products: the DITA model

While COPE is largely focused on formats and metadata, another reuse approach is focused on reusing components of content within the body-field of an item.

Publishers of technical content have championed reusing specific content components in different items of content. Technical documentation is repetitive: much writing is redundant, with the same text repeated in many places. Technical writers sometimes speak about the ideal of WOOO: Write Once and Once Only.

Component reuse is closely associated with an approach called DITA (Darwin Information Typing Architecture), an XML schema originally developed by IBM. DITA is designed to address specific publishing issues with user assistance for technical products, though many DITA proponents argue it can be successfully used for other kinds of content.

For the most part, the motivations behind DITA have been writing efficiency and consistency, rather than audience needs. Few individuals will ever read the many minor variations of content possible with a DITA document, and content variations are largely defined by topic variants rather than by audience preferences.

Reusing Components through Transclusion

Most approaches that reuse content components rely on transclusion. Transclusion is the process of incorporating content into an item of content from another source by means of a link to that source. In its simplest form, it is similar to embedding one item of content in another, such as embedding a slideshow or YouTube video hosted elsewhere in an article you’ve written. In DITA, the process is called a conref, or content reference. Transclusion is a core concept not only in DITA but also in MediaWiki, which powers Wikipedia among other sites. Transclusion allows the same content to be used in multiple locations in Wikipedia.

Transclusion can be applied to any item of content: a word or phrase, a paragraph, or a large section.

A related approach is to show and hide components depending on certain criteria, perhaps the intended audience segment. Business customers might see a certain paragraph, while consumers wouldn’t see that paragraph. The process of showing and hiding XML nodes is called profiling in DITA. It allows the output of multiple documents (variations on the master document) from a single source.
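A toy resolver showing both mechanisms (the node syntax is invented; actual DITA conrefs are expressed as XML attributes): reference nodes pull in shared content by id, and profiled nodes are kept or dropped per audience.

```typescript
// Toy transclusion plus profiling: "ref" nodes pull in shared content by id
// (in the spirit of a DITA conref); "audience" marks a node for profiling.
type DocNode =
  | { kind: "text"; text: string; audience?: string }
  | { kind: "ref"; target: string; audience?: string };

const sharedContent = new Map<string, string>([
  ["terms", "Standard terms and conditions apply."],
]);

function render(nodes: DocNode[], audience: string): string {
  return nodes
    .filter(n => !n.audience || n.audience === audience) // profiling: show/hide
    .map(n => (n.kind === "ref" ? sharedContent.get(n.target) ?? "" : n.text)) // transclusion
    .join("\n");
}

const doc: DocNode[] = [
  { kind: "text", text: "Thanks for your order." },
  { kind: "text", text: "Volume discounts are available.", audience: "business" },
  { kind: "ref", target: "terms" }, // the same terms for every audience
];
console.log(render(doc, "consumer")); // omits the business-only paragraph
```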

Benefits and limitations of Transclusion

Reusing components is effective when there is a repetition of messages, and regular variations among specific components. It can provide efficiencies and consistency for content that is highly regular and needs to be delivered in a uniform manner. If business requirements mandate that all customers see the same terms and conditions in the content regardless of what content they see, transclusion can be an effective approach.

The weakness of transclusion is that it is not very flexible. DITA, for example, assumes a linear flow of content from the publisher to the content consumer. It presupposes content elements can be planned and compiled into well-structured formats. That vision implies the presence of regular content entities, and that one can anticipate the exact circumstances in which these entities will be required by end users.

Embedding content through link referencing, or hiding content through profiling, is not very dynamic. The process can groan when the variations become complex. It is also difficult for the publisher to confidently say precisely what an audience wants, and so there is a tendency to deliver too much content because it is easy to include it. Transclusion, by itself, doesn’t adapt to specific audience demands for information, or marketers’ desire to change the messaging in response to CRM and real-time analytics data. The motivation to write once only doesn’t accord with audience desires to pick and choose what content they want to see at a given time. It is not clear whether the XML-based structure of DITA will be up to the demands of real-time personalization associated with performance-based marketing.

Mark Baker recently noted some other shortcomings of transclusion:

“Reusing text where you would have been writing substantially the same text anyway is clearly the right thing to do. But taking all the various ways in which you might express an important idea and combining them into one expression is a bad idea. Your idea will have more impact and more reach if it is expressed in different ways and in different media for different audiences, different purposes, and different occasions.”

Asset Reuse: the DAM model

A third approach to content reuse relates to assets. Reusing assets allows organizations to extract more value from their intellectual property. It recognizes that rich assets can be potentially applicable to different contexts at different times. A systematic approach to asset reuse requires a centralized repository for the raw material that authors draw upon to create audience-facing content.

How Asset Reuse works

A growing number of web publishers — though still a minority — have repositories to hold digital assets that are used to create content for audiences. They may use:

  • A digital asset management (DAM) system for videos, audio, graphics and photos, including brand assets and templates
  • An enterprise content management (ECM) system for complex documents, such as legal documentation
  • A database or file server to store code or data files that can be repurposed

Such repositories differ in purpose from content management systems, which are geared toward the creation and management of content for audiences. Unlike a CMS, a DAM may contain content that is neither currently published nor being readied for publication.

The varied types of assets that can be stored in a repository share certain characteristics. Assets frequently involve complex workflows. They may require substantial editorial oversight to produce and prepare for publication. Unique approvals may be required, such as for branding assets stored in a DAM, or legal copy stored in an ECM. Data, perhaps from a periodic customer survey, may be stored in databases that require running structured queries and reports before the results can be made available for content authors to use. Photo archives may have permissions and licensing requirements that must be vetted before items are available for publication.

When considering asset reuse, it helps to know how stable the asset is. Elizabeth Keathley distinguishes between static assets and living assets.

Static assets are generally stable and don’t change often. If they do change, there will only be one version at a time, with a persistent ID. These assets may have associated use rights governing when and how they are used, and by whom. The asset creator may have an explicit goal of preventing derivative reuse, such as prohibiting unapproved modifications of brand assets.

Living assets can be repurposed to support different goals, and are sometimes converted into different formats. Living assets are commonly composed of compound asset parts and have elaborate workflows to produce them. They are not simply derivative of other assets but are substantially original. A living asset is broadly equivalent to a work in FRBR terminology. Other items of content are derived from a living asset, and these will have identities separate from the master asset. Because the structure of living assets is complex and irregular, they are not as readily broken into content components, especially if an exact need for elements in the asset cannot be predicted in advance. Also, the nature of repurposing content means that the approval process will be different than it is for content components involving planned reuse for defined purposes.

Benefits and limitations of DAMs

DAMs and other asset repositories can offer authors a richer library of content than is available in a CMS. Unlike with a CMS, authors are not restricted to a narrow perspective where they can only see and access currently published content.

DAMs have challenges as well. Unless actively managed, metadata descriptions can be poor, hindering asset retrieval. Some DAM systems are improving auto-tagging of assets to reduce the burden on contributors. Another limitation is that DAM assets are generally not directly accessible by audiences, so audience requirements for access to this content need to be understood and planned in advance.

A framework for content reuse

Conceptual diagram: the relationship between DAMs for digital assets, DITA, COPE, and adaptive content.

The conceptual diagram reflects different content reuse activities according to their purpose. It is not meant to show specific platforms or systems, which vary considerably in practice. Only a few publishers perform all these activities as part of an integrated end-to-end process. The path from potential assets to ready-to-consume content resembles a waterfall: one is dependent on what content is available upstream.

The limits of specialized solutions

Relying on one approach entails various potential pitfalls. Not having a DAM means that potentially valuable content assets are siloed within different organizational departments and not available to authors. A failure to plan for modular reuse of content components hinders efficiency and consistency, and hurts the audience experience as well. Relying on responsive web design might be effective for reaching immediate consumers, but won’t let partners reuse your content the way an API would, and might therefore reduce the total reach of your content.

Many aggravations arise from a poor conceptual understanding of the granularity of content, and of how frequently different elements change and are used within the organization. Authors may try to reuse content that is actually a compound object made up of different assets and components. They may actually need to reuse only some parts of the content.

A core issue with reuse is whether the content continues to be up-to-date and accurate. Unfortunately, just because something is currently published does not indicate it should be reused elsewhere. A table that complements an article might be sufficiently current to stay on a website, but really shouldn’t be incorporated in new content without updating. Content created for one audience may seem to offer a good blueprint for new but similar content for another audience. But in the course of repurposing this content, the authors may conclude that revisions are needed for content that is being reused. What is sufficiently current is often a judgment call based on resources and mission importance.

Publishers face another challenge: the tension between content modularity and integration. While technical documentation can generally be disaggregated into modular components, other content is more powerful when tightly integrated. Ideally, content elements should support one another, rather than simply be presented together. But cross-dependency among elements makes them less attractive candidates to manage as separate components. A reusable, adaptable template may be a better approach when elements tend to occur together in an integrated manner. Authors may want to reuse the structure of the body of the content without reusing the actual content components.

Adaptive content and reuse

The newest approach to content reuse is known as adaptive content. Unfortunately, there is no widely accepted definition of adaptive content, and content professionals tend to speak about adaptive content in different ways. The phrase provokes two obvious questions:

  1. What adapts?
  2. To what does it adapt?

Sometimes people will speak about “the content” adapting to “the device” the individual is using. That interpretation is not much different from responsive web design, and is not very ambitious. It should be possible to have the content itself change based on any number of criteria, such as contextual factors (location, time of day, user status), and various user preferences or behaviors. I would rather define adaptive content in terms of the goal it supports.

Adaptive content
content that changes what is presented to reflect the intentions of the content consumer.

How Adaptive Content works

Adaptive content relies on the use of algorithms and audience data to change the content. There are significant differences between preplanned content variations, such as those specified in DITA, and the dynamic, on-demand variations associated with adaptive content. Adaptive content builds on transclusion and COPE, but extends them.

Content reuse to support adaptive content must accommodate on-demand access to content by individuals, to deliver content composed of components that reflect the interests and needs of an individual when they ask for them.
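A sketch of what such on-demand access might look like (the request shape and data are invented for illustration): the consumer names the components that matter, and the response is assembled per request rather than preassembled by the publisher.

```typescript
// Hypothetical on-demand assembly: components are stored individually and
// composed per request, reflecting the consumer's stated interests.
interface Component {
  id: string;
  topic: string;
  body: string;
}

const components: Component[] = [
  { id: "c1", topic: "pricing", body: "Plans start at $10/month." },
  { id: "c2", topic: "privacy", body: "We never sell your data." },
  { id: "c3", topic: "history", body: "Founded in 2009." },
];

// The request lists the topics the individual cares about, in their order
// of interest; the publisher no longer guesses the composition.
function assemble(requestedTopics: string[]): string {
  return requestedTopics
    .flatMap(topic => components.filter(c => c.topic === topic))
    .map(c => c.body)
    .join("\n");
}

console.log(assemble(["privacy", "pricing"]));
```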

An early example of adaptive content is the NPR One app for audio content. Individuals indicate what kinds of programming they want, rather than having the publisher decide that for them. NPR extends its API not only to content partners (local radio stations that add local content), but also to the end consumer of the content, giving them control over what content they receive through likes and shares. The app is adaptive, but not entirely a content-on-demand solution, since it is based on streaming.

Benefits and limitations of adaptive content

Realizing the goal of having content components available on demand, responding to user preferences in real time, would remove the problems associated with publishers making wrong guesses about what someone wants to view. The limitation of this approach is the complexity it introduces for publishers. They need to think even harder about where the value of their content resides, based on actual usage analytics, and structure the content elements to allow retrieval. Web searchers can already cherry-pick information in search results to get the exact content items they want from articles marked up in schema.org. Such behavior provides a preview of how content will need to become adaptive to user needs.

Conclusion

Content reuse is rich with possibilities. Different content specializations are working to improve reuse. It is useful to understand different approaches. By combining approaches, one can support an integrated strategy that improves both internal goals such as efficiency and governance, and external goals such as personalization and engagement.

— Michael Andrews


  1. FRBR stands for Functional Requirements for Bibliographic Records. FRBR’s focus is on bibliographic records for long-form content such as books, sound recordings, and films. Its focus is different from that of content strategy, so it will not be exactly equivalent. It offers helpful insights as long as we don’t expect literal compliance to its terminology. My apologies to librarians if I run roughshod over these concepts.  ↩