
Adaptive Content: Three Approaches

Adaptive content may be the most exciting, and most fuzzy, concept in content strategy at the moment.  Shapeshifting seems to define the concept: it promises great things — to make content adapt to user needs — but is vague about how that’s done. Adaptive content seems elusive because it isn’t a single coherent concept. Three different approaches can be involved in content adaptation, each with distinctive benefits and limitations.

The Phantom of Adaptive Content

The term adaptive content is open to various interpretations. Numerous content professionals are attracted to the possibility of creating content variations that match the needs of individuals, but have different expectations about how that happens and what specifically is accomplished. The topic has been muddled and watered down by a familiar marketing ploy that emphasizes benefits instead of talking about features. Without knowing the features of the product, it’s unclear what precisely the product can do.

People may talk about adaptive content in different ways: for example, as having something to do with mobile devices, or as some form of artificial intelligence. I prefer to consider adaptive content as a spectrum that involves different approaches, each of which delivers different kinds of results.  Broadly speaking, there are three approaches to adaptive content, which vary in terms of how specific and how immediately they can deliver adaptation.

Commentators may emphasize adaptive content as being:

  • Contextualized (where someone is),
  • Personalized (who someone is),
  • Device-specific (what device they are using).

All these factors are important to delivering customized content experiences tailored to an individual’s needs and circumstances.  Each, however, tends to emphasize a different point in the content delivery pipeline.

Delivery Pipelines

There are three distinct windows where content variants are configured or assembled:

  1. During the production of the content
  2. At the launch of a session delivering the content
  3. After the delivery of the content

Each window provides a different range of adaptation to user needs.  Identifying which window is delivering the adaptation also answers a key question: Who is in charge of the adaptation?  Is it the creator of the content, the definer of business rules, or the user themself?  In the first case the content adapts according to a plan.  In the second case the content adapts according to a mix of priorities, determined algorithmically.  In the final case, the content adapts to the user’s changing priorities.

Content variations can occur at different stages

Content Variation Possibilities

Content designers must decide what content to include or exclude in different content variations.  Those decisions depend on how confident they are about which variations are needed:

  • Variants planned around known needs, such as different target segments
  • Variants triggered by anticipated needs reflecting situational factors
  • Variants generated by user actions such as queries that can’t be determined in advance

On one end of the spectrum, users expect customized content that reflects who they are based on long-established preferences, such as being a certain type of customer or the owner of an appliance. On the other end of the spectrum, users want content that immediately adapts to their shifting preferences as they interact with the content.

Situational factors may invoke contextual variation according to date or time of day, location, or proximity to a radio transmitter device. Location-based content services are the most common form of contextualized content.  Content variations can be linked to a session, where at the initiation of the session, specific content adapts to who is accessing it, and where they are — physically, or in terms of a time or stage.

Variations differ according to whether they focus on the structure of the content (such as including or excluding sections), or on the details (such as variables that can be modified readily).
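
To make the distinction concrete, here is a minimal TypeScript sketch (all type and function names are hypothetical, not taken from any particular CMS): structural variation selects which sections appear, while detail variation swaps values inside a fixed structure.

```typescript
// Hypothetical content model distinguishing the two axes of variation.

interface SectionVariant {
  id: string;             // e.g. "warranty-us" vs "warranty-eu"
  audience: string[];     // segments this variant is intended for
  body: string;
}

interface ContentItem {
  sections: SectionVariant[];        // structural: include/exclude/choose
  variables: Record<string, string>; // detail: e.g. { price: "$49" }
}

// Structural adaptation: pick the section variants for one audience.
function configureFor(item: ContentItem, segment: string): SectionVariant[] {
  return item.sections.filter(s => s.audience.includes(segment));
}

// Detail adaptation: change a variable without touching the structure.
function setVariable(item: ContentItem, name: string, value: string): void {
  item.variables[name] = value;
}
```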

Different forms of variation in content adaptation

Customization, Granularity and Agility

While many discussions of adaptive content consciously avoid talking about how content is adapted, it’s hard to hide from the topic altogether: there is plenty of discussion about approaches to creating content variations.  On one side are XML-based approaches like DITA that focus on configuring sections of content, while on the other side are JSON-based approaches involving JavaScript that focus on manipulating individual variables in real time.

Contrary to the wishes of those who want only to talk about the high concepts, the enabling technologies are not mere implementation details. They are fundamental to what can be achieved.

Adaptive content is realized through intelligence. The intelligence that enables content to adapt is distributed in several places:

  • The content structure (indicating how content is expected to be used),
  • Customer profile (the relationship history, providing known needs or preferences),
  • Situational information from current or past sessions (the reliability of which involves varying degrees of confidence).

Which approach is used shapes how the content delivery system defines a “chunk” of content — the colloquial name for a content component or variable. This has significant implications for the detail that is presented, and the agility with which content can match specific needs.

Different approaches to delivering content variations are solving different problems.

The two main issues at play in adaptive content are:

  1. How significant is the content variation that is expected?
  2. How much lead time is needed to deliver that variation?

The more significant the content variation required, the longer the lead time needed to provide it.  If we consider adaptive content in terms of scope and speed, this implies that narrow adaptation can be fast, while broad adaptation will be slow.  While it makes sense intuitively that global changes aren’t possible instantly, it’s worth understanding why that is in the context of today’s approaches to content variation.

First, consider the case of structural variation in content. Structure involves large chunks of content.  Adaptive content can change the structure of the content, making choices about which chunks of content to display.  This type of adaptation involves the configuration of content.  Let’s refer to large chunks of content as sections.  Configuration involves selecting which sections to include in different scenarios, and which variant of a section to use.  Sections may have dependencies: if one section is included, related detail sections will be included as well.  Sectional content can entail a lot of nesting.

Structural variation is often used to provide customized content to known segments.  XML is often used to describe the structure of content involving complex variations.  XML is quite capable when describing content sections, but it is hard to manipulate, due to the deeply nested structure involved.  XSLT is used to transform the structure into variations, but it is slow as molasses.  Many developers are impatient with XSLT, and few users would tolerate the latency involved with getting an adaptation on demand.  Structural adaptation tends to be used for planned variations that have a long lead time.
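
As an illustration of why structural configuration takes lead time, here is a hedged TypeScript sketch (the data shapes are hypothetical) of selecting sections while honoring their dependencies: including one section pulls in the detail sections it requires, and on deeply nested content sets that resolution work adds up.

```typescript
// Hypothetical section catalog with dependency links.
interface Section {
  id: string;
  requires: string[]; // ids of detail sections that must accompany it
}

// Given an initial selection, resolve the full set of sections to
// include by following dependency links transitively.
function resolveSections(
  selected: string[],
  catalog: Map<string, Section>
): Set<string> {
  const included = new Set<string>();
  const queue = [...selected];
  while (queue.length > 0) {
    const id = queue.pop()!;
    if (included.has(id)) continue;
    included.add(id);
    const section = catalog.get(id);
    if (section) queue.push(...section.requires); // follow dependencies
  }
  return included;
}
```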

Next, consider the assembly of content when it is requested by the user — on the loading of a web page. This stage offers a different range of adaptive possibilities linked to the context associated with the session.  Session-based content adaptation can draw on IP address, browser, or cookie information.  Some of the variation may be global (the language or region displayed), while other variation involves swapping out the content of a section (returning visitors see this message).  Some pseudo-personalization is possible within content sections by providing targeted messages within larger chunks of static content.
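
A minimal sketch of session-start adaptation, assuming hypothetical variant names: coarse signals available at the launch of a session (a locale, a returning-visitor cookie) choose among prepared variants of a section.

```typescript
// Hypothetical session signals gathered at page load.
interface SessionContext {
  locale: string;            // e.g. derived from Accept-Language or IP lookup
  returningVisitor: boolean; // e.g. derived from a cookie
}

// Prepared variants, keyed by visitor status and language.
const greetingVariants: Record<string, string> = {
  "new:en": "Welcome! Here's how to get started.",
  "returning:en": "Welcome back! Pick up where you left off.",
  "new:fr": "Bienvenue ! Voici comment commencer.",
  "returning:fr": "Bon retour ! Reprenez où vous en étiez.",
};

// Select the variant for this session, falling back to a default.
function greetingFor(ctx: SessionContext): string {
  const visit = ctx.returningVisitor ? "returning" : "new";
  const lang = ctx.locale.split("-")[0]; // "fr-CA" -> "fr"
  return greetingVariants[`${visit}:${lang}`] ?? greetingVariants["new:en"];
}
```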

Finally, adaptive content can happen in real-time.  The lead time has shrunk to zero, and the range of adaptation is more limited as well.  The motivation is to have content continuously refresh to reflect the desires of users.  Adaptation is fast, but narrow. Instead of changing the structure of content, real-time adaptation changes variables while keeping the structure fixed.

It is easier to swap out small chunks of text, such as variables or finely structured data, in real time than it is to do quick iterative adaptations of large chunks such as sections.  JSON and JavaScript are designed to manipulate discrete, easily identified objects quickly.  Large chunks of content may not parse easily in JavaScript, and can seem to jump around on the screen. Single-page applications can avoid page refreshes because the content structure is stable: only the details change. They deliver a changing “payload” to a defined content region.  Data tables change easily in real time.  Single-page applications can swap out elements that can be easily and quickly identified — without extensive computation.
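
A sketch of this real-time pattern (the element id and API endpoint are hypothetical): the page structure stays fixed, and only the payload of a defined content region is replaced as new data arrives.

```typescript
// Swap a small, easily identified value into a fixed region,
// without a page refresh.
function renderPrice(region: HTMLElement, price: number): void {
  region.textContent = `$${price.toFixed(2)}`;
}

// Usage: poll (or subscribe to) a data source and update in place.
const priceRegion = document.getElementById("price")!; // hypothetical id
setInterval(async () => {
  const response = await fetch("/api/price"); // hypothetical endpoint
  const data: { price: number } = await response.json();
  renderPrice(priceRegion, data.price);
}, 5000);
```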

Conclusion

Content adaptation can be a three-stage process, involving different sets of technologies and different levels of content.

The longer the lead time, the more elaborate the customization possible. When discussing adaptive content, it’s important to distinguish adaptation in terms of scope and immediacy.

A longer-term challenge will be how to integrate different approaches to provide the customization and flexibility users seek in content.

— Michael Andrews


Making content updates an intelligent process

In the first part of this two-part post, “Why your content is never up to date,” I discussed how common approaches to managing out-of-date content are focused on first searching for content that’s dated, and then updating it as appropriate.  In this post, I want to explore how to prevent content from becoming out-of-date.  Making sure content is always current requires more than willpower.  It requires more sophisticated tools than are widely available today.

Unfortunately, for all the bells and whistles in many content management systems, they are generally poorly designed to support real-time enterprise management of content’s “nowness”.  The intelligence of what’s up-to-date resides in the heads of the content creators, and the CMS is largely oblivious to what is involved with that judgment.  The cognitive load of having to keep track of how up-to-date content is, and why, is doubtless one of the frustrations that contributes to user disillusionment with CMSs.

Due to the limitations of existing tools, I will propose some new approaches.  In some cases, organizations will need to build new software tools and business processes themselves to enable proactive management of content.  While this option is not for everyone, it is clear to me that content innovation comes from publishers and not from the CMS industry, and that content leaders are often the ones who build their own solutions.

The solutions I propose fall in three main areas:

  1. understanding the temporal lifecycle of content elements
  2. developing more robust business rules for content
  3. building intelligence into content workflows

Why does content change?

Few organizations at the enterprise level have a good understanding of why their content changes over time, and how often.  Since they tend to devolve responsibility to individuals, they don’t monitor this dimension.  But without insights into what’s happening, they are unable to manage the process more effectively.  They need to understand what elements of content are routinely updated, what business areas those elements relate to, and how often the updates happen.

Organizations need forensic insights into content change.  Content can go through at least three patterns of changes in state:

  1. content that is thrown away because it is no longer useful
  2. content that is temporarily replaced by other content before returning, such as when a limited time offer replaces the standard offer
  3. content that is updated, and evolves from one state to another

The difference between throw-away content and revisable content may not be clear cut. Sometimes content is thrown away because it is too burdensome to revise.  Other times content looks like a revision when in reality it is a repurposing of content about one product for use with another (a forking or mutation change).  It’s valuable to know what kinds of content change often (or should change often), and what about the content changes, in order to anticipate problem areas that generate out-of-date content or revision effort.

An understanding of what content changes is not typically developed during a content inventory, which is one of the few times organizations ever thoroughly examine their content.

Another challenge in understanding change is knowing what level of detail to examine.  Even interviewing content owners about change will not necessarily reveal all the changes that happen.  Owners will likely focus on changes specific to their content, and then only the most substantive ones.  But changes relating to specific details can happen on a global basis, and tracking them can become tedious or worse.  The VP for Customer Relations, for example, may decide one day that henceforth all customers will no longer be described as “members” but instead as “guests.”

Most CMSs are not robust at tracking the many content components that can change, such as the terminology used to describe a customer.   Content strategists often advocate structured modularity in content to help manage such issues.  Modularity can be helpful, but it is infrequently practiced when it comes to embedded content — content within content. (Notable exception: CMSs optimized for structured online catalog content.) Some CMSs don’t support modular component embedding, and those that do are often cumbersome for end users.  To avoid having unstructured content embedded within larger content, some strategists recommend avoiding embedded content altogether, for example, never placing links in-line with the body text.  But scattering content elements in different places can degrade the audience experience.  Content creators reflexively embed content in other content to create a more naturalistic content experience, publishing content that feels integrated rather than fragmented.

A key need is to understand changes that happen within embedded content. Most CMSs don’t offer good visibility into how pervasively specific content components, structured or unstructured, are used across digital publications.  Conducting an analysis of how these components change will help your organization manage them better.

Ideally, a reliable and repeatable process for understanding change will involve something like this:

  1. a snapshot is taken of a consistent representative sample of content at different time intervals
  2. the sample snapshots are compared using file comparisons to identify what aspects of the content have changed over time (a sketch of this comparison step follows the list)
  3. the text of content that is found to have changed is analyzed as to its type, meaning and purpose
  4. patterns of change for components of content are identified according to element and the context in which it appeared, to provide a basis for developing content business rules
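
Step 2 could look something like the following TypeScript sketch (the data shapes are hypothetical, and a real analysis would diff full text rather than discrete fields): compare two snapshots of the same sampled items, field by field, to surface what changed between intervals.

```typescript
// Hypothetical snapshot shape: itemId -> field -> value.
type Snapshot = Record<string, Record<string, string>>;

interface Change {
  itemId: string;
  field: string;
  before: string;
  after: string;
}

// Report every field whose value differs between two snapshots.
function diffSnapshots(earlier: Snapshot, later: Snapshot): Change[] {
  const changes: Change[] = [];
  for (const [itemId, fields] of Object.entries(later)) {
    const old = earlier[itemId];
    if (!old) continue; // item is new; handle separately
    for (const [field, after] of Object.entries(fields)) {
      const before = old[field];
      if (before !== undefined && before !== after) {
        changes.push({ itemId, field, before, after });
      }
    }
  }
  return changes;
}
```

The resulting change records feed step 3: the changed text is then analyzed for its type, meaning, and purpose.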
Example of CMS track changes functionality. It does not indicate what kinds of content components are being changed, or why.  Are the wording changes substantive, impacting other content, or merely stylistic?

Another area where most CMSs are weak relates to versioning content, especially at the component level.  There isn’t much intelligence relating to versioning in most CMSs.  Typically, the CMS auto-creates a new version each time there’s a revision, for whatever reason.  The version number is meaningless.  The publisher can “roll back” the version to a prior one in case there was a mistake, but you can’t see what was different about the content three versions ago compared with the current version.  Even for the few CMSs that let you track changes over time, there is no characterization of what the change represents, and why it was made.  A few CMSs let the author add comments to each version, but such free text entry is generally going to be idiosyncratic and not trackable at an aggregated level.  Comments might say something like “Revised wording based on Karen’s feedback” — meaningful in a local workgroup context perhaps, but not meaningful elsewhere.

At a minimum, CMSs need to provide publication date-based version management, so that administrators can easily identify what content about a topic was published before or after a certain date.  This capability allows one to at least see how much content may be impacted by an event-driven change.  This is basic stuff, and easy to do, but it falls short of what’s actually needed.  It would be helpful to be able to apply conditions to such searches, such as finding items published prior to a date with content containing some variable.
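
A sketch of that kind of conditional search, with hypothetical fields: find items published before a cutoff date whose body still contains a given phrase or variable.

```typescript
// Hypothetical versioned content item.
interface VersionedItem {
  id: string;
  published: Date;
  body: string;
}

// Items published before the cutoff that still contain the phrase.
function publishedBeforeContaining(
  items: VersionedItem[],
  cutoff: Date,
  phrase: string
): VersionedItem[] {
  return items.filter(
    item => item.published < cutoff && item.body.includes(phrase)
  );
}

// Usage: everything published before the rebrand that still says "members".
// const stale = publishedBeforeContaining(allItems, new Date("2015-06-01"), "members");
```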

An even better solution would provide an easy way to record the business reason for the update.  These could be formalized as trackable data elements that could be applied as a batch when clusters of content are updated at the same time.  Examples of reasons you might want to track are: product model change, warranty change, branding update, campaign language revision, etc.
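
One possible shape for such a record, sketched in TypeScript with hypothetical reason codes drawn from the examples above: a formalized, aggregatable reason applied as a batch, rather than idiosyncratic free-text comments.

```typescript
// Hypothetical controlled vocabulary of update reasons.
type UpdateReason =
  | "product-model-change"
  | "warranty-change"
  | "branding-update"
  | "campaign-language-revision";

interface ChangeRecord {
  itemIds: string[];    // all items updated together in one batch
  reason: UpdateReason; // trackable at an aggregated level
  appliedBy: string;
  appliedOn: Date;
  note?: string;        // optional free text, not relied on for tracking
}

// Record one batch update against a cluster of content items.
function recordBatchUpdate(
  log: ChangeRecord[],
  itemIds: string[],
  reason: UpdateReason,
  appliedBy: string
): void {
  log.push({ itemIds, reason, appliedBy, appliedOn: new Date() });
}
```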

Having such changes tracked will enable organizations to monitor how much updating is happening, and the status of updates.  It allows content owners to examine the status of all their content without having to read each item.

More robust business rules

As organizations begin to look at content updating as an organizational issue, instead of as a problem for individual content owners, the opportunity arises to prioritize different kinds of updates according to business value.  I would be surprised if many organizations today have an explicit policy on how to prioritize the updating of content.  Instead, it is common for updating to be based either on what’s easy to do, or what seems urgent based on immediate management prerogatives.

While any content that is out-of-date should be updated, provided the content has continuing value, it’s obvious that some content is more important than others.  Broken links are always a lousy experience, but unless they are on a high traffic page that’s a key part of a conversion funnel, they probably aren’t mission critical.

Different kinds of updates need to be characterized by their business criticality, and an estimation of effort involved with the update.  Errors and changes to regulatory, legal and price related content are business critical.  Changes to unstructured content, such as branding changes involving photographic imagery, often take longer when done on a large scale.  Each organization needs to develop its own prioritization based on its business factors and content readiness.

Once an organization has a better understanding of what drives content updates, it can begin to define business rules relating to content so that it is kept current.  The goal is to formalize the changes of state for content, so it can be better managed.

The content update analysis performed earlier will provide the foundation for the development of business rules. To do this, map the content changes you observe against the content contexts (larger content containers) and against a timeline.  Map which changing content elements (fragments of text, images, whatever) are associated with which content types and topics.  Identify common patterns.  Some content elements will be used in many places.  Some topics or types of content will have multiple changes associated with them at a given time; others will only experience minor changes.  After you have performed this analysis (using either a computer-based clustering tool or manual affinity diagramming), you should start to see some common scenarios.  If it is not obvious why the updates occurred, work with content owners and other stakeholders to reconstruct what happened.  You should end up with a series of common scenarios that describe cases where your content requires updating.

From the scenarios, you will want to identify specific triggers that generate the need for updates.  These will be internal or external events that impact the content, or situations where some variable relating to the content has changed.

In the case of situational change (e.g., something changed, but the actor or the timing is not well defined), it is important to understand how small-scale change can ripple through content.  Perhaps a product line has been renamed, or messaging has been revised slightly.  When such details impact many items of content, they should be managed through content templates where such details are structurally controlled.  There is always a trade-off between the overhead of managing components and the efficiency of updating them.  Having a solid grasp of the relative frequency of items, their prevalence of use, and frequency of updates will allow content designers to strike an appropriate balance.  Even if such content elements are not all centrally managed, it is important to know where they are being used.
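
A sketch of a component usage index (the shape is hypothetical): even when small details are not centrally managed, knowing where each one is used makes the ripple effect of a change visible.

```typescript
// Hypothetical index: component id -> ids of pages that use it.
const usageIndex = new Map<string, Set<string>>();

// Record that a page uses a component (e.g. populated during publishing).
function registerUse(componentId: string, pageId: string): void {
  if (!usageIndex.has(componentId)) usageIndex.set(componentId, new Set());
  usageIndex.get(componentId)!.add(pageId);
}

// When a detail changes (say, a product line is renamed), list every
// page that needs review.
function impactedPages(componentId: string): string[] {
  return [...(usageIndex.get(componentId) ?? [])];
}
```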

In the case of event triggered change, it is useful to characterize the types of events and associated actors, and the elements typically updated as a result.  Triggers can be internal, such as a new marketing campaign, sale of a division, the introduction of a new product line, or a new partnership.  Triggers may also be external: a new regulation, a dramatic market shift, or the adoption of third party guidelines.  Such events potentially impact multiple content elements, and involve more complex coordination.  By identifying typical events that impact content, as well as major corporate-level changes that may be less frequent but have huge consequences, you can build workflows needed to assure necessary updates happen.
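
One way to formalize that mapping, sketched with hypothetical event names: each known trigger is associated ahead of time with the content elements it typically affects and the owners to notify, so the right update workflow can start as soon as the event occurs.

```typescript
// Hypothetical trigger registry.
interface Trigger {
  event: string;                // e.g. "new-product-line"
  origin: "internal" | "external";
  affectedComponents: string[]; // ids of elements usually updated
  notify: string[];             // content owners to alert
}

const triggers: Trigger[] = [
  { event: "new-product-line", origin: "internal",
    affectedComponents: ["product-index", "spec-tables"], notify: ["catalog-team"] },
  { event: "regulation-change", origin: "external",
    affectedComponents: ["legal-disclaimers"], notify: ["legal-team"] },
];

// Look up the workflows to kick off when an event occurs.
function workflowsFor(event: string): Trigger[] {
  return triggers.filter(t => t.event === event);
}
```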

These recommendations may appear simply to follow the principles of good content design.  But effective content design also needs to be transparent, so all stakeholders can understand the linkages, and status of updates.  Such visibility is essential to being able to revise the model as business requirements change.  Unfortunately, even in well designed content implementations, it is often difficult to understand what’s under the hood, and know how the pieces fit together.

Implementing a more intelligent approach

Content administrators, content owners, and the executives who depend on content to deliver business outcomes have common needs:

  • knowing what to do when updating is needed
  • knowing the status of updates
  • being sure their effort is efficient and effective

An effective process needs to accommodate the various parties who are involved in content updating.   One approach would be to empower a central team with lead responsibility for major update initiatives.  It might involve a command center or newsroom, where company initiatives that impact company content are identified, and the updates needed cascade through the organization.  Suppose the company announced a new initiative, or a change in policy. The central command center would query a database of content to identify impacted content.  If the changes were global, they could make the updates themselves.  If the changes impact selected content, the team would identify the specific content and send a notification to the content owner to make revisions. The notification would include a message about the business criticality of the update, the reason for needing the update, and an estimation of effort.  As updates are made, the team would monitor progress on a dashboard.  This approach assumes a degree of central management of content within an organization.

Another situation is when large scale content changes are unplanned.  Such changes might be harder for a central team to identify, especially if they arise from a peripheral division that doesn’t have a close relationship with the central team.  Suppose a content owner initiates an update that has an impact on other content she does not own.  Assuming this owner has authority to make such an update, there needs to be a way to alert other parties of the change.  Ideally, the content system will be smart enough to have a file conflict detection capability, so that it could spot a conflict between the revisions the content owner has made, and other similar instances of content.  The inspiration for this approach is the conflict detection capability in repositories such as GitHub, though the user experience would need to be radically simpler and more informative.  Complex, marked-up content is unquestionably more elaborate than the flat files managed in file repositories.  The task is not trivial, and there could be a lot of noise to overcome, such as false alarms, or missed alerts.  Having good taxonomic structure would be imperative.  But if it could work, the alert would serve two functions.  First, it would make the content owner aware that the change will make her content out of sync with other content, and ask for confirmation of intent.  Second, it would trigger notification of the central team and affected content owners that updates are necessary.
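
A hedged sketch of that conflict-detection idea (all names hypothetical; real content would need taxonomy-aware matching rather than the string equality used here): when an owner revises a shared component, compare the revision with other live instances of the same component, and treat any divergence as a conflict to confirm and notify on.

```typescript
// Hypothetical instance of one component as it appears on a page.
interface ComponentInstance {
  pageId: string;
  ownerId: string;
  text: string;
}

// Other live instances that now diverge from the revised copy.
function detectConflicts(
  revised: ComponentInstance,
  instances: ComponentInstance[]
): ComponentInstance[] {
  return instances.filter(
    inst => inst.pageId !== revised.pageId && inst.text !== revised.text
  );
}

// Usage: alert the revising owner and notify the owners of diverging copies.
// const conflicts = detectConflicts(revision, liveInstances);
// if (conflicts.length > 0) { /* confirm intent, notify central team */ }
```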

Costs and opportunities of an intelligent process

The vision I have outlined is ambitious, and requires resources to realize.  No doubt some will object to its apparent complexity, the expense it might entail, the uncertainties of trying an approach that hasn’t already been thoroughly tested by many others.  Some CMS vendors might object that I undersell their product’s capabilities, that I am exaggerating the severity of the problem of keeping content up-to-date.  I can’t claim to be an authority on all the 1000+ CMSs available, but most I see seem to emphasize making themselves appear easy to use (“drag and drop inline editing!”), aiming to convince selection committees that content management should be no more complicated than an iPad game.   Vendors deemphasize harder questions of enterprise-level productivity and long-term strategic value.  Once installed, few end users find their new CMS nearly as fun as they had hoped.  The emphasis on eye candy is an attempt to deflect that end-user unhappiness.

As I noted in my earlier post, relying on existing approaches is simply not an effective option for large organizations.  It’s costly to always be playing catch up with content updates and never be on top of them.  Organizations that are always behind on updating their content miss business opportunities as they exhaust their staff. It’s risky not to know whether all your content is up to date: an expensive lawsuit could result.  Playing catch up impairs a business’s ability to operate agilely.

Yes, resources are required to develop the capability to proactively update content before it becomes out-of-date.  But content has no value unless it is up-to-date, so there is little choice. In this era of mining big data and precision enterprise resource planning, it’s not unrealistic to expect more granular control over one’s content.  It’s not acceptable for large organizations to present information to their customers that’s not the newest available.

I don’t assume my suggestions are the only approach to making the process more intelligent, but radical change of some kind seems needed.   If you agree this is a problem that needs new solutions, I encourage you to share your views on your favorite social media channel and encourage the development of something better.

— Michael Andrews