Categories
Content Effectiveness

Untangling Content ROI

How to measure content ROI is a recurring question in forums and at conferences.  It’s a complex topic — I wish it were simple.  Some people present the topic in a simple way, or claim only one kind of measurement matters.  I don’t want to judge what other people care about: only they know what’s most important to their needs.  But broad, categorical statements about content ROI tend to mislead, because content is complicated and organizational goals are diverse.  I can’t provide a simple formula to calculate the value of content, but I hope to offer ideas on how to evaluate its impact.

The Bad News: The ROI of Content is Zero

First, I need to share some bad news about content that nearly everyone is hiding from you.  There is no return on investment from content.  If you don’t believe me, ask your CPA how you can depreciate your content.

A widespread misconception about content ROI is that content is an investment.  Yet accountants don’t consider most content to be an investment.  They consider content an expense.  The corporations that hire the accountants consider content an expense as well.  In the eyes of accountants, content isn’t an asset that will provide value over many years.  It is a cost to be charged in the current year.  From a financial accounting perspective, you can’t have a return on investment on an item that is treated as an expense rather than as an investment.

Many years ago I took an accounting class at Columbia Business School. I remember having a strong dislike of accounting. Accounting operates according to its own definitions.  It may use words that we use in everyday conversation, but it attaches specific meanings to them.  Take the word “asset.”  Many of us in the content strategy field love to talk about content assets.  Our content management systems manage content assets.  We want to reuse content assets.  The smart use of content assets can deliver greater value to organizations.  But what we refer to as a content asset is not an asset in an accounting sense.  When we speak of value, we are not necessarily using the word in the way an accountant would.

I warned that broad statements about ROI are dangerous, and that content is complicated.  There are cases where accountants will consider content as an investment — if you happen to work at Disney.  Disney creates content that delivers monetary value over many years.  They defy the laws of content gravity, creating content that often makes money over generations.  Most of us don’t work for Disney.  Most of us make content that has a limited shelf life.  Until we can demonstrate content value over multiple years, our content will be treated as an expense.

So the first task toward gaining credibility in the CFO office is to talk about the return on content in a broader way.  Just because content is an expense, that doesn’t mean it doesn’t offer value.  Advertising is an expense that corporations willingly spend billions on.  Few people talk about advertising as an investment: it’s a cost of business, accepted as necessary and important.

The Good News: Content Influences Profitability

Content is financially valuable to businesses.  It can be an asset — in the commonsense meaning of the word.  It’s entirely appropriate to ask what the payoff of content is, because creating content costs money.  We need ways to talk about the relationship between the costs of content and the revenues it might influence.

Profitability is determined by the relationship between revenues and costs.  Content can influence revenues in multiple ways.  Content is a cost, but that cost can vary widely according to how the content is created and managed.  The overall goal is to use content to increase revenues while reducing the costs of producing content, where possible.  The major challenge is that the costs associated with producing content are often not directly linked to the revenue value associated with the content.  As a result, it can be hard to see how content creation costs affect revenues.  Content’s influence on profitability is often indirect.

Various stakeholders tend to focus on different financial elements when evaluating the value of content. Some will seize on the costs of the content. How can it be done more cheaply?  Others will focus exclusively on the revenue that’s related to a set of content items. How many sales did this content produce?  These are legitimate concerns for any business.  But narrowly framed questions can have unintended consequences.  They can lead to optimizing one aspect of content to the detriment of other aspects.  Costs and revenues can involve tradeoffs, where cost savings hurt potential revenue.  Costs also involve choices about what kinds of content to produce: a decision to spend on content supporting one revenue opportunity can mean not producing content that supports another.  For example, a firm might prioritize content for current customers over content for future customers, especially if revenue associated with current customers is easier to measure.

The key to knowing the value of content is to understand its relationship to profitability.

Customers Generate Profits, Not Content

Spreadsheets tend to represent things and not people.  There are costs associated with different activities, or different outputs, such as content.  There are revenues, actual and forecast, associated with products and services.  The customers that actually spend money for these products and services are often represented only indirectly.  But they are the link between one set of numbers (expenses involved with stuff they see and use) and another set (revenues associated with stuff they buy, which is generally not the content they see).

Unfortunately, the financial scrutiny of content items tends to obscure the more important issue of customer value.  Content is not valuable or costly in its own right.  Its financial implications are meaningful only with respect to the value of the customers using the content, and their needs.  The financial value of content is intrinsically related to the expected profitability of the customer.

The financial value of content is clear only when seen from the perspective of the customer.  Let’s look at a very simplified customer lifecycle.  The customer first enters a stage of awareness of a brand and its products.  Then the customer may move to a stage where she considers the product.  Finally, if all goes well, she may become an advocate for the brand and its products.  At each stage, content is important to how and what the customer feels, and how likely she may be to take various actions.  So what kind of content is most important?  Content to support awareness, content to support consideration, or content to support advocacy? Asked as an abstract hypothetical, the question poses false choices.  The business context is vital as well.  Is it more important to get a specific sale, or to acquire a new customer?  Such questions involve many other issues, such as buying frequency, brand loyalty, purchase lead times, product margins, etc.

There can be no consideration of a product without awareness, and no advocacy without favorable consideration (and use).  And awareness is diminished without advocacy by other customers.  The lifecycle shows that the customer’s value is not tied to one type of content — it is cultivated by many types.  At the same time, it is clear that content is only playing a supporting role.  The customer is not evaluating the content: she is evaluating the brand and its products.  Content is an amplifier of customer perception.  The content doesn’t create the sale — the product needs to fulfill a customer need.  While bad content can hurt revenues for otherwise excellent companies, content doesn’t have the power to overcome poor-quality products and services. Content’s role is to bring focus to what customers are interested in learning about.

Conversion is a Process, Not an Event

Marketing has become more focused on metrics, and so it is not surprising that content is being measured in support of sales. A/B testing is widely used to measure which content performs better in supporting sales.  Marketers are looking at how content can increase revenue conversion. This has often narrowed the field of vision to the content on product pages. Conversion is seen as an event, rather than as a process.

Below is a landing page for a product I heard about, and was interested in possibly purchasing.  It represents a fairly common pattern for page layouts for cloud based subscription services.  The page is simple, and unambiguous about what the brand wants you to do.  The page is little more than a button asking you to sign up (and, in the unlikely event you missed the button on the page, a second button is provided at the top).  I presume that this page has been tested, and the designers decided that less content resulted in more conversions per session.  If people have few places to go, they are more likely to sign up than if they get distracted by other pages. What’s harder to judge is how many people didn’t sign up because of the dearth of information.

A product page that is entirely about a Call-To-Action. The product remains a mystery until the prospect agrees to sign up.

Some online purchases are impulsive.  Impulse online purchases tend to be for inexpensive items, or from brands the customer has used before and trusts to deliver what they expect.  Most other kinds of purchases involve some level of evaluation of the product, or of the seller, sometimes over different sessions.  In the case of this product, the brand decided that it could encourage impulsive sign-ups by offering a two-week free trial.  This model is known as “buy before you try”, since you are presumed to have bought the product at sign-up, as your subscription is automatically renewed until you say otherwise.

A focus on conversion will often result in offering trials in lieu of content.  Free trials can be wonderful ways to experience a product. I enjoy sampling a new food item in a grocery, knowing I can walk away. But trials often involve extra work for prospective customers.  Online, my trial comes with strings attached.  I need to supply my email address.  I need to create an account, and make up a new password for a product I don’t know I want.  If it is a buy-before-you-try type trial, I’ll be asked for my credit card, and hope there is no drama if I do decide to cancel.  And I’m being forced to try the product on their schedule, and not my own.

Paradoxically, content designed to convert may end up not converting.  The brand provides little information about their service, such as what one could expect after signing up.  The only information available is hidden in a FAQ (how we love those), where you learn that the service will cost $100 a year — not an impulse buy for most people.  When prospective customers feel information is hidden, they are less likely to buy.

Breaking the Taboo of Non-Actionable Content

There is a widespread myth that all content must be designed to produce a specific action by the audience. If the content didn’t produce an action, then nothing happened, and the content is worthless.  It’s a seductive argument that appeals to our desire to be pragmatic.  We want to see clear outcomes from our content.  We don’t want to waste money creating content that doesn’t deliver results for our organization. So the temptation is to purge all content that doesn’t have an action button on it. And if we decide we have to keep the content, we should add action buttons so we have something we can measure.

I don’t want to minimize the problem of useless content that offers no value to either the organization or to audiences.  But it is unrealistic to expect all pages of content to contribute directly to a revenue funnel.  By all means weed out pages that aren’t being viewed.  But audiences do look at content with no intention to take action right away.  And that’s fine.

Creating content biased for action only makes sense when the content is discussing the object of the action.  Otherwise, the call to action is incongruous with the content.  A UX consultant may tell a nonprofit that people have trouble seeing the “donate now” button. But the nonprofit shouldn’t compensate by putting a “donate now” button on every page of their website — it looks pushy, and is unlikely to increase donations.

Conversion metrics measure an event, and can miss the broader process.  Most analytics are poor at tracking behavior across different sessions.  It is hard to know what happened between sessions — we only see events, and not the whole process.  Even sophisticated CRM technology can only tell part of the story.  It can’t tell us why people drop out, or whether inadequate content played a role. It can’t tell us if people who bought supplemented their knowledge of the product with other sources of information — talking to colleagues or friends, or seeing a third party evaluation. To compensate for these gaps in our knowledge of customer behavior, businesses often try to force customers to make a decision before they seem to disappear.

By far the biggest limitation of analytics is that they can’t measure mental activity easily.  We don’t know what customers are thinking as they view content, and therefore we tend to care only about what they do.  The opacity of mental activity leads some people to believe that the opinions of customers aren’t important, and that only their behavior counts.

The Financial Value of Customer Opinion

Customers have an opinion of a brand before they buy, and after they buy.  Those opinions have serious revenue implications. They shape whether a person will buy a product, whether they will recommend it, and whether they will buy it again.  Content plays an important role in helping customers form an opinion of a brand and product.  But it’s hard to know precisely which content is responsible for which opinions that, in turn, result in revenue-impacting decisions.  Humans just aren’t that linear in their behavior.  Often many pieces of content will influence an opinion, sometimes over a period of time.

Just because one can’t measure the direct revenue impact of content items does not mean these items have no revenue impact.  A simple example will illustrate this.  Most organizations have an “about us” page.  This page doesn’t support any revenue generating activity.  It doesn’t even support any specific customer task.  Despite not having a tightly defined purpose, these pages are viewed.  They may not get the highest traffic, but they can be important for smaller or less well known organizations.  People view these pages to learn who the organization is, and to assess how credible they seem.  People may decide whether or not to contact an organization based on the information on the “about us” page.

Non-transactional content is often more brand-oriented than product-oriented.  Such content doesn’t necessarily talk about the brand directly, but will often provide an impression of the brand in the context of talking about something of interest to current and potential customers.  These impressions influence how much trust a customer feels, and their openness to any persuasive messaging.  Overall content also shapes how loyal customers feel.  Do they identify with being a customer of a brand, or do they merely identify as being someone who is shopping, or as someone who was a past purchaser of a product?

Another type of non-transactional content is post-purchase product information.  A focus on content for conversion can overlook the financial implications of the post-purchase experience. People often make purchase decisions based on a general feeling about a brand, plus one or two key criteria used to select a specific product.  If they are looking to book a hotel, they have an expectation about the hotel chain, and may look for the price and location of rooms available.  They may not want to deal with too many details while booking.  But after booking, they may focus on the details, such as the availability of WiFi and hairdryers.  If information about these needs isn’t available, the customer may be disappointed with their decision.  Other forms of post-purchase product information include educational materials relating to using a product or service, on-boarding materials for new customers, and product help information.

The financial value of non-transactional content will vary considerably for two reasons.  First, no one item of content will be decisive in shaping a customer’s opinion. Many items, involving different content types, can be involved. Second, the level of content offered can be justified only in terms of the customer’s value to the organization.  Content that’s indirectly related to revenues is easiest to justify when it’s important to developing customer loyalty. Perhaps the product is high value, has high rates of repurchase, or involves a novel approach to the product category that requires some coaching to encourage adoption. Developing non-transactional content makes most financial sense when aimed at customers who will have a high lifetime value.

Measuring the impact of content that influences customer opinions is hard — much harder than measuring content designed around defined outcomes, such as the conversions on product pages.  But with clear goals, sound measurement is possible.  Content that’s not created to support a concrete customer action needs to be linked to specific brand and customer goals.  Customer goals will consider broader customer journeys where the brand and product are relevant, and where there is a realistic opportunity to present content around these moments.  Appropriate timing is often critical for content to have an impact.  The goals of a brand will reflect a detailed examination of the customer lifecycle, and a full understanding of the future revenue implications of different stages and of the brand’s delivery of services before and after revenue events.

The Ultimate Goal: Content that Supports Higher Margins

The two most common approaches to “Content ROI” involve improving conversion rates, and reducing content costs.  These tactics are incremental approaches — useful when done properly, although potentially counterproductive if done poorly.

To realize the full revenue potential of content, one can’t be a prisoner of one’s metrics. The things that are easiest to quantify financially are not necessarily the most important financial factors.  Many organizations fine-tune their landing pages with A/B testing.  Many of the changes they make are superficial: small visual and wording changes.  They are important, and have real consequences, lifting conversions.  But they only scratch the surface of the content customers consider.  The placement and color of buttons get much attention partly because they are relatively simple things to measure.  That does not imply they are the most important things — only that their measurement is simple to do, and the results are tangible.

Conversion metrics measure the bottom of the marketing funnel: making sure people don’t drop out after they’ve reached the point-of-purchase page.  What’s harder to do, but potentially more financially valuable, is to expand the funnel by focusing on who enters it.  Content can attract more people to consider a brand and its products, and attract more profitable customers as well.

The biggest opportunity to increase revenues is by attracting people who would be unlikely to ever reach your product landing page.  How to do this is no mystery — it’s just hard to measure, and so gets de-emphasized by many metrics-driven organizations.  The first approach is to offer educational content, so that prospective buyers can learn about the benefits of a product or service without all those pesky calls-to-action.  People interested in educational content are often skeptics, who need to be convinced a solution or a brand is the right fit.  The second approach is through personalization.  The approach of intelligent content points to many ways in which content can be made less generic, and more relevant to specific customers. Many potential customers can’t see the relevance of the product or brand, and accordingly don’t even consider them in any detail, because existing content is too generic.

But profitability is not just about units of sale.  Profitability is about margins.

The first avenue to improving margins is reducing the cost of service.  Many content professionals focus on reducing the cost of producing content, which can potentially harm content quality if done poorly.  The bigger leverage can come from using content to reduce the cost of servicing customers.  Well-designed and targeted content can reduce support costs — a big win, provided the quality is high, and customers prefer to use self-service channels, instead of feeling forced to use them.

The second avenue to improving margins involves pricing.  Earlier I noted that the financial value of content depends on the financial value of the customers for which the content is intended.  A corollary holds true as well: the financial value of prospective customers is influenced by the content they see.  Valuable content can attract valuable customers.  It’s not only about the volume of sales; it’s about the margin each sale delivers.

Customers who see a brand as credible and as a leader are prepared to pay a premium over brands they see as generic.  This effect is most pronounced in the service industry, where experience is important to customer satisfaction, and content is important to experience.  Imagine you are looking to hire a professional services firm: a lawyer, an accountant (who appreciates the value of content), or perhaps a content strategist (maybe me!).  What you read about them online affects how you view their competency.  And those impressions will impact how much you are prepared to pay for their services.

These effects are real, but require a longer period to realize. Long-term projects may not be appealing to organizations that only care about quarterly numbers, or to product managers who are plotting their next job hop.  But for those committed to improving the utility of content offered to prospective customers, the financial opportunity is big.

Discovering Value

When seen from the perspective of how brand credibility affects margins, content marketing that often doesn’t seem linked to any specific outcome now matters significantly.  It is not simply who knows about your firm that matters: it is about how they evaluate your capabilities, and what they are prepared to pay for your product or service.  Potential customers not only need to be aware of a firm, and have a correct understanding of what it offers, they need to have a favorable impression of it as well.

Content that provides a distributed rather than direct financial contribution needs its own identity. Perhaps we should call it margin-enhancing content.  Such content enables brands to be more profitable, but does so indirectly.  The task of modeling and monitoring the impact of such content requires a deep awareness of how pieces may interact with and influence each other.  By its nature, estimating the strength of these relationships will be inexact.  But the upside of endeavoring to measure them is great.  And through experience and experimentation, the possibilities for more reliable measurements can only improve.

Measurement is important, but it’s not always obvious how to do it. For much of human history, people were unaware of radiation, because it could not be directly seen.  Eventually, the means to detect and measure it were developed.  The process of measuring the financial value of content involves a similar process of investigation: looking for evidence of its effects, and experimenting with ways to measure it more accurately.

— Michael Andrews

Categories
Content Experience

What is Content Design?

The growing interest in content design is a welcome development.  Such interest recognizes that content decisions can’t be separated from the context in which the content will be used.  Consideration of content design corrects two common misperceptions: the notion that content presentation is simply visual styling, and the belief that because content may need to exist in many contexts, the context in which content is displayed becomes irrelevant.  Direct collaboration between writers and UI designers is now encouraged.  Content must fit the design where it appears — and conversely, UI designs must support the content displayed.  Content has no impact independently of the container or interaction platform for which it has been designed and on which users rely.  Content depends on context.  And context frames the content experience.

Yet content design is more than a collaborative attitude. What content design actually entails is still not well understood. Content design requires all involved to consider how different elements should work together as a system.

“Content and Design Are Inseparable Work Partners”  — Jared Spool

Current Definitions of Content Design

There is no single accepted definition of content design.  Two meanings are in use, both of which are incomplete.

The first emphasizes layout and UI decisions relating to the presentation of content.  It looks at questions such as whether the text will fit on the screen, or how to show and hide information.  The layout perspective of content design is sometimes referred to as the application of content patterns.

The second, popularized by the Government Digital Service (GDS) in Britain, focuses on whether the words being presented in an article support the tasks that users are trying to accomplish.  The GDS instructs: “know your users’ needs and design your content around them” and talks about “designing by writing great content.”  The GDS’ emphasis on words reflects the fixed character of their content types — a stock of 40 formats.  These structures provide ready templates for inserting content, but don’t give content creators a voice in how or what to present apart from wording.

Content design encompasses much more than wording and layout.

The design of content, including printed media, has always involved layout and wording, and the interaction between the two. Comprehensive content design today goes further by considering behavior: the behavior of the content, and the behavior of users interacting with the content.  It designs content as a dynamic resource.  It evaluates and positions content within a stream of continuous interaction.

Most discussion of content design approaches content from a “one size fits all” perspective.  What’s missing in current discussions is how to design content that can serve multiple needs.  User needs are neither fixed, nor uniform.  Designs must be able to accommodate diverse needs.  Formulaic templates generally fall short of doing this.  Content must be supported by structures that are sophisticated enough to accommodate different scenarios of use.

Breaking Free from the Static Content Paradigm

Content creators typically think about content in terms of topics.  Topics are monolithic.  They are meant to be solid: to provide the answers to questions the audience has. In an ideal scenario, the content presented on the topic perfectly matches the goals of the audience.

The problem with topics is that they too often reflect a publisher-centric view of the world.  Publishers know people are seeking information about certain topics — their web logs tell them this.  They know the key information they need to provide on the topic.  They strive to provide succinct answers relating to the topic.  But they don’t consider the wide variation of user needs relating to the topic.  They can’t imagine that numerous people all reading the same content might want slightly different things.

Consider the many factors that can influence what people want and expect from content:

  • Their path of arrival — where they have come from and what they’ve seen already
  • Their prior knowledge of the topic
  • Their goals or motivations that brought them to the content
  • The potential actions they might want to take after they’ve seen the content

Some people are viewing the content to metaphorically “kick the tires,” while others approach the content motivated to take action.  Some people will choose to take action after seeing the content, but others will defer action.  People may visit the content with one goal, and after viewing the content have a different goal. Regardless of the intended purpose of the content, people are prone to redefine their goals, because their decisions always involve more than what is presented on the screen.

In the future, content might be able to adjust automatically to accommodate differences in user familiarity and intent.  Until that day arrives (if it ever does), creators of content need to produce content that addresses a multitude of users with slightly varying needs.  This marks the essence of content design: to create units of content that can address diverse needs successfully.

A common example of content involving diverse needs relates to product comparison.  Many people share a common task of comparing similar products.  But they may differ in what precisely they are most interested in:

  • What’s available?
  • What’s best?
  • What are the tradeoffs between products?
  • What options are available?
  • How to configure product options and prices?
  • How to save options for use later?
  • How to buy a specific configuration?

A single item of content providing a product comparison may need to support many different purposes, and accommodate people with different knowledge and interests.  That is the challenge of content design.

Aspects of Content Design

How does one create content structures that respond to the diverse needs of users in different scenarios? Content design needs to think beyond words and static informational elements.  When designs include features and dynamic information, content can accomplish more.  The goal is to build choice into the content, so that different people can take away different information from the same item of content.

Design of Content Features

A feature in content is any structural element of the content that is generated by code.  Much template-driven content, in contrast, renders the structure fixed, and makes the representation static.  Content features can make content more “app-like” — exhibiting behaviors such as updating automatically, and offering interactivity.  Designing content features involves asking how functionality can change the representation of content to deliver additional value to audiences and the business.  Features can provide for different views of content, with different levels of detail or different perspectives.

Consider a simple content design decision: should certain information be presented as a list, in a table, or as a graph?  Each of these options is a structure.  The same content can be presented in all three structures.  Each structure has benefits. Graphs are easy to scan, tables allow more exact information, while lists are better for screen readers.  The “right” choice may depend on the expected context of use — assuming only one exists.  But it is also possible that the same content could be delivered in all three structures, which could be used by different users in different contexts.
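As a rough sketch of this idea, the example below (TypeScript, with made-up data and function names) renders the same underlying content as either a list or a table.  It is only an illustration of the principle that the structure is a design choice separate from the content itself.

```typescript
// Hypothetical structured content: the underlying data stays the same
// regardless of which presentation structure is chosen.
interface PlanFeature {
  name: string;
  basic: string;
  premium: string;
}

const features: PlanFeature[] = [
  { name: "Storage", basic: "5 GB", premium: "1 TB" },
  { name: "Support", basic: "Email only", premium: "24/7 phone" },
];

// Render as a simple list (one plan at a time, easy for screen readers).
function renderAsList(items: PlanFeature[], plan: "basic" | "premium"): string {
  return items.map((f) => `- ${f.name}: ${f[plan]}`).join("\n");
}

// Render as a table (both plans side by side, for exact comparison).
function renderAsTable(items: PlanFeature[]): string {
  const header = "Feature | Basic | Premium";
  const rows = items.map((f) => `${f.name} | ${f.basic} | ${f.premium}`);
  return [header, ...rows].join("\n");
}

console.log(renderAsList(features, "basic"));
console.log(renderAsTable(features));
```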

Design of Data-driven Information

Many content features depend on data-driven information.  Instead of considering content as static — only reflecting what was known at the time it was published — content can be designed to incorporate information about activities related to the content that have happened after publication of the article.

Algorithmically-generated information is increasingly common.  A major goal is to harvest behavioral data that might be informative to audiences, and use that data to manage and prioritize the display of information.  Doing this successfully requires the designer to think in terms of a system of inter-relationships between activities, needs, and behavioral scenarios.
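A minimal sketch of this idea follows (TypeScript; the fields, weights, and scoring rule are hypothetical).  It shows behavioral data collected after publication being used to prioritize which items are displayed first.

```typescript
// Hypothetical content item enriched with post-publication behavioral data.
interface ContentItem {
  title: string;
  views: number;        // how often the item was viewed
  helpfulVotes: number; // how often readers marked it helpful
}

// Prioritize items by a simple engagement score. Real systems would weigh
// many more signals; this only illustrates the principle of letting
// behavior observed after publication drive what is shown first.
function prioritize(items: ContentItem[]): ContentItem[] {
  const score = (i: ContentItem) => i.helpfulVotes * 50 + i.views;
  return [...items].sort((a, b) => score(b) - score(a));
}

const items: ContentItem[] = [
  { title: "Getting started", views: 1200, helpfulVotes: 3 },
  { title: "Troubleshooting", views: 400, helpfulVotes: 40 },
];

console.log(prioritize(items).map((i) => i.title));
// -> ["Troubleshooting", "Getting started"]
```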

Features and data can be tools to solve problems that words and layout alone can’t address.  Both these aspects involve loosening the control over what the audience sees and notices.  Features and data can enrich the content experience.  They can provide different points of interest, so that different people can choose to focus on what elements of information interest them the most.   Features and data can make the content more flexible in supporting various goals by offering users more choice.

Content Design in the Wild

Real-world examples provide the best way to see the possibilities of content design, and the challenges involved.

Amazon is famous for both the depth of its product information, and its use of data.  Product reviews on Amazon are sometimes vital to the success of a product.  Many people read Amazon product reviews, even if they’ve no intention of buying the product from Amazon.  And people who have not bought the product are allowed to leave reviews, and often do.

Amazon’s product reviews illustrate different aspects of content design.  The reviews are enriched with various features and data that let people scan and filter the content according to their priorities.  But simply adding features and data does not automatically result in a good design.

Below is a recent screenshot of reviews for a book on Amazon.  It illustrates some of the many layers of information available.  There are ratings of books, comments on the books, identification of the reviewers, and reactions to the ratings.  Seemingly, everything that might be useful has been included.

Design of product review information for a book

The design is sophisticated on many levels. But instead of providing clear answers for users trying to evaluate the suitability of a book, the design raises various questions.  Consider the information conveyed:

  • The book attracted three reviews
  • All three reviewers rated the book highly, either four or five stars
  • All the reviewers left a short comment
  • Some 19 people provided feedback on the reviews
  • Only one person found the reviews helpful; the other 18 found the reviews unhelpful

Perhaps the most puzzling element is the heading: “Most Helpful Customer Reviews.”  Clearly people did not find the reviews helpful, but the page indicates the opposite.

This example illustrates some important aspects of content design.  First, different elements of content can be inter-dependent.  The heading depends on the feedback on reviews, and the feedback on reviews depends on the reviews themselves. Second, because the content is dynamic, what gets displayed is subject to a wide range of inputs that can change over time.  Whether what’s displayed makes sense to audiences will depend on the design’s capacity to adapt to different scenarios in a meaningful way. Content design depends on a system of interactions.

Content Design as Problem Solving

Content design is most effective when treated as the exploration of user problems, rather than as the fulfillment of user tasks.  Amazon’s design checks the box in terms of providing information that can be consulted as part of a purchase decision.  A purely functional perspective would break tasks into user stories: “Customer reads reviews”, etc.  But tasks have a tendency to make the content interaction too generic. The design exploration needs to come before writing the stories, rather than the reverse. The design needs to consider various problems the user may encounter.  Clearly the example we are critiquing did not consider all these possibilities.

An examination of the content as presented in the design suggests the source of problems readers of the reviews encountered.  They did not find the comments helpful.  The comments are short, and vague as to what would justify the high rating.  A likely reason the comments are vague is that the purchasers of the product were not the true end users of the product, so they refrained from evaluating the qualities of the product, and commented on their purchase experience instead.  The algorithms that prioritize the reviews don’t have a meaningful subroutine for dealing with cases where all the reviews are rated as unhelpful.
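To make the gap concrete, here is a sketch (TypeScript; this is not Amazon’s actual logic, and the field names are invented) of the kind of fallback such a prioritization routine could include when no review has been judged helpful:

```typescript
interface Review {
  text: string;
  helpful: number;   // "found helpful" votes
  unhelpful: number; // "found unhelpful" votes
}

// Choose a heading that matches the evidence, instead of always
// claiming "Most Helpful Customer Reviews".
function reviewsHeading(reviews: Review[]): string {
  if (reviews.length === 0) return "No customer reviews yet";
  const anyHelpful = reviews.some((r) => r.helpful > r.unhelpful);
  return anyHelpful ? "Most Helpful Customer Reviews" : "Recent Customer Reviews";
}
```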

Critique as the Exploration of Questions

Critiquing the design of content allows content creators to consider the interaction of content as seen from the audience perspective.  As different scenarios are applied to various content elements, the critique can ask more fundamental questions about audience expectations, and in so doing, reconsider design assumptions.

Suppose we shift the discussion away from the minutiae of screen elements to consider the people involved.  The issue is not necessarily whether a specific book is sold.  The lifetime value of customers shopping on Amazon is far more important.  And here, the content design is failing in a big way.

Customers want to know if a book, which they can’t look at physically or in extensive detail, is really what they want to purchase.  Amazon counts on customers to give other customers confidence that what they purchase is what they want.  Returned merchandise is a lose-lose proposition for everyone.  Most customers who leave reviews do so voluntarily, without direct benefit — that is what makes their reviews credible.  So we have buyers of a book altruistically offering their opinion about the product.  They have taken the trouble to log in and provide a review, with the expectation the review will be published, and the hope it will be helpful to others.  Instead, potential buyers of the book are dinging the reviews.  The people who have volunteered their time to help others are being criticized, while people who are interested in buying the book are unhappy they can’t get reliable information.  Through poor content design, Amazon is alienating two important customer constituencies at once: loyal customers who provide reviews on which Amazon depends, and potential buyers considering a product.

How did this happen, and how can it be fixed?  Amazon has talented employees, and unrivaled data analytics.  Despite those enviable resources, the design of the product review information nonetheless has issues.  Issues of this sort don’t lend themselves to A/B testing, or quick fixes, because of the interdependencies involved.  One could deploy a quick fix such as changing the heading if no helpful reviews exist, but the core problems would remain.  Indeed, the tendency in agile IT practices to apply incremental changes to designs is often a source of content design problems, rather than a means of resolving them.  Such patchwork changes mean that elements are considered in isolation, rather than as part of a system involving interdependencies.

Many sophisticated content designs such as the product review pages evolve over time.  No one person is directing the design: different people work on the design at different stages, sometimes over the course of years.  Paradoxically, even though the process is trumpeted as being agile, it can emulate some of the worst aspects of a “design by committee” approach where everyone leaves their fingerprints on the design, but no holistic concept is maintained.

News reports indicate Amazon has been concerned with managing review contributors.  Amazon wants to attract known reviewers, and has instituted a program called Vine that provides incentives to approved reviewers.  At the same time, it wants to discourage reviewers who are paid by outside parties, and has sued people it believes provide fake reviews.  To address the issue of review veracity, reviews use badges indicating the reviewer’s status as being a verified purchaser, a top reviewer, or a Vine reviewer.  The feedback concerning whether a review is helpful is probably also linked to goals of being able to distinguish real reviews from fake ones.  It would appear that the issue of preventing fake reviews has become conflated with the issue of providing helpful reviews, when in reality they are separate issues.  The example clearly shows that real reviews are not necessarily helpful reviews.

The content design should support valid business goals, but it needs to make sure that doing so doesn’t work at cross-purposes with the goals of audiences using the design.  Letting customers criticize other customers may support the management of review content, but in some cases it may do so at the cost of customer satisfaction.

A critique of the design also brings into focus the fact that the review content involves two distinct user segments: the readers of reviews, and the writers of reviews.  The behavior of each affects the other.  The success of the content depends on meeting the needs of both.

The design must look beyond the stated problem of how to present review information.  It must also solve second-order problems.  How to encourage useful reviews?  What to do when there are no useful reviews?  Many critical design issues may be lurking behind the assumptions of the “happy path” scenario.

Re-examining Assumptions

A comprehensive content design process keeps in mind the full range of (sometimes competing) goals the design needs to fulfill, and the range of scenarios the design must accommodate.  From these vantage points, it can test assumptions about how a design solution performs in different situations and against different objectives.

When applied to the example of product reviews, different vantage points raise different core questions.   Let’s focus on the issue of encouraging helpful reviews, given its pivotal leverage.  The issue involves many dimensions.

Who is the audience for the reviews: other customers, or the seller or maker of the product?  Who do the reviewers imagine is seeing their content, and what do they imagine is being done with that information?  What are the expectations of reviewers, and how can the content be designed to match their expectations — or to reset them?

What are the reviewers supposed to be rating?  Are they rating the product, or rating Amazon?  When the product is flawed, who does the reviewer hold accountable, and is that communicated clearly?  Do raters or readers of ratings want finer distinctions, or not?  How does the content design influence these expectations?

What do the providers of feedback on reviews expect will be done with their feedback?  Do they expect it to be used by Amazon, by other customers, or be seen and considered by the reviewer evaluated?  How does the content design communicate these dimensions?

What is a helpful review, according to available evidence?  What do customers believe is a helpful review?  Is “most helpful” the best metric?  Suppose long reviews are more likely to be considered helpful reviews. Is “most detailed” a better way to rank reviews?

What kinds of detail are expected in the review comments?  What kinds of statements do people object to?  How does the content design impact the quality of the comments?

What information is not being presented?  Should Amazon include information about number of returns?  Should people returning items provide comments that show up in the product reviews?

There are of course many more questions that could be posed.  The current design reflects a comment moderation structure, complete with a “report abuse” link.  The policing of comments and the voting on reviews hit on extrinsic motivators — people seeking status from positive feedback, or skirting negative feedback. But it doesn’t do much to address intrinsic motivators to participate and contribute.  A fun exercise to shift perspective would be to try imagining how to design the reviews to rank-order them according to their sincerity. Because people can be so different in what they seek in product information, it is always valuable to ask what different people care about most, and never to assume to know the answer to that with certainty.

Designing Experiences, Not Tasks

Tasks are a starting point for thinking about content design, but are not sufficient for developing a successful design.  Tasks tend to simplify activities, without giving sufficient attention to contextual issues or alternative scenarios.  A task-orientation tends to make assumptions about user motivations to do things.

Content design is stronger when content is considered experientially.  Trust is a vitally important factor for content, but it is difficult to reduce into a task.  Part of what makes trust so hard is that it is subjective.  Different people value different factors when assessing trustworthiness — rationality or emotiveness, thoroughness or clarity.  For that reason, content designs often need to provide a range of information and detail.

Designing for experiences frees us from thinking about user content needs as being uniform. Instead of focusing only on what people are doing (or what we want them to be doing), the experiential perspective focuses on why people may want to take action, or not.

People expect a richly-layered content experience, able to meet varied and changing needs.  Delivering this vision entails creating a dynamic ecosystem that provides the right kinds of details. The details must be coordinated so that they are meaningful in combination. Content becomes a living entity, powered by many inputs.  Dynamic content, properly designed, can provide people with positive and confidence-inducing experiences. Unless people feel comfortable with the information they view, they are reluctant to take action.  Experience may seem intangible, and thus inconsequential.  But the content experience has real-world consequences: it impacts behavior.

— Michael Andrews

Categories
Agility

Adaptive Content: Three Approaches

Adaptive content may be the most exciting, and most fuzzy, concept in content strategy at the moment.  Shapeshifting seems to define the concept: it promises great things — to make content adapt to user needs — but it can be vague on how that’s done. Adaptive content seems elusive because it isn’t a single coherent concept. Three different approaches can be involved with content adaptation, each with distinctive benefits and limitations.

The Phantom of Adaptive Content

The term adaptive content is open to various interpretations. Numerous content professionals are attracted to the possibility of creating content variations that match the needs of individuals, but have different expectations about how that happens and what specifically is accomplished. The topic has been muddled and watered-down by a familiar marketing ploy that emphasizes benefits instead of talking about features. Without knowing the features of the product, we are unclear what precisely the product can do.

People may talk about adaptive content in different ways: for example, as having something to do with mobile devices, or as some form of artificial intelligence. I prefer to consider adaptive content as a spectrum that involves different approaches, each of which delivers different kinds of results.  Broadly speaking, there are three approaches to adaptive content, which vary in terms of how specific and how immediately they can deliver adaptation.

Commentators may emphasize adaptive content as being:

  • Contextualized (where someone is),
  • Personalized (who someone is),
  • Device-specific (what device they are using).

All these factors are important to delivering customized content experiences tailored to the needs of an individual that reflect their circumstances.  Each, however, tends to emphasize a different point in the content delivery pipeline.

Delivery Pipelines

There are three distinct windows where content variants are configured or assembled:

  1. During the production of the content
  2. At the launch of a session delivering the content
  3. After the delivery of the content

Each window provides a different range of adaptation to user needs.  Identifying which window is delivering the adaptation also answers a key question: Who is in charge of the adaptation?  Is it the creator of the content, the definer of business rules, or the user?  In the first case the content adapts according to a plan.  In the second case the content adapts according to a mix of priorities, determined algorithmically.  In the final case, the content adapts to the user’s changing priorities.

Content variations can occur at different stages

Content Variation Possibilities

Content designers must make decisions about what content to include or exclude in different content variations.  Those decisions depend on how confident they are about what variations are needed:

  • Variants planned around known needs, such as different target segments
  • Variants triggered by anticipated needs reflecting situational factors
  • Variants generated by user actions such as queries that can’t be determined in advance

On one end of the spectrum, users expect customized content that reflects who they are based on long-established preferences, such as being a certain type of customer or the owner of an appliance. On the other end of the spectrum, users want content that immediately adapts to their shifting preferences as they interact with the content.

Situational factors may invoke contextual variation according to date or time of day, location, or proximity to a radio transmitter device. Location-based content services are the most common form of contextualized content.  Content variations can be linked to a session, where at the initiation of the session, specific content adapts to who is accessing it, and where they are — physically, or in terms of a time or stage.

Variations differ according to whether they focus on the structure of the content (such as including or excluding sections), or on the details (such as variables that can be modified readily).

Different forms of variation in content adaptation

Customization, Granularity and Agility

While many discussions of adaptive content consciously avoid talking about how content is adapted, it’s hard to hide from the topic altogether. There is plenty of discussion about approaches to create content variations, however.  On one side are XML-based approaches like DITA that focus on configuring sections of content, while on the other side are JSON-based approaches involving JavaScript that focus on manipulating individual variables in real-time.

Contrary to the wishes of those who want only to talk about the high concepts, the enabling technologies are not mere implementation details. They are fundamental to what can be achieved.

Adaptive content is realized through intelligence. The intelligence that enables content to adapt is distributed in several places:

  • The content structure (indicating how content is expected to be used),
  • Customer profile (the relationship history, providing known needs or preferences)
  • Situational information from current or past sessions (the reliability of which involves varying degrees of confidence).

What approach is used impacts how the content delivery system defines a “chunk” of content — the colloquial name for a content component or variable. This has significant implications for the detail that is presented, and the agility with which content can match specific needs.

Different approaches to delivering content variations are solving different problems.

The two main issues at play in adaptive content are:

  1. How significant is the content variation that is expected?
  2. How much lead time is needed to deliver that variation?

The more significant the variant in content that is required, the longer the lead time needed to provide it.  If we consider adaptive content in terms of scope and speed, this implies narrow adaptation offers fast adaptation, and that broad adaptation entails slow adaptation.  While it makes sense intuitively that global changes aren’t possible instantly, it’s worth understanding why that is in the context of today’s approaches to content variation.

First, consider the case of structural variation in content. Structure involves large chunks of content.  Adaptive content can change the structure of the content, making choices about what chunks of content to display.  This type of adaptation involves the configuration of content.  Let’s refer to large chunks of content as sections.  Configuration involves selecting sections to include in different scenarios, and which variant of a section to use.  Sections may have dependencies: if one section is included, related detail sections will be included as well.  Sectional content can entail a lot of nesting.

Structural variation is often used to provide customized content to known segments.  XML is often used to describe the structure of content involving complex variations.  XML is quite capable when describing content sections, but it is hard to manipulate, due to the deeply nested structure involved.  XSLT is used to transform the structure into variations, but it is slow as molasses.  Many developers are impatient with XSLT, and few users would tolerate the latency involved with getting an adaptation on demand.  Structural adaptation tends to be used for planned variations that have a long lead time.
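As a simplified sketch of what configuration involves (written in TypeScript for brevity rather than the XML and XSLT tooling typically used for this kind of work, with hypothetical names), selecting the sections for a known segment might look like this:

```typescript
// Hypothetical content model: large chunks ("sections") that apply to
// certain segments and may depend on related detail sections.
interface Section {
  id: string;
  audiences: string[]; // segments this section applies to
  requires?: string[]; // detail sections that must come along with it
  body: string;
}

// Configuration: select the sections (and their dependencies) that apply
// to a planned segment. This is the slow, planned end of the spectrum:
// the variants are worked out ahead of delivery, not on demand.
function configure(sections: Section[], segment: string): Section[] {
  const selected = new Map<string, Section>();

  const include = (s: Section) => {
    if (selected.has(s.id)) return;
    selected.set(s.id, s);
    for (const depId of s.requires ?? []) {
      const dep = sections.find((x) => x.id === depId);
      if (dep) include(dep); // pull in dependent detail sections
    }
  };

  for (const s of sections) {
    if (s.audiences.includes(segment)) include(s);
  }
  return Array.from(selected.values());
}
```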

Next, consider the assembly of content when it is requested by the user — on the loading of a web page. This stage offers a different range of adaptive possibilities linked to the context associated with the session.  Session-based content adaptation can be based on IP, browser or cookie information.  Some of the variation may be global (language or region displayed) while other variations involve swapping out the content for a section (returning visitors see this message).  Some pseudo-personalization is possible within content sections by providing targeted messages within larger chunks of static content.
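A minimal sketch of session-based adaptation (TypeScript; the cookie name and message copy are hypothetical), choosing a variant at the start of a session from the language and a returning-visitor cookie:

```typescript
// Runs at the start of a session (for example, on page load). The variant
// is chosen from information available when the session begins.
function greetingFor(cookies: Record<string, string>, language: string): string {
  const returning = cookies["visited_before"] === "true";
  const messages: Record<string, { new: string; returning: string }> = {
    en: {
      new: "Welcome! Here's how to get started.",
      returning: "Welcome back. Pick up where you left off.",
    },
    fr: {
      new: "Bienvenue ! Voici comment commencer.",
      returning: "Bon retour. Reprenez là où vous en étiez.",
    },
  };
  const m = messages[language] ?? messages["en"];
  return returning ? m.returning : m.new;
}
```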

Finally, adaptive content can happen in real-time.  The lead time has shrunk to zero, and the range of adaptation is more limited as well.  The motivation is to have content continuously refresh to reflect the desires of users.  Adaptation is fast, but narrow. Instead of changing the structure of content, real-time adaptation changes variables while keeping the structure fixed.

It is easier to swap out small chunks of text such as variables or finely structured data in real-time than it is to do quick iterative adaptations of large chunks such as sections.  JSON and JavaScript are designed to manipulate discrete, easily identified objects quickly.  Large chunks of content may not parse easily in JavaScript, and can seem to jump around on the screen. Single page applications can avoid page refreshes because the content structure is stable: only the details change. They deliver a changing “payload” to a defined content region.  Data tables change easily in real time.  Single page applications can swap out elements that can be easily and quickly identified — without extensive computation.
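A sketch of this real-time pattern (TypeScript; the endpoint, element IDs, and field names are invented for illustration): the page structure stays fixed, and only small values from a JSON payload are swapped in.

```typescript
// The page structure (headings, table, labels) never changes; only the
// values below are refreshed.
interface PricePayload {
  productId: string;
  price: string;
  unitsInStock: number;
}

async function refreshPrice(productId: string): Promise<void> {
  const res = await fetch(`/api/prices/${productId}`); // hypothetical endpoint
  const payload: PricePayload = await res.json();

  // Swap only the small, easily identified elements.
  document.getElementById("price")!.textContent = payload.price;
  document.getElementById("stock")!.textContent = String(payload.unitsInStock);
}

// Poll for updates without reloading the page.
setInterval(() => void refreshPrice("sku-123"), 30_000);
```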

Conclusion

Content adaptation can be a three-stage process, involving different sets of technologies, and different levels of content.

The longer the lead time, the more elaborate the customization possible. When discussing adaptive content, it’s important to distinguish adaptation in terms of scope and immediacy.

A longer-term challenge will be how to integrate different approaches to provide the customization and flexibility users seek in content.

— Michael Andrews