Categories: Personalization

Improving content discovery through typologies

Brands face a challenge: how to improve the content discovery process. They want to offer fresh, interesting content to audiences, but aren’t sure what an individual might like. The individual may also not be sure: they have a hard time specifying which content seems interesting, and which kind seems dull. Fortunately, content discovery can be improved. Brands can use the concept of typologies to improve the relevance of content they recommend.

Why content discovery is an issue

General interest content has grown dramatically. Audiences seek content to relax with, and content that makes them feel better informed. Most general interest content is content people want to use, rather than need to use. Brands hope to create sticky content that audiences like and share. But brands can’t rely solely on social media referrals to position interesting content in front of audiences. Audiences are flooded with content that is pushed at them, including from their social contacts, but only a fraction of that content really resonates.

General interest content can be tricky to recommend. What one person finds interesting about a topic will differ from what another person finds interesting. Two people may both like stories about food, but one wants to know what’s new, while the other wants to improve his cooking knowledge. People, and content systems, tend to think about content in terms of topics, but for general interest topics, just naming the topic of interest isn’t enough. People have trouble saying what exactly they find interesting. Subconsciously, they search for “stories about gardening that aren’t boring.” They don’t want just any story about gardening, but they aren’t prepared to limit the content by type of plant.

What’s the opportunity?

Marketing and other forms of branded content make extensive use of general interest content. Presented in the right way, it attracts a diverse audience. But general interest content needs to be distinctive enough to stand out, to make an impression on audiences. Such content needs to be differentiated, not simply good.

Audiences find distinctive content more relevant. Brands benefit when they offer more relevant suggestions to their audiences. Better content recommendations increase the usage of brand content, resulting in happier, more loyal audiences.

To improve the relevance of recommendations, brands should focus on defining the elements that make their content distinctive. Typologies are a tool that can enable that.

What’s a typology?

Typology is a term not often used in content strategy, but it should be. Sometimes content strategists talk about content types to refer to a content format with a regular structure, such as a press release. I am using typology in a different sense, to refer to the qualities of content, not its structural elements. Typologies are a well-established approach in the social sciences: “a typology is generally multidimensional and conceptual,” with a goal of reducing complexity by identifying similarities, notes Kenneth Bailey in his book, Typologies and Taxonomies: An Introduction to Classification Techniques. Archeologists use typologies to characterize items they unearth, looking at the qualities of artifacts to determine commonalities among them. Psychologists rely on typologies to map distinct personality types along different dimensions, such as whether a person’s social orientation tends toward extraversion or introversion.

Typologies examine the attributes or dimensions of things, seeking to determine the most important dimensions that form the essence of something. For each dimension, two or more values are possible. The goal of a typology is to find patterns: to examine which values tend to occur on which dimensions for which items. Not all combinations of dimensions and values are important. Some combinations are more common, and seem familiar to people. We all instinctively recognize different styles of music, but we don’t generally think of those styles in terms of their individual dimensions, such as tempo, rhythm, mood, instrumentation, loudness, and so on. Sometimes we don’t even have a label in our minds for the music we like; we just know we like music that has certain qualities.

Typologies serve a different role than taxonomies, the standard way to categorize content. Taxonomies are hierarchical and generally focused on concrete attributes (nouns), aiming for precise specificity. In contrast to the specificity and literalism of taxonomies, typologies focus on qualities (adjectives and concepts), and seek to make generalizations based on these qualities.
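To make the contrast concrete, here is a minimal sketch in Python (the labels are hypothetical, not drawn from any particular brand’s scheme) of how the two structures differ: a taxonomy nests increasingly specific nouns, while a typology records a value along each quality dimension.

```python
# A taxonomy: hierarchical, built from concrete nouns, aiming for specificity.
taxonomy = {
    "Food": {
        "Recipes": ["Baking", "Grilling"],
        "Restaurants": ["Reviews", "Openings"],
    }
}

# A typology: flat quality dimensions (adjectives and concepts),
# each with a small set of possible values.
typology = {
    "educational_value": ["practical", "thought leadership"],
    "curation_style": ["what's new and notable", "what experts say"],
    "trendiness": ["trend embracing", "neutral", "fad-wary"],
}

# A single article occupies one node in the taxonomy, but carries a profile
# of qualities across the typology's dimensions.
article = {
    "taxonomy_node": "Food > Recipes > Baking",
    "typology_profile": {
        "educational_value": "practical",
        "curation_style": "what experts say",
        "trendiness": "neutral",
    },
}
```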

[Illustration of urns] All are urns, but what distinctions matter to their users: style, symbolism, status? (image courtesy Getty open images)

What typologies reveal in content

To develop a typology for content, one needs to think like the audience. The easiest way to do that is to talk to them. Brands can ask their audiences what content on a general topic they like, and what they don’t like. Ask fans what they most like about certain content. Ask them what they don’t like about other content that is on the same topic. Assuming the content is accurate and free of defects, the feedback should yield insights into the emotional qualities of content different audiences most value.

When doing this research, listen for when someone mentions dimensions such as the style of the content, its perspective or point of view, its approach to helping, and the kind of occasion on which it would be viewed. These factors are dimensions you should consider including in your content typology.

Another way to get insights into these dimensions is to look at specific content that is popular with a specific segment, together with what you know about that segment’s lives and values. If a segment with an especially busy lifestyle likes certain content, that may offer a clue that other people with busy lives might find the content time-saving. You can validate that assumption in user research.

How to develop and use a typology

Let’s look at how to characterize content according to content dimensions. I’ve developed an illustrative list of content dimensions, based on a review of some leading examples of branded content, as identified by Kapost (mostly B2B), as well as some typical B2C content. I also summarize how these qualities can vary.

Dimension | Value 1 | (No value for dimension) | Value 2
Educational value | Practical | N/A | Expertise and thought leadership
Curation style | What’s new and notable | N/A | What experts say
Forum approach | Peer-to-peer discussion | N/A | Ask an expert
How news reported | Surprise: you didn’t think this could happen | N/A or neutral | What you suspected is true
Trendiness | Trend embracing | Neutral | Fad-wary
Attitude toward social change | Advocacy, trying to make change happen | Neutral | Adaptation — how to deal with change
Content personality | We are just like you | Neutral | People look to us for authoritative advice

This is just a sample of dimensions and is by no means a complete list. I’ve included just two values (plus the empty value of not applicable) for simplicity, and some dimensions may not be applicable to your content. You’ll want to find the dimensions and values most relevant to your own content, by identifying content items with distinctive qualities.

Suppose a nonprofit needs to address several audiences, who may view a range of content depending on their interests at a given time. The nonprofit addresses three different core topics. Some of the content is meant to help people take action in their personal lives. Other content is intended to catalyze collective action. Some content is meant to build community discussion and solidarity around deeply held perspectives, while other content needs to make people aware of new issues. Using a typology, the organization might classify one piece of content as “practical advice for people having to deal with [topic x].” Another content item may be “breaking advocacy news on [topic x].” Even though both items of content address the same broad topic, they do so in different ways. By recommending an article about a topic that has similar qualities, instead of any article about that topic, the brand improves the likelihood that audiences will view the recommended content.

A content typology can be used to develop an audience-responsive recommendation engine. The closer the match between the qualities of the current content and the recommended content, the more likely the recommendation will be relevant.
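As a rough illustration of how simple such rules can be, here is a minimal sketch in Python. The items and dimension names are hypothetical, loosely based on the table above: candidates on the same topic are ranked by how many typology qualities they share with the item currently being viewed.

```python
# Minimal sketch: rank candidate content by shared typology qualities.
# Items and dimension names are hypothetical, loosely based on the table above.

def typology_similarity(current: dict, candidate: dict) -> float:
    """Fraction of typology dimensions on which two items carry the same value."""
    dimensions = (set(current) | set(candidate)) - {"topic"}
    if not dimensions:
        return 0.0
    matches = sum(1 for d in dimensions if current.get(d) == candidate.get(d))
    return matches / len(dimensions)

current_item = {
    "topic": "topic x",
    "educational_value": "practical",
    "attitude_toward_change": "adaptation",
}

candidates = [
    {"topic": "topic x", "educational_value": "practical",
     "attitude_toward_change": "adaptation"},   # practical advice on topic x
    {"topic": "topic x", "curation_style": "what's new and notable",
     "attitude_toward_change": "advocacy"},     # breaking advocacy news on topic x
]

# Recommend items on the same topic, ranked by how closely their qualities match.
same_topic = [c for c in candidates if c["topic"] == current_item["topic"]]
ranked = sorted(same_topic, key=lambda c: typology_similarity(current_item, c), reverse=True)
print(ranked[0])  # the practical, adaptation-oriented piece ranks first
```

In practice the rules could weight some dimensions more heavily than others, but even this crude overlap score favors the practical, adaptation-oriented piece over the advocacy news item when the reader is already viewing practical advice.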

Who is using content typologies?

Content typologies work behind the scenes, so it is not obvious to audiences when they are used. But in general, few brands use content typologies. At most, they focus on one quality of content only, and consider that quality a unique category. They might classify content that uses anecdotes as a feature, and place it in a category called “feature article.” Or they rely too heavily on audience segmentation, and categorize their content by audience segment, making broad assumptions about the qualities each segment wants. They haven’t yet made the effort to characterize their content according to multiple, distinctive qualities. As a result, discovery is hindered, because audiences can’t see content outside the narrow category in which the content was placed.

One notable brand using typologies is Netflix. Netflix has developed a very rich and detailed typology of film genres, generated by tagging film attributes: everything from how funny a film is, to the personality of the audience it might appeal to, to the qualities of its lead actor or actress. Netflix uses these tags, together with extensive data analytics, to recommend other films it believes are of a similar type.

Netflix’s typology is impressive in its sophistication, and the scope of content it covers. Fortunately, most organizations have far simpler content to characterize, and can use a simple system to do that. A content typology need not be complex, and a recommendation engine can use simple rules to improve relevance.

Making content emotionally intelligent

Intelligent content is “structurally rich and semantically categorized, and is, therefore, automatically discoverable,” according to Ann Rockley in The Language of Content Strategy. Structure is key to discoverability. But most of the focus of intelligent content thus far has been on factual details, rather than on the essence of the content: its rhetorical intentions and its appeal.

Discoverability needs to include desirability. Categories need to include the distinctive qualities that matter to audiences, not just topics. Fully intelligent content will be content that is emotionally intelligent, self-aware of how it presents itself to audiences. Content typologies can provide additional metadata that improves content relevance.

— Michael Andrews

Categories: Storytelling

What makes an effective story?

Storytelling has emerged as one of the hottest categories of digital content. But as with other kinds of content, it is important to distinguish between popularity and effectiveness. Brands need clear goals for their stories, and they need to know how their stories will benefit the audiences they want to reach.

image courtesy Getty Museum open content program

Ineffective stories lecture

To understand how storytelling can be ineffective, let’s consider a typical example used by a software startup.  I’m not going to embarrass anyone by singling them out, especially startups working hard to build their customer base.  But the kind of example I’ll illustrate is a story pattern I see used widely, and I expect you may have seen it as well.  Many firms making apps have a short animated video pitching their product that appears beside their “Get it now” button on their homepage.  They try to make the pitch a story, but it doesn’t work effectively from an audience perspective.  The prototypical story might sound like this:

Meet Mary.  Mary is a busy graphic designer at a design firm.  She’s always having trouble keeping track of all the tasks she needs to coordinate with her clients.  Then one day Mary’s friend Beth mentioned NewApp.  NewApp can help Mary manage everything.  Mary has discovered the power of NewApp, and now has more free time to spend with her dog Checkers.  Mary’s delighted with NewApp, and Checkers is pretty happy too.

The story may be cute (especially the dog), and it helps convey a bit of what NewApp does.  But it presumes the audience has already bought into this vision of NewApp managing their stuff.  NewApp enters the story deus ex machina, and solves all problems. Mary can’t resist.

The story has a clear goal: to drive conversion.  But the didactic “you will feel this way” kind of storytelling doesn’t help audiences make decisions.  In real life, Mary may have looked at other products similar to NewApp, and resisted using them.  We have no hint of what the hesitation might be. A story that glosses over deeper concerns appears facile.

Stories can showcase decisions

The renowned advertising creative director Sir John Hegarty counsels: “you don’t instruct people to do something — you inspire them.”

Providing inspiration involves speaking to the audience’s concerns.

Effective stories for brands need to have what Berkeley narrative theorist Seymour Chatman calls a “kernel” event that “advances the plot by raising and satisfying questions…branching points which force a movement into one of two (or more) possible paths.”

The story’s protagonist needs to make a choice that isn’t clear-cut, and there is some tension around that decision, because the choice may be wrong. That choice needs to reflect an existential issue your audience is facing themselves.

Effective stories reveal dilemmas

To illustrate an effective brand story, we will look at a product announcement from another young firm, made thirty years ago. Apple’s famous 1984 ad for the Macintosh, directed by the filmmaker Ridley Scott, is widely familiar, but what makes it effective as a brand story is less immediately apparent.

The ad of course generated enormous publicity when it aired during the Super Bowl in 1984, the year that gives Orwell’s novel its title. The ad presented the story of a lone woman who defies the mindless behavior of an enslaved populace, escapes the pursuing police, and rises up to smash the screen of Big Brother. The story told about the ad’s narrative was that it represented David (Apple) challenging Goliath (IBM), endowed with the sinister qualities of Orwell’s Big Brother. Apple and the press loved that interpretation, but for the general public the ad presented a deeper narrative.

While Apple obsessed about IBM, comparatively few Super Bowl viewers were worried about IBM or thought of it negatively. But IBM was significant on a symbolic level to the audience, and the ad played on this symbolism.

The personal computer was still new, and its role and destiny were still largely undefined.  People were excited by personal computers, but anxious as well.

One big source of anxiety concerned which technical standard to choose. Consumers were already familiar with the standards wars over another consumer product, the videotape player, and knew firsthand the confusion and worry such choices forced on them. Among personal computers, consumers had many standards to choose from: IBM, Commodore, Atari, Apple and various others.

Apple’s story had to address the appeal of going with the herd and embracing the apparent safety of choosing IBM.  Computer buyers worried about being enslaved by the wrong choice, something they’d regret later.  Apple reframed the choice from being about choosing which computer would be victorious in the market, to being which computer would be triumphant for the individual.  The individual viewer could identify with the hero smashing the screen of Big Brother, and be inspired to choose something that might feel outside the range of comfort in one respect, but feel more reassuring in another respect.

The other anxiety the story played on was concern about the office-like character of the personal computer. At that time (unlike today), the separation between home and work was sacred, so the idea that you were bringing the office into your home was unappealing. The storyline in the ad therefore suggested that the corporate provenance of IBM represented the incursion of the corporation into home life.

The defiance of the lone woman portrayed a decision for the audience: go with the herd, or make one’s own choice.  The story raised and answered a question: what is the real danger, and do you have the courage to challenge it?

Why story criteria matter

Storytelling can help brands reach and engage audiences in ways other forms of content can’t. There is a difference between whether a story is liked and whether it is effective. Unless brands define what makes a story relevant, stories risk being bric-a-brac that gets seen and perhaps prompts smiles, but has no lasting impact.

Stories need to address audience concerns to deliver outcomes.  The most effective stories are ones that speak to deep emotional worries, desires, or even sources of indifference to show how choices matter to the audience.  Lots of brands are trying to create stories that will be liked, but it’s more important that the story be deeply relevant to the lives of individuals.

— Michael Andrews

Categories: Content Effectiveness

Don’t build your personalization on data exhaust

A lot of content that looks like it’s just for you, isn’t just for you.  You are instead seeing content for a category segment in which you have been placed.  Such targeting is a useful and effective approach for marketers, but it shouldn’t be confused with personalization.   The choice of what people see rests entirely with the content provider.

When providers rely exclusively on their own judgments, and base those judgments on how they read the behaviors of groups of people, they are prone to error. Despite sophisticated statistical techniques and truly formidable computational power, content algorithms can appear to individuals as clueless and unconcerned. To understand why the status quo is not good enough, we first need to understand the limitations of current approaches based on web usage mining.

Targeting predefined outcomes

Increasingly, different people see different views of content. Backend systems use rules to decide what to present in order to offer such variation. The goal is a simple one: to increase the likelihood that the content presented will be clicked on. It is assumed that if the content is clicked on, everyone is happy. But depending on the nature of the content, the provider may benefit more from the click than the viewer does, and as a consequence may present content that has only a minor chance of being clicked.

A business user who is viewing a vendor sales website may see specific content, based on the vendor’s ability to recognize the user’s IP address.  The vendor could decide to present content about how the business user’s competitor is using the vendor’s product.  The targeted user is in a segment: a sales prospect in a certain industry.  Such a content presentation reflects the targeting of a type of customer based on their characteristics.  It may or may not be relevant to the viewer coming to the site (the viewer may be looking for something else, and does not care about what’s being presented).  The content presentation does not reflect any declared preference by the site visitor.  Indeed, officially, the site visitor is anonymous, and it is only through the IP address combined with database information from a product such as Demandbase that the inference of who is visiting is made.  This is a fairly common situation: guessing who is looking for content, and then guessing what they want, or at least, what they might be willing to notice.

Targeted ads are often described as personalized, but a targeted ad is simply a content variation that is presented when the viewer matches certain characteristics. Even when the ad you see tested better with others in a segment of people like you, that ad is merely optimized (the option that scored highest), not personalized in the sense of reflecting your preferences. In many respects it is silly to talk about advertising as personalized, since it is rare for individuals to state advertising preferences.

The behavioral mechanisms behind content targeting resemble in many respects other content ranking and filtering techniques used for prioritizing search results and making recommendations. These techniques, whether they involve user-user collaborative filtering or page ranking, aim to prioritize content based on other people’s use of that content. They employ web usage mining to guess what will get clicked the most.
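For readers unfamiliar with the mechanics, the following is a bare-bones sketch in Python of user-user collaborative filtering, using hypothetical users, items, and implicit view data with cosine similarity over view histories. It shows how the ranking rests entirely on other people’s behavior.

```python
import math

# Bare-bones user-user collaborative filtering over implicit view data.
# Users and items are hypothetical; 1 means the user viewed the item.
views = {
    "alice": {"item_a": 1, "item_b": 1},
    "bob":   {"item_a": 1, "item_c": 1},
    "carol": {"item_b": 1, "item_c": 1, "item_d": 1},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse view vectors."""
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user: str, k: int = 2) -> list:
    """Score unseen items by the weighted views of the most similar users."""
    neighbors = sorted(
        (other for other in views if other != user),
        key=lambda other: cosine(views[user], views[other]),
        reverse=True,
    )[:k]
    scores = {}
    for other in neighbors:
        weight = cosine(views[user], views[other])
        for item in views[other]:
            if item not in views[user]:
                scores[item] = scores.get(item, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # items alice hasn't seen, ranked by similar users' behavior
```

Nothing in this calculation consults the individual’s stated preferences; the ranking is driven entirely by what similar users happened to click.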

What analytics measure

It is important to bear in mind that analytics measure actions that matter to brands, and not actions that matter to individuals.  The analytics discipline tends to provide the most generous interpretation of a behavior to match the story the brand wants to hear, rather than the story the audience member experiences.  Take the widely embraced premise that every click is an expression of interest.  Many people may click on a link, but quickly abandon the page they are taken to.  The brand will think: they are really interested in what we have, but the copy was bad so they left, so we need to improve the copy.  The audience may think: that was a misleading link title and the brand wasted my time; it needs to be more honest.  The link was clicked, but we can’t be sure of the intent of the clicking, so we don’t know what the interest was.

Even brands that practice self-awareness are susceptible to misreading analytics. The signals analyzed are by-products of activity, but the individual’s mind is a black box. More extensive tracking and data won’t reliably deliver to individuals what they seek when individual preferences are ignored.

Why behavioral modeling can be tenuous

There are several important limitations of behavioral data.  The behavioral data can be thin, misleading, flattened, or noisy.

Thin data

One of the major weaknesses of behavioral data is that there often isn’t sufficient data on which to base content prioritization or recommendations. Digital platforms are supposed to enable access to the “long tail” of content, the millions of items that physical media couldn’t cope with. But discovery of that content is a problem unsolved by behavioral data, since most of it has little or no history of activity by people similar to any one individual. If only 20 per cent of content accounts for 80 per cent of activity, then 80 per cent of content has little activity on which to base recommendations. It may nonetheless be of interest to individuals. Significantly, the content that is most likely to matter to an individual may be what is most unique to them, since special interests strongly define the identity of the individual. But what matters most to an individual can be precisely what matters least to the crowd overall. Content providers try to compensate for thin data by aggregating categories and segments at even higher levels, but the results are often wide of the mark.

Misleading signals

Even when there is sufficient data, it can be misleading. The analytics discipline confuses matters by equating traffic volume with “popularity.” Content that is most consumed is not necessarily most popular, if we take popularity to mean liked rather than used. A simple scroll through YouTube confirms this. Some widely viewed videos draw strong negative comments due to their controversy. Others may get a respectable number of views but little reaction in likes or dislikes. And sometimes a highly personal video, say a clip of someone’s wedding, will appeal to only a small segment but will get an enthusiastic response from its viewers.

Analytics professionals may automatically assume that content that is not consumed is not liked, but that isn’t necessarily true.  Behavioral data can tell us nothing about whether someone will like content when a backend system has no knowledge of it having been consumed previously.  We don’t know their interests, only their behavior.

Past behavior does not always indicate current intent.  Log into Google and search intensively about a topic, and you may find Google wants to keep offering content results you no longer want, because it prioritizes items similar to ones you have viewed previously.  The person’s interests and goals have evolved faster than the algorithm’s ability to adapt to those changes.

Perversely, sometimes people consume content they are not satisfied with because they’ve been unable to find anything better.  The data signal assumes they are happy with it, but they may in fact be wanting something more specific.  This problem will be more acute as content consumption becomes increasingly driven by automatic feeds.

Flattened data

People get “averaged” when they are lumped into segment categories.  Their profile is flattened in the process — the data is mixed with other people’s data to the point that it doesn’t reflect the individual’s interests.  Not only can their individual interests be lost, but spurious interests can be presumed of them.

Whether segmentation is demographic or behavioral, individuals are grouped into segments that share characteristics.  Sometimes people with shared characteristics will be more likely to share common interests and content preferences.   But there is plenty of room to make mistaken assumptions.  That luxury car owners over-index on interest in golf does not translate into a solid recommendation for an individual.  Some advertisers have explored the relationship between music tastes and other preferences.  For example, country music lovers have a stronger than average tendency to be Republican voters in the United States.  But it can be very dangerous for a brand to present potentially loaded assumptions to individuals when there’s a reasonable chance it’s wrong.

Even people who exhibit the same content behaviors may have different priorities. Many people check the weather, but not all care about the same level of detail. As screens proliferate, the intensity of engagement diminishes, as attention gets scattered across different devices. Observable behavior becomes a weaker signal of actual attention and interest. Tracking what one does does not tell us whether to give an individual more or less content, so the system assumes the quantity is right.

Noisy social data

Social media connections are a popular way to score users, and social media platforms argue that people who are connected are similar, like similar things, and influence each other. Unfortunately, these assumptions are more true for in-person relationships than for online ones. People have too many connections to other people in social channels for there to be a high degree of correlation of interests, or of influence, between them. There is of course some correlation, but it isn’t as strong as the models would hope. These models mistake tendencies observable at an aggregated level for predictability at the level of an individual.

Social grouping can be a basis for inferring the interests of a specific individual, provided the people you know share your interests to a high degree, so that you will want to view things they have viewed or recommended. That is most true for common, undifferentiated interests. Some social groups, notably among teens, can have a strong tendency toward herd behavior. But the strength and relevance of social ties cannot be assumed without knowing the context of the relationship. One’s poker buddies won’t necessarily share one’s interests in religion or music. Unless both the basis of the group and the topic of the content are the same, it can be hard to assume an overlap. And even when interests are similar, the intensity of interest can vary.

Social targeting of content considers the following:

  • how much you interact with a social connection
  • how widely viewed an item is, especially for people deemed similar to you
  • what actions your social connections take with respect to different kinds of content
  • what actions you take relating to a source of content

While it is obvious that these kinds of information can be pertinent, they are often only weakly suggestive of what an individual wants to view.  It is easy for unrelated inputs to be summed together to prioritize content that has no intrinsic basis for being relevant: your social connection “liked” this photo of a cat, and you viewed several photos last week and talk often to your friend, so you are seeing this cat photo.
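To see how easily unrelated inputs add up, here is a deliberately naive sketch in Python (the signal names and weights are hypothetical) of the kind of weighted sum that pushes that cat photo to the top of a feed.

```python
# Deliberately naive sketch: unrelated social signals summed into one score.
# Signal names and weights are hypothetical.
weights = {
    "interaction_with_connection": 0.4,  # how much you interact with this friend
    "item_popularity": 0.3,              # how widely viewed the item is among people deemed similar to you
    "connection_action": 0.2,            # your friend "liked" this item
    "source_affinity": 0.1,              # your past actions toward the item's source
}

def social_score(signals: dict) -> float:
    """Weighted sum of whatever signals happen to be available for an item."""
    return sum(weights[name] * value for name, value in signals.items() if name in weights)

cat_photo = {
    "interaction_with_connection": 0.9,  # you talk to this friend often
    "item_popularity": 0.7,
    "connection_action": 1.0,            # the friend liked it
    "source_affinity": 0.6,              # you viewed several photos last week
}

print(round(social_score(cat_photo), 2))  # a high score: 0.83
```

The score is high, yet none of the inputs says anything about whether the individual actually wants to see cat photos.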

At the level of personalization, it’s flawed to assume that one’s friends’ interests are the same as one’s own. There can be a correlation, but in many cases it will be a very weak one. Social behavioral researchers are now exploring a concept of social affinity, instead of social distance, to strengthen the correlation. But the weakness of predicting what you want according to who your acquaintances are will remain.

Mind-reading is difficult

The most recent hope for reading into the minds of individuals involves contextualization.  The assumption behind contextualization is that if everything is known about an individual, then their preferences for content can be predicted.  Not surprisingly, this paradigm is presented in a way that highlights the convenience of having information you need readily available.  It is, of course, perfectly possible to take contextual information and use this against the interests of an individual.  Office workers are known to ask for urgent decisions from their bosses knowing their boss is on her way to a meeting and can’t wait to provide a more considered analysis.  Any opportunistic use of contextual information about an individual by someone else is clearly an example of the individual losing control.

Contextual information can be wrong or unhelpful.  The first widespread example of contextual content was the now infamous Microsoft Clippy, which asked “it looks like you are about to write a letter…”   Clippy was harmless, but hated, because people felt a lack of control over his appearance.

Even with the best of intentions, brands have ample room to misjudge the intentions of an individual.

Can content preferences be predicted?

The problem with relying on behavior to predict individual content preferences comes down to time frame. Because targeting treats individuals as members of a category of people, it ignores the specific circumstances that time introduces. People may be interested in content on a topic, but not necessarily at the time the provider presents it. The provider responds by trying again, or by trying some other topic, but in either case may have missed an opportunity to understand the individual’s real interest in the content presented. People may pass on viewing content they have a general interest in. They think “not now” (it’s not the best time) or “not yet” (I have more urgent priorities). Often, readiness comes down to the mood of the individual, which even contextualization can’t factor in. Over time a person may desire content about something, but they don’t care to click when the provider is offering it to them.

If the viewer doesn’t have a choice over what they see, it’s not personalized.

A better way

There are better approaches to personalization. The big data approach of aggregating lots of behavioral data has been widely celebrated as mining gold from “data exhaust.” Data exhaust can have some value, but it is a poor basis for a brand’s relationship with its customers. People need to feel some control, not as if they are being tracked for their exhaust. Brands need an alternative approach to personalization not only to build better relationships, but to increase their understanding of their audiences so they can serve them more profitably. In the following post, I will discuss how to put the person back into personalization.

— Michael Andrews