
Thinking Beyond Semantic Search

Publishers are quickly adopting semantic markup, yet often get less value from it than they could. They don’t focus on how audiences can directly access and use their semantically-described content. Instead, publishers rely on search engines to boost their engagement with audiences. But there are limits to what content, and how much content, search engines will present to audiences.  Publishers should leverage their investment in semantic markup.  Semantically-described content can increase the precision and flexibility of content delivery.  To realize the full benefits of semantic markup, publishers need APIs and apps that can deliver more content, directly to their audiences, to help individuals explore content that’s intriguing and relevant.

The Value of Schema.org Markup

Semantic search is a buzzy topic now. With the encouragement of Google, SEO consultants promote marking up content with Schema.org so that Google can learn what the content is. A number of SEO consultants suggest that brands can use their markup to land a coveted spot in Google’s knowledge graph, and show up in Google’s answer box. There are good reasons to adopt Schema.org markup.  It may or may not boost traffic to your web pages.  It may or may not boost your brand’s visibility in search.  But it will help audiences get the information they need more quickly.  And every brand needs to be viewed as helpful, and not as creating barriers to access to information customers need.

But much of the story about semantic search is incomplete and potentially misleading. Only a few lucky organizations will manage to get their content in Google’s answer box. Google has multiple reasons to crawl content that is marked up semantically. Besides offering search results, Google is building its own knowledge database it will use for its own applications, now and in the future. By adding semantic annotation to content that Google robots then crawl, publishers provide Google a crowd-sourced body of structured knowledge that Google can use for purposes that may be unrelated to search results. Semantic search’s role as a fact-collection mechanism is analogous to the natural-language machine learning Google developed through its massive book-scanning program several years ago.

Publishers rely on Google for search visibility, and effectively grant Google permission to crawl their content unless they indicate no-robots. Publishers provide Google with raw material in a format that’s useful to Google, but they can fail to ask how that format is useful to them as publishers. As with most SEO, publishers are being told to focus on what Google wants and needs. Unless one pays close attention to developments with Schema.org, one will get the impression that the only reason to create this metadata is to please Google. Google is so dominant that it seems as if it is entirely Google’s show. Phil Archer, data activity lead at the W3C, has said: “Google is the killer app of the semantic web.” Marking up content in Schema.org clearly benefits Google, but it often doesn’t help publishers nearly as much as it could.

Schema.org provides schemas “to markup HTML pages in ways recognized by major search providers, and that can also be used for structured data interoperability (e.g. in JSON).” According to its FAQs, its purpose is “to improve the web by creating a structured data markup schema supported by major search engines.”  Schema.org is first and foremost about serving the needs of search engines, though it does provide the possibility for data interoperability as well.  I want to focus on the issue of data interoperability, especially as it relates to audiences, because it is a widely neglected dimension.

Accessing Linked Data

Semantic search markup (Schema.org), linked data repositories such as GeoNames, and open content such as Wikipedia-sourced datasets of facts (DBpedia) all use a common, non-proprietary data model (RDF). It is natural to view search engine markup as another step in the growth in the openness of the web, since more content is now described more explicitly. Openness is a wonderful attribute: if data is not open, that implies it is being wasted, or worse, hoarded. The goal is to publish your content as machine-intelligible data that is publicly accessible. Because it’s on the web in a standardized format, anyone can access it, so it seems open. But the formal guidelines that define the technological openness of open data are based more on standards-compliance by publishers than approachability by content consumers. They are written from an engineering perspective.

There is no notion of an audience in the concept of linked data. The concept presumes that the people who need the data have the technical means to access and use it. But the reality is that much content that is considered linked data is effectively closed to the majority of people who need it, the audience for whom it was created. To access the data, they must rely on either the publisher, or a third party like Google, to give them a slice of what they seek. So far, it’s almost entirely Google or Bing who have been making the data audience-accessible. And they do so selectively.

Let’s look at a description of the Empire State Building in New York.  This linked data might be interesting to combine with other linked data concerning other tall buildings.  Perhaps school children will want to explore different aspects of tall buildings.  But clearly, school children won’t be able to do much with the markup themselves.

Schema.org description of the Empire State Building in JSON-LD, via JSON-LD.org
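For readers who haven’t worked with such markup, a minimal, illustrative sketch of a Schema.org description in JSON-LD follows; the property values here are stand-ins rather than a copy of the original example:

```json
{
  "@context": "https://schema.org",
  "@type": "LandmarksOrHistoricalBuildings",
  "name": "Empire State Building",
  "description": "A 102-story Art Deco skyscraper in Midtown Manhattan.",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "350 Fifth Avenue",
    "addressLocality": "New York",
    "addressRegion": "NY",
    "postalCode": "10118"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": "40.74844",
    "longitude": "-73.98565"
  }
}
```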

If one searches Google for information on tall buildings, Google will provide an answer that draws on semantic markup. But while this is a nice feature, it falls short of providing the full range of information that might be of interest, and it does not allow users to explore the information the way they might wish. One can click on items in the carousel for more details, but the interaction is based on drilling down to more specific information, or requiring a new search query, rather than providing a contextually dynamic aggregation of information. For example, if the student wants to find out which architect is responsible for the most tall buildings in the world, Google doesn’t offer a good way to get to that information iteratively. If the student asks Google “which country has the most tall buildings?” she is simply given a list of search results, which includes a Wikipedia page where the information is readily available.

Relying on Google to interpret the underlying semantic markup means that the user is limited to the specific presentation that Google chooses to offer at a given time.  This dependency on Google’s choices seems far from the ideals promised by the vision of linked open data.

Screenshot of Google search for tallest buildings

Google and Bing have invested considerable effort in making semantic search a reality: communication campaigns to encourage implementation of semantic markup, and technical resources to consume this markup to offer their customers a better search experience. They crawl and index every word on every page, and perform an impressive range of transformations of that information to understand and use it. But the process that the search engines use to extract meaning from content is not something that ordinary content consumers can do, and in many ways is more complicated than it needs to be. One gets a sense of how developer-driven semantic search markup is by looking at the fluctuating formats used by Schema.org. There are three different markup languages (microdata, RDFa, and JSON-LD) with significantly different ways of characterizing the data. Google’s robots are sophisticated enough to be able to interpret any of the types of markup. But people not working for a search engine firm need to rely on something like Apache Any23, a Java library, to extract semantic content marked up in different formats.

Screenshot of Apache Any23
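To get a feel for what extraction involves without a Java toolchain, here is a minimal Python sketch, assuming the page embeds JSON-LD; the URL is hypothetical, and rdflib (version 6 or later, which bundles a JSON-LD parser) stands in for Any23’s broader multi-format support:

```python
# A sketch of extracting embedded JSON-LD from a web page and loading it
# as RDF triples. The URL is hypothetical; requires requests,
# beautifulsoup4, and rdflib >= 6.
import requests
from bs4 import BeautifulSoup
from rdflib import Graph

html = requests.get("https://www.example.com/empire-state-building").text
soup = BeautifulSoup(html, "html.parser")

graph = Graph()
for tag in soup.find_all("script", type="application/ld+json"):
    # Each script tag holds one JSON-LD object describing the page's content.
    graph.parse(data=tag.string, format="json-ld")

# Print every extracted subject-predicate-object triple.
for subject, predicate, obj in graph:
    print(subject, predicate, obj)
```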

Linked Data is Content that Needs a User Interface

How does an ordinary person link to content described with Schema.org markup? Tim Berners-Lee famously described linked data as “browseable data.” How can we browse all this great stuff that’s out there, that’s finally been annotated so that we get the exact bits we want? Audiences should have many avenues for retrieving content so that they can use it in the context where they need it. They need a user interface to the linked data. We need to build this missing user interface. For this to happen, there need to be helpful APIs, and easy-to-use consumer applications.

APIs

The goal of APIs is to find other people to promote the use of your content. Ideally, they will use your content in ways you might not even have considered, thereby adding value to the content by expanding its range of potential use.

APIs play a growing role in the distribution of content. But they often aren’t truly open in the sense that they offer a wide range of options to data consumers. APIs thus far seem to play a limited role in enabling the use of content annotated with Schema.org markup.

Getting data from an API can be a chore, even for quantitatively sophisticated people who are used to thinking about variables.  AJ Hirst, an open data advocate who teaches at the Open University, says: “For me, a stereotypical data user might be someone who typically wants to be able to quickly and easily get just the data they want from the API into a data representation that is native to the environment they are working in, and that they are familiar with working with.”

API frictions are numerous: people need to figure out what data is available, what it means, and how they can use it.  Hirst advocates more user-friendly discovery resources. “If there isn’t a discovery tool they can use from the tool they’re using or environment they’re working in, then finding data from service X turns into another chore that takes them out of their analysis context.”  His view: “APIs for users – not programmers. That’s what I want from an API.”

The other challenge is that the query possibilities for semantic content go beyond the basic functions commonly used in APIs.

Jeremiah Lee, an API designer at Fitbit, has thought about how to encourage API providers and users to think more broadly about what content is available, and how it might be used.  He notes: “REST is a great starting point for basic CRUD operations, but it doesn’t adequately explain how to work with collections, relational data, operations that don’t map to basic HTTP verbs, or data extracted from basic resources (such as stats). Hypermedia proponents argue that linked resources best enable discoverability, just as one might browse several linked articles on Wikipedia to learn about a subject. While doing so may help explain resource relationships after enough clicking, it’s not the best way to communicate concepts.”

For Linked Data, a new API standard called Hydra is under development that aims to address some of the technical limitations of standard APIs that Lee mentions. But the human challenges remain, and the richer the functionality offered by an API, the more important it is that the API be self-describing.
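Hydra describes an API’s classes and operations in JSON-LD so that clients can discover them at runtime. A rough sketch of what such a self-description might look like, based on the draft Hydra core vocabulary; the identifiers and endpoint URLs are invented for illustration:

```json
{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "https://api.example.com/doc/",
  "@type": "ApiDocumentation",
  "title": "Buildings API",
  "supportedClass": [
    {
      "@id": "https://api.example.com/doc/#Building",
      "@type": "Class",
      "title": "Building",
      "supportedOperation": [
        {
          "@type": "Operation",
          "method": "GET",
          "description": "Retrieves a Building and its linked properties"
        }
      ]
    }
  ]
}
```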

Fitbit’s API, while not a semantic web application, does illustrate some novel properties that could be used for semantic web APIs, including a more visually rich presentation with more detailed descriptions and suggestions available via tooltips.  These aid the API user, who may have various goals and levels of knowledge relating to the content.

Screenshot of Fitbit API

Consumer apps

The tools available to ordinary content users to add semantic descriptions have become more plentiful and easier to use. Ordinary web writers can use Google’s data highlighter to indicate what content elements are about. Several popular CMS platforms have plug-ins that allow content creators to fill in forms to describe the content on the page. These kinds of tools hide the markup from the user, and have been helpful in spurring adoption of semantic markup.

While the creation of semantic content has become popularized, there has not been equivalent progress in developing user-friendly tools that allow audiences to retrieve and explore semantic content. Paige Morgan, an historian who is developing a semantic data set of economic information, notes: “Unfortunately, structuring your data and getting it into a triplestore is only part of the challenge. To query it (which is really the point of working with RDF, and which you need to do in order to make sure that your data structure works), you need to know SPARQL — but SPARQL will return a page of URIs (uniform resource identifiers — which are often in the form of HTML addresses). To get data out of your triplestore in a more user-friendly and readable format, you need to write a script in something like Python or Ruby.  And that still isn’t any sort of graphical user interface for users who aren’t especially tech-savvy.”
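To make Morgan’s point concrete, here is roughly what answering the earlier tall-buildings question looks like today: a Python sketch using the SPARQLWrapper library against DBpedia’s public endpoint. The ontology terms (dbo:Skyscraper, dbo:architect) reflect my understanding of the DBpedia vocabulary, so treat the query as illustrative:

```python
# A sketch of querying DBpedia for the architects credited with the most
# skyscrapers, using the SPARQLWrapper library.
from SPARQLWrapper import JSON, SPARQLWrapper

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?architect (COUNT(?building) AS ?n)
    WHERE {
        ?building a dbo:Skyscraper ;
                  dbo:architect ?architect .
    }
    GROUP BY ?architect
    ORDER BY DESC(?n)
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Each binding is a set of URIs, much as Morgan describes -- the script's
# job is to turn them into something readable.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["architect"]["value"], row["n"]["value"])
```

This is exactly the kind of script Morgan says non-technical users should not have to write, which is the point: until something packages these queries behind a friendly interface, linked data remains closed to most of its audience.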

We lack consumer-oriented applications that allow people to access and recombine linked data.  There is no user interface for individuals to link themselves to the linked data.  The missing UI reflects a legacy of seeing linked data as being entirely about making content machine-readable.  According to legacy thinking, if people needed to directly interact with the data, they could download it to a spreadsheet.  The term “data” appeals to developers who are comfortable thinking about content structured as databases, but it doesn’t suggest application to things that are mentioned in narrative content.  Most content described by Schema.org is textual content, not numbers, which is what most non-IT people consider as data.  And text exists to be read by people.  But the jargon we are stuck with to discuss semantic content means we emphasize the machine/data side of the equation, rather than the audience/content side of it.

Linked data in reality is linked facts: facts that people can find useful in a variety of situations. Google Now is ready to use your linked data and tell your customers when they should leave the house. Google has identified the contextual value to consumers of linked data. Perhaps your brand should also use linked data in conversations with your customers. To do this, you need to create consumer-facing apps that leverage linked data to empower your customers.

Wolfram Alpha is a well-known consumer app for exploring data on general topics that has been collected from various sources. They characterize their mission, quite appealingly, as “democratizing data.” The app is user friendly, offering query suggestions to help users understand what kinds of information can be retrieved, and refine their queries. Their solution is not open, however. According to Wolfram’s Luc Barthelet, “Wolfram|Alpha is not searching the Semantic Web per se. It takes search queries and maps them to an exact semantic understanding of the query, which is then processed against its curated knowledge base.” While more versatile than Google search in the range and detail of information retrieved, it is still a gatekeeper, where individuals are dependent on the information collection decisions of a single company. Wolfram lacks an open-standards, linked-data foundation, though it does suggest how a consumer-focused application might make use of semantic data.

The task of developing an app is more manageable when the app is focused on a specific domain.  The New York Times and other news organizations have been working with linked data for several years to enhance the flexibility of the information they offer.  In 2010 the Times created an “alumni in the news” app that let people track mentions of people according to what university they attended, where the educational information was sourced from DBpedia.

New York Times Linked Data app for alumni in the news. It relied in part on linked data from Freebase, a Google product that is being retired and superseded by Wikidata.

A recent example of a consumer app that is using linked data is a sports-oriented social network called YourSports.  The core metadata of the app is built in JSON-LD, and the app creator is even proposing extensions to Schema.org to describe sports relationships.  This kind of app hides the details of the metadata from the users, and enables them to explore data dimensions as suits their interests.  I don’t have direct experience of this app, but it appears to aggregate and integrate sports-related factual content from different sources.  In doing so, it enhances value for users and content producers.

Screenshot of YourSports

Opening up content, realizing content value

If your organization is investing in semantic search markup, you should be asking: How else can we leverage this?  Are you using the markup to expose your content in your APIs so other publishers can utilize the content?  Are you considering how to empower potential readers of your content to explore what you have available?  Consumer brands have an opportunity to offer linked data to potential customers through an app that could result in lead generation.  For example, a travel brand could use linked data relating to destinations to encourage trip planning, and eventual booking of transportation, accommodation, and events.  Or an event producer might seed some of its own content to global partners by creating an API experience that leverages the semantic descriptions.

The pace of adoption for aspects of the semantic web has been remarkable. But it is easy to overlook what is missing. A position paper for Schema.org says “Schema.org is designed for extremely mainstream, mass-market adoption.” But to consider the mass-market only as publishers acting in their role as customers of search engines is too limiting. The real mainstream, mass-market is the audience that is consuming the content. These people may not even have used a search engine to reach your content.

Audiences need ways to explore semantically-defined factual content as they please. It is nice that one can find bits of content through Google, but it would be better if one didn’t have to rely solely on Google to explore such content. Yes, Google search is often effective, but search results aren’t really browseable. Search isn’t designed for browsing: it’s designed to locate specific, known items of information. Semantic search provides a solution to the issue of too much information: it narrows the pool of results. Google in particular is geared to offering instant answers, rather than sustaining an engaging content experience.

Linked data is larger than semantic search.  Linked data is designed to discover connections, to see themes worth exploring. Linked data allows brands to juxtapose different kinds of information together that might share a common location or timing, for example. Individuals first need to understand what questions they might be interested in before they are ready for answers to those questions. They start with goals that are hard to define in a search query.  Linked data provides a mechanism to help people explore content that relates to these goals.

While Google knows a lot about many things relating to a person, and people in general, it doesn’t specialize in any one area.  The best brands understand how their customers think about their products and services, and have unique insights into the motivations of people with respect to a specific facet of their lives.  Brands that enable people to interact with linked data, and allow them to make connections and explore possibilities, can provide prospective customers something they can’t get from Google.

— Michael Andrews


The Benefits of Hacking Your Own Content

How can content strategy help organizations break down the silos that bottle up their content?  The first move may be to encourage organizations to hack their own content.

Silos are the villains of content strategists. To slay the villain, the hero or heroine must follow three steps to enlightenment:

  1. Transcend organizational silos that hinder the coordination and execution of content
  2. Adopt an omnichannel approach that provides customers with content wherever and however they need it, so that they aren’t hostage to incoherent internal organizational processes and separately managed channels that fragment their journey and experience
  3. Reuse content across the organization to achieve a more cost-effective and revenue-enhancing utilization of content

The path that connects these steps is structured content. Each of these rationales is a powerful argument to change fractured activities.  Taken together, they form a compelling motivation to de-silo content.

“Content silo trap: Situation created by authors working in isolation from other authors within the organization. Walls are erected among content areas and even within content areas, which leads to content being created and recreated and recreated, often with changes or differences in each iteration.” — Ann Rockley and Charles Cooper in Managing Enterprise Content: Unified Content Strategy.

The definition of a content silo trap emphasizes the duplication of effort. But the problems can manifest in other ways. When groups don’t share content with each other, it results in a content situation that divides the haves and the have-nots. Those who must create content with finite resources need to prioritize what content to create. They may forego providing their target audiences with content relating to a facet of a topic, if it involves more work than the staff available can handle. Often organizational units devote most of their time to revising existing content rather than creating new content, so what they offer to audiences is highly dependent on what they already have. Even when it seems like a good idea to incorporate content related to one’s own area of responsibility that’s being used elsewhere, it can be difficult to get it in a timely manner. It may not be clear whether it would be worth the effort to reproduce this content oneself.

What Silos Look Like from the Inside

Let’s imagine a fictional company that serves two kinds of customers: consumers, and businesses.  The products that the firm offers to consumers and businesses are nearly identical, but are packaged differently, with slightly different prices, sales channels, warranties, etc.  Importantly, the consumer and B2B businesses are run as separate operating units, each responsible for their own expenses and revenues.  The consumer unit has a higher profit margin and is growing faster, and decided a couple of years ago to upgrade its CMS to a new system that’s not compatible with the legacy system the entire company had used.  The B2B division is still on the old CMS, hoping to upgrade in the near future.

A while ago, a product manager in the B2B division asked her counterpart in the consumer division if she’d be able to get some of the punchy creative copy that the consumer division’s digital agency was producing.  It seemed like it could enhance the attractiveness of the B2B offering as well.   Obviously only parts were relevant, but the product manager asked to receive the consumer product copy as it was being produced, so it could be incorporated into the B2B product pages.  After some discussion, the consumer division product manager realized that sharing the content involved too much work for his team.  It would suck up valuable time from his staff, and hinder his team’s ability to meet its objectives.  In fact, making the effort to do the laborious work of sending each item of content on a regular basis wouldn’t bring any tangible benefit to his team’s performance metrics.

This scenario may seem like a caricature of a dysfunctional company. But many firms face these kinds of internal frictions, even if the most prevalent cases play out more subtly.

Many organizations know on a visceral level that silos are a burden and hinder their capability to serve customers and grow revenues. But they may not have a vivid understanding of what specific frictions exist, and the costs associated with these frictions. Sometimes they’ve outlined a generic high-level business case for adopting structured content across their organization that talks in terms of big themes such as delivery to mobile devices and personalization.  But they often don’t have a granular understanding of what exact content to prioritize for structuring.

The Dilemma of Moving to Structured Content

Many organizations that try to adopt structured content in a wholesale manner find the process more involved than they anticipated. It can be complex and time-consuming, involving much organizational process change, and can seem to jeopardize their ability to meet other, more immediate goals. Some early, earnest attempts at structured content failed when the enthusiasm for a game-changing future collided with the enormity of the task. De-siloing projects also run the risk of being ruthlessly de-scoped and scaled back, to the point where the original goal loses its potency. When the effort involved comes to the foreground, the benefits may seem abstract and distant, receding to the background. Consultant Joe Pairman speaks about “structured content management project failure” as a problem that arises when the expectations driving the effort are fuzzy.

Achieving a unified content strategy based on coordinated, structured content involves a fundamental dilemma. The firms with the most organizational complexity stand to benefit the most, but they are also the ones with the most silos to overcome, and they frequently have the most difficulty transitioning to a unified structured content approach. The more diverse your content, the more challenging it is to do a total redesign of it based on modular components.

“The big bang approach can be difficult,” Rebecca Schneider, President of Azzard Consulting, noted during the panel discussion [at the Content Strategy Applied conference]. “But small successes can yield broad results,” according to a Content Science blog post.

Content Hacking as an Alternative to Wholesale Restructuring

If wholesale content restructuring is difficult to do quickly in a complex organization, what is the alternative?  One approach is to borrow ideas from the Create Once, Publish Everywhere (COPE) paradigm by using APIs to get content to more places.

Over the past two years, a number of new tools have emerged that make shifting content easier.  First, there are simple web scraping tools, some browser-based, that can lift content from sections of a page.  Second, there are build-your-own API services such as IFTTT and Zapier that require little or no programming knowledge.

Particularly interesting are newer services such as Import.IO and Kimono that combine web scraping with API creation. Both these services suggest that programming is not required, though the services of a competent developer are useful to get their full benefits. Whereas previously developers needed to hand-code using, say, PHP to scrape a web page, and then translate those results into an API, now much of this background work can be done by third-party services. That means that scraping and republishing content is now easier, faster, and cheaper. This opens up new applications.

Screenshots of Kimono (via Kimono Labs)

Lowering the Barriers to Sharing Content

The goal for the B2B division product manager is to be able to reuse content from the consumer division without having to rely on that division’s staff, or on access to their systems.  Ideally, she wants to be able to scrape the parts she needs, and insert them in her content.  Tools that combine web scraping and API creation can help.

Generic process of web scraping/content extraction and API tools

The process for scraping content involves highlighting sections of pages you want to scrape, labeling these sections, then training the scraper to identify the same sorts of items on related pages you want to scrape.  The results are stored in a simple database table.  These results are then available to an API that can be created to pull elements and insert them onto other pages.  The training can sometimes be fiddly, depending on the original content characteristics.  But once the content is scraped, it can be filtered and otherwise refined (such as given a defined data type) before republishing.  The API can specify what content to use and its source in a range of coding languages compatible with different content delivery set-ups.
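As a rough illustration of that pipeline, here is a self-contained Python sketch, with hypothetical URLs, CSS selectors, and field names, that scrapes labeled sections into a table and republishes them through a small Flask API. The third-party services described above handle the training and hosting for you; this is the hand-rolled equivalent:

```python
# A sketch of the scrape + API pattern: scrape labeled page sections,
# store them in a table, and serve them via an API. URLs, selectors,
# and field names are hypothetical. Requires requests, beautifulsoup4,
# and flask.
import sqlite3

import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify

DB = "scraped_content.db"

def scrape(url):
    """Scrape the labeled sections of one product page."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    return {
        "url": url,
        "headline": soup.select_one("h1.product-name").get_text(strip=True),
        "copy": soup.select_one("div.marketing-copy").get_text(strip=True),
    }

def store(rows):
    """Keep scraped results in a simple database table."""
    con = sqlite3.connect(DB)
    con.execute(
        "CREATE TABLE IF NOT EXISTS items (url TEXT PRIMARY KEY, headline TEXT, copy TEXT)"
    )
    con.executemany(
        "INSERT OR REPLACE INTO items VALUES (:url, :headline, :copy)", rows
    )
    con.commit()

app = Flask(__name__)

@app.route("/api/items")
def items():
    # Expose the stored elements so other pages can pull and insert them.
    con = sqlite3.connect(DB)
    rows = con.execute("SELECT url, headline, copy FROM items").fetchall()
    return jsonify([{"url": u, "headline": h, "copy": c} for u, h, c in rows])

if __name__ == "__main__":
    store([scrape(u) for u in ["https://www.example.com/products/widget"]])
    app.run()
```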

The scrape + API approach mimics some of the behavior of structured content.  The party needing the content identifies what they need, and essentially tags it.  They define the meaning of specific elements.   (The machine learning in the background still needs the original source to have some recognizable, repeating markup or layout to learn the elements to scrape, even if it doesn’t yet know what the elements represent.)

While a common use case would be scraping content from another organizational unit, the approach might also be applied to reusing content within one’s own organizational unit. If a unit doesn’t have well-defined content, it is likely having trouble reusing its own content in different contexts. It may want to reuse elements for content that addresses different stages of a customer journey, or different audience variations.

Benefits of Content Hacking

This approach can benefit a party that needs to use content published elsewhere in the organization.  It can help bridge organizational silos, technical silos, and channel silos that customers encounter when accessing content.  The approach can even be used to jump across the boundaries that separate different firms.  The creators of Import.IO, for example, are targeting app developers who make price comparison apps.  While scraping and republishing other firms’ content without permission may not be welcomed, there could be cases where two firms agree to share content as part of a joint business project, and a scraping + API approach could be a quick and pragmatic way to amplify a common message.

As a fast, cheap, and dirty method, the scrape + API approach excels at highlighting what content problems need to be solved in a more rigorous way, with true content structuring and a common, well-defined governance process.  One of the biggest hurdles to adopting a unified, structured approach to content is knowing where to start, and knowing what the real value of the effort will be.  By prototyping content reuse through a scrape + API approach, organizations can get tangible data on the potential scope and utilization of content elements.  APIs make it possible for content elements to be sprinkled in different contexts.  One can test if content additions enhance outcomes: for example, driving more conversions. One can A/B test content with and without different elements to learn their value to different segments in different scenarios.

Ultimately, prototyping content reuse can provide a mapping of what elements should be structured, and prioritize when to do that.  It can identify use cases where content reuse (and supporting content structure) is needed, which can be associated with specific audience segments (revenue-generating customers) and internal organizational sponsors (product owners).

Why Content Hacking is a Tactic and not a Strategy

If content hacking sounds easy, then why bother with a more methodical and time-consuming approach to formal content structuring?  The answer is that though content hacking may provide short-term benefits, it can be brittle — it’s a duct tape fix.  Relying on it too much can eventually cause issues.  It’s not a best practice: it’s a tactic, a way to use “lean” thinking to cut through the Gordian knot of siloed content.

Content hacking may not be efficient for content that needs frequent, quick revision, since it needs to go through extra steps of being scraped and stored. It also may not be efficient if multiple parties need the same content but want to do different things with it — a single API might not serve all stakeholder needs. Unlike semantically structured content, scraped content doesn’t enable semantic manipulation, such as the advanced application of business logic against metadata, or detailed analytics tracking of semantic entities. And importantly, even a duct tape approach requires coordination between the content producer and the person who reuses the content, so that the party reusing content doesn’t get an unwelcome surprise concerning the nature and timing of content available.

But as a tactic, content hacking may provide the needed proof of value for content reuse to get your organization to embark on dismantling silos and embracing a unified approach.

— Michael Andrews


Content sources across the customer journey

Customers are always on the run, checking information, making evaluations, and tracking how well and quickly they are getting things done. This momentum — being always on and always moving — has profound implications for content strategy. The best way to gain a holistic view of what’s involved is to look at the full customer journey, and the various services needed to support that journey, whoever provides them. At different stages, the user has different tasks, and needs content to support these tasks. When brands examine the journey from end to end, they often discover that they do not have some of the content needed to support many of the user’s tasks.

Content comes from many different kinds of sources. Brands are major creators of content, but so are individuals, communities of people, governments, and non-government organizations. Content can take many forms as well: it can be articles and videos, but also items of information commonly described as data. One shouldn’t make an artificial distinction between authored content and factual data when these resources need to be visible and meaningful to users.

To see how to join up different sources of content to support user journeys, let’s consider a scenario. Neil is a 41-year-old American software developer, recently divorced and living by himself in the Research Triangle, North Carolina. He recently had his blood pressure checked, and it was found to be a bit high. He is told he should consider modifying his diet to reduce his blood pressure. Neil is someone into “lifehacking,” so he decides to dig deeper into the topic to find out what’s best for him.

Step one: Goal setting with personal content

Neil reviews his device’s app store to see what’s useful. He finds a healthy living app that can track his diet and makes recommendations on how to improve it. He enters what he eats and drinks for a week and graphs the results. The app flags his coffee and processed food consumption as areas he should watch — processed food contains a lot of sodium. He likes the taste and convenience of processed food, but decides he should try to cook more for himself. He fiddles with some parameters on his healthy living app and gets some recommendations on kinds of foods he should consider eating. He likes some recommendations, hates others, and believes others are worthy but difficult. He sets some goals for eating, and will track these in his app. At this goal-setting stage, the content is personal to Neil: his recommendations based on parameters he selected, his goals, and his behavioral data.

Step two: Planning using community-contributed content

Neil doesn’t particularly enjoy cooking, because in the past he’s found it time consuming, and his results have been disappointing. He searches for a source of recipes that are easy to make and don’t sound awful. He finds a recipe community that specializes in easy-to-make dishes. Community members submit recipes they like and can vote and comment on ones they’ve tried based on taste, ease of making, ease of storing ingredients, and ease of saving leftovers. He likes the reputational dimensions of the community: members get recognition for their submissions and the votes cast and received. Neil can link his healthy living app to this community, so that he can compare his profile goals with those of other community contributors. He scans pictures of dishes that match his criteria and notices that some are favorites of people who follow protein-rich diets and avoid carbohydrates. On closer inspection of the ingredients, he sees these dishes avoid starches. Neil likes his carbs, so he filters out these options. He looks for people more like him who are most concerned with the sodium dimension, and looks over their favorites. He finds a couple of casserole dishes that sound easy to make, and easy to save as leftovers. For planning his meals, Neil has relied on community content: what’s popular, and with whom it is popular. He saves these recipes to his “to try” list in his healthy living app, so he can track when he has them.

Step three: Evaluation using public content and open data

Neil has two dishes he wants to make: a tuna casserole, and a Mexican casserole. Both use ingredients easily obtained and with a long shelf life: things like cans of tuna, cans of onion soup, cans of beans, bags of chips, jars of salsa, and processed cheese. He hates having to worry about food spoiling in the fridge. He notices a new detail about the ingredients: he must use low-sodium varieties of these ingredients if the dish is to qualify as low sodium. Neil’s starting to feel overwhelmed: his supermarket seems to have endless varieties of similar items, and he finds it a pain to read the tiny nutritional labels on products. He’s been warned that advertised claims of “reduced salt” can be misleading. He wants to be able to search across different brands to find which ones have the lowest sodium. Fortunately he finds a new website that is aggregating nutritional information of food products from many brands. Ideally the USDA would aggregate all the information from nutritional labels of food products, and make it available in an open format with a public API. But the USDA does not offer this information itself, so instead Neil uses a website that relies on voluntary submissions by vendors, or on scraping info from their websites. The information is useful, though incomplete. Neil is able to search for food products such as salsa, and find candidate brands that are low sodium. He exports this list of brands to the shopping app on his phone. He has relied on aggregated public information to evaluate which brands are most suitable. Third-party aggregators are credible providers of such information.

Step four: Purchase selection using company content

Neil now feels ready to visit his cavernous supermarket. He chooses to shop at a supermarket that is employing new technology that allows shoppers to use their mobile phones to navigate through the store and check inventory. The supermarket has its own app that can link to Neil’s shopping list. It tells Neil which brands it has in stock, what the prices are, and which aisle they are located in. The store only carries one brand of low-sodium salsa, but has three brands of low-sodium beans, and he can compare the prices on his phone before hunting for them on the shelf. The app also shows photos of the items, so Neil knows what to look for. So many products look similar, so it’s important to be sure you are picking up the one you really want, and not something that’s similar but different in a critical aspect (e.g., getting the extra spicy low-sodium beans, instead of unflavored low-sodium ones). For the purchase phase, Neil has relied on company-provided content. He is motivated by ease of purchase, and individual retailers are in a primary position to offer content supporting such convenience.

Neil’s content journey

Insights and lessons

Neil’s journey illustrates three major issues audiences and brands face when integrating content from different sources:

  1. Technical constraints and functional gaps that create friction
  2. Fuzzy ownership of responsibilities across the customer journey
  3. Balancing the financial motivations of the brand with the incentives motivating the customer

Gaps, constraints, and friction

Everything in Neil’s scenario is technically feasible, even if parts seem magical compared with today’s reality. For the user, a journey like this is often fragmented across many separate sites and apps, which may not share content with each other. Users often rely on different kinds of content, from different sources, at different stages.

When Neil moves between apps or sites focused on different primary tasks, there is obvious potential for friction. As a computer professional, Neil is able to take content from one task domain and use it in another, using tools like IFTTT. Other users, however, may have to manually re-enter content from one task domain to another, unless content linking and import is built in. Such built-in functionality requires common exchange formats and APIs. There are microformats for recipes, government-mandated nutritional information follows a standardized format, and retailers track products using standardized nomenclature such as UPCs and SKUs. But content addressing higher-level tasks such as dietary goals or ease of preparation does not follow open standards, meaning the exchange of such information between applications is more difficult. In these cases, forging partnerships to create one’s own format to exchange content may be the best option. Obviously, any connections between task domains (sharing log-in credentials, and sharing data) will help customers carry forward their journey, and help to drive adoption of your solution.
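Recipes illustrate how far the existing standards reach. A dish marked up with Schema.org’s Recipe type can already carry the structured nutrition facts Neil cares about; the sketch below uses invented values and an invented dish name for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Low-Sodium Tuna Casserole",
  "recipeIngredient": [
    "2 cans low-sodium tuna",
    "1 can low-sodium cream of onion soup",
    "2 cups egg noodles"
  ],
  "prepTime": "PT15M",
  "nutrition": {
    "@type": "NutritionInformation",
    "sodiumContent": "140 mg"
  }
}
```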

Whose problem is it?

The scenario highlights the fuzzy boundaries surrounding who offers the right solution for Neil. In many cases, such as the one outlined in this scenario, no one party will originate all the content needed to support a complex user task journey. From a user perspective, it may seem desirable to have a “one stop” solution where he or she can perform all the tasks. Such an approach would eliminate hopping between applications and websites, and potentially enable users to see connections between different tasks and their associated data and content. But it isn’t obvious that one solution can obtain all the content needed to support the user. Typically, integrated solutions do not offer the best content available. Rather, they offer content that is easy to obtain, or content that selectively promotes the goals of the brand behind the solution. If you want to buy a camera, reading customer reviews on the Walmart website isn’t your best source of customer evaluations — buyers can get more complete and higher-quality review information from a third-party photography website. If a customer wants recipes, your supermarket may offer some that use products that the supermarket is promoting, but these recipes are not necessarily the best ones, and will certainly represent only a small sample of what’s available.

Brands need to think about what kinds of content their customers seek and consider during their journeys, and figure out how they can be a part of the conversation. The goal should be to make your content available at whatever stage it is needed. Look at opportunities to incorporate outside content where appropriate. Think about where the main source of content relating to a given user task resides. Can the brand get that content itself, or does it make sense for it to offer its content to that source?

Being helpful with your content

Jeff Bezos reportedly explained why brands earn or lose customer love: “Defeating tiny guys is not cool” while “Defeating bigger, unsympathetic guys is cool.” To earn customer love, brands also need to consider how they treat other parties’ content. Do they seem to be freely sharing a great resource, or do they seem to throttle choice and push their own agenda with what they present? Whether a brand chooses to incorporate other parties’ content into its solution, or offer its content to others (via an API), it needs to come across as generous, and unbiased, to earn credibility and trust.

Audiences invest time and effort evaluating content, saving content, and creating their own content, motivated by the value they derive from different content sources. It is important to respect that effort. Content linking and sharing is a classic example of a network effect, where content becomes more valuable the more task scenarios it can be used in. Brands need to consider the network effect dynamics when choosing what content to offer, and where to offer it.

There can be a natural tendency for brands to want to invest only in content that shows immediate payoffs. Consider the supermarket chain. It did not choose to submit the nutritional information of its house brands to the third-party website. As a result, its house brands were not part of Neil’s consideration set. When it created its in-store app, some members of the supermarket executive team didn’t want to include photos of the products. They reasoned that it was an unnecessary expense. The price and inventory information were already available in their inventory system, but that system didn’t store photographic content. But by making the investment, they improved the customer experience, and greatly increased adoption of their app.

The supermarket executives also debated how to understand more fully what their customers wanted to buy, so they could better forecast demand. Their prior attempt to tie their loyalty card to their own recipe app, offering coupons, didn’t result in much adoption. They were interested in figuring out how to get people like Neil to share their dietary goal-setting information. While this is valuable content for the supermarket chain, helping it better target ads and offers, it isn’t clear what Neil would get in return for providing this information. More coupons? Neil gets a clear benefit using his goals to plan his meals, but the value of providing his goals to the supermarket after he’s already decided what he wants to buy isn’t clear. The supermarket needs to think how Neil can use this information in the context of his relationship with the supermarket, so that Neil is in charge of what he does with the information, and derives value using it. Perhaps he could be rewarded for participating in a program to test new products that are aligned with his dietary goals.

Final thoughts

Brands, especially retail brands and service providers such as banks, hotels, and airlines, are thinking more about omnichannel communication with their customers. Customers can need help at any point, can seek content through many channels and from many sources (including those of rivals), and expect answers instantly. A strategy that shares content across tasks is the best approach to meeting customers’ needs as they arise. If customers are doing a task that involves other sources of content in addition to your own, your brand needs to figure out how customers can integrate both kinds of content to provide the level of support they increasingly expect. Having your content play well with others is not just a nice thing to do, but a business imperative.

— Michael Andrews