
Learning from PDFs

PDFs don’t seem terribly interesting.  Few people would say they love them, and more than a few would say they hate them. But PDFs can offer content strategists important insights into the needs of content users who want to build an understanding of a topic.

In 2001, Jakob Nielsen pronounced: “Avoid PDF for On-Screen Reading.”  Nearly a decade and a half later, 1.8 billion PDFs are on the web. PDFs don’t seem to be losing momentum either.  A recent article on Econsultancy stated: “Optimising PDFs for search is one of the most overlooked SEO opportunities available today.”

Among digerati, PDFs have a reputation nearly as bad as Adobe Flash or Microsoft Word.  Ask a content strategist about PDFs, and you are likely to hear:

  • PDFs are for dinosaurs
  • No one reads PDFs
  • PDFs are unusable
  • PDFs reflect legacy thinking of putting print-first, digital last
  • You can’t read a PDF on a mobile phone
  • (Various curse words)

It’s time to talk about the elephant in the room. The title for this post borrows from the name of a classic book on architecture by Robert Venturi called “Learning from Las Vegas” which, in the words of its publisher, called “for architects to be more receptive to the tastes and values of ‘common’ people.”  That book critiqued rigid, rationalist solutions promoting the supposed perfections of modernist design.  Venturi’s approach foreshadowed the spirit of user-centered design, which encourages designers to look at how people actually use things, instead of focusing on how designers would like them to be used. Building on existing social practices is sometimes referred to as paving the cowpaths.

Unlike Venturi, I’m not going to issue a manifesto. PDFs do have numerous issues, and scarcely exemplify ideals of smart, flexible, modular content.  Nonetheless, the popularity of PDFs with knowledge-centric professionals such as doctors, scholars, scientists, and lawyers challenges any smug beliefs we may have that PDFs are only used by hapless paper-pushers awaiting retirement.

ReadCube, an app developer that works with publishers such as Wiley, Springer and Palgrave Macmillan, notes that readers often reject HTML versions of content.  They state: “Publishers and platform providers find that despite the significant amount of value added to the full text HTML pages on their platforms, the vast majority of users choose to click on the PDF download link.”  Apparently no one told the research scientists who are downloading these PDFs that HTML is superior.

While it is true that much content in PDFs is never viewed, it is also true that some people choose to have their most important content in the PDF format.  PDFs are notorious for burying nuggets of content in a larger body.  PDFs fuse together everything: all the content and presentation sealed in one package, making the output inflexible.  But some people have figured out how to turn that vice into an asset.

The Dream of Digital Paper

PDFs have long traded on the notion that they represent paper in digital form.  Originally they were simply a format used to allow people to print out content onto physical paper.  But with the rise of tablets, they have more closely mimicked some of the affordances of paper.  The iPad is the platform of choice for viewing PDFs.  The name iPad is a portmanteau of interactive and pad (of paper).  Numerous PDF viewing apps are available for the iPad, such as Papers, Papership, and Docphin.  Last year, Sony introduced a dedicated PDF tablet with a 13-inch E Ink screen and a stylus.  Sony’s Digital Paper is targeted at law professionals (to read and annotate legal documents and take notes) and entertainment professionals (to annotate scripts and share revisions with cast and crew).

Readers often favor the PDF format because of its ability to present content with sophisticated layouts like those used in paper documents.  Layouts for long-form content are different from those for short-form content, because of the need to scan, look ahead, and backtrack while reading.  Even though CSS can be used with HTML to deliver complex tables and multi-column text, the creation of such layouts can be challenging, especially when the content is also expected to work on small screens. As a result, such layouts are rarer for HTML content.

A feature unique to PDFs is the ability to scrawl on them. People can add markings of different kinds: sweeping arrows and brackets, idiosyncratic symbols, impromptu diagrams, and doodles.  It is a rare example of the audience being able to bring their own personality to what they are viewing.  By leaving one’s own digital handwriting on the content, the reader can show “I was here,” and others who see the markings will know that too. The ability to draw on top of the content symbolizes how people who use PDFs are often active users of the content, not passive ones.

Audience Control Over Content

Power users of PDFs share several traits.  They need:

  • reliable access to the content
  • to know the provenance of the content
  • to reuse the material in the content

All these goals align well with principles in content strategy.  If people believe that PDFs support these needs better than HTML, we have an opportunity to consider how to support these needs more effectively with HTML content.

Access

One motivation for using PDFs is certainty over access.  Being able to download something reduces the risk that the content might not be available in the future.  People have had the experience of online content seeming to disappear.  Sometimes content that’s wanted has been taken down, but other times it is moved so that links no longer work.  If someone needs to rely on a search engine to find content again, the task can be daunting, given how much content is available.  As content ages, its search ranking sinks, and people forget what search terms yielded results originally.

I download PDF copies of manuals for devices and software I own.  If I didn’t do that and needed to find a manual online, the task could be annoying, since many spammy content farms have been built around searches for product manuals.

Defensive downloading is a lousy experience.  The best strategy to help people re-locate content online is to maintain current links and redirects, and make sure that site search works well, so if the user only remembers the source of the content, but not a precise description of it, they can still locate what they need.
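
For example, a moved document can keep its old links working through a permanent redirect declared in the web server configuration. This is a minimal sketch, assuming an Apache server and hypothetical paths:

  # Hypothetical paths: the manual moved, so forward the old URL permanently
  Redirect permanent /support/manual-v1.pdf /support/manuals/widget-manual.pdf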

Provenance

A weakness of most online content is the quality of information about the origin of the content.  Historically, people viewed content online and could see what site hosted the content.  Yet content is increasingly becoming separated from its source.  PDFs offer a preview of some broader issues relating to content provenance.

More sophisticated PDF viewers recognize that users will later need to know where the content came from.  They add metadata about the content.  Often, they collect the metadata automatically, by either finding an identifier on the content, such as a Digital Object Identifier (DOI) number, or by matching the title and other text in the article with online bibliographic databases that contain records for articles.  If there is no metadata already available, users have an option to add their own to the PDF.

searching for metadata (screenshot via Paperclip)
locating metadata (screenshot via Papership)

Most HTML content lacks identifying metadata.  If you separate the content from the source, you don’t know who created it.  Nimble content, which goes to people rather than expecting people to come to it, needs to indicate its identity so that people know where it has come from. Brands need to identify their content using standards such as schema.org metadata for articles and blog posts.
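
As a minimal sketch, a brand can embed identifying schema.org metadata directly in an article’s HTML using microdata; the property values below are placeholders:

  <article itemscope itemtype="https://schema.org/Article">
    <h1 itemprop="headline">Learning from PDFs</h1>
    <span itemprop="author" itemscope itemtype="https://schema.org/Person">
      <span itemprop="name">Michael Andrews</span>
    </span>
    <span itemprop="publisher" itemscope itemtype="https://schema.org/Organization">
      <span itemprop="name">Example Publisher</span>
    </span>
    <!-- article body follows -->
  </article>

With markup like this in place, the author and publisher travel with the content even when it is viewed apart from its original site.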

Distilling and Reusing Material

HTML content can seem like a disconnected fragment when encountered outside of the information architecture of a website or app.

PDFs liberate readers from relying on the context provided by the publisher.  PDFs can provide content at many different levels of detail, and give readers control over how they combine and sort through content. Readers can create their own context to understand the information.

Supply Your Own Context

Let me illustrate how PDFs let you supply your own context with a personal example. Sometimes I need to consult standards documents — tedious tomes to wade through.  Because they are so long, some organizations break them into separate articles, but then you’d need to bounce between the articles to find the information you seek.  Fortunately, most organizations present the standard as one long HTML article.  But even with hyperlinks within the article, it can still be a lot of information to digest.  So I convert the article to PDF, and view the PDF in an app on my Mac called Highlights.  In Highlights I can (you guessed it) highlight the parts of the standard of interest to me.  But what’s even more useful is that I can export these highlighted passages directly into a new Evernote document.  So the long standards document gets transformed into a selection of greatest hits.

My example illustrates a more general information use pattern: Survey, Distill, Apply.

PDFs are generally multipage documents discussing a common theme.  This allows them to deliver three levels of information: a collection of PDFs about a theme, a single PDF concerning a theme, and specific content within the PDF addressing a theme.  With HTML content, single-page articles address a smaller theme, and it is less common for users to organize items into collections.

When users survey what content is available relating to a theme, they may look at information they’ve collected in one of several ways. They can:

  • search for items mentioning the theme
  • look at tags associated with items
  • look at summaries of items

PDF management applications can let users find content by filtering according to metadata, and even locate themes using text analytics.  A number of applications offer these features, but a free app called Qiqqa may offer the most comprehensive range of tools.

Collections can be searched according to various metadata criteria, such as topic tags and author fields.

filtering a collection (screenshot via Qiqqa)

In addition to filtering, Qiqqa supports information exploration of content coming from different sources.  It can identify related items, such as other content by the same author or on the same topics.  It also allows users to create their own associations between content items, by letting them mind-map topics and incorporate PDF items as nodes in their mind maps.

Once users have identified items of interest, they want to distill what in the item is most important to them.

Qiqqa provides text analysis of PDF content to determine themes.

analysis of text (screenshot via Qiqqa)

Much of the distillation will involve reading the content, and making notes.  PDF apps allow users to highlight passages, sometimes with different colors to represent different themes.  Users can add notes about the content.  An app called TagNote lets users tag mentions of specific people or things.

annotation tagging (screenshot via TagNote)

Finally, users want to take what they have done in the PDF and be able to use it elsewhere.  PDF apps provide export functionality, so that users can export highlights, notes, and article metadata. The exported material can then be used in another application.

Comparison with HTML Content

HTML content is difficult to use when one wants to survey, distill, and apply information. People have largely given up on curating favorite links to HTML content.  At the same time, cloud-based personal repositories that let people store content accessible from anywhere have become more popular.

Using a browser to save links to items of online content has declined in popularity, and link-sharing sites like Delicious have been displaced by social media.  Pinterest offers a counterexample of active online content curation, though its organizational focus is strongly visual.

Sites such as Quartz and Medium have introduced annotation-based comments, though they are geared for public display rather than personal use.  The chief challenge for HTML content is developing solutions that can integrate items from different content domains.  Most solutions have been browser-based.  The web service Diigo, aimed at students, offers some of these capabilities.  The Hypothesis platform allows people to make annotations, hosted on a server, that may be either private or public.  Hypothesis is also developing text analysis capabilities.  The hurdle for browser-based solutions is that they depend on the security and architecture of the browser, which can vary.  Bookmarklets are starting to fall out of favor, and extensions will differ by browser type.

At least right now, server-hosted curation and annotation tools don’t emphasize functionality that lets people export their content. Readers can’t manage snippets of content using their own tools; they are dependent on hosted services to allow them to integrate information from different sources. This limits their ability to create their own context for the information.

Current browser-dependent options for HTML content are fussy, and wide use of annotation is slowed by the pace of developments in standards and browsers. One reason that PDF apps can offer the capabilities they do is that the content is simple and well understood.  There are no JavaScript, browser-compatibility, or security issues to worry about.

Things We Can Learn

What can content strategy learn from PDFs?  That some people want to interact with words, and HTML content doesn’t offer them good options to do that.

PDF usage suggests that some audiences want control over their content.  It reveals a blind spot in the intelligent content approach: the assumption that publishers can reliably predict the specific needs of audiences.  Publishers should not just dispense information to audiences, but support tools that let audiences do things with information.  For complex topics, publishers need to accept that they alone won’t provide all the information that audiences will consider to arrive at a decision or understanding.

These insights are not meant to suggest that all audiences want to download content, or that people who download PDFs want all their content in the PDF format.  In the majority of cases people want to touch content once only: to view it online once, never to return to it.  But for multi-session decisions such as buying a home, choosing a university, planning a vacation, or arranging a loan, people appreciate having the ability to gather and compare information, distill important aspects of it, and apply those findings to decisions on their own terms.

Intelligent content approaches premised on dynamic personalization can be myopically transactional, focused on a single online session only. People won’t always find “the right content at the right time”: they need to evolve their understanding of a topic.  Content strategy needs to consider the content experience as a multi-session exploration, which may not follow the predictable “buyer’s journey” that some content marketers imagine.  The brand doesn’t control what content means; the audience does.

The evolution of content experience is far from over, despite the proclamations that the future of content has arrived.  Smart, flexible, modular content is powerful. But on the topics that matter most, people want to choose what’s important to them, and not have that decision made for them.

—Michael Andrews


Getting our Content Ready for the Spoken Word

More of us are starting to talk to the Internet and listen to it, in lieu of tapping our screens and keyboards and reading its endless flow of words. We are changing our relationship with content. We expect it to be articulate. Unfortunately, it sometimes isn’t.

Tired of Text

To understand why people want to hear text, it is useful to consider how they feel about it on the screen. Many people feel that constant reading and writing is tiring. People from all walks of life say they feel spelling-challenged. Few people these days boast of being good spellers. Spelling can be a chore that often gets in the way of us getting what we want. Many basic web tasks assume you’ll type words to make a request, and a computer depends on you to tap that request correctly.

Spelling is hard. According to the American Heritage Dictionary, in standard English, the sound of the letter k (as in “kite”) can be represented 11 ways:

  • c (call, ecstasy)
  • cc (account)
  • cch (saccharin)
  • ch (chorus)
  • ck (acknowledge)
  • cqu (lacquer)
  • cu (biscuit)
  • lk (talk)
  • q (Iraqi)
  • qu (quay)
  • que (plaque)

Other sounds have equally diverse ways of being written. If we factor in various foreign words and made-up words, the challenge of spelling feels onerous. And the number of distinct written words we encounter, from people’s names to product names, grows each year. Branding scholars L. J. Shrum and Tina M. Lowrey note the extent to which brands go to sound unique: “There are also quite a number of nonobvious, nonsemantic ways in which words can convey both meaning and distinctiveness. Some examples include phonetic devices such as rhyming, vowel repetition, and alliteration, orthographic devices such as unusual spellings or abbreviations, and morphological devices such as the compounding or blending of words.” This “distinctiveness” creates problems for people trying to enter these names in a search box.

The Writing – Speaking Disconnect

Celebrity finds her name mispronounced by her fans. (Screenshot of MTV’s Facebook page.)

People face two challenges: they may not know how to say what they read, or how to write what they hear.

People are less confident of their spelling as they become more reliant on predictive text and spellchecking.

News articles cast doubt on our ability to spell correctly. (Screenshot of Daily Mail)

Readers encounter a growing range of words that are spelled in ways not conforming to normal English orthography. For example, a growing number of brand names are leaving out vowels or are using unusual combinations of consonants to appear unique. Readers have trouble pronouncing the trademarks, and do not know how to spell them.

A parallel phenomenon is occurring with personal names. People, both famous and ordinary, are adopting names with unusual spellings or nonconventional pronunciations to make their names more distinctive.

As spelling gets more complicated, voice search tools such as Google Voice Search, Apple Siri, Microsoft Cortana, and the Amazon Echo are gaining in popularity. Dictating messages using speech recognition is becoming commonplace. Voice-content interaction changes our assumptions about how content needs to be represented. At present, voice synthesis and speech recognition are not up to the task of dealing with unusual names. Android, for example, allows you to add a phonetic name to a contact to improve the ability to match a spoken name. Facebook recently added a feature to allow users to add phonetic pronunciation to their names. [1] These developments suggest that the phonetic representation of words is becoming an increasingly important issue.

The Need for Content that Understands Pronunciation

These developments have practical consequences. People may be unable to find your company or your product if it has an unusual name. They may be unable to spell it for a search engine. They may become frustrated when trying to interact with your brand using a voice interface. Different people may pronounce your product or company name in different ways, causing some confusion.

How can we make people less reliant on their ability to spell correctly? Unfortunately there does not seem to be a simple remedy. We can however learn from different approaches that are used in content technology to determine how we might improve the experience. Let’s look at three areas:

  • Phonetic search
  • Voice search
  • Speech synthesis

Phonetic Search

Most search still relies on someone typing a query. They may use auto-suggest or predictive text, but they still need to know how something is spelled to judge whether the query as written matches what they intend.

Question posted on Quora illustrates the problem posed when one doesn’t know the correct spelling and needs to search for something.

Phonetic search allows a user to search according to what a word sounds like. It’s a long-established technology but is not well known. Google does not support it, and consequently SEO consultants seldom mention it. Only one general-purpose search engine (Exalead, from France’s Dassault Systèmes) supports the ability to search on words according to what they “sound like.” It is most commonly seen in vertical search applications focused on products, trademarks, and proper names.

To provide results that match sounds instead of spelling, the search engine needs to operate in a phonetic search mode. The process is fairly simple: the search engine identifies the underlying sounds represented by the query and matches them with homonyms or near-homonyms. Both the query word and target word are translated into a phonetic representation, and when those are the same, a match is returned.

The original form of phonetic search is called Soundex. It predates computers. I first became aware of Soundex on a visit several years ago to the US National Archives in Washington DC. I saw an exhibit on immigration that featured old census records. The census recorded surnames according to the Soundex algorithm. When immigrants arrived in the United States, their name might not be spelled properly when written down. Or they may have changed the spelling of their name at a later time. This mutation in the spelling of surnames created record-keeping problems. Soundex resolves this problem by recording the underlying phonetic sound of the surname, so that different variants that sounded alike could be related to one another.

The basic idea behind Soundex is to strip out vowels and extraneous consonants, and equalize similar-sounding and potentially confused consonants (so that m and n are encoded the same way).  Stressing the core features of the pronunciation reduces the amount of noise in the word that could be caused by mishearing or misspelling.  People can use Soundex to do genealogical research to identify relatives who changed the spelling of their names.  My surname “Andrews” is represented as A536, which is the same as someone with the surname of “Anderson.”[2]

Soundex is very basic and limited in the range of word sounds it can represent. But it is also significant because it is built into most major relational database software, such as Oracle and MySQL. Newer NoSQL databases, such as Elasticsearch, also support phonetic search. Newer, more sophisticated phonetic algorithms offer greater specificity and can represent a wider range of sounds. But broadening the recall of items decreases the precision of the results. Accordingly, phonetic search should only be used selectively for special cases targeting words that are both often confused and often sought.
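
To make the mechanics concrete, here is a minimal SQL sketch using the SOUNDEX() function these databases provide; the table and column names are hypothetical, and note that MySQL returns codes longer than the classic four characters:

  -- Find surnames that sound like 'Andrews', regardless of spelling
  SELECT surname
  FROM census_records
  WHERE SOUNDEX(surname) = SOUNDEX('Andrews');
  -- Classic Soundex encodes both 'Andrews' and 'Anderson' as A536,
  -- so both would be returned as phonetic matches.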

Example of phonetic search. Pharma products are hard to say and spell.

An example of phonetic search is available from the databases of the World Intellectual Property Organization (WIPO), a unit of the United Nations. I can do a phonetic search of a name to see what other trademarks sound like it. This is an important issue, since the sound of a name is a key characteristic of a brand. Many brand names use Latin or Greek roots and can often sound similar.

Let’s suppose I’m interested in a brand called “XROS.” I want to know what other brands sound like XROS. I enter XROS in WIPO’s phonetic search, and get back a list of trademarks that sound similar. These include:

  • Sears
  • Ceres
  • Sirius
  • XROSS
  • Saurus

Phonetic search provides results not available from fuzzy string matching. Because so many different letters and letter combinations can represent sounds, fuzzy string matching can’t identify many homonyms. Phonetic search can allow you to search for names that sound similar but are spelled differently. A search for “Smythe” yields results for “Smith.” An interesting question arises when people search for a non-word (a misspelled word) that they think sounds like the target word they seek. In the Exalead engine, there is a different mode for spellslike compared with soundslike. I will return to this issue shortly.

Voice Search

With voice search, people expect computers to worry about how a word is spelled. It is far easier to say the word “flicker” and get the photo site Flickr than it is to remember the exact spelling.

Computers however do not always match the proper word when doing a voice search. Voice search works best for common words, not unique ones. As a consequence, voice search will typically return the most common close match rather than the exact match. To deal with homonyms, voice search relies on predictive word matching.

Description by Amazon of its voice-controlled Echo device. With the Echo, the interface is the product.

The challenge voice search faces is most apparent when it tries to recognize people’s names, or less common brand names.

Consider the case of “notable” names: the kind that appear in Wikipedia. Many Wikipedia entries have a phonetic pronunciation guide. I do not know if these are included in Google’s knowledge graph or not, but if they are, the outcomes do not seem consistent. Some voice searches for proprietary names work fine, but others fail terribly. A Google voice search for Xobni, an email management tool bought by Yahoo, provides results for Daphne, a figure from Greek mythology.

Many speech recognition applications use an XML schema called the Pronunciation Lexicon Specification (PLS), a W3C standard. These involve what is called a “lexicon file” written in the Pronunciation Lexicon Markup Language (an XML file with the extension of .pls) that contains pronunciation information that is portable across different applications.

A Microsoft website explains you can use a lexicon file for “words that feature unusual spelling or atypical pronunciation of familiar spellings.” It notes: “you can add proper nouns, such as place names and business names, or words that are specific to specialized areas of business, education, or medicine.” So it would seem ideal to represent the pronunciation of brands’ trademarks, jargon, and key personnel.

The lexicon file consists of three parts: the <lexeme> container, and within it the <grapheme> (word as spelled) and <phoneme> (word as pronounced). The schema is not complicated, though some effort is needed to translate a sound represented in the phoneme into the International Phonetic Alphabet, which in turn must be represented in a character set XML recognizes. A simple dedicated translation tool could help with this task.
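
As a sketch, a minimal .pls file for the brand name Xobni (mentioned earlier) might look like the following; the IPA transcription is an approximation:

  <?xml version="1.0" encoding="UTF-8"?>
  <lexicon version="1.0"
        xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
        alphabet="ipa" xml:lang="en-US">
    <!-- One lexeme per unusual word: how it is spelled, and how it sounds -->
    <lexeme>
      <grapheme>Xobni</grapheme>
      <phoneme>ˈzɒbni</phoneme>
    </lexeme>
  </lexicon>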

While incorporating a lexicon file will not improve visibility on major search engines, these files can be utilized by third-party XML-based voice recognition applications from IBM, Microsoft, and many others. One can also provide a lexicon profile for specific words in HTML content using the microformat rel=“pronunciation”, though this does not appear to be extensively supported right now. So far, voice search on the web has been a competitive contest between Google, Apple, Amazon, and Microsoft to develop the deepest vocabulary. Eventually, voice search may become a commodity, and all parties will want user-supplied assistance to fine-tune their lexicons, just as they do when encouraging publishers to supply schema.org metadata markup.

In summary, digital publishers currently have a limited ability to improve the recall in voice searches of their content on popular search engines. However, the recent moves by Google and Facebook to allow user-defined custom phonetic dictionaries suggest that this situation could change in the future.

Speech Synthesis

Text-to-Speech (TTS) is an area of growing interest as speech synthesis becomes more popular with consumers. TTS is becoming more ubiquitous and less robotic.

Nuance, the voice recognition software company, is focused increasingly on the consumer market. They have new products to allow hands-free interaction such as Dragon TV and Dragon Drive that not only listen to commands, but talk to people. These kinds of developments will increase the desirability of good phonetic representation.

If people have trouble pronouncing your trademarks and other names associated with your brand, it is likely that TTS systems will as well. An increasing number of products have synthetic names or nonstandard names that are difficult to pronounce, or whose correct pronunciation is unclear. FAGE® Greek Yogurt — how does one pronounce that?[3] Many English speakers would have trouble pronouncing and spelling the name of the world’s third-largest smartphone maker, Xiaomi (小米).[4] As business is increasingly global, executives at corporations often come from non-English-speaking countries and will have foreign names that are unfamiliar to many English speakers. You don’t want a speech synthesis program to mangle the name of your product or the name of your senior executive. One can’t expect speech synthesis programs to correctly pronounce unusual names. Brands need to provide some guidance for voice synthesis applications to pronounce these names correctly.

The web has a standard for speech synthesis: the Speech Synthesis Markup Language (SSML), which provides a means of indicating how to pronounce unusual words. Instructions are included within the <speak> tag. Three major options are available. First, you can indicate pronunciation using the <say-as> element. This is very useful for acronyms: for example, do you pronounce the letters as a word, or do you sound out each letter individually? Second, you can use the <phoneme> tag to indicate pronunciation using the International Phonetic Alphabet. Finally, you can link to an XML <lexicon> file described using the Pronunciation Lexicon Markup Language mentioned earlier.
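
Here is a brief sketch showing all three options together; the lexicon URI is hypothetical, and the IPA value for FAGE approximates the “fa-yeh” pronunciation given in the footnotes:

  <?xml version="1.0" encoding="UTF-8"?>
  <speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis"
         xml:lang="en-US">
    <!-- Option 3: link an external PLS lexicon file (hypothetical URI) -->
    <lexicon uri="http://example.com/brand-names.pls"/>
    <!-- Option 1: read an acronym letter by letter -->
    This standard comes from the <say-as interpret-as="characters">W3C</say-as>.
    <!-- Option 2: inline phonetic pronunciation (approximate IPA) -->
    Try <phoneme alphabet="ipa" ph="ˈfɑːjɛ">FAGE</phoneme> Greek Yogurt.
  </speak>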

SSML is a long-established W3C standard for Text-to-Speech. While SSML is the primary way to provide pronunciation guidance for web browsers, an alternate option is available for HTML5 content formatted for EPUB 3, which, unlike browser-based HTML, has support for the Pronunciation Lexicon Markup Language.

Making Content Audio-Ready

Best practices to make text-based content audio-ready are still evolving. Even though voice recognition and speech synthesis are intimately related, a good deal of fragmentation still exists in the underlying standards. I will suggest a broad outline of how the different pieces relate to each other.

SSML for speech synthesis provides good support for HTML browser content.

Dedicated voice recognition applications can incorporate the Pronunciation Lexicon Specification’s lexicon files, but there is currently little adoption of these files for general-purpose HTML content, outside of ebooks. PLS can (optionally) be used in speech applications in conjunction with SSML. PLS could play a bridging role, but hasn’t yet found that role in the web ecosystem.

Diagram showing standards available to represent pronunciation of text

Phonetic Search Solutions

The area that most lacks standards is phonetic search. Phonetic search is awkward because it asks the searcher to acknowledge they are probably spelling the term incorrectly. I will suggest a possible approach for internal vertical search applications.

The Simple Knowledge Organization System (SKOS) is a W3C standard for representing a taxonomy. It offers a feature called the hidden label, for cases where a designer wants “a character string to be accessible to applications performing text-based indexing and search operations, but would not like that label to be visible otherwise. Hidden labels may for instance be used to include misspelled variants of other lexical labels.” These hidden labels can help match phonetically influenced search terms with words used in the content.

Rather than ask the searcher to indicate they’re doing a “sounds like” search, it would be better to allow them to find sound-alikes at the same time they are doing a general search. The query form could provide a hint that exact spelling is not required and that they can sound out the word. The search would then look for any matches between the entered term and the terms in the taxonomy, including phonetic equivalents.

Let’s imagine your company has a product with an odd name that’s hard for people to recall. The previous marketing director thought he was clever by naming your financial planning product “Gnough”, pronounced “know” (it rhymes with “dough”!). The name is certainly unique, but it causes two problems. Some people see the word, mispronounce it, and remember their mispronounced version. Others have heard the name (perhaps in your marketing video) but can’t remember how it is spelled. You can include variants for both cases in the hidden labels part of your taxonomy, as shown in the sketch after this list:

  • Learned the wrong pronunciation: Include common ways it is mispronounced, such as “ganuff”
  • Learned correct pronunciation but can’t spell it: Include common spellings of the pronunciation, such as “no”, “know” or “noh”
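
A minimal sketch of that concept in SKOS (RDF/XML serialization; the concept URI is hypothetical):

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:skos="http://www.w3.org/2004/02/skos/core#">
    <skos:Concept rdf:about="http://example.com/taxonomy/gnough">
      <skos:prefLabel xml:lang="en">Gnough</skos:prefLabel>
      <!-- learned the wrong pronunciation -->
      <skos:hiddenLabel xml:lang="en">ganuff</skos:hiddenLabel>
      <!-- learned the correct pronunciation but can't spell it -->
      <skos:hiddenLabel xml:lang="en">no</skos:hiddenLabel>
      <skos:hiddenLabel xml:lang="en">know</skos:hiddenLabel>
      <skos:hiddenLabel xml:lang="en">noh</skos:hiddenLabel>
    </skos:Concept>
  </rdf:RDF>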

The goal is to expand search term matching from simple misspellings that can be caught by fuzzy matching (e.g., transposed letters) to phonetic variations (for example, the substitution of a z for an x or s, or common alternatives for representing vowel sounds). Because increasing search recall will lower search precision, you may want to offer a “did you mean” confirmation showing the presumed term, if there is doubt as to the intention of the searcher.

Prognosis for Articulate Content

Our goal is to make our digital content articulate — intelligible to people when speaking and listening. It is not an easy task, but it is a worthy one.

These approaches are suitable only for a small subset of the vocabulary you use. You should prioritize according to which terms are most likely to be mispronounced or misspelled because of their inherent pronunciation. From this limited list you can then make choices as to how to represent them phonetically in your content.

Pronunciation list of American words from the Voice of America.  Major broadcasters maintain a list of preferred pronunciations for often-used, often-mispronounced words.  Digital publishers will need to adopt similar practices as voice-text interaction increases.

Articulate content is an especially complex topic because there are many factors outside of one’s immediate control. There are numerous issues of integration. Customers will likely be using many different platforms to interact with your content. These platforms may have proprietary quirks that interfere with standards.

But watch this space. Easy solutions don’t exist right now, but they will likely arrive in the not-too-distant future — they will need to.

— Michael Andrews


  1. One can speculate that Facebook doesn’t currently offer voice search because of the additional challenge it faces — much of its content centers on personal names, which are hard for voice recognizers to get right.  ↩
  2. Soundex only encodes the sounds of the first four letters, so longer words can have the same index as shorter ones.  ↩
  3. It is pronounced “fa-yeh”, according to Wikipedia. The trademark is an acronym, F.A.G.E. (Filippou Adelphoi Galaktokomikes Epicheiriseis in Greek, or Filippou Bros. Dairy Co. in English), but fage is coincidentally a Greek verb meaning “to eat” — in case you missed that pun.  ↩
  4. The approximate pronunciation is “sh-how-mee”. The fast-growing brand appears to be using the easier to write and pronounce name of “Mi” (the Chinese word for rice) in much of its English-language marketing and branding.  ↩

Content sources across the customer journey

Customers are always on the run, checking information, making evaluations, and tracking how well and quickly they are getting things done. This momentum — being always on and always moving — has profound implications for content strategy. The best way to gain a holistic view of what’s involved is to look at the full customer journey, and the various services needed to support that journey, whoever provides them.  At different stages, the user has different tasks, and needs content to support these tasks.  When brands examine the journey from end to end, they often discover that they do not have some of the content needed to support many of the user’s tasks.

Content comes from many different kinds of sources.  Brands are a major creator of content, but so are individuals, communities of people, governments, and non-government organizations.  Content can take many forms as well: it can be articles and videos, but also items of information commonly described as data.  One shouldn’t make an artificial distinction between authored content and factual data when these resources all need to be visible and meaningful to users.

To see how to join up different sources of content to support user journeys, let’s consider a scenario.  Neil is a 41-year-old American software developer, recently divorced and living by himself in the Research Triangle, North Carolina.  He recently had his blood pressure checked, and it was found to be a bit high.  He was told he should consider modifying his diet to reduce his blood pressure.  Neil is someone into “lifehacking,” so he decides to dig deeper into the topic to find out what’s best for him.

Step one: Goal setting with personal content

Neil reviews his device’s app store to see what’s useful.  He finds a healthy living app that can track his diet and make recommendations on how to improve it.  He enters what he eats and drinks for a week and graphs the results.  The app flags his coffee and processed food consumption as areas he should watch — processed food contains a lot of sodium.  He likes the taste and convenience of processed food, but decides he should try to cook more for himself.  He fiddles with some parameters on his healthy living app and gets some recommendations on kinds of foods he should consider eating.  He likes some recommendations, hates others, and believes others are worthy but difficult.  He sets some goals for eating, and will track these in his app.  At this goal-setting stage, the content is personal to Neil: his recommendations based on parameters he selected, his goals, and his behavioral data.

Step two: Planning using community-contributed content

Neil doesn’t particularly enjoy cooking, because in the past he’s found it time consuming, and his results have been disappointing.  He searches for a source of recipes that are easy to make and don’t sound awful.  He finds a recipe community that specializes in easy-to-make dishes.  Community members submit recipes they like and can vote and comment on ones they’ve tried based on taste, ease of making, ease of storing ingredients, and ease of saving leftovers.  He likes the reputational dimensions of the community: members get recognition for their submissions and the votes cast and received.  Neil can link his healthy living app to this community, so that he can compare his profile goals with those of other community contributors.  He scans pictures of dishes that match his criteria and notices that some are favorites of people who follow protein-rich diets and avoid carbohydrates.  On closer inspection of the ingredients, he sees these dishes avoid starches.  Neil likes his carbs, so he filters out these options.  He looks for people more like him who are most concerned with the sodium dimension, and looks over their favorites.  He finds a couple of casserole dishes that sound easy to make, and easy to save as leftovers.  For planning his meals, Neil has relied on community content: what’s popular, and with whom it is popular.  He saves these recipes to his “to try” list in his healthy living app, so he can track when he has them.

Step three: Evaluation using public content and open data

Neil has two dishes he wants to make: a tuna casserole, and a Mexican casserole.  Both use ingredients easily obtained with a long shelf life: things like cans of tuna, cans of onion soup, cans of beans, bags of chips, jars of salsa, and processed cheese.  He hates having to worry about food spoiling in the fridge.  He notices a new detail about the ingredients: he must use low-sodium varieties of these ingredients if the dish is to qualify as low sodium.  Neil’s starting to feel overwhelmed: his supermarket seems to have endless varieties of similar items, and he finds it a pain to read the tiny nutritional labels on products.  He’s been warned that advertised claims of “reduced salt” can be misleading.  He wants to be able to search across different brands to find which ones have the lowest sodium.  Fortunately he finds a new website that is aggregating nutritional information of food products from many brands.  Ideally the USDA would aggregate all the information from nutritional labels of food products, and make it available in an open format with a public API.  But the USDA does not offer this information itself, so instead Neil uses a website that relies on voluntary submissions from vendors, or on scraping information from their websites.  The information is useful, though incomplete.  Neil is able to search for food products such as salsa, and find candidate brands that are low sodium.  He exports this list of brands to his shopping app on his phone.  He has relied on aggregated public information to evaluate which brands are most suitable.  Third-party aggregators are credible providers of such information.

Step four: Purchase selection using company content

Neil now feels ready to visit his cavernous supermarket.  He chooses to shop at a supermarket that is employing new technology that allows shoppers to use their mobile phones to navigate through the store and check inventory.  The supermarket has its own app that can link to Neil’s shopping list.  It tells Neil which brands it has in stock, what the prices are, and what aisle they are located on.  The store only carries one brand of low-sodium salsa, but has three brands of low-sodium beans, and he can compare the prices on his phone before hunting for them on the shelf.  The app also shows photos of the items, so Neil knows what to look for.  So many products look similar that it’s important to be sure you are picking up the one you really want, and not something that’s similar but different in a critical aspect (e.g., getting the extra spicy low-sodium beans, instead of unflavored low-sodium ones).  For the purchase phase, Neil has relied on company-provided content.  He is motivated by ease of purchase, and individual retailers are in a primary position to offer content supporting such convenience.

Diagram of Neil’s content journey.

Insights and lessons

Neil’s journey illustrates three major issues audiences and brands face when integrating content from different sources:

1.  technical constraints and functional gaps that create friction

2.  fuzzy ownership of responsibilities across the customer journey

3.  balancing the financial motivations of the brand with the incentives motivating the customer

Gaps, constraints, and friction

Everything in Neil’s scenario is technically feasible, even if parts seem magical compared with today’s reality.  For the user, a journey like this is often fragmented across many separate sites and apps, which may not share content with each other.  Users often rely on different kinds of content, from different sources, at different stages.

When Neil moves between apps or sites focused on different primary tasks, there is obvious potential for friction.  As a computer professional, Neil is able to take content from one task domain and use it in another, using tools like IFTTT.  Other users, however, may have to manually re-enter content from one task domain to another, unless content linking and import is built in.  Such built-in functionality requires common exchange formats and APIs.  There are microformats for recipes, government-mandated nutritional information follows a standardized format, and retailers track products using standardized nomenclature such as UPCs and SKUs.  But content addressing higher-level tasks, such as dietary goals or ease of preparation, does not follow open standards, meaning the exchange of such information between applications is more difficult.  In these cases, forging partnerships to create one’s own format to exchange content may be the best option.  Obviously, any connections between task domains (sharing log-in credentials, and sharing data) will help customers carry forward their journey, and help to drive adoption of your solution.
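
For instance, the h-recipe microformat lets a published recipe expose its name and ingredients in machine-readable form. This sketch uses made-up values from Neil’s scenario:

  <article class="h-recipe">
    <h1 class="p-name">Low-Sodium Tuna Casserole</h1>
    <ul>
      <li class="p-ingredient">1 can low-sodium tuna</li>
      <li class="p-ingredient">1 can low-sodium onion soup</li>
    </ul>
    <div class="e-instructions">Combine the ingredients and bake for 30 minutes.</div>
  </article>

An app like Neil’s could parse these class names to import the ingredient list into his shopping list without re-entry.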

Whose problem is it?

The scenario highlights the fuzzy boundaries surrounding who offers the right solution for Neil.  In many cases, such as the one outlined in this scenario, no one party will originate all the content needed to support a complex user task journey.  From a user perspective, it may seem desirable to have a “one stop” solution where he or she can perform all the tasks.  Such an approach would eliminate hopping between applications and websites, and potentially enable users to see connections between different tasks and their associated data and content.  But it isn’t obvious that one solution can obtain all the content needed to support the user.  Typically, integrated solutions do not offer the best content available.  Rather, they offer content that is easy to obtain, or content that selectively promotes the goals of the brand behind the solution.  If you want to buy a camera, reading customer reviews on the Walmart website isn’t your best source of customer evaluations — buyers can get more complete and higher quality review information from a third-party photography website.  If a customer wants recipes, your supermarket may offer some that use products that the supermarket is promoting, but these recipes are not necessarily the best ones, and will certainly represent only a small sample of what’s available.

Brands need to think about what kinds of content their customers seek and consider during their journeys, and figure out how they can be a part of the conversation.  The goal should be to make your content available at whatever stage it is needed.  Look at opportunities to incorporate outside content where appropriate.  Think about where the main source of content relating to a given user task resides.  Can the brand get the content itself, or does it make sense for it to offer its content to that source?

Being helpful with your content

Jeff Bezos reportedly explained why brands earn or lose customer love: “Defeating tiny guys is not cool” while “Defeating bigger, unsympathetic guys is cool.”  To earn customer love, brands also need to consider how they treat other parties’ content.  Do they seem to be freely sharing a great resource, or do they seem to throttle choice and push their own agenda with what they present? Whether a brand chooses to incorporate other parties’ content into their solution, or offer their content to others (via an API), it needs to come across as generous and unbiased to earn credibility and trust.

Audiences invest time and effort evaluating content, saving content, and creating their own content, motivated by the value they derive from different content sources.  It is important to respect that effort.  Content linking and sharing is a classic example of a network effect, where content becomes more valuable the more task scenarios it can be used in.  Brands need to consider these network effect dynamics when choosing what content to offer, and where to offer it.

There can be a natural tendency for brands to want to invest only in content that shows immediate payoffs.  Consider the supermarket chain.  It did not choose to submit the nutritional information of its house brands to the third-party website.  As a result, its house brands were not part of Neil’s consideration set.  When it created its in-store app, some members of the supermarket executive team didn’t want to include photos of the products.  They reasoned that it was an unnecessary expense.  The price and inventory information were already available in their inventory system, but that system didn’t store photographic content.  But by making the investment, they improved the customer experience, and greatly increased adoption of their app.

The supermarket executives also debated how to understand more fully what their customers wanted to buy, so they could better forecast demand.  Their prior attempt to tie their loyalty card to their own recipe app, offering coupons, didn’t result in much adoption.  They were interested in figuring out how to get people like Neil to give them his dietary goal-setting information.  While this is valuable content for the supermarket chain, helping them better target ads and offers, it isn’t clear what Neil would get in return for providing this information.  More coupons?  Neil gets a clear benefit using his goals to plan his meals, but the value of providing his goals to the supermarket after he’s already decided what he wants to buy isn’t clear.  The supermarket needs to think about how Neil can use this information in the context of his relationship with the supermarket, so that Neil is in charge of what he does with the information, and derives value from using it.  Perhaps he could be rewarded for participating in a program to test new products that are aligned with his dietary goals.

Final thoughts

Brands, especially retail brands and service providers such as banks, hotels, and airlines, are thinking more about omnichannel communication with their customers.  Customers can need help at any point, can seek content through many channels and from many sources (including those of rivals), and expect answers instantly.  A strategy that shares content across tasks is the best approach to meeting customers’ needs as they arise.  If customers are doing a task that involves other sources of content in addition to your own, your brand needs to figure out how customers can integrate both kinds of content to provide the level of support they increasingly expect.  Having your content play well with others is not just a nice thing to do, but a business imperative.

— Michael Andrews