Categories: Content Experience

Getting our Content Ready for the Spoken Word

More of us are starting to talk to the Internet and listen to it, in lieu of tapping our screens and keyboards and reading its endless flow of words. We are changing our relationship with content. We expect it to be articulate. Unfortunately, it sometimes isn’t.

Tired of Text

To understand why people want to hear text, it is useful to consider how they feel about it on the screen. Many people feel that constant reading and writing is tiring. People from all walks of life say they feel spelling-challenged. Few people these days boast of being good spellers. Spelling can be a chore that often gets in the way of us getting what we want. Many basic web tasks assume you’ll type words to make a request, and a computer depends on you to tap that request correctly.

Spelling is hard. According to the American Heritage Dictionary, in standard English, the sound of the letter k (as in “kite”) can be represented 11 ways:

  • c (call, ecstasy)
  • cc (account)
  • cch (saccharin)
  • ch (chorus)
  • ck (acknowledge)
  • cqu (lacquer)
  • cu (biscuit)
  • lk (talk)
  • q (Iraqi)
  • qu (quay)
  • que (plaque)

Other sounds have equally diverse ways of being written. If we factor in various foreign words and made-up words, the challenge of spelling feels onerous. And the number of distinct written words we encounter, from people’s names to product names, grows each year. Branding scholars L. J. Shrum and Tina M. Lowrey note the extent to which brands go to sound unique: “There are also quite a number of nonobvious, nonsemantic ways in which words can convey both meaning and distinctiveness. Some examples include phonetic devices such as rhyming, vowel repetition, and alliteration, orthographic devices such as unusual spellings or abbreviations, and morphological devices such as the compounding or blending of words.” This “distinctiveness” creates problems for people trying to enter these names in a search box.

The Writing–Speaking Disconnect

Celebrity finds her name mispronounced by her fans. (Screenshot of MTV’s Facebook page.)

People face two challenges: they may not know how to say what they read, or how to write what they hear.

People are less confident of their spelling as they become more reliant on predictive text and spellchecking.

News articles cast doubt on our ability to spell correctly. (Screenshot of Daily Mail)

Readers encounter a growing range of words that are spelled in ways not conforming to normal English orthography. For example, a growing number of brand names are leaving out vowels or are using unusual combinations of consonants to appear unique. Readers have trouble pronouncing the trademarks, and do not know how to spell them.

A parallel phenomenon is occurring with personal names. People, both famous and ordinary, are adopting names with unusual spellings or nonconventional pronunciations to make their names more distinctive.

As spelling gets more complicated, voice search tools such as Google Voice Search, Apple’s Siri, Microsoft’s Cortana, and the Amazon Echo are gaining in popularity. Dictating messages using speech recognition is becoming commonplace. Voice-content interaction changes our assumptions about how content needs to be represented. At present, voice synthesis and speech recognition are not up to the task of dealing with unusual names. Android, for example, allows you to add a phonetic name to a contact to improve the matching of a spoken name. Facebook recently added a feature that allows users to add a phonetic pronunciation to their names. [1] These developments suggest that the phonetic representation of words is becoming an increasingly important issue.

The Need for Content that Understands Pronunciation

These developments have practical consequences. People may be unable to find your company or your product if it has an unusual name. They may be unable to spell it for a search engine. They may become frustrated when trying to interact with your brand using a voice interface. Different people may pronounce your product or company name in different ways, causing some confusion.

How can we make people less reliant on their ability to spell correctly? Unfortunately, there does not seem to be a simple remedy. We can, however, learn from the different approaches used in content technology to determine how we might improve the experience. Let’s look at three areas:

  • Phonetic search
  • Voice search
  • Speech synthesis

Phonetic Search

Most search still relies on someone typing a query. They may use auto-suggest or predictive text, but they still need to know how something is spelled to judge whether the query as written matches what they intend.

A question posted on Quora illustrates the problem posed when one doesn’t know the correct spelling and needs to search for something.

Phonetic search allows a user to search according to what a word sounds like. It’s a long-established technology but is not well known. Google does not support it, and consequently SEO consultants seldom mention it. Only one general-purpose search engine (Exalead, from France’s Dassault Systèmes) supports the ability to search for words according to what they “sound like.” Phonetic search is most commonly seen in vertical search applications focused on products, trademarks, and proper names.

To provide results that match sounds instead of spelling, the search engine needs to operate in a phonetic search mode. The process is fairly simple: the engine identifies the underlying sounds represented by the query and matches them with homonyms or near-homonyms. Both the query word and the target word are translated into a phonetic representation, and when those representations are the same, a match is returned.

The original form of phonetic search is called Soundex. It predates computers. I first became aware of Soundex on a visit several years ago to the US National Archives in Washington DC, where an exhibit on immigration featured old census records. The census recorded surnames according to the Soundex algorithm. When immigrants arrived in the United States, their names might not be spelled properly when written down, or they may have changed the spelling of their names at a later time. This mutation in the spelling of surnames created record-keeping problems. Soundex resolves the problem by recording the underlying phonetic sound of a surname, so that different variants that sound alike can be related to one another.

The basic idea behind Soundex is to strip out vowels and extraneous consonants, and equalize similar-sounding and potentially confused consonants (so that m and n are encoded the same way). Stressing the core features of the pronunciation reduces the amount of noise in the word that could be caused by mishearing or misspelling. People can use Soundex to do genealogical research to identify relatives who changed the spelling of their names. My surname “Andrews” is represented as A–536, which is the same as someone with the surname of “Anderson.”[2]
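
The core of the algorithm is easy to sketch in Python (a minimal illustration of classic American Soundex; database implementations handle more edge cases):

```python
def soundex(name: str) -> str:
    """Classic American Soundex: first letter plus three digits."""
    # Similar-sounding consonants share a digit (so m and n both map to 5).
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             "m": "5", "n": "5", "r": "6"}
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    digits = []
    prev = codes.get(name[0], "")   # the first letter's code suppresses a repeat
    for ch in name[1:]:
        if ch in "hw":              # h and w are ignored entirely
            continue
        code = codes.get(ch, "")    # vowels map to "" and reset the run
        if code and code != prev:
            digits.append(code)
        prev = code
    return name[0].upper() + "".join(digits)[:3].ljust(3, "0")

print(soundex("Andrews"), soundex("Anderson"))   # A536 A536
print(soundex("Smythe"), soundex("Smith"))       # S530 S530
```

Stripping the vowels and collapsing runs of similar consonants is what makes “Smythe” and “Smith” land on the same code.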

Soundex is very basic and limited in the range of word sounds it can represent. But it is significant because it is built into most major relational database software, such as Oracle and MySQL. Newer NoSQL databases, such as Elasticsearch, also support phonetic search. Newer, more sophisticated phonetic algorithms offer greater specificity and can represent a wider range of sounds. But broadening the recall of items decreases the precision of the results. Accordingly, phonetic search should be used only selectively, for special cases targeting words that are both often confused and often sought.

Example of phonetic search. Pharma products are hard to say and spell.

An example of phonetic search is available from the databases of the World Intellectual Property Organization (WIPO), a unit of the United Nations. I can do a phonetic search of a name to see what other trademarks sound like it. This is an important issue, since the sound of a name is an important characteristic of a brand. Many brand names use Latin or Greek roots and can often sound similar.

Let’s suppose I’m interested in a brand called “XROS.” I want to know what other brands sound like XROS. I enter XROS in WIPO’s phonetic search, and get back a list of trademarks that sound similar. These include:

  • Sears
  • Ceres
  • Sirius
  • XROSS
  • Saurus

Phonetic search provides results not available from fuzzy string matching. Because so many different letters and letter combinations can represent the same sound, fuzzy string matching can’t identify many homonyms. Phonetic search allows you to search for names that sound similar but are spelled differently: a search for “Smythe” yields results for “Smith.” An interesting question arises when people search for a non-word (a misspelled word) that they think sounds like the target word they seek. The Exalead engine addresses this with distinct spellslike and soundslike modes. I will return to this issue shortly.

Voice Search

With voice search, people expect computers to worry about how a word is spelled. It is far easier to say the word “flicker” and get the photo site Flickr than it is to remember the exact spelling.

Computers, however, do not always match the proper word when doing a voice search. Voice search works best for common words, not unique ones. As a consequence, voice search will typically return the most common close match rather than the exact match. To deal with homonyms, voice search relies on predictive word matching.

Description by Amazon of its voice-controlled Echo device. With the Echo, the interface is the product.

The challenge voice search faces is most apparent when it tries to recognize people’s names, or less common brand names.

Consider the case of “notable” names: the kind that appear in Wikipedia. Many Wikipedia entries have a phonetic pronunciation guide. I do not know whether these are included in Google’s knowledge graph, but if they are, the outcomes do not seem consistent. Some voice searches for proprietary names work fine, but others fail terribly. A Google voice search for Xobni, an email management tool bought by Yahoo, provides results for Daphne, a figure from Greek mythology.

Many speech recognition applications use an XML schema called the Pronunciation Lexicon Specification (PLS), a W3C standard. PLS defines a “lexicon file,” written in the Pronunciation Lexicon Markup Language (an XML file with the extension .pls), that contains pronunciation information portable across different applications.

A Microsoft website explains that you can use a lexicon file for “words that feature unusual spelling or atypical pronunciation of familiar spellings.” It notes: “you can add proper nouns, such as place names and business names, or words that are specific to specialized areas of business, education, or medicine.” A lexicon file would therefore seem ideal for representing the pronunciation of brands’ trademarks, jargon, and key personnel.

The lexicon file consists of three parts: the <lexeme> container, the <grapheme> (the word as spelled), and the <phoneme> (the word as pronounced). The schema is not complicated, though it takes a little effort to translate a sound into the International Phonetic Alphabet, which in turn must be represented in a character set XML recognizes. A simple dedicated translation tool could help with this task.
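
As an illustration, a minimal lexicon file for a name like Xobni might look like this (the IPA transcription is my approximation):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0" xml:lang="en-US"
    xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
    alphabet="ipa">
  <lexeme>
    <grapheme>Xobni</grapheme>
    <phoneme>ˈzɒbni</phoneme>
  </lexeme>
</lexicon>
```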

While incorporating a lexicon file will not improve visibility on major search engines, these files can be utilized by third-party XML-based voice recognition applications from IBM, Microsoft, and many others. One can also provide pronunciation for specific words in HTML content using the microformat rel=“pronunciation”, though this does not appear to be widely supported right now. So far, voice search on the web has been a competitive contest among Google, Apple, Amazon, and Microsoft to develop the deepest vocabulary. Eventually, voice search may become a commodity, and all parties will want user-supplied assistance to fine-tune their lexicons, just as they encourage publishers to supply schema.org metadata markup.

In summary, digital publishers currently have a limited ability to improve the recall of their content in voice searches on popular search engines. However, the recent moves by Google and Facebook to allow user-supplied phonetic names suggest that this situation could change in the future.

Speech Synthesis

Text-to-Speech (TTS) is an area of growing interest as speech synthesis becomes more popular with consumers. TTS is becoming more ubiquitous and less robotic.

Nuance, the voice recognition software company, is focused increasingly on the consumer market. They have new products to allow hands-free interaction such as Dragon TV and Dragon Drive that not only listen to commands, but talk to people. These kinds of developments will increase the desirability of good phonetic representation.

If people have trouble pronouncing your trademarks and other names associated with your brand, it is likely that TTS systems will as well. An increasing number of products have synthetic names or nonstandard names that are difficult to pronounce, or whose correct pronunciation is unclear. FAGE® Greek Yogurt — how does one pronounce that?[3] Many English speakers would have trouble pronouncing and spelling the name of the world’s third-largest smartphone maker, Xiaomi (小米).[4] As business is increasingly global, executives at corporations often come from non-English-speaking countries and will have foreign names that are unfamiliar to many English speakers. You don’t want a speech synthesis program to mangle the name of your product or the name of your senior executive. One can’t expect speech synthesis programs to correctly pronounce unusual names. Brands need to provide some guidance for voice synthesis applications to pronounce these names correctly.

The W3C has a standard for speech synthesis on the web: the Speech Synthesis Markup Language (SSML), which provides a means of indicating how to pronounce unusual words. Instructions are included within the <speak> tag. Three major options are available. First, you can indicate pronunciation using the <say-as> element. This is very useful for acronyms: for example, do you pronounce the letters as a word, or do you sound out each letter individually? Second, you can use the <phoneme> tag to indicate pronunciation using the International Phonetic Alphabet. Finally, you can link to an external XML <lexicon> file described using the Pronunciation Lexicon Markup Language mentioned earlier.
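
A short fragment can illustrate all three options (the lexicon URL and the IPA value for FAGE are illustrative guesses):

```xml
<speak version="1.1" xml:lang="en-US"
       xmlns="http://www.w3.org/2001/10/synthesis">
  <!-- Option 3: link an external PLS lexicon (hypothetical URL) -->
  <lexicon uri="http://example.com/brand-names.pls"/>
  <!-- Option 1: sound out each letter rather than reading a word -->
  Ask our <say-as interpret-as="characters">FAQ</say-as> how
  <!-- Option 2: give the pronunciation inline in IPA -->
  <phoneme alphabet="ipa" ph="ˈfɑːjɛ">FAGE</phoneme> yogurt is made.
</speak>
```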

SSML is a long-established W3C standard for Text-to-Speech. While SSML is the primary way to provide pronunciation guidance for web browsers, an alternate option is available for HTML5 content formatted for EPUB 3, which, unlike browser-based HTML, supports the Pronunciation Lexicon Markup Language.

Making Content Audio-Ready

Best practices to make text-based content audio-ready are still evolving. Even though voice recognition and speech synthesis are intimately related, a good deal of fragmentation still exists in the underlying standards. I will suggest a broad outline of how the different pieces relate to each other.

SSML for speech synthesis provides good support for HTML browser content.

Dedicated voice recognition applications can incorporate the Pronunciation Lexicon Specification’s lexicon files, but there is currently little adoption of these files for general-purpose HTML content outside of ebooks. PLS can optionally be used in speech applications in conjunction with SSML. PLS could play a bridging role, but hasn’t yet found that role in the web ecosystem.

Diagram showing standards available to represent pronunciation of text

Phonetic Search Solutions

The area that most lacks standards is phonetic search. Phonetic search is awkward because it asks searchers to acknowledge that they are probably spelling the term incorrectly. I will suggest a possible approach for internal vertical search applications.

The Simple Knowledge Organization System (SKOS) is a W3C standard for representing a taxonomy. It offers a feature called the hidden label, which makes “a character string … accessible to applications performing text-based indexing and search operations” without that label being otherwise visible. As the standard notes, “Hidden labels may for instance be used to include misspelled variants of other lexical labels.” These hidden labels can help match phonetically influenced search terms with the words used in the content.

Rather than ask searchers to indicate that they’re doing a “sounds like” search, it would be better to let them find sound-alikes in the course of a general search. The query form could hint that exact spelling is not required and that they can sound out the word. The search would then look for any matches with the terms in the taxonomy, including phonetic equivalents.

Let’s imagine your company has a product with an odd name that’s hard for people to recall. The previous marketing director thought he was clever by naming your financial planning product “Gnough”, pronounced “know” (it rhymes with “dough”!) The name is certainly unique, but it causes two problems. Some people see the word, mispronounce it, and remember their mispronounced version. Others have heard the name (perhaps on your marketing video) but can’t remember how it is spelled. You can include variants for both cases in the hidden labels part of your taxonomy:

  • Learned the wrong pronunciation: Include common ways it is mispronounced, such as “ganuff”
  • Learned correct pronunciation but can’t spell it: Include common spellings of the pronunciation, such as “no”, “know” or “noh”
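
In SKOS (shown here in Turtle syntax, with a made-up ex: namespace), the Gnough concept might carry those variants as hidden labels:

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.com/taxonomy/> .

ex:gnough a skos:Concept ;
    skos:prefLabel   "Gnough"@en ;
    # mispronunciation learned from the spelling
    skos:hiddenLabel "ganuff"@en ;
    # spellings guessed from the correct pronunciation
    skos:hiddenLabel "no"@en , "know"@en , "noh"@en .
```

A search application indexes the hidden labels alongside the preferred label, but never displays them.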

The goal is to expand search-term matching from simple misspellings that can be caught by fuzzy matching (e.g., transposed letters) to phonetic variations (the substitution of a z for an x or s, or common alternative ways of representing vowel sounds, for example). Because increasing search recall lowers search precision, you may want to offer a “did you mean” confirmation showing the presumed term if there is doubt about the searcher’s intention.

Prognosis for Articulate Content

Our goal is to make our digital content articulate — intelligible to people when speaking and listening. It is not an easy task, but it is a worthy one.

These approaches are suitable only for a small subset of the vocabulary you use. You should prioritize according to which terms are most likely to be mispronounced or misspelled because of their inherent pronunciation. From this limited list you can then make choices as to how to represent them phonetically in your content.

Pronunciation list of American words from the Voice of America. Major broadcasters maintain lists of preferred pronunciations for often-used, often-mispronounced words. Digital publishers will need to adopt similar practices as voice-text interaction increases.

Articulate content is an especially complex topic because there are many factors outside of one’s immediate control. There are numerous issues of integration. Customers will likely be using many different platforms to interact with your content. These platforms may have proprietary quirks that interfere with standards.

But watch this space. Easy solutions don’t exist right now, but they will likely emerge in the not-too-distant future; they will need to.

— Michael Andrews


  1. One can speculate that Facebook doesn’t currently offer voice search because of the additional challenge it faces — much of its content centers on personal names, which are hard for voice recognizers to get right.  ↩
  2. Soundex encodes only the first letter plus the first three consonant sounds, so longer words can have the same index as shorter ones.  ↩
  3. It is pronounced “fa-yeh”, according to Wikipedia. The trademark stands for an acronym F.A.G.E (Filippou Adelphoi Galaktokomikes Epicheiriseis in Greek, or Filippou Bros. Dairy Co. in English) but fage is coincidentally a Greek verb meaning “to eat” — in case you missed that pun.  ↩
  4. The approximate pronunciation is “sh-how-mee”. The fast-growing brand appears to be using the easier to write and pronounce name of “Mi” (the Chinese word for rice) in much of its English-language marketing and branding.  ↩
Categories: Intelligent Content

Key Verbs: Actions in Taxonomies

When authors tell stories, verbs provide the action. Verbs move audiences. We want to know “what happened next?” But verbs are hard to categorize in ways computers understand and can act on. Despite that challenge, verbs are important enough that we must work harder to capture their intent, so we can align content with the needs of audiences. I will propose two approaches to overcome these challenges: task-focused and situational taxonomies. These approaches involve identifying the “key verbs” in our content.

Nouns and Verbs in Writing

I recently re-read a classic book on writing by Sir Ernest Gowers entitled The Complete Plain Words. Published immediately after the Second World War, the book was one of the first to advocate the use of plain language.

Gowers attacks abstruse, abstract writing. He quotes with approval a now-forgotten essayist, G.M. Young:

“Excessive reliance on the noun at the expense of the verb will in the end detach the mind of the writer from the realities of here and now, from when and how, and in what mood this thing was done and insensibly induce a habit of abstraction, generalization and vagueness.”

If we look past the delicious irony — a critique of abstraction that is abstract — we learn that writing that emphasizes verbs is vivid.

Gowers refers to this snippet as an example of abstract writing:

  • “Communities where anonymity in personal relationships prevails.”

Instead, he says the wording should be:

  • “Communities where people do not know one another.”

Without doubt the second example is easier to read, and feels more relevant to us as individuals. But the meaning of the two sentences, while broadly similar, is subtly different. We can see this by diagramming the key components. The strengthening of the verb in the second example has the effect of making the subject and object more vague.

Diagram: the role of the verb in each sentence.

It is easy to see the themes in the first example, which are explicitly called out. The first diagram highlights themes of anonymity and personal relationships in a way the second diagram does not. The different levels of detail in the wording will draw attention to different dimensions.

With a more familiar style of writing, the subject is often personalized or implied. The subject is about you, or people like you. This may be one reason why lawyers and government officials like to use abstract words. They aren’t telling a specific story; they are trying to make a point about a more general concept.

Abstract vs Familiar Styles

I will make a simple argument. Abstract writing focuses on nouns, which are easier for computers to understand. Conversely, content written in a familiar style is more difficult for computers to understand and act on. Obviously, people — and not computers — are the audience for our content. I am not advocating an abstract style of writing. But we should understand and manage the challenges that familiar styles of writing pose for computers. Computers do matter. Until natural language processing by computers truly matches human abilities, humans are going to need to help computers understand what we mean. Because it’s hard for computers to understand discussions about actions, it is even more important that we have metadata that describes those actions.

The lists below summarize the orientations of each style. These sweeping characterizations won’t be true in all cases. Nonetheless, these tendencies are prevalent, and longstanding.

Abstract Style
  • Emphasis: nouns; general concepts; the reader is outside the article context
  • Major uses: represents a class of concepts or events; good for navigation
  • Benefits: promotes analytic tracking; promotes automated content recommendations
  • Limitations: can trigger weak writing

Familiar Style
  • Emphasis: verbs; specific advice; the reader is within the article context
  • Major uses: shows an instance of a concept or event; good for reading
  • Benefits: promotes content engagement; promotes social referrals
  • Limitations: can trigger weak metadata

These tendencies are not destiny. Steven Pinker, the Harvard cognitive scientist turned prose guru, can write about abstract topics in an accessible manner because he makes an effort to do so. Likewise, it is possible to develop good metadata for narrative content. It requires the ability to sense what is missing and implied.

Challenges of a Taxonomy of Verbs

Why is metadata difficult for narrative content? Why is so much metadata tilted toward abstract content? There are three main issues:

  • Indexing relies on basic vocabulary matching
  • Taxonomies are noun-centric
  • Verbs are difficult to specify

Indexing and Vocabulary Matching

Computers rely on indexes to identify content. Metadata is a type of index that identifies and describes the content. Metadata indexes may be based on the manual tagging of content (the application of metadata) with descriptive terms, or on auto-indexing and auto-categorization.

Computers can easily identify and index nouns, often referred to as entities. Named entity recognition can identify proper nouns such as personal names. It is also comparatively easy to identify common nouns in a text when a list of nouns of interest has been identified ahead of time. This is done either through string indexing (matching the character string to the index term) or assigned indexing (matching a character string to a concept term that has been identified as equivalent).
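
The difference between the two can be sketched with a toy indexer (a simplified illustration; the vocabulary and function names are invented, and real systems add tokenization and stemming):

```python
# A controlled vocabulary mapping surface strings to preferred concept terms.
# String indexing matches only the literal index term ("automobile");
# assigned indexing also maps equivalent strings ("car") to that concept.
CONCEPT_INDEX = {
    "automobile": "automobile",   # string indexing: literal match
    "car": "automobile",          # assigned indexing: equivalent term
    "cars": "automobile",
    "big apple": "New York City",
    "nyc": "New York City",
}

def index_terms(text: str) -> set:
    """Return the preferred concept terms found in a text."""
    lowered = text.lower()
    return {concept for term, concept in CONCEPT_INDEX.items()
            if term in lowered}

# Both "car" and "NYC" resolve to their preferred concept terms.
print(index_terms("Parking a car in NYC"))
```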

The manual tagging of entities is also straightforward. A person will identify the major nouns used in a text, and select the appropriate index term that corresponds to the noun. When they decide what things are most important in the article (often the things mentioned most frequently), they find tags that describe those things.

When the text has entities that are proper or common nouns, it isn’t too hard to identify which ones are important and should be indexed. Abstract content is loaded with such nouns, and computers (and people) have an easy time identifying key words that describe the content. But as we will see, when the meaning of a text is based on the heavy use of pronouns and descriptive verbs, the task of matching terms to an index vocabulary becomes more difficult. Narrative content, where verbs are especially important to the meaning, is challenging to index. Nouns are easier to decipher than verbs.

Taxonomies are Noun-centric

When we offer a one-word description, we tend to label stuff using nouns. The headings in an encyclopedia are nouns. Taxonomies similarly rely on nouns to identify what an article is about. It’s our default way of thinking about descriptions.

Because we focus on the nouns, we can easily overlook the meaning carried by the verbs when tagging our content. But verbs can carry great meaning. Consider an article entitled “How to feel more energetic.” There are no nouns in the title to match up with taxonomy terms. Depending on the actual content of the article, it might relate to exercise, or diet, or mental attitude, but those topics are secondary to the theme of the article, which is about feeling better. A taxonomy may have granular detail, and include a thesaurus of equivalent and related terms, but the critical issue is whether the explicit wording of the article can be translated into the vocabulary used in the taxonomy.

Verbs are Difficult to Specify

Verbs also can be included in descriptive vocabularies for content, but they are more challenging to use. Verbs are sometimes looser in meaning than nouns. Sometimes they are figurative.

Verbs such as to make can have many different meanings

A verb may have many meanings. These meanings are sometimes fuzzy. Actions and sentiments can be described by multiple verbs and verbal phrases. Consider the most overworked and meaningless verb used on the web today: to like. If Ralph “likes” this, what does that really mean? Compared to what else? The English language has a range of nuanced verbs (love, being fond of, being interested in, being obsessed with, etc.) to express positive sentiment, though it is hard to demarcate their exact equivalences and differences.

Many common verbs (such as work, make, or do) have a multitude of meanings. When the meaning of a verb is nebulous, it takes more work to identify the preferred synonym used in a taxonomy. Consider this example from a text-tagging tool. The person reading the text needs to make the mental leap that the verb “moving” refers to “money transfer.” The task is not simply to match a word, but to represent a concept for an activity. We often use an imprecise verb like move instead of a more precise phrase like transfer money. Such verbal informality makes tagging more difficult.

Tagging a verb with a taxonomy term. Screenshot via Brat.

With the semantic web, predicates play the role of verbs defining the relationship between subjects and objects. The predicates can have many variants to express related concepts. If we say, “Jane Blogs was formerly married to Joe Blogs,” we don’t know what other verbal phrase would be equivalent. Did Jane Blogs divorce Joe Blogs? Did Joe Blogs die? Another piece of information may be needed to infer the full meaning. Verbal phrases can carry a degree of ambiguity, and this makes using a standard vocabulary for verbs harder to do.

Samuel Goto, a software engineer at Google, has said: “Verbs … they are kind of weird.”

Computers can’t understand verbs easily. Verb concepts are challenging for humans to describe with standardized vocabulary. Tagging verbs requires thought.

Why Verb Metadata Matters

If verbs are a pain to tag, why bother? So we can satisfy both the needs of our audiences and the needs of the computers that must be able to offer our audiences precisely what they want. As an organization, we need to make sure all this is happening effectively. We need to harmonize three buckets of needs: audience, IT, and brand.

Audience needs: Most audiences expect content written in familiar style, and want content with strong, active verbs. Those verbs often carry a big share of the meaning of what you are communicating. Audiences also want precise content, rather than hoping to stumble on something they like by accident. This requires good metadata.

IT needs: Computers have trouble understanding the meaning of verbs. Computers need a good taxonomy to support navigation through the content, and deliver good recommendations.

Brand needs: Brands need to be able to manage and analyze content according to the activities discussed in the content, not just the static nouns mentioned in it. If they don’t have a plan in place to identify key verbs in their content, and tag their meaning, they run the risk of having a hollow taxonomy that doesn’t deliver the results needed.

A solution to these competing needs is to have our metadata represent the actions mentioned in the content. I’m calling this approach finding your key verbs.[1]

Approaches to a Metadata of Actions

Two approaches are available to represent verb concepts. The first is to make verbs part of your taxonomy. The second is to translate verbs in your content into nouns in your taxonomy.

Task-focused Taxonomies

The first approach is to develop a list of verbs that express the actions discussed in your content. Starting with the general topics about which you produce content, you can do an analysis and see what specific activities the content discusses. We’ll call these activities “tasks.”

Think about the main tasks for the people we want to reach. How do they talk about these tasks? People don’t label themselves as a new-home buyer: they are looking for a new home. They may never actually buy, but they are looking. Verbs help us focus on what the story is. There may be subtasks that our readers would do, and would want to read about. Not only are they looking for a new home, they are evaluating kitchens and getting recommendations on renovations. This task focus is important to help us manage content components, and track their value to audience segments. We can do this using a task-focused taxonomy.

I am aware of two general-purpose taxonomies that incorporate verbs. The tasks these taxonomies address may differ from your needs, but they may provide a starting point for building your own.

The new “actions” vocabulary available in schema.org is the better known of the two. Schema.org has identified around 100 actions “to describe what can be done with resources.” The purpose is to be able not only to find content items according to the action discussed, but to enable actions to be taken with the content. As a simple example, you might find an event, and click a button to confirm your attendance. Behind the scenes, that action will be managed by the vocabulary.

The schema actions are diverse. Some describe high-level activities, such as to travel, while others refer to very granular activities, such as to follow somebody on a social network. Some tasks are real-world ones, and others strictly digital. I presume real-world actions are included to support activity reporting from Internet of Things (IoT) devices that monitor real-world phenomena such as exercise.

screenshot of schema.org actions terms
Schema.org actions taxonomy (partial)
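The RSVP scenario above can be sketched as a JSON-LD-shaped structure, here built as a plain Python dict. The `@type` and property names follow the schema.org Actions vocabulary as I understand it (`RsvpAction` with an `agent` and an `object`); verify the exact terms against schema.org before relying on them, and the person and event names are invented.

```python
import json

# A JSON-LD-shaped sketch of a schema.org RSVP action.
# Property names follow schema.org's Actions vocabulary; the
# specific person and event are invented for illustration.
rsvp = {
    "@context": "https://schema.org",
    "@type": "RsvpAction",
    "agent": {"@type": "Person", "name": "Jane Blogs"},
    "object": {"@type": "Event", "name": "Content Strategy Meetup"},
}

# Serialize for embedding in a page or sending to a service.
print(json.dumps(rsvp, indent=2))
```

Because the action is typed, a system can both retrieve content about RSVPs and trigger the RSVP itself from the same vocabulary.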

Framenet, a semantic tagging vocabulary used by linguists, is another general vocabulary that provides coverage of verbs. If a sentence uses the verb “freeze” (in the sense of “to stop”), it is tagged with the concept of “activity_pause.” It is easiest to see how the Framenet verb vocabulary works using an example from David Caswell’s project, Structured Stories. Verbs that encapsulate events form the core of each story element.[2]

screenshot structured stories
Screenshot from the Structured Stories project, which uses Framenet.
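The “freeze” example above can be sketched as a toy lexicon in the spirit of Framenet tagging. Real Framenet disambiguates by word sense, so this sketch keys on a (verb, sense) pair; the second frame name below is hypothetical, while “Activity_pause” comes from the example in the text.

```python
# Toy verb-to-frame lexicon in the spirit of FrameNet tagging.
# Keys are (verb, sense) pairs, because one verb can evoke
# different frames depending on its sense.
FRAME_LEXICON = {
    ("freeze", "to stop"): "Activity_pause",
    ("freeze", "to turn to ice"): "Change_of_temperature",  # hypothetical frame name
}

def tag_verb(verb, sense):
    """Return the frame concept for a verb sense, or a fallback marker."""
    return FRAME_LEXICON.get((verb, sense), "UNKNOWN_FRAME")

print(tag_verb("freeze", "to stop"))  # Activity_pause
```

Tagging concepts rather than word strings is what lets “freeze,” “pause,” and “halt” all resolve to the same story element.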

Applications of Task Taxonomies

While both these vocabularies describe actions at the sentence or statement level, they can be applied to an entire article or section of content as well.

A task focus offers several benefits. Brands can track and manage content about activities independently of who specifically is featured doing the activity, where it happens, or what its object or outcome is. So if brands produce content discussing options to travel, they might want to examine the performance of travel as a theme, rather than the variants of who travels or where they travel.

Task taxonomies also enable task-focused navigation, which lets people start with an activity, then narrow down aspects of it. A sequence might start: What do you want to do? Then ask: Where would you like to do that? The sequence can work in reverse as well: people can discover something of interest (a destination) and then want to explore what to do there (a task).
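Both navigation directions can be sketched with a small faceted index. This is a minimal illustration, not a real system: the tasks, places, and titles below are invented.

```python
# Sketch of task-focused faceted navigation. Content items carry both a
# task facet (the verb dimension) and a place facet (a noun dimension).
# All tasks, places, and titles are invented for illustration.
CONTENT_INDEX = [
    {"task": "hike", "place": "Scotland", "title": "Highland trails"},
    {"task": "hike", "place": "Peru", "title": "Inca trail basics"},
    {"task": "dive", "place": "Peru", "title": "Pacific coast diving"},
]

def narrow(items, **facets):
    """Filter content by any combination of facet values."""
    return [i for i in items if all(i.get(k) == v for k, v in facets.items())]

# Forward: "What do you want to do?" then "Where would you like to do that?"
print(narrow(CONTENT_INDEX, task="hike", place="Peru"))

# Reverse: discover a destination, then explore what to do there.
print({i["task"] for i in narrow(CONTENT_INDEX, place="Peru")})
```

Because the task is a first-class facet rather than buried in free text, the same index serves both entry points.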

Situational Taxonomies

A second option uses nouns to indicate the notable events or situations discussed. Using nouns as proxies for actions unfortunately doesn’t capture a sense of dynamic movement. But if you can’t support a faceted taxonomy that can mix nouns and verbs, it may be the most practical option. When you have a list of descriptors that express actions discussed in your content, you are more likely to tag these qualities than if your taxonomy is entirely thing-centric. I’ll call a taxonomy that represents occasions using noun phrases a situational taxonomy. The terms in a situational taxonomy describe situations and events that may involve one or more activities.

If you have ever done business process modeling, you are familiar with the idea of describing things as passing through a routine lifecycle. We reify activities by giving them statuses: a project activity is under development, in review, launched, and so on. Many dimensions of our work and life involve routines with stages or statuses. When we produce content about these dimensions, we should tag the situation discussed.

One way to develop a situational taxonomy is to blueprint a detailed user journey: an end-to-end analysis of the stages that real-world users go through, including the “unhappy path,” where they encounter a situation they don’t want. Andrew Hinton has made a compelling case in his book Understanding Context that the situations people find themselves in drive the needs they have. Many user journey maps don’t name the circumstances; they jump immediately into the actions people might take. Try to avoid doing that. Name each distinct situation: both those people actively choose and those foisted on them. Then map these terms to your content.

Situational taxonomies are suited to content about third parties (news for example) or when emphasizing the outcomes of a process rather than the factors that shape it. Processes that are complex or involve chance (financial gyrations or a health misfortune, for example) are suited to situational taxonomies. A situational taxonomy term describes “what happened?” at a high level. Thinking about events as a process or sequence can help to identify terms to describe the action discussed in the content.

The technical word for making nouns out of verbs is “nominalization.” For example, the verb “decide” becomes the noun “decision.” Not all nominalizations are equal: some are very clunky or empty of meaning. Decision is a better word than determination, for example. Try to keep situational terms from becoming too abstract.
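Because nominalizations are irregular, a lookup table is a more realistic sketch than suffix rules. The pairings below are illustrative; “decide”/“decision” and “adopt”/“adoption” come from the surrounding discussion, the rest are my own examples.

```python
# Toy mapping from key verbs to situational (noun) taxonomy terms.
# English nominalization is irregular, so a curated table beats
# mechanical suffix rules. All pairings are illustrative.
NOMINALIZATIONS = {
    "decide": "decision",
    "adopt": "adoption",
    "renovate": "renovation",
    "buy": "purchase",  # no shared stem at all
}

def situational_term(verb):
    # Crude "-ing" fallback for unmapped verbs; a real taxonomy
    # would flag these for editorial review instead.
    return NOMINALIZATIONS.get(verb, verb + "-ing")

print(situational_term("decide"))  # decision
print(situational_term("travel"))  # travel-ing (flagged fallback)
```

The fallback case is the interesting one for taxonomy work: it is exactly where a mechanical rule produces a clunky or empty term, and an editor should choose a better word.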

Situational taxonomies are less granular than task-based ones. They provide an umbrella term that can represent several related actions. They can enhance tracking, navigation and recommendations, but not as precisely as task-based terms. Task taxonomies express more, suggesting not only what happens, but also how it happens.

Key Verbs Mark the Purpose of the Content

Identifying key verbs can be challenging work. Not all headlines will contain verbs. But ideally the opening paragraph should reveal verbs that frame the purpose of the article. Content strategists know that too much content is created without a well-defined purpose. Taxonomy terms focused on actions indicate what happens in the content, and suggest why that matters. Headlines, and taxonomy terms that rely entirely on nouns, don’t offer that.

We will look at some text from an animal shelter. I have intentionally removed the headline so we can focus on the content, to find the core concepts discussed. A simple part-of-speech app will allow us to isolate different kinds of words. First we will focus on the verbs in the text, which include the terms “match”, “spot”, “suit”, “ask”, and “arrange”. The verb focus seems to be “matching.” Matching could be a good candidate term in a task taxonomy.

part of search view of verbs in narrative

Now we’ll look at nouns. In addition to common nouns such as dogs and families, we see some nouns that suggest a process. Specifically, several nouns include the word “adoption.” Adoption would be a candidate term in a situational taxonomy. Note the shift in focus: adoption suggests a broader discussion about the process, whereas matching suggests a more specific goal.

part of search view of nouns in narrative
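The verb and noun passes above can be sketched in a few lines of Python. Rather than a real part-of-speech tagger (NLTK or spaCy would need model downloads), this toy uses a tiny hand-built lexicon; the sample sentence and word lists are invented to echo the shelter example.

```python
import re

# Tiny hand-built lexicon standing in for a real POS tagger.
# The word lists and sample text are invented for illustration.
VERBS = {"match", "spot", "suit", "ask", "arrange"}
NOUNS = {"dogs", "families", "adoption", "visit"}

def words(text):
    """Lowercase and split text into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def extract(text, lexicon):
    """Keep only tokens found in the given lexicon, in order."""
    return [w for w in words(text) if w in lexicon]

text = "We match dogs with families. Ask us to arrange an adoption visit."

print(extract(text, VERBS))  # ['match', 'ask', 'arrange']
print(extract(text, NOUNS))  # ['dogs', 'families', 'adoption', 'visit']
```

The verb pass surfaces the task candidate (“matching”), while the noun pass surfaces the situational candidate (“adoption”), mirroring the two taxonomy approaches.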

When you look at content through the lens of verbs, questions arise. What verbs capture what the content is describing? Why is the content here? What is the reader or viewer supposed to do with this information? Could they tell someone else what is said here?

If you are having trouble finding key verbs, that could indicate problems with the content. Your content may not describe an activity. There is plenty of “background content,” where readers are not expected to take any action after reading. If your goal for producing the content is simply to offer information for reference purposes, then it is unlikely you will find key verbs, because the content will probably be very noun-centric. The other possibility is that the writing is not organized clearly, so the key actions discussed are not readily seen. Both possibilities suggest a strategy check-up might be useful.

Avoid a Hollow Taxonomy

Even when tagging well-written content, capturing what activity is represented will require some effort. This can’t be automated, and the people doing the tagging need to pay close attention to what is implied in the content. They are identifying concepts, not simply matching words.

Tagging is easier when you already have a vocabulary to describe the activities mentioned in your content. That requires auditing, discovery, and planning. If your taxonomy addresses only things and not actions, it may be hollow: it can have gaps.

Most content is created to deliver an outcome. Metadata shouldn’t only describe the things that are mentioned. It should describe the actions that the content discusses, which will be explicitly or implicitly related to the actions you would like your customers to take. You want to articulate within metadata the intent of the content, and thus be in a position to use the content more effectively as a result. Key verbs let you capture the essence of your content.

By identifying key verbs, brands can use active terminology in their metadata to deliver content that is aligned with the intent of audiences.

diagram of key verb roles
How key verb metadata can support content outcomes

The Future Web of Verbs

Web search is moving “from the noun to the verb,” according to Prabhakar Raghavan, Google’s Vice President of Engineering.

We are at the start of a movement toward a web of verbs, the fusing of content and actions. Taxonomy is moving away from its bookish origins as the practice of describing documents. Its future will increasingly be focused on supporting user actions, not just finding content. But before we can reach that stage, we need to understand the relationship between the content and actions of interest to the user.

Taxonomies need to reflect the intent of the user. We can understand that intent better when we can track content according to the actions it discusses. We can serve that intent better when we can offer options (recommendations or choices) centered on the actions of greatest interest to the user.

The first areas where verb taxonomies are implemented will likely be transactional, such as making reservations using Schema actions. But the applications are much broader than these “bottom of the funnel” interventions. Brands should start to think about using action-oriented taxonomy terms throughout their content offerings. This is an uncharted area: linking our metadata to our desired content outcomes.

— Michael Andrews


  1. Key verbs build on the pre-semantic idea of key words, but are specific to activities, and represent concepts (semantic meaning) instead of literal word strings.  ↩
  2. You can watch a great video of the process on YouTube.  ↩