
Defining Meaning in a Post-Document World

Digital content is undergoing a metamorphosis. It is no longer about fixed documents. But neither is it just a collection of data. It is something in-between, yet we haven’t developed a vivid and shared way to conceive and discuss precisely what that is. We see evidence of this confusion in the vocabulary used to describe content meaning. We talk about content as structurally rich, as semantic, as containing structured data. Behind these labels are deeper convictions: whether content is fundamentally about documents or data.

Content has evolved into a complex experience, composed of many different pieces. We need new labels to express what these pieces mean.

“The moment it all changed for me was the moment when Google Maps first appeared. Because it was a software application—not a set of webpages, not a few clever dynamic calls, but almost aggressively anti-document. It allowed for zooming and panning, but it was once again opaque. And suddenly it became clear that the manifest destiny of the web was not accessibility. What was it? Then the people who advocated for a semantically structured web began to split off from the mainstream and the standards stopped coming so quickly.” — Paul Ford in The Manual

In the traditional world of documents, meaning is conveyed through document-centric metadata. Publishers govern the document with administrative metadata, situate sections of the document using structural metadata, and identify and classify the document with descriptive metadata. As long as we considered digital content as web pages, we could think of them as documents, and could rely on legacy concepts to express the meaning of the content.

But web pages should be more than documents. Documents are unwieldy. The World Wide Web’s creator, Tim Berners-Lee, started agitating for “Raw data now!” Developers considered web pages “unstructured data” and advocated the creation and collection of structured data that machines could use. What is valuable in content got redefined as data that could be placed in a database table or graph. Where documents deliver a complete package of meaning, data structures define meaning at a more granular level, as discrete facts. Meaningful data, once in a structured format, can be extracted and inserted into apps. In the paradigm of structured data, the meaning of an entity should be available outside the context with which it was associated. Rather than define what the parts of documents mean, structured data focuses on what fragments of information mean independently of context.

Promoters of structured data see possibilities to create new content by recombining fragments of information. Information boxes, maps, and charts are content forms that can dynamically refresh with structured data. These are clearly important developments. But these non-narrative content types are not the only forms of content reuse.

The Unique Needs of Component Content

A new form of content emerged that was neither a document nor a data element: the content component. In HTML5, component level content might be sections of text, videos, images and perhaps tables.[1] These items have meaning to humans like documents, but unlike documents, they can be recombined in different ways, and so carry meaning outside the context of a document, much the way structured data does.

Component content needs various kinds of descriptions to be used effectively. Traditional document metadata (administrative, structural, and descriptive) are useful for content components. It is also useful to know what specific entities are mentioned within a component; structured data is also nice to have. But content components have further needs. If we are moving around discrete components that carry meaning to audiences, we want to understand what specific meaning is involved, so we match up the components with each other appropriately. The component-specific metadata addresses the purpose of the component.

Component metadata allows content to be adaptable: to match the needs of the user according to the specific circumstances they are in. We don’t have well-accepted terms to describe this metadata, so its importance tends to get overlooked. Various kinds of component metadata can characterize the purpose of a component. Though metadata relating to these facets aren’t yet well-established, there are signs of interest as content creators think about how to curate an experience for audiences using different content components.

Contextual metadata indicates the context in which a component should be used. This might be the device the component is optimized for, the geolocation it is intended for, the specific audience variation, or the intended sequencing of the component relative to other components.

Performance metadata addresses the intended lifecycle of the component. It indicates whether the component is meant to be evergreen, seasonal or ephemeral, and if it has a mass or niche use. It helps authors answer how the component should be used, and what kind of lifting it is expected to do.

Sentiment metadata describes the mood or the metaphor associated with the component. It answers what kind of impression on the audience the component is expected to make.
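To make these facets concrete, here is one way component metadata might be expressed, using HTML data attributes on a section. The attribute names and values are purely illustrative; no established vocabulary for this exists yet.

```html
<!-- Hypothetical component metadata; attribute names are illustrative, not a standard -->
<section id="hero-offer"
         data-context-device="mobile"
         data-context-audience="returning-customer"
         data-context-sequence="2"
         data-performance-lifecycle="seasonal"
         data-performance-reach="niche"
         data-sentiment-mood="reassuring">
  <h2>Welcome back</h2>
  <p>Pick up where you left off.</p>
</section>
```

A delivery system could read attributes like these to decide which variant of a component to assemble for a given audience, device, and moment in the content’s lifecycle.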

We can see how component metadata can matter by looking at a fairly simple example: using a photographic image. We might use different images together with the same general content according to different circumstances. Different images might express different metaphors presented to different audience segments. We might want to restrict the use of certain images to ensure they are not overused. We need different image sizes to optimize the display of the image on different devices. While structured data specialists might be preoccupied with what entities are shown in an image, in this example we don’t really care who the models appearing in the stock image are. We are more concerned with the implicit meaning of the image in different contexts than with its explicit meaning.

The Challenges of Context-free Metadata

Metadata has a problem: it hasn’t yet evolved to address the changing context in which a content component might appear. We still talk about metadata as appearing in the head of a document, or in the body of a document, without considering that the body of the document is changing. We run the risk that the head and the body get out of alignment.

The rise of component variation is a key feature of the approach that’s commonly referred to as intelligent content. Intelligent content, according to Ann Rockley’s definition, involves structurally rich and semantically categorized content. Intelligent content is focused on making content components interchangeable.

Discussions of intelligent content rarely get too explicit about what metadata is needed. Marcia Riefer Johnston addressed the topic in an article entitled Intelligent Content: What Does ‘Semantically Categorized’ Mean? She says: “Semantic categories enable content managers to organize digital information in nearly limitless ways.” It’s a promising vision, but we still don’t have a sense of where the semantic categories come from, and what precisely they consist of. The inspiration for intelligent content, DITA, is an XML-based approach that allows publishers to choose their own metadata. DITA is a document-centric way of managing content, and accordingly assumes that the basic structure of the document is fixed, and that only specific elements can be changed within that stable structure. Intelligent content, in contrast, suggests a post-document paradigm. Again, we don’t get a sense of what structurally rich means outside of a fixed document structure. How can one piece together items in “limitless ways”? What is the glue making sure these pieces fit together appropriately?

Content intelligence involves not only how components are interchangeable, but also how they are interoperable — intelligible to others. Intelligent content discussions often take a walled-garden approach. They focus on the desirability of publishers providing different combinations of content, but don’t discuss how these components might be discovered by audiences.[2] Intelligent content discussions tend to assume that the audience discovers the publisher (or that the publisher identifies the audience via targeting), and then the publisher assembles the right content for the audience. But the process could be reversed, where the audience discovers the content first, prior to any assembly by the publisher. How do the principles of semantically categorized and structurally rich content relate to SEO or Linked Data? Here, we start to see the collision between the document-centric view of content and the structured data view of it. Does intelligent content require publisher-defined and controlled metadata to provide its capabilities, or can it utilize existing, commonly used metadata vocabularies to achieve these goals?

Document-centric Thinking Hurts Metadata Description

Content components already exist in the wild. Publishers are recombining components all the time, even if they don’t have a robust process governing this. Whether or not publishers talk about intelligent content, the post-document era has already started.

But we continue to talk about web pages as enduring entities that we can describe. We see this in discussions of metadata. Two styles of metadata compete with each other: metadata in the document head of a page, and metadata that is in-line, in the body of a page. Both these styles assume there is a stable, persistent page to describe. Both approaches fail because this assumption isn’t true in many cases.

The first approach involves putting descriptive metadata outside of the content. On a web page, it involves putting the description in the head, rather than the body. This is a classic document-centric style. It is similar to how librarians catalog books: the description of the book is on a card (or in a database) that is separate from the actual book. Books are fixed content, so this approach works fine.

The second approach involves putting the description in the body of the text. Think of it as an annotation. It is most commonly done to identify entities mentioned in the text. It is similar to an index of a book. As long as the content of the book doesn’t change, the index should be stable.

Yet web pages aren’t books. They change all the time. There may be no web page: just a wrapper for presenting a stream of content. What do we need to describe here, and how do we need to do that?

Structured Data’s Lost Bearings

When people want to identify entities mentioned in content, they need a way to associate a description of the entity with the content where it appears. Entity-centric metadata is often called structured data, a confusing term given the existence of other similar sounding terms such as structured content, and semantic structure. While structured data was originally a term used by data architects, the SEO community uses it to refer more specifically to search-engine markup using vocabulary standards such as Schema.org. The structure referred to in the term “structured data” is the structure of the vocabulary indicating the relationships associated with the description. It doesn’t refer to the structure of the content, and here is where problems arise.

While structured data excels at describing entities, it struggles to locate these entities in the content. The question SEO consultants wrestle with is what precisely to index: a web page, or a sentence fragment where the item is mentioned? There are two rival approaches for doing this. One can index entities appearing on a web page using a format called JSON-LD, which is typically placed in the document head of the page (though it does not have to be). Or one can index entities where they appear in the content using a format called RDFa, which is placed in-line in the body of the HTML markup.

Both these approaches presume that the content itself is stable. But content changes continually, and both approaches founder because they are based on a page-centric view of content instead of a component-centric view.

Disemboweled Data

First, consider the use of RDFa to describe the entities mentioned in a sentence. The metadata is embedded in the body of the page: it’s embodied metadata. It’s an appealing approach: one just needs to annotate what these entities are, so a search engine can identify them. But embedded in-line metadata turns out to be rather fragile. Such annotation works only as far as every relevant associated entity is explicitly mentioned in the text. And if the text mentions several different kinds of entities in a single paragraph, the markup gets complicated, because one needs to disambiguate the different entities so as not to confuse the search robots.
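As a sketch, in-line annotation with RDFa looks something like this, using Schema.org types (the sentence and the identifiers are invented for illustration):

```html
<p vocab="https://schema.org/">
  <span typeof="Person" resource="#jdoe">
    <span property="name">Jane Doe</span>
  </span>
  joined
  <span typeof="Organization" resource="#acme">
    <span property="name">Acme Corp</span>
  </span>
  in 2014.
</p>
```

Notice that each entity needs its own typeof and resource scope so that the properties of the person and the organization don’t get attributed to the wrong subject.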

The big trouble starts when one changes the wording of texts containing embedded structured data. The entities mentioned change, which has a cascading impact on how the metadata used to describe these entities must be presented. What seemed a unified description of related entities can become disemboweled with even a minor change in a sentence. The structured data didn’t have a stable context with which to associate itself.

Decapitated Data

Given the hassles of RDFa, many SEO consultants lately are promoting the virtues of putting the structured data in the head of a page using JSON-LD. The head of the description is separate from the body of the content, much like the library catalog card describing a book is separate from the book and its contents. The description is separate from the context in which it appears.

Supporters of JSON-LD note that the markup is simpler than RDFa, and less prone to glitchiness. That is true. But the cost of this approach is that the structured data loses its context. It too is fragile, in some ways more so than RDFa.

Putting data in the document head, outside of the body of the content, is to decapitate the data. We now have data that is vaguely associated with a page, though we don’t know exactly how. Consider Paul Ford’s recent 32,000-word article for Businessweek on programming. He mentioned countless entities in the article, all of which would be placed in the head. You might know the entity was mentioned somewhere, but you can’t be sure where.
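As a sketch of the decapitated pattern, head markup for an article might look like the following (the mentioned entities are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What Is Code?",
  "author": { "@type": "Person", "name": "Paul Ford" },
  "mentions": [
    { "@type": "Person", "name": "Grace Hopper" },
    { "@type": "ComputerLanguage", "name": "Smalltalk" }
  ]
}
</script>
```

The mentions list asserts that these entities appear somewhere in the article, but it carries no pointer to where.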

What’s efficient for one party may not be so for another. (original image via Wikipedia)

With decapitated data, we risk having the description of the content get out of alignment with what the content is actually discussing. Since the data is not associated with a context, it can be hard to see that the data is wrong. You might revise the content, adding and deleting entities, and not revise the document head data accurately.

The management problem becomes greater when one thinks about content as components rather than pages. We want to change content components, but the metadata is tied to a page, rather than a component. So every variation of a page requires a new JSON-LD profile in the document head that will match the contents of the variation. As a practical matter this approach is untenable. A dynamically-generated page might have dozens or hundreds of variations based on different combinations of components.

Structured data largely exists to serve the needs of search engines. Its practices tend to define content in terms of web pages. Structured data can describe a rendered page, but isn’t geared to describe content components independently of a rendered page. To indicate the main theme of a piece of content, Schema.org offers a property called mainContentOfPage, reflecting an expectation that there is one webpage with an overriding theme. Even if a webpage exists for a desktop browser, it may be a series of short sections when viewed on a mobile device, and won’t have a single persistent “main content” theme. Current structured data practices don’t focus on how to describe entities in unbundled content — entities associated with discrete components such as a section of text. Each reuse of content involves a re-creation of structured data in the document head.

It is important not to confuse structured data with structured content. Structured data needs to work in concert with structured content delivered through content management systems, instead of operating independently of it.

When structured data gets separated from the content it represents, it creates confusion for content teams about what’s important. Decapitated data can foster an attitude that audience-facing content is a second class citizen. One presentation on the benefits of JSON-LD for SEO advised: “Keep the Data and Presentation layer separate.” Content in HTML gets reduced to presentation: a mere decoration. Such advocates talk about supplying a data “payload” to Google. It is true that structured data can be used in apps, but some structured data advocates create a false dichotomy between web pages and data-centric apps, because they are stuck in a paradigm that content equals web pages.

This perspective can lead to content reductionism: only the facts mentioned in the content matter. The primary goal is to free the facts from the content, so the facts can be used elsewhere by Google and others. Content-free data works fine for discussing commodities such as gas prices. But for topics that matter most to people, having context around the data is important. Decapitated data doesn’t support context: it works against it, by making it harder to provide more contextually appropriate information. Either the information is yanked out of its context entirely, or the reader is forced to locate it within the body of the content on her own.

The ultimate failure of decapitated data occurs when the data bears no relationship to the content. This is a known bug of the approach, and one no one seems to have a solution for. According to the W3C, “it is more difficult for search engines to verify that the JSON-LD structured data is consistent with the visible human-readable information.” When what’s important gets defined as what’s put in a payload for Google, the temptation exists to load things into the document head that aren’t actually discussed. Just as black hat operators once stuffed fake keywords into the meta description in the document head to game search engines, there is a real possibility that once JSON-LD becomes more popular, unscrupulous operators will put black hat structured data in the document head that’s unrelated to the content. No one, not least the people who have been developing the JSON-LD format, wants to see this happen.

Unbundling Meaning for Unbundled Content

The intelligent content approach stresses the importance of unbundling content. The web page as a unit of content is dying. Unbundled content can adapt to the display and interactive needs of mobile devices, and allow for content customization.

Metadata needs to describe content components, not just pages of content. Some of this metadata will describe the purpose of the component. Other metadata will describe the entities discussed in the component.

There are arguments over whether to annotate entities in content with metadata, or whether to re-create the entities in a supplemental file. Part of the debate concerns the effort involved: the effort of inputting the content structure, versus the effort of re-entering the data described by the structure. One expert, Alex Miłowski at the University of California, Berkeley, suggests a hybrid approach could be most efficient and accurate. Regardless of format, structured data will be more precise and accurate if it refers to a reusable content component, rather than a changeable sentence or changeable web page.[3] Components are swappable and connectable by design. They are units of communication expressing a unified purpose, which can be described in an integrated way with less worry that something will change that will render the description inaccurate. It is easier to verify the accuracy of the structured data when it is closely associated with the content. Since content components are designed for reuse, one can reuse the structured data linked to the component.
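As a sketch only (this is not an established practice, and the markup choices here are mine), component-scoped structured data might look like a JSON-LD block that travels inside the component it describes:

```html
<section id="recipe-summary">
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Weeknight Dal",
    "cookTime": "PT30M"
  }
  </script>
  <h2>Weeknight Dal</h2>
  <p>A thirty-minute lentil dish.</p>
</section>
```

Because the description and the component move together, reusing the component reuses its structured data, and verifying that the two match becomes a local check rather than a page-wide one.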

While the idea of content components is not new, it still is not widely embraced as the default way of thinking about content. People still think about pages, or fragments. Even content strategists talk suggestively about chunks of content, instead of trying to define what a chunk would be in practice. As a first step, I would like to see discussion of chunks disappear, to be replaced by discussion of components. Thinking about reusable components does not preclude the reuse of more granular elements such as variables and standardized copy. But the concept of a component provides a way to discuss pieces of content based around a common theme.

Components need to be defined as units to manage internally in content management systems before they will be recognized as a unit that matters externally. A section of content in HTML may not map to standard templates in a CMS right now, but that can change — if we define a component as a section. A section of content in HTML may not mean much to a search engine right now, but that can change — if search engines perceive such a unit as having a coherent meaning. The case for both intelligent content and semantic search will be more compelling if we can make such changes.

Final note

More dialog is needed between the semantic search community and the intelligent content community about how to integrate each approach. Both these approaches involve significant complexity, and understanding by each side of the other seems limited. I’ve discovered that some ideas about structured data and the semantic representation of entities have political sensitivities and a stormy past, which can make exploration of these topics challenging for outsiders. In this post I have questioned a current idea in structured data best practice, separating data from content, even though this practice wasn’t common a year ago, or even widely practical. Practices used in semantic search (such as favored formats and vocabulary terms) seem to fluctuate noticeably, compared to the long established principles guiding content strategy. The cause of structured data will benefit when it is discussed in the wider context of content production, management and governance, instead of in isolation from these issues. For its part, content strategy should become more specific with how to implement principles, especially as adaptive content becomes more common. I foresee possibilities to refine concepts in intelligent content through dialog with semantic search experts.

— Michael Andrews


  1. I am merely suggesting kinds of HTML structures that correspond to content components, rather than attempting to provide a formal definition. HTML5 has its quirks and nuances, and the topic deserves a wider discussion.  ↩
  2. A notable exception is Joe Pairman’s article, “Connecting with real-world entities: is structured content missing a trick?”.  ↩
  3. Embedding JSON-LD in components seems like it could offer benefits, though I hesitate to casually suggest standards on such a multifaceted issue. I don’t want the merits of a particular solution to detract attention from a thorough examination of the core issues associated with the problem.  ↩

Key Verbs: Actions in Taxonomies

When authors tell stories, verbs provide the action. Verbs move audiences. We want to know “what happened next?” But verbs are hard to categorize in ways computers understand and can act on. Despite that challenge, verbs are important enough that we must work harder to capture their intent, so we can align content with the needs of audiences. I will propose two approaches to overcome these challenges: task-focused and situational taxonomies. These approaches involve identifying the “key verbs” in our content.

Nouns and Verbs in Writing

I recently re-read a classic book on writing by Sir Ernest Gowers entitled The Complete Plain Words. Published immediately after the Second World War, the book was one of the first to advocate the use of plain language.

Gowers attacks abstruse, abstract writing. He quotes with approval the now-forgotten essayist G.M. Young:

“Excessive reliance on the noun at the expense of the verb will in the end detach the mind of the writer from the realities of here and now, from when and how, and in what mood this thing was done and insensibly induce a habit of abstraction, generalization and vagueness.”

If we look past the delicious irony — a critique of abstraction that is abstract — we learn that writing that emphasizes verbs is vivid.

Gowers refers to this snippet as an example of abstract writing:

  • “Communities where anonymity in personal relationships prevails.”

Instead, he says the wording should be:

  • “Communities where people do not know one another.”

Without doubt the second example is easier to read, and feels more relevant to us as individuals. But the meaning of the two sentences, while broadly similar, is subtly different. We can see this by diagramming the key components. The strengthening of the verb in the second example has the effect of making the subject and object more vague.

role of verb in sentence

It is easy to see the themes in the first example, which are explicitly called out. The first diagram highlights themes of anonymity and personal relationships in a way the second diagram does not. The different levels of detail in the wording will draw attention to different dimensions.

With a more familiar style of writing, the subject is often personalized or implied. The subject is about you, or people like you. This may be one reason why lawyers and government officials like to use abstract words. They aren’t telling a specific story; they are trying to make a point about a more general concept.

Abstract vs Familiar Styles

I will make a simple argument. Abstract writing focuses on nouns, which are easier for computers to understand. Conversely, content written in a familiar style is more difficult for computers to understand and act on. Obviously, people — and not computers — are the audience for our content. I am not advocating an abstract style of writing. But we should understand and manage the challenges that familiar styles of writing pose for computers. Computers do matter. Until natural language processing by computers truly matches human abilities, humans are going to need to help computers understand what we mean. Because it’s hard for computers to understand discussions about actions, it is even more important that we have metadata that describes those actions.

The table below summarizes the orientations of each style. These sweeping characterizations won’t be true in all cases. Nonetheless, these tendencies are prevalent, and longstanding.

Abstract Style

Emphasis
  • Nouns
  • General concepts
  • Reader is outside of the article context

Major uses
  • Represents a class of concepts or events
  • Good for navigation

Benefits
  • Promotes analytic tracking
  • Promotes automated content recommendations

Limitations
  • Can trigger weak writing

Familiar Style

Emphasis
  • Verbs
  • Specific advice
  • Reader is within the article context

Major uses
  • Shows an instance of a concept or event
  • Good for reading

Benefits
  • Promotes content engagement
  • Promotes social referrals

Limitations
  • Can trigger weak metadata

These tendencies are not destiny. Steven Pinker, the Harvard cognitive scientist turned prose guru, can write about abstract topics in an accessible manner — he makes an effort to do so. Likewise, it is possible to develop good metadata for narrative content. It requires the ability to sense what is missing and implied.

Challenges of a Taxonomy of Verbs

Why is metadata difficult for narrative content? Why is so much metadata tilted toward abstract content? There are three main issues:

  • Indexing relies on basic vocabulary matching
  • Taxonomies are noun-centric
  • Verbs are difficult to specify

Indexing and Vocabulary Matching

Computers rely on indexes to identify content. Metadata is a type of index that identifies and describes the content. Metadata indexes may be based on the manual tagging of content with descriptive terms (the application of metadata), or on auto-indexing and auto-categorization.

Computers can easily identify and index nouns, often referred to as entities. Named entity recognition can identify proper nouns such as personal names. It is also comparatively easy to identify common nouns in a text when a list of nouns of interest has been identified ahead of time. This is done either through string indexing (matching the character string to the index term) or assigned indexing (matching a character string to a concept term that has been identified as equivalent).

The manual tagging of entities is also straightforward. A person will identify the major nouns used in a text, and select the appropriate index term that corresponds to the noun. When they decide what things are most important in the article (often the things mentioned most frequently), they find tags that describe those things.

When the text has entities that are proper or common nouns, it isn’t too hard to identify which ones are important and should be indexed. Abstract content is loaded with such nouns, and computers (and people) have an easy time identifying key words that describe the content. But as we will see, when the meaning of a text is based on the heavy use of pronouns and descriptive verbs, the task of matching terms to an index vocabulary becomes more difficult. Narrative content, where verbs are especially important to the meaning, is challenging to index. Nouns are easier to decipher than verbs.

Taxonomies are Noun-centric

When we offer a one-word description, we tend to label stuff using nouns. The headings in an encyclopedia are nouns. Taxonomies similarly rely on nouns to identify what an article is about. It’s our default way of thinking about descriptions.

Because we focus on the nouns, we can easily overlook the meaning carried by the verbs when tagging our content. But verbs can carry great meaning. Consider an article entitled “How to feel more energetic.” There are no nouns in the title to match up with taxonomy terms. Depending on the actual content of the article, it might relate to exercise, or diet, or mental attitude, but those topics are secondary to the theme of the article, which is about feeling better. A taxonomy may have granular detail, and include a thesaurus of equivalent and related terms, but the most critical issue is whether the explicit wording of the article can be translated into the vocabulary used in the taxonomy.

Verbs are Difficult to Specify

Verbs also can be included in descriptive vocabularies for content, but they are more challenging to use. Verbs are sometimes looser in meaning than nouns. Sometimes they are figurative.

graph of verb definition
Verbs such as to make can have many different meanings

A verb may have many meanings. These meanings are sometimes fuzzy. Actions and sentiments can be described by multiple verbs and verbal phrases. Consider the most overworked and meaningless verb used on the web today: to like. If Ralph “likes” this, what does that really mean? Compared to what else? The English language has a range of nuanced verbs (love, being fond of, being interested in, being obsessed with, etc.) to express positive sentiment, though it is hard to demarcate their exact equivalences and differences.

Many common verbs (such as work, make or do) have a multitude of meanings. When the meaning of a verb is nebulous, it takes more work to identify the preferred synonym used in a taxonomy. Consider this example from a text-tagging tool. The person reading the text needs to make the mental leap that the verb “moving” refers to “money transfer.” The task is not simply to match a word, but to represent a concept for an activity. We often use an imprecise verb like move instead of a more precise phrase like transfer money. Such verbal informality makes tagging more difficult.

Tagging a verb with a taxonomy term. Screenshot via Brat.

With the semantic web, predicates play the role of verbs defining the relationship between subjects and objects. The predicates can have many variants to express related concepts. If we say, “Jane Blogs was formerly married to Joe Blogs,” we don’t know what other verbal phrase would be equivalent. Did Jane Blogs divorce Joe Blogs? Did Joe Blogs die? Another piece of information may be needed to infer the full meaning. Verbal phrases can carry a degree of ambiguity, and this makes using a standard vocabulary for verbs harder to do.
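The ambiguity can be sketched with plain triples. The following minimal Python example (the predicate names `ex:formerSpouse` and `ex:divorcedFrom` are hypothetical, not drawn from any published vocabulary) shows how a second statement is needed before the meaning of the first can be inferred:

```python
# Semantic-web statements as subject-predicate-object triples.
# Predicate names here are invented for illustration.

triples = [
    ("JaneBlogs", "ex:formerSpouse", "JoeBlogs"),
]

def explains_former_marriage(triples):
    """Check whether any triple disambiguates why the marriage ended."""
    disambiguating = {"ex:divorcedFrom", "ex:widowOf"}
    return any(p in disambiguating for (_, p, _) in triples)

# "formerSpouse" alone is ambiguous: divorce, or death of a partner?
print(explains_former_marriage(triples))  # False

# Only an additional triple resolves the ambiguity.
triples.append(("JaneBlogs", "ex:divorcedFrom", "JoeBlogs"))
print(explains_former_marriage(triples))  # True
```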

Samuel Goto, a software engineer at Google, has said: “Verbs … they are kind of weird.”

Computers can’t understand verbs easily. Verb concepts are challenging for humans to describe with standardized vocabulary. Tagging verbs requires thought.

Why Verb Metadata Matters

If verbs are a pain to tag, why bother? So we can satisfy both the needs of our audiences and the needs of the computers that must be able to offer our audiences precisely what they want. As an organization, we need to make sure all this is happening effectively. We need to harmonize three buckets of needs: audience, IT, and brand.

Audience needs: Most audiences expect content written in familiar style, and want content with strong, active verbs. Those verbs often carry a big share of the meaning of what you are communicating. Audiences also want precise content, rather than hoping to stumble on something they like by accident. This requires good metadata.

IT needs: Computers have trouble understanding the meaning of verbs. Computers need a good taxonomy to support navigation through the content, and deliver good recommendations.

Brand needs: Brands need to be able to manage and analyze content according to the activities discussed in the content, not just the static nouns mentioned in it. If they don’t have a plan in place to identify key verbs in their content, and tag their meaning, they run the risk of having a hollow taxonomy that doesn’t deliver the results needed.

A solution to these competing needs is to have our metadata represent the actions mentioned in the content. I’m calling this approach finding your key verbs.[1]

Approaches to a Metadata of Actions

Two approaches are available to represent verb concepts. The first is to make verbs part of your taxonomy. The second is to translate verbs in your content into nouns in your taxonomy.

Task-focused Taxonomies

The first approach is to develop a list of verbs that express the actions discussed in your content. Starting with the general topics about which you produce content, you can do an analysis and see what specific activities the content discusses. We’ll call these activities “tasks.”

Think about the main tasks for the people we want to reach. How do they talk about these tasks? People don’t label themselves as a new-home buyer: they are looking for a new home. They may never actually buy, but they are looking. Verbs help us focus on what the story is. There may be subtasks that our reader would do, and would want to read about. Not only are they looking for a new home, they are evaluating kitchens and getting recommendations on renovations. This task focus is important to help us manage content components, and track their value to audience segments. We can do this using a task-focused taxonomy.

I am aware of two general-purpose taxonomies that incorporate verbs. The tasks these taxonomies address may differ from your needs, but they may provide a starting point for building your own.

The new “actions” vocabulary available in schema.org is the better known of the two. Schema.org has identified around 100 actions “to describe what can be done with resources.” The purpose is to be able not only to find content items according to the action discussed, but to enable actions to be taken with the content. As a simple example, you might find an event, and click a button to confirm your attendance. Behind the scenes, that action will be managed by the vocabulary.
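The RSVP example can be expressed in JSON-LD. The sketch below uses real schema.org terms (RsvpAction, agent, object, rsvpResponse); the person and event details are invented for illustration:

```python
import json

# An RSVP expressed with schema.org's Actions vocabulary, as JSON-LD.
# The types and properties are from schema.org; the data is invented.
rsvp = {
    "@context": "https://schema.org",
    "@type": "RsvpAction",
    "agent": {"@type": "Person", "name": "Ralph"},
    "object": {
        "@type": "Event",
        "name": "Content Strategy Meetup",
    },
    "rsvpResponse": "https://schema.org/RsvpResponseYes",
}

print(json.dumps(rsvp, indent=2))
```

A system receiving this markup knows not just what the content mentions, but what action was taken with it.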

The schema actions are diverse. Some describe high-level activities, such as to travel, while others refer to very granular activities, such as to follow somebody on a social network. Some tasks are real-world tasks, and others are strictly digital ones. I presume real-world actions are included to support activity reporting from Internet of Things (IoT) devices that monitor real-world phenomena such as exercise.

Schema.org actions taxonomy (partial)

FrameNet, a semantic tagging vocabulary used by linguists, is another general vocabulary that provides coverage of verbs. If a sentence uses the verb “freeze” (in the sense of “to stop”), it is tagged with the concept of “activity_pause.” It is easiest to see how the FrameNet verb vocabulary works using an example from David Caswell’s project, Structured Stories. Verbs that encapsulate events form the core of each story element. [2]

Screenshot from the Structured Stories project, which uses FrameNet.

Applications of Task Taxonomies

While both these vocabularies describe actions at the sentence or statement level, they can be applied to an entire article or section of content as well.

A task focus offers several benefits. Brands can track and manage content about activities independently of who specifically is featured doing the activity, where it happens, or what its object or outcome is. So if brands produce content discussing options to travel, they might want to examine the performance of travel as a theme, rather than the variants of who travels or where they travel.

Task taxonomies also enable task-focused navigation, which lets people start with an activity, then narrow down aspects of it. A sequence might start: What do you want to do? Then ask: Where would you like to do that? The sequence can work in reverse as well: people can discover something of interest (a destination) and then want to explore what to do there (a task).
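That narrowing sequence amounts to faceted filtering, with the task (a verb) as one facet and the place (a noun) as another. A minimal sketch, with invented content items and tags:

```python
# Task-focused navigation: content tagged with a task facet and a place
# facet, narrowed one facet at a time. Items and tags are illustrative.

items = [
    {"title": "Ten hikes near Oslo",    "task": "hike",  "place": "Oslo"},
    {"title": "Kayaking the fjords",    "task": "kayak", "place": "Bergen"},
    {"title": "Hiking the seven peaks", "task": "hike",  "place": "Bergen"},
]

def narrow(items, **facets):
    """Keep only the items whose tags match every requested facet."""
    return [i for i in items if all(i.get(k) == v for k, v in facets.items())]

# "What do you want to do?" -> hike; "Where would you like to do that?" -> Bergen
step1 = narrow(items, task="hike")
step2 = narrow(step1, place="Bergen")
print([i["title"] for i in step2])  # ['Hiking the seven peaks']
```

Reversing the sequence is just applying the facets in the other order: filter by place first, then offer the tasks available there.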

Situational Taxonomies

A second option uses nouns to indicate the notable events or situations discussed. Using nouns as proxies for actions unfortunately doesn’t capture a sense of dynamic movement. But if you can’t support a faceted taxonomy that can mix nouns and verbs, it may be the most practical option. When you have a list of descriptors that express actions discussed in your content, you are more likely to tag these qualities than if your taxonomy is entirely thing-centric. I’ll call a taxonomy that represents occasions using noun phrases a situational taxonomy. The terms in a situational taxonomy describe situations and events that may involve one or more activities.

If you have ever done business process modeling, you are familiar with the idea of describing things as passing through a routine lifecycle. We reify activities by giving them statuses: a project activity is under development, in review, launched, and so on. Many dimensions of our work and life involve routines with stages or statuses. When we produce content about these dimensions, we should tag the situation discussed.

One way to develop a situational taxonomy is to create a blueprint of a detailed user journey: an end-to-end analysis of the various stages that real-world users go through, including the “unhappy path” where they encounter a situation they don’t want. Andrew Hinton makes a compelling case in his book Understanding Context that the situations people find themselves in drive the needs they have. Many user journey maps don’t name the circumstances; they jump immediately into the actions people might take. Try to avoid doing that. Name each distinct situation: both those users actively choose and those foisted on them. Then map these terms to your content.

Situational taxonomies are suited to content about third parties (news for example) or when emphasizing the outcomes of a process rather than the factors that shape it. Processes that are complex or involve chance (financial gyrations or a health misfortune, for example) are suited to situational taxonomies. A situational taxonomy term describes “what happened?” at a high level. Thinking about events as a process or sequence can help to identify terms to describe the action discussed in the content.

The technical word for making nouns out of verbs is “nominalization.” For example, the verb “decide” becomes the noun “decision.” Not all nominalizations are equal: some are very clunky or empty of meaning. Decision is a better word than determination, for example. Try to keep situational terms from becoming too abstract.

Situational taxonomies are less granular than task-based ones. They provide an umbrella term that can represent several related actions. They can enhance tracking, navigation and recommendations, but not as precisely as task-based terms. Task taxonomies express more, suggesting not only what happens, but also how it happens.

Key Verbs Mark the Purpose of the Content

Identifying key verbs can be challenging work. Not all headlines will contain verbs. But ideally the opening paragraph should reveal verbs that frame the purpose of the article. Content strategists know that too much content is created without a well-defined purpose. Taxonomy terms focused on actions indicate what happens in the content, and suggest why that matters. Headlines, and taxonomy terms that rely entirely on nouns, don’t offer that.

We will look at some text from an animal shelter. I have intentionally removed the headline so we can focus on the content, to find the core concepts discussed. A simple part-of-speech app will allow us to isolate different kinds of words. First we will focus on the verbs in the text, which include the terms “match”, “spot”, “suit”, “ask”, and “arrange”. The verb focus seems to be “matching.” Matching could be a good candidate term in a task taxonomy.

Part-of-speech view of verbs in the narrative
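As a toy stand-in for the part-of-speech app, candidate verbs can be isolated with a hand-built verb list. A real workflow would use a proper POS tagger; the shelter sentence and verb list below are simplified for illustration:

```python
# Isolate candidate key verbs from text using a hand-built verb list.
# The sample sentence and list are invented stand-ins for the shelter text.

text = "We match dogs with families, spot the right fit, and arrange a visit."

verb_list = {"match", "spot", "suit", "ask", "arrange"}

words = [w.strip(".,").lower() for w in text.split()]
verbs_found = [w for w in words if w in verb_list]
print(verbs_found)  # ['match', 'spot', 'arrange']
```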

Now we’ll look at nouns. In addition to common nouns such as dogs and families, we see some nouns that suggest a process. Specifically, several nouns include the word “adoption.” Adoption would be a candidate term in a situational taxonomy. Note the shift in focus: adoption suggests a broader discussion about the process, whereas matching suggests a more specific goal.

Part-of-speech view of nouns in the narrative

When you look at content through the lens of verbs, questions arise. What verbs capture what the content is describing? Why is the content here? What is the reader or viewer supposed to do with this information? Could they tell someone else what is said here?

If you are having trouble finding key verbs, that could indicate problems with the content. Your content may not describe an activity. There is plenty of content that is “background content,” where readers are not expected to take any action after reading the content. If your goal for producing the content is simply to offer information for reference purposes, then it is unlikely you will find key verbs, because the content will probably be very noun-centric. The other possibility is that the writing is not organized clearly, and so key actions discussed are not readily seen. Both possibilities suggest a strategy check-up might be useful.

Avoid a Hollow Taxonomy

Even when tagging well-written content, capturing what activity is represented will require some effort. This can’t be automated, and the people doing the tagging need to pay close attention to what is implied in the content. They are identifying concepts, not simply matching words.

Tagging is easier when you already have a vocabulary to describe the activities mentioned in your content. That requires auditing, discovery, and planning. If your taxonomy only addresses things and not actions, it may be hollow, with gaps where actions should be.

Most content is created to deliver an outcome. Metadata shouldn’t only describe the things that are mentioned. It should describe the actions that the content discusses, which will be explicitly or implicitly related to the actions you would like your customers to take. You want to articulate within metadata the intent of the content, and thus be in a position to use the content more effectively as a result. Key verbs let you capture the essence of your content.

By identifying key verbs, brands can use active terminology in their metadata to deliver content that is aligned with the intent of audiences.

How key verb metadata can support content outcomes

The Future Web of Verbs

Web search is moving “from the noun to the verb,” according to Prabhakar Raghavan, Google’s Vice President of Engineering.

We are at the start of a movement toward a web of verbs, the fusing of content and actions. Taxonomy is moving away from its bookish origins as the practice of describing documents. Its future will increasingly be focused on supporting user actions, not just finding content. But before we can reach that stage, we need to understand the relationship between the content and actions of interest to the user.

Taxonomies need to reflect the intent of the user. We can understand that intent better when we can track content according to the actions it discusses. We can serve that intent better when we can offer options (recommendations or choices) centered on the actions of greatest interest to the user.

The first areas where verb taxonomies will be implemented will likely be transactional ones, such as making reservations using schema.org Actions. But the applications are much broader than these “bottom of the funnel” interventions. Brands should start to think about using action-oriented taxonomy terms throughout their content offerings. This is an uncharted area, linking our metadata to our desired content outcomes.

— Michael Andrews


  1. Key verbs build on the pre-semantic idea of key words, but are specific to activities, and represent concepts (semantic meaning) instead of literal word strings.  ↩
  2. You can watch a great video of the process on YouTube.  ↩